Master's Series on Field Research
A series of interviews with major figures in field research conducted in the early 1980s by Peter Blanck
Transcript of interview with Phil Stone

Peter Blanck: What's your idea of field research? Why is it appropriate? How should it be done from a psychological as opposed to a business point of view? General issues like that. Then we'll move into specifics related to content analysis, unobtrusive longitudinal work, and survey research.

Phil Stone: OK - I guess I would describe the reason for having a social science approach to field research, in contrast to a journalistic approach or a business approach, as being able to draw on tools of generalization, to be able to make a finer investigation into underlying factors than either the journalist or the case-study person is likely to do. The journalist is likely to stop with an analysis of issues that doesn't look at the particular case in the context of broader social dynamics, and so there is a set of comparative, methodological tools that may complement or shed light on the kinds of insights that a particular case study or journalistic approach might uncover. So I see them as a rather complementary set of approaches, rather than mutually exclusive or oppositional in any sense of the word. And I think that a good balance of case studies, combined with broader comparative approaches using many cases in a sort of quantitative investigation, balance each other out in this way.

Peter Blanck: In the typical psychology degree program, like this one, what kinds of learning techniques are available to the typical graduate student within the department, or do you usually have to go outside of the department to get...?
Phil Stone: I don't think our field research training is as strong as it might be, but there is a good basis, at least, in observational techniques, in content analysis techniques, and in quasi-experimental or experimental designs here. What we're lacking is - we cannot compete with a group like Michigan with a strong survey research program and so on. But our students are prepared enough that they can draw upon those kinds of resources when they come to the appropriate research problem, and I think they're trained to recognize when they have a kind of problem that may need those particular resources. We have Jim Davis here, who's the ex-Director of the National Opinion Research Center in Chicago. We have a number of resources which tie us into these kinds of special facilities, even though we do not maintain such facilities ourselves here at Harvard.

Peter Blanck: I would like you to talk now - this will be the bulk of the interview - in very specific terms, so that somebody who is not familiar with content analysis at all can understand how you developed it, why it was interesting to develop, how it's related to field research, and how it's a tool to be used by field researchers. So take your time, and be very general, as if you were explaining it to a Psych 101 class: what is content analysis, how did you come to develop the General Inquirer, and how is it useful with archival data for field researchers, and so forth.

Phil Stone: Yes. Content analysis is a generalized procedure, or set of procedures, for making inferences from textual materials about the general patterning of values, or motives, or other characteristics of the text, usually by comparing one group with another group. So you can say that this group shows more of a particular characteristic than the reference group, or less of it.
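[Editor's note: in today's terms, the core comparison Stone describes - scoring two bodies of text for a characteristic and comparing the rates - can be sketched in a few lines of Python. The "achievement" word list here is purely illustrative, not an actual General Inquirer category.]

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def category_rate(text, category_words):
    """Occurrences of category words per 1,000 tokens."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in category_words)
    return 1000.0 * hits / len(tokens)

# Toy "achievement" category -- a stand-in for a real scoring dictionary.
ACHIEVEMENT = {"win", "succeed", "goal", "accomplish", "strive"}

group_a = "We strive to win and accomplish every goal we set."
group_b = "The weather was mild and the town was quiet all week."

# Group A shows more of the characteristic than reference group B.
rate_a = category_rate(group_a, ACHIEVEMENT)
rate_b = category_rate(group_b, ACHIEVEMENT)
```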
The purpose of content analysis can perhaps be illustrated by contrasting it with related things that content analysis is not. On the one hand, there are information retrieval systems, where you're looking for a particular document, and you want to cue in on some index words or whatever to retrieve that document. In a sense that is a content analysis, because you are searching texts for specified characteristics, but your objective is to make one retrieval, to find the document you're looking for. Another related field, which might be considered content analysis but is not the main thrust of what we usually do, concerns identifying which author wrote a particular document - for example, who wrote The Federalist Papers. There, again, you make a search of text characteristics, but they are characteristics which are usually very idiosyncratic and do not reflect underlying values or motives. In separating The Federalist Papers, the word "while" versus "whilst" became a major tool for determining who wrote which paper, simply because one author preferred one term and the other author habitually used the other, and this made for a powerful basis of separation. An example of the kind of content analysis that we do in the field is illustrated by a recent case in the Mississippi courts. There is a standard textbook of Mississippi history, written by a Professor Bettersworth at the University of Mississippi at Oxford, which has been adopted by the state school board of Mississippi for teaching state history. This book had a point of view regarding the position of blacks in the society which some people were not particularly happy with, and a group at Tougaloo College endeavored to write a contrary state history.
Now, the state board in Mississippi can adopt up to three different books, giving the teacher in the schoolroom a choice, but the board refused to adopt this alternative book as well, and a case was made that the treatment of blacks in particular in this alternative book was quite different from Mr. Bettersworth's, and we were asked to content analyze - to compare the existing textbook, Mr. Bettersworth's, with this alternative textbook to see what differences we could document. Now, this is a curious kind of problem, because there are many differences between the textbooks, and just mentioning blacks more is not, in a sense, an example of really giving better attention to blacks, or treating blacks better. So we had to look at characteristics that the legal system would identify as relevant to the case. And we looked for something fairly simple, because it's not a very good idea to deluge a court trial with tables of statistics. So we took a very simple case which seemed to illustrate the point - namely, we went through both books and looked at every instance of the word "Mississippian". We based this on the point that the state school board had, in their criteria, emphasized that the textbook should give the young student what we might call a sense of identification with Mississippi, make the student proud of being a Mississippian. Well, given this, a reference to "Mississippian" in the book should be a reference to a person that, in today's context, would include the student or the student's family, or, in earlier historical contexts, would include the ancestors of that student, so the student could be proud to have an ancestry of Mississippians, assuming the student's a native of the state.
So we examined every reference to "Mississippian", and we separated off those that explicitly referred to white Mississippians, or black Mississippians, or Mississippian lawyers, or other subgroups of Mississippians, and took just the general term, not amplified in further detail. We had several hundred instances in each book, and we found, generally, that the student reading the old textbook could read the word "Mississippian" and quite rightly conclude that this couldn't possibly mean himself, his parents, or his ancestors - it excluded them. One example statement is, "Mississippians were very disappointed and angry about the 1954 Supreme Court decision." Well, how can a black Mississippian read that and say "that's my parents"? Counting it up gave rather conclusive evidence that "Mississippian" was in fact excluding blacks in Bettersworth's book, but including blacks in the other book - even though the alternative book, written mostly by blacks, had a number of cases where the term inadvertently referred only to white Mississippians and could not really include the other group, such as the Mississippians going off to serve in a certain army in American history. There were also two cases in the alternative book where the term "Mississippian" would refer only to black Mississippians - a reverse kind of misclassification - and we pointed that out. When we do a content analysis, we generally prepare the entire body of text for the computer - not that the computer does the analysis, but the computer is an aid in the information handling - to find trends or patterns in the data. In some cases the computer is only culling out the instances, and we make a decision about whether to classify them one way or another. In other instances, we are able to use dictionaries, or other look-up procedures, to have the computer automatically assign descriptors.
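[Editor's note: the culling step described here - keeping only the bare, general term and setting aside subgroup-qualified uses - can be sketched as follows. The qualifier list is an invented simplification; the actual study also set aside occupational subgroups.]

```python
import re

# Illustrative subgroup qualifiers; the real analysis handled more cases.
QUALIFIERS = {"white", "black"}

def unqualified_references(text, term="mississippian"):
    """Count uses of the bare term, excluding subgroup-qualified ones.

    Qualified uses ("white Mississippians", "black Mississippians") are
    set aside; only the general term enters the inclusion comparison.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    count = 0
    for i, tok in enumerate(tokens):
        if tok in (term, term + "s"):
            if i == 0 or tokens[i - 1] not in QUALIFIERS:
                count += 1
    return count

sample = ("Mississippians were angry about the decision. "
          "White Mississippians and black Mississippians disagreed.")
n = unqualified_references(sample)
```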
So this case of the Mississippi history textbooks was rather interesting, because we did it at the request of the plaintiff in the case, and we ran the entire text through, and, as long as we had it on the computer, we looked at various characteristics of both texts, taking comparable chapters for each period of history - the Civil War, the era after the Civil War, etc. - and comparing them chapter by chapter. And I noticed the computer is very good at identifying occurrences of things, but it's almost better than humans at noticing the absence of things. We see so many things every day that we tend not to notice the absence of them. We may see a newspaper story, for example, that goes on for some days, but then the story fades from the news, and we don't really notice that it disappeared, until somebody says, "Whatever happened to that story?" The computer would pick that up - the omission - as well as the more salient inclusion in the text. So I noticed right away that the computer had signaled that Bettersworth was different from this alternative book in the occurrences of references to women - Bettersworth had more references to women. If you looked at the book, it was clear that Bettersworth was spending more time on what you might call Mississippi culture, including folk culture. And there was quite a treatment of what you might call the Magnolia-era woman in Mississippi history, in this lovely genteel fashion, which the alternative book had ignored, focusing primarily on political history, which was carried out primarily by men in Mississippi. So I called down to Mississippi - this was relatively early in the analysis - and said that the analysis was very interesting and supportive in terms of treatment of blacks, but that really their book was rather scant in its treatment of women.
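[Editor's note: the absence-spotting Stone describes falls out naturally when two texts are compared term by term - a near-zero count on one side flags an omission a human reader might never notice. A toy sketch, with invented miniature "books":]

```python
import re
from collections import Counter

def word_counts(text):
    """Simple token frequency table."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def flag_omissions(counts_a, counts_b, terms, min_ratio=3.0):
    """Flag terms far more frequent in text A than in text B.

    An absent term in B counts as frequency zero, so omissions are
    surfaced alongside ordinary frequency differences.
    """
    flagged = []
    for term in terms:
        a, b = counts_a[term], counts_b[term]
        if a > 0 and a >= min_ratio * max(b, 1):
            flagged.append(term)
    return flagged

# Invented stand-ins for two textbooks' word streams.
book_a = "women women women politics culture folk women"
book_b = "politics politics politics war law"

flags = flag_omissions(word_counts(book_a), word_counts(book_b),
                       ["women", "politics"])
```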
And this was some years ago, and the author sort of laughed, and he said, "It's funny that you should mention that - the women in my office are protesting right now. They've finally realized that it wasn't a good treatment, and we promised them a major revision, to bring more women into the book." The computer had signaled it, where you and I might have just not noticed. OK, so these are the kinds of multidirectional comparisons that our techniques have developed for looking at different kinds of treatment of issues in text. In other field settings, in business, we've been using this for many years. I was on the board of a company called Simulmatics Corporation, and we used this to study product imagery. One of the groups that has used our system is the societal analysis division of General Motors, where they have been, for example, looking at people's sensitivity to safety issues regarding automobiles, and driving attitudes - these sorts of things, as they appear in the press around the country - and producing reports to monitor changes in public attitudes. These things can be an adjunct in ways which are rather remarkable, helping people digest much more information on occasion than one might expect. For example, this General Motors project, which is reported in Public Opinion Quarterly, gathered all the information from a newspaper by essentially capturing the newspaper as it was put to press. With these modern computer systems, the reporters enter their stories right into a computer, and they are edited in the computer before they are sent out - in the case of a Detroit paper, to a printing press some miles from the editorial office. We just put in a computer that, essentially, acted as a second printing plant and captured the newspaper. Another kind of study, which is now quite feasible, is to compare documents going back a long period of time.
It should be evident that one of the advantages of content analysis over asking people questions is that we can go back well before anybody ever thought of the idea of doing survey research. For example, we've done studies of letters written by people over most of their lives. In another study, we have recently used a new optical reading system, a spin-off technology from the development of machines that read for the blind. Instead of pronouncing what the machine reads, it stores the optically read text on tape, and we have captured this way all the American party platforms, going back to when they first appeared, and produced detailed time-trend analyses of the changes in American values in these documents - one illustration of the kinds of investigations that take place. Yeah?

Peter Blanck: That was interesting. I want to probe more on any specific business-related approaches you've used, if there are others, and how a typical field researcher might find them useful. It's not a very familiar technique to many people outside of a psychology program. Do you think people at the business school, or at business schools generally, are aware?

Phil Stone: Most of the research in content analysis is done not in psychology departments, but in schools of communications. There is a thesis done some years ago by a man named Barcus, who was at the time over at the Boston University School of Communications, called "The Content Analysis of Content Analysis", which reviewed and summarized over 1200 studies that were done. There have been content analyses, for example, of violence on television - detailed studies going on down at the Annenberg School of Communications in Philadelphia on television impact, under Gerbner and his colleagues. There are many studies now going on in Scandinavia making political comparisons of documents between the Scandinavian countries - ideologies expressed in the press.
Content analysis within the social sciences has tended to wax and wane, I think, in response to the extent that people are willing to look at ideology as a causal factor in how people behave. I was trained with people who endorsed the idea that what people think is important to their behavior - how they construe the world. People like Carl Rogers, when I was at Chicago, or Bales here. And therefore it was important to develop these procedures in ways that would not be intrusive and would not distort their ideology. Now, there's a fundamental question or problem here, because if you're doing any kind of classification - let's say you read James Reston in the New York Times, and you're a regular reader, and you read several editorials and you say, "Well, I think he's sort of gone soft on, or he's gotten rather conservative about..." - you're doing a miniature content analysis. You're essentially taking words and phrases that he wrote and saying, "Gee whiz, that's a softer line", or, "That's a more conservative approach than similar words and phrases that I remember reading in past articles." So you're doing an intuitive content analysis, as it were, and you're interpreting at the same time what you think is important about Reston's writing, in terms of classifying it as being more liberal or conservative or what have you. And these kinds of judgments have led to a number of criticisms that content analysis is easily swayed by the particular dictionaries or classification systems that are used. There've been some interesting studies by Robert Weber recently, where, using multivariate analysis procedures, he has applied several different analytic schemes to the same data, and found that he was able to come out with remarkably similar conclusions, even though he used completely different dictionaries and other tools within the framework of our system.
We, for example, have dictionaries that are derived partly from the work of Parsons, or dictionaries that are derived from the work of Osgood, using the semantic differential, or dictionaries that are derived from the work of Harold Lasswell at Yale University. In the case of this comparison that Weber made, he was comparing the Lasswell approach, that of the Yale group, with the approach of the Harvard dictionaries. And in these cases, it seems that because so many classifications cross-cut each other, similar conclusions are likely to be derived. On the other hand, there are very specific scoring systems - we have a computer procedure, for example, for scoring need-achievement using McClelland's motive-scoring procedure. Obviously, the kinds of scores derived from that would not relate to, say, a business setting's safety imagery, with dictionaries classifying concepts like safety belts and speed and this sort of thing - a completely different, non-overlapping scoring. So these are some of the issues that have come up. Now, it's interesting to follow the changes in the role of ideology in the social sciences, and to compare these changes in the United States versus Europe. For the past few years, it seems to me that ideology has gotten a very suspicious treatment in the United States; some people claim that ideology is an epiphenomenon that just reflects social structural factors, and that ideas are literally rationalizations of the structural situation. I disagree with that, obviously. I think that people have intelligence enough to construe the world in ways beyond mere determinism by the social situation in which they were raised or in which they work, and can go beyond these constraints in exciting ways. It's that aspect of human beings that I'm more interested in studying.

Peter Blanck: Content analysis doesn't seem readily accessible to the typical field researcher, at least at the scale you're talking about.
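[Editor's note: the dictionary look-up Stone describes is, at bottom, a mapping from words to one or more category tags, with document scores built up from the tags. A minimal sketch in that spirit - the tiny dictionary below is invented for illustration, not taken from the Harvard or Lasswell dictionaries:]

```python
import re
from collections import Counter

# Toy category dictionary: each word maps to one or more category tags.
# Real dictionaries assign thousands of entries to dozens of categories.
DICTIONARY = {
    "safety": ["safety"],
    "belt": ["safety"],
    "speed": ["safety", "power"],
    "win": ["power"],
}

def tag_counts(text, dictionary=DICTIONARY):
    """Score a text by summing category tags over its tokens."""
    counts = Counter()
    for token in re.findall(r"[a-z]+", text.lower()):
        for tag in dictionary.get(token, []):
            counts[tag] += 1
    return counts

scores = tag_counts("The safety belt matters at speed.")
```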
Let's discuss future directions now. Is it going to become more accessible? How are people going to know more about it? How can you set it up in an interesting way so it would be useful to more of the General Motors type projects? Does it get more national recognition?

Phil Stone: Well, there have been several problems. One is in having machine-readable text. This has been rapidly expanding, partly because more people are putting text on line. You can now, if you have a home computer, call and get the New York Times or The Washington Post summarized on your computer screen through any one of a series of resources. These kinds of developments are resulting in a much greater repository of material being stored in computer form, as well as in printed form. Things like the wire services are now completely archived in computer form. Some wire services have been archived for a number of years; others more recently, and they can be bought and analyzed in this way. So it's only in recent years that people have been able to overcome the basic problem that computers do not read books like you and I do. Even the optical reading systems devised today are limited in what they can handle. For example, the system for blind people is still not able to identify columns in newspapers and separate them. Now, that turns out to be technically quite difficult, I gather. And it's important both for people who cannot see the newspaper to have the computer identify the column boundaries and not skip over to the next column, and it's important for us in order to get the story cohesively together. Nor can these machines read older books. We have a collection, I believe, of 35,000 school textbooks here at Harvard, going back to about 1810, over at the School of Education. I've looked at these and sampled pages of them, and it's quite evident that, when we get back to about the turn of the century, it's very hard for the computer to pick this up.
And when we get back before 1830, when the spelling characteristics were different, the computer is completely buffaloed. So, while we would like to look at long-term trends, we're better off looking at things like the party platforms, which were reprinted on modern paper that we can read; to read just any kind of document - an ordinary newspaper, an old document - is still not a matter of course. The other thing that's happening, of course, is that computer costs are coming down. When we started this project years ago, machines were relatively slow, the size of the dictionaries the machines could handle with any reasonable speed was relatively modest, and so on. Today, we have much more sophisticated procedures, often processing text at the rate of ten to twenty thousand words a minute, doing identification of word senses as it goes, so it can distinguish the board that's a piece of lumber from the board of people sitting around a table - all these kinds of resolutions are made as the computer processes the text. Classification systems can include dictionaries of nearly any size you want, because the computers are so large. So, rather than being technology-bound in terms of processing, as opposed to data acquisition, we are more bound in terms of the amount of effort we can put into building large dictionaries, or transcribing various theories into operational coding rules that the dictionary can understand.

Peter Blanck: How do you see things fifteen to twenty years from now? What will be the role of this technique in relation to other field techniques?

Phil Stone: Well, there's some cause for concern. I was one of those who protested very much the government getting into the business of developing machines that can take speech and translate it into computer-storable form as text.
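[Editor's note: the word-sense resolution Stone mentions (the lumber "board" versus the "board" of people around a table) can be sketched as a simple context rule: look at neighboring words and pick the sense whose cue words appear nearby. This is a toy illustration, not the General Inquirer's actual disambiguation rules; the cue-word sets are invented.]

```python
import re

# Invented cue words for each sense of "board".
SENSE_CUES = {
    "board#lumber": {"wood", "nail", "saw", "plank"},
    "board#committee": {"directors", "meeting", "chairman", "vote"},
}

def resolve_board(sentence, window=4):
    """Pick the sense of 'board' whose cues appear within `window` tokens."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    try:
        i = tokens.index("board")
    except ValueError:
        return None
    context = set(tokens[max(0, i - window):i + window + 1])
    best, best_hits = None, 0
    for sense, cues in SENSE_CUES.items():
        hits = len(context & cues)
        if hits > best_hits:
            best, best_hits = sense, hits
    return best

sense = resolve_board("The board of directors called a meeting to vote.")
```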
I think that if a person writes something down and chooses not to burn it, that person takes a risk, and, in some sense, it's somewhat fair game for people in my trade; unless they put it in a safe, or otherwise encrypt it, this is a historical document, and what's fair game to the historian would be fair game for our purposes. However, consider a conversation in a private room, where you have to worry about the room being bugged - and it's 1984. One of the concerns was that the thing that keeps, for example, a totalitarian government from bugging every room in a large hotel is that they can't afford to have people sitting and listening to all the conversations in all the different rooms. However, if the government has developed a system which can stand by and listen, and you can build thousands of little computers to do this and immediately identify anything that sounds suspicious, I think you've got a problem on your hands, and I'm not so sure that's a very good idea. So there are these ethical problems that make me wonder how much we really want to push this technology. What we've been doing is light years away from what this large-volume processing would represent. There is nothing in our procedure which would make this imminent in any way, shape, or form, but there are certainly some, you might say, pilot-study lessons as to the kinds of inferences people can draw from text, and the kinds of signatures that people leave, directly or indirectly, that they might not want to have recorded. But the government continues on with this - everything from voice prints, where people can talk into a microphone and be identified as surely as if they had literally signed something (you may remember the voice print in that movie 2001, I believe), to the actual transcription of what is being said, so that it can be looked at for the kinds of ideology expressed.
Peter Blanck: OK. I think we have enough on content analysis. I wanted to shift gears a little into survey research; you've spoken on that and you've done some work in that area. A general description of survey research and how it can be used as a field research technique.

Phil Stone: OK. Content analysis can be somewhat analogized to archaeology. People leave traces around, and we pick them up and look at them for the patterns of inference we can make, either about the writer, the situation in which the person was writing, the consequences they were trying to produce, and so on. Survey research is the first level, you might say, of intrusiveness, in the sense that you're going and rapping on someone's door and asking them to take some time out and answer some questions. Now, survey research can therefore only be prospective; you have to think of the idea of the survey before you can tap in, and you can't ask, "What did people think about...?", a long time ago. If you're lucky, there was a survey done that taps those questions - what they call secondary analysis - and you can bring several surveys together to show some time trends. But, for the most part, it's prospective; it's based on making a minor intrusion into people's lives and asking them some questions. I emphasize the intrusiveness aspect because I think that the longer kind of survey creates some problems. It has had a higher refusal rate in recent years, particularly in urban environments, and there's a difficulty in making generalizations when so many of the people have turned you down. This has been replaced in large part by the telephone interview, which, especially with long-distance dialing, is done at considerably less cost than a face-to-face interview. I've been impressed that a number of agencies now do interviews much longer than people thought were possible a few years ago. There have been several books published on telephone interviewing.
The types of survey questions can be classified in two ways. In one case, you ask attitudinal questions: "What do you think about X?" These are often in the form of consumer research, or whether people liked a certain product. The other is the behavioral question: whether they used the product, what their expenses were in a certain area, whether they bought a car recently. There are some very famous studies that continue on this basis, such as the Panel Study of Income Dynamics. In between, you have intentional kinds of questions: "Do you intend to buy a car in the next three weeks?", "Have you been looking for a car?", "Do you intend to make the purchase?" - this sort of thing. But anyway, this broad kind of separation has been a major split in how people describe quality-of-life indices. Inferences concerning the quality of life have come mainly from two sources in the survey research questionnaire. One source is questions asking whether people are satisfied or happy with their lives, where they have their sorrows, and where they find their joys. The other is to ask them for evidence, or behavioral reports, concerning the way they spend their money, the way they allocate their time - these kinds of demonstrations as to whether their life is changing in its quality or its organization. I have been involved with a large study of time allocation in this behavioral mode; we go to people and we ask them to keep a diary for a specified amount of time as to whom they're seeing, how they're allocating their time, where they spend their time, and so forth. This can be used for rather practical purposes; for example, the Atomic Energy Commission was interested in using some of our data to find out where Americans were during different times of the day, relevant to building bomb-shelter-type things. This tradition, however, was not founded in this country, but stems from work done mainly in the Soviet Union.
One sociologist who was an immigrant from the Soviet Union, named Sorokin, brought over this technique and did the first major time budget studies in this country during the Depression, as part of, I believe, an administration project. The kinds of inferences that you draw from these surveys are sensitive to minor changes in allocations of resources and in time patterning, in ways that complement a great deal of other data. For example, we know from the Nielsen ratings how many hours a day a television set is turned on, and what channels it's tuned to. This is done by, essentially, hooking a wire to the television set that the Nielsen people can use to contact the household involved and record, during the twenty-four hours of the day, what they're watching. One of the things that makes the use of behavioral measures particularly interesting is that it's an area where there can be agreement among people coming from quite different backgrounds and quite different approaches. The time budget work we did was initiated on a cooperative international basis by the socialist countries through the Vienna Center for cooperation between East and West, and we've been able to complete a fairly large number of studies involving about 15 countries or so - half from the socialist countries, half from the capitalist countries. And there are major kinds of insights that can be derived, especially as these studies accumulate over time. For example, we can show quite clearly that the Revolution in Russia, which is supposed to benefit the people, has actually been quite poor at benefiting women who are working and raising a family. These people are, by any criteria, under quite a bit of duress in just meeting their daily schedules, getting enough sleep, and having a minimum amount of time for leisure.
The state of these women has not improved that much over about a fifty-year period in the Soviet Union. Similarly, we can look at class differences in this country, or family compositions within this country, to examine changes that may occur in work. Other examples are major changes in television viewing, which continues to increase; flexible scheduling; four-day work weeks - a number of these other topics have been points of investigation. We can investigate the impact of the arrival of new technology, and other kinds of shifts which allow us, essentially, to monitor behavior. Now, there is a problem here in the sense that what we gather are usually self-reports of how people spent their time - they keep a diary for the days being investigated - and this kind of procedure is subject to a certain amount of rationalization, a certain amount of clarification. This is a problem with these kinds of approaches. They have been verified with other kinds of investigation, observational techniques, etc. We know, roughly, what their level of accuracy is. We know the ways in which they tend to be rationalized. But they're an important adjunct to the kinds of work that a person may do, and will help the investigation of a variety of problems in different kinds of applied research settings. I think that the main issue regarding the bridging of applied research, such as in business, and some of the more general social science work that is done in traditional academic departments is not so much a matter of carrying the methodological tools from one side to the other - these tools can be acquired fairly rapidly, and, I think, with more interest among students and faculty in making the bridge, this kind of transfer will take place. There is an awareness that some interesting examples have been done with multivariate techniques or with other kinds of survey procedures, and people are interested in picking these up.
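[Editor's note: the time-budget analysis described above reduces, at its simplest, to aggregating diary episodes into shares of the day. A minimal sketch; the (activity, minutes) tuple format and the sample diary are invented simplifications of a real time-budget coding scheme.]

```python
from collections import defaultdict

def time_allocation(diary):
    """Aggregate diary episodes (activity, minutes) into shares of the day."""
    totals = defaultdict(int)
    for activity, minutes in diary:
        totals[activity] += minutes
    day = sum(totals.values())
    return {act: mins / day for act, mins in totals.items()}

# One respondent's (invented) diary for a single day, in minutes.
diary = [("sleep", 480), ("work", 480), ("housework", 240),
         ("television", 120), ("leisure", 120)]
shares = time_allocation(diary)
```

Comparing such shares across groups (for example, working women raising families versus other respondents) is what supports the kind of cross-national conclusion Stone draws.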
The more immediate problem, I think, is in terms of the conceptualization of theory. Essentially, a consultant to business, or government, or other groups is often going to make their reputation on the basis of what I would call "awareness expansion". They go to their client and listen to what the client has done, and then they say, "Ah, but you didn't consider these factors," and they reel off X, Y, and Z. And the client is satisfied at that point, in the sense that the client says, "Gee, I really hadn't thought of these," and he goes back and incorporates those additional ideas to the extent that they fit into the particular problem in the client's own view. In other words, consulting is often a matter of this awareness-expansion kind of role - trying to get the client to consider more factors than those that have gone into the decision-making to that point. This formulation of problems does not seem to lend itself as well to the ways that people doing traditional research designs like to think. There's a need to conceptualize problems at a somewhat different level, a need to make formulations that draw contrasts - the usual question is, "Do you see more of it in this situation, or do you see more of it in that situation?", rather than, "Have you also considered this?" And by means of these levels of abstraction, these different kinds of framings, the variables are brought together in a larger system to suit the purposes of one culture versus the other. The problem is that if a formulation continues in a way that is not operational in the other culture, that tends to be a very strong difficulty in bridging the gap. It is at the level of theoretical conceptualization that, I think, the major need now exists for more interdisciplinary endeavors. 
I don't think this is necessarily at the level of having conferences and saying, "Here are our terms, and here are your terms," but rather taking a look at some real research problems and saying, "What concepts would you bring to bear on them, how would you formulate them together, what theories would you use?" - bringing the two cultures together to see if there can be a common set of terms for analyzing particular cases, or for looking at general research projects where a number of comparisons are being made. I think that one of the key issues in terms of how business will be relating to social science is participation in some of the data gathering and coordination that must be occurring during the forthcoming years. For example, the new time budget study that we are doing of the national panel - just the United States, not the other countries - will cost about two million dollars. It's not something that a company is likely to go out and do itself. Instead, what is probably going to develop are various subscription services, where an amalgamated or general project will be done which will be of interest to a number of corporations. Now, several survey companies are already marketing this kind of thing on a subscription basis. The same people who did the surveys for Carter, for example, have a periodic subscription survey - there was a bit of flak because some of their subscribers were the Arab countries, and the question was, "How much should a foreign agency know about the thoughts of the American people on a variety of topics?" But, these questions aside, given the large cost of doing a time analysis covering the seasons of the year, the different days of the week, and the different members of the household, it is now well beyond the power of such agencies as the National Science Foundation, which I am involved with. 
It is well beyond the ability of our committee to fund time budgets, income dynamics studies, and these kinds of things on a continuing basis, and what is going to have to emerge is some sort of conglomerate funding, representing different mission-oriented agencies within the government as well as different parts of the business world, much as already takes place in Japan. And this kind of cooperative research design - a cooperative definition of problems, an application of techniques in these ways - seems to me to be one of the major approaches that young bachelor's or doctoral students in business administration should be aware of, so that when they get into a situation where certain kinds of data may be relevant, they will know that these alternatives exist, and they can participate in these things as they come about during the next years. Peter Blanck: One last question that simply concerns your own research: what do you find fun about research, what do you enjoy about it, what is stimulating about it for you, and what do you hope to learn from research in the future? Phil Stone: Oh, I've generally enjoyed it tremendously, but what particular things - Peter Blanck: What do you like about research that's fun? Phil Stone: The "Ah-ha!" phenomenon - saying, "Gee, nobody in the history of the human race has ever quite had that insight before, and because I went down that path, I sort of uncovered that rock and found what was underneath." That sort of thing is great. It's a privilege to be able to go down that path, and to have a society in which at least some people do such explorations. What I think is interesting is what kinds of more innovative endeavors may take place at institutions like this one during the next ten years. 
Certainly, the experiment in Social Relations, with its meld of anthropology, psychology, and sociology, was an exciting period, but it led to a focus that some people complained was too concerned, or preoccupied, with a microanalysis of human relationships. We've gone now through a series of other stages, and I think there'll be a new agenda in the eighties which may focus around certain major topics of interest that coalesce people from various disciplines. I think one of the most exciting ones for the next decade is - it's unfortunately somewhat of a buzzword already - the issue of productivity. To work out an agenda for research on productivity that goes from large-scale structural issues down to the sense of productivity for the individual worker or craftsperson on a project is an exciting challenge, and one which people can agree upon. Other kinds of formulations in the past have not done so well, and I think it's interesting to look at why they haven't. Things like work redesign, or approaches involving certain organizational variables, have not tended to lead to the development of theory in the way that I think is most fruitful. But I think there's a motive now - a push - to develop a formulation in the eighties which will be an exciting blend of applied and more abstract theory, and this will be used to address such issues as how a society that is rather adversarial in its orientation between people can compete, in terms of productivity, with a society that is more homogeneous and cooperative in the way it works. This kind of thing.