Tuesday, January 25, 2011

Chapter Three draft, notes

Chapter Three

“But I do not believe that such comprehensive methods are always the best. Methods that collect and analyze data on a need-to-know basis, such as experiments or even surveys, also can be of great value. A quest for comprehensiveness reflects an expectation that each study should stand alone, an expectation that in itself emphasizes the uniqueness of each site and each occasion for research. If we have only one chance to study a phenomenon, then collecting all possible information about it when we can makes sense. But why assume we only have one chance?”
(Charney, 2001, p. 410)

Outline –
1. Research into technical communication practices is difficult for a number of reasons, including the rather pedestrian but nonetheless tricky problem of carrying out an organized, thoughtful study. However, as discussed in Chapter Two, technical communication is a field which bridges the humanities and the sciences as well as academic and workplace practice. There I offered a description of the split between the literary humanities (the colleagues with whom we often share departmental affiliation) and the sciences (the colleagues with whom we often work in teaching students and presenting technical data to the public). In this chapter, I discuss the methodological problems which spring from this ideological split.
2. In seeking to further the bridging nature of technical communication, both as a practice and an area of research, many of us pursue the scientific question of what can be known; however, the utilitarian nature of the service course means that immediately practicable knowledge takes precedence over knowledge for its own sake. Thus, for the TC service course, the most pressing methodological question is this: what must be known?
The question of what must be known breaks down into recognizable forms rather than abstractions. For constructing a research method beneficial to a TC service course, the very least of what should be investigated includes the most common workplace genres and the most common rhetorical situations in which those genres are enacted. Genres and the circumstances under which they are enacted serve as the rules of a sport: the basic organizing concepts behind the plays of the game.




3. What are the research debates in TC?
a. academy versus industry
b. academy
i. calls for partnership
1.
ii. against the industry
1. miller
a. “This discourse is infected by the assumptions that what is common practice is useful and what is useful is good. That good that is sought is the good of an existing industry or profession, with existing structures and functions. For the most part, these are tied to private interests, and to the extent that educational programs are based on existing nonacademic practices, they perpetuate and strengthen those private interests—they do indeed make their faculties and their students ‘more responsive tools’” (1989, p. 21).
b.
2. Bushnell
a. “It is tempting to act as the bridge from industry, to bring its practices to our students so that they may successfully ‘return’ with those skills after they graduate. But this impulse ignores what our mission as college and university teachers should be: to prepare our students to be critical thinkers, and to see that communication (because of the nature of language itself) is a complex human enterprise that goes far beyond describing or informing” (Bushnell, 1999, p. 177).
b.
3. Blyler
a. "In doing so, critical researchers wish to 'situate and analyze' their own research practices 'within larger structures of power and privilege,' asking 'whose interests are being served by our research efforts?'" (Blyler, 1998, p. 40).
b. Blyler's (1998) call for a blending of the researcher/participant relationship is complicated by her praise of feminist research wherein "the participants, rather than the researcher, identify the issues to be addressed" (p. 41). If that's the case, then research participants within industry, with industry-minded goals, may well direct research which is indeed explanatory and descriptive--just the things against which Blyler argues.
iii. against science, empiricism
1. blyler
2. herndl
3.
to be included:
(Blakeslee, 2009)

4. Methodology –
5. What is there, and can we measure it?
a. Law
b. William James?
c. Creswell
6. If we can measure it, how can we measure it?
a. Creswell
b. MacNealy
Methods: ethnography, which is one of the “comprehensive” methods alluded to by Charney, can record behavior within a cultural group (as in Winsor’s study). Culture, however, is unlikely to be replicated within the service course classroom, and therefore the behaviors prescribed/circumscribed by that culture cannot be replicated in a useful fashion.
Disciplinary Identity
Aggregable research adds to our sense of disciplinary identity by way of charting history in terms of practice.

What you aim for is consensus, but you have to be happy with reasonable assurance. Happy with what you can know.

Work with what you can get, not with what you’d like to have. Academics want to know the truth, but in the TC service course we’re more concerned with practice than with enduring truth; or rather, we’re more concerned with the truth about contemporary writing practice.
More concerned with actionable knowledge rather than universal truth.

The goal of technical communication is actionable knowledge rather than universal truth, so the research methods should reflect that goal. Knowledge that you can use to improve the human condition.

The existential despair felt by scientists and literary humanists is treated differently by each: scientists work to improve the human condition despite the existential realization of the aloneness of existence, and part of the way they do this is through collaboration—teamwork.

Surveying Instructors and Engineers

My procedure consists of two simple surveys, one delivered to instructors of TC service courses and the other delivered to civil engineers. The instructor survey has twelve questions: five are multiple-choice, one is true/false, three are open-ended, and three offer multiple choices as well as an open-ended response option. None of the questions asks the instructor to draw a conclusion or make an inference. Of the purely open-ended questions, one asks the participant to provide a definition of “revision,” one asks for a description of student group-writing activity in the classroom, and one asks the participant to describe the advantages of group revision versus writing and revising as an individual.

There are, however, problems inherent in surveys. For one thing, surveys assume content knowledge on the part of the participant. Since many instructors of TC service courses might not have TC training, education, or experience, their knowledge of technical communication may be limited. Yet the survey is directed to people already teaching a TC service course and asks only about their experience and opinions as instructors, so their knowledge of TC as a discipline or professional practice is not a limiting factor. Indeed, the purpose of the survey is to discover what, if any, group writing occurs in those classes; thus, the instructors may be the best source of such information.

Instructors are often unreliable sources of information about their own classes or teaching. A classroom instructor may have false impressions of student activity for a variety of reasons. An instructor may have too many students to observe at once, or may only be able to glean surface-level details concerning classroom activity. An instructor may also be mistaken about how well or how diligently students are working. Group activity is also often frowned upon in classrooms, since academic achievement is routinely tied to individual performance; thus, an instructor may have a negative outlook on any group activity in the classroom. Surveys are also problematic in that they ask respondents to report based on memory, and respondents may remember incorrectly or through the lens of their own prejudices/assumptions about what their students have done.

I have chosen surveys because they seem to be the best data-collection method for my study. For discovering what sort of group revision activities happen in a classroom, interviews would gather equally rich data; however, for my study to be useful it needs to encompass a number of schools and participants. Interviews would be unwieldy and time-consuming, and might not yield information any more valuable than a survey would. For the purposes of my study, problems associated with participant knowledge and memory would affect interviews just as much as surveys.

There are no publicly available records for determining classroom activity in a TC service course. While there are syllabi and course descriptions, these are only loose descriptors and do not offer several key pieces of information vital to this study. For instance, while a syllabus might indicate how many written assignments are required, that same syllabus might not include information on group revision activities because those activities might not result in a grade. In fairness, many institutions of higher learning are now putting more information into syllabi in an effort to communicate what takes place in classrooms. Future studies might include a comparison of how syllabi have changed over time to reflect public interest in higher education. For now, however, a survey is a better method of gathering information about classroom activity.

My study is also an attempt to gather a broad swath of information so as to provide some insight into how TC service courses are taught nationwide. Interviews on such a scale are not manageable at this time, and for broad-based data surveys work just as well.

Part of the purpose of this study is to offer a method by which TC service courses can be regularly updated. Service courses are, by nature, not well funded, and as I’ve mentioned before they are often taught (and administered) by non-tenure-track instructors who may not have extensive training or interest in research. A survey which can be regularly administered with beneficial results is a cheap, quick way to gather usable data on classroom activities and how they relate to workplace writing.

Surveys are not interactive, meaning that while the researcher may be gathering data, he or she is not actively conversing with the participants. Thus, an opportunity to build useful relationships may be missed. It seems clear, however, that many service courses are isolated from their primary stakeholders: future employers. Instituting a regular survey of local businesses and employers may be the first step in establishing a dialogue. Simply sending out a survey indicates an interest in serving the community, whether on or off campus.

My sample:

For my instructor survey, I am recruiting instructors of TC service courses at colleges and universities which have both a graduate degree in TC and an accredited college of engineering. I chose this population because it is likely to represent the “best” of what is offered in TC service courses: the instructors at such institutions are more likely to have some training and background in TC, they may have increased access to resources for professional development, and they are more likely to have an ongoing relationship with engineering faculty and engineering professionals. Those conditions stand in stark contrast to schools which have no TC degree, much less a graduate program, and no engineering program.

Because I work in a civil engineering department and have ready access to civil engineers, both academic and non-academic, I have elected to survey civil engineers. Civil engineers also offer the opportunity to study professionals with a broad range of writing experience, since they regularly communicate with a wide audience: clients in government, the military, city councils, and private citizens with various educational and business backgrounds. In contrast, aerospace engineers (for example) often communicate with only a small audience, such as other aerospace engineers and government employees. Thus, civil engineers need writing abilities that address the needs of a diverse audience.

For my survey of engineers, I have chosen to survey engineers contacted by the Advisory Council of the Zachry Department of Civil Engineering at Texas A&M, where I work. The Advisory Council is made up of engineers who, according to the department’s bylaws, “provide advice, support and counsel to the Department Head with the express purpose of helping to maintain the highest level of academic excellence reflecting the profession of Civil Engineering.” They are all degreed, licensed engineers, and they represent a wide range of engineering roles, from city manager and city engineer to military engineer and consulting engineer. The advisory council also represents a range of technical expertise, including water resources management, construction, structural, coastal and ocean, pavement, transportation, and other areas. In short, this group is made up of successful engineers who offer a variety of experience. They will be asked to distribute the survey to the engineers who work for them as well as to other civil engineers.
Texas A&M does not have a technical communication degree program or graduate study in TC, so I will not be distributing surveys to instructors at A&M.

Validity

Validity, as defined by MacNealy, is “the degree to which a specific procedure actually measures what it is intended to measure.” The research question for this dissertation is “How can we continually mine workplaces for usable information on their writing practices so that those practices can be performed in the classroom?” To answer MacNealy’s challenge, my study must produce results which (a) accurately reflect workplace writing, and (b) can be used in a TC service course to improve instruction. Therefore, the survey results must be evaluated in such a way that some standard of “usable information” is met.

In order to reach a standard by which my results may be evaluated, some of the survey questions are intended to determine what, if any, differences exist between how service course instructors and professional engineers perceive group writing. How important is group revision? What are the benefits of writing and revising in a group as opposed to performing those same activities as an individual? Answers from both instructors and engineers will provide some idea as to how these two groups value group revision, thus establishing a baseline by which service courses can be judged in terms of “keeping up” with business.

The surveys also ask which writing technologies are used to enact group revision. The aim of such questions is multifarious: technologies are the tools with which we work, and if service courses in TC are supposed to prepare students for writing in the workplace, then our classes should contain at least some of the technologies used to produce workplace writing. But finding out what writing technologies are used in the workplace goes beyond “skill building,” and reflects a concern articulated by Selber that we should “concern ourselves with such changes and encouraged computer literacies in our classrooms that consider the rhetorical, social, and political implications of computer-mediated communication and work” (Selber, 1994, p. 366). If, for instance, we find that workplace writing increasingly makes use of the “cloud” for not only storage and transmission, but for creation, revision, and analysis, then what are we to make of that increasing reliance upon corporate entities which now encourage us to move beyond stand-alone devices and rely instead on an increasingly public set of technologies and activities? To address that question we first need to know what kinds of writing technologies are used in the workplace for specific activities, such as group revision.



Threats to Validity

Threats to validity are described by MacNealy as “issues affecting the determination of causal relationships.” None of the questions in my surveys asks participants to address causes or causal relationships. Each question deals only with gathering information related to the activity of group revision, either in the classroom or in the workplace. In fairness, however, my research question implies causality in any explanation which addresses it. In asking “How can we continually mine workplaces for usable information on their writing practices so that those practices can be performed in the classroom?” I am suggesting that there is an answer to that question, and that the answer will exist in at least two parts: one, a suggestion as to how the challenge can be met, and two, a reason why that process meets the challenge. Determining why a process meets the challenge necessitates causal reasoning. MORE HERE.


Blakeslee, A. M. (2009). The technical communication research landscape. Journal of Business and Technical Communication, 23(2), 129-173.
Blyler, N. (1998). Taking a political turn: The critical perspective and research in professional communication. Technical Communication Quarterly, 7(1), 33.
Bushnell, J. (1999). A contrary view of the technical writing classroom: Notes toward future discussion. Technical Communication Quarterly, 8(2).
Charney, D. (2001). Guest editor's introduction: Prospects for research in technical and scientific communication--Part 2. Journal of Business and Technical Communication, 15(4), 409.
Miller, C. R. (1989). What's practical about technical writing? In B. E. Fearing & W. K. Sparrow (Eds.), Technical writing: Theory and practice. New York: Modern Language Association.
Selber, S. A. (1994). Beyond skill building: Challenges facing technical communication teachers in the computer age. Technical Communication Quarterly, 3(4), 365.