Chapter 12 – Discussion Questions (Making Research Decisions) # 5

The purpose of conducting research is to collect data that answers a research question. When designing a study, researchers must carefully construct not only the survey questions but the answer choices as well. Answer choices should be designed to minimize, if not eliminate, the chance of bias or error. Without such preventative measures, the results would be erroneous, biased, and therefore useless.

In order to accurately measure subjects' responses, researchers use one or more methods of data collection, including rating, ranking, categorization, and sorting. The answer choices are set up according to the collection method chosen. Perhaps the most frequently used data collection method is rating, which allows the subject to select an answer along a scale from favorable to unfavorable. However, the results of a rating scale are only as good as its design, and a poorly designed scale introduces bias.

Researchers can use rating scales in several different ways, and those who do not use a balanced rating scale will collect data with more bias than those who do. One of the problems in developing a rating scale is the choice of response terms. The examples below illustrate some widely used scaling formats:

a. Yes—Depends—No
b. Excellent—Good—Fair—Poor
c. Excellent—Good—Average—Fair—Poor
d. Strongly Approve—Approve—Uncertain—Disapprove—Strongly Disapprove

For a scale to be balanced, it must have an equal number of favorable and unfavorable responses. Scales "b" and "c" are not balanced because they provide more favorable responses than unfavorable ones. A quick way to verify this is to count the response terms on each side of the scale's midpoint, as sketched below.
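The balance check described above can be sketched in a few lines of Python. The favorable/unfavorable groupings are illustrative assumptions (for example, treating "Fair" as mildly favorable, as the analysis above implies), not definitions from the text.

    # A minimal sketch of a balance check for rating-scale response terms.
    # The groupings below are illustrative assumptions, not definitions from the text.
    FAVORABLE = {"yes", "excellent", "good", "fair", "approve", "strongly approve"}
    UNFAVORABLE = {"no", "poor", "disapprove", "strongly disapprove"}

    def is_balanced(scale):
        """Return True when a scale has equal favorable and unfavorable terms."""
        favorable = sum(1 for term in scale if term.lower() in FAVORABLE)
        unfavorable = sum(1 for term in scale if term.lower() in UNFAVORABLE)
        return favorable == unfavorable

    scales = {
        "a": ["Yes", "Depends", "No"],
        "b": ["Excellent", "Good", "Fair", "Poor"],
        "c": ["Excellent", "Good", "Average", "Fair", "Poor"],
        "d": ["Strongly Approve", "Approve", "Uncertain",
              "Disapprove", "Strongly Disapprove"],
    }

    for label, scale in scales.items():
        print(label, "balanced" if is_balanced(scale) else "not balanced")
    # Prints: a balanced, b not balanced, c not balanced, d balanced

Under these assumptions, scales "a" and "d" come out balanced while "b" and "c" do not, matching the discussion above.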

Another issue facing researchers is the inability to measure responses when the answer choices produce inconclusive results. Such results are created by scales "a" and "d", which offer unforced answers such as "Depends" and "Uncertain" that give the researcher no conclusive data to analyze. Too many uncertain responses leave the researcher with no idea of respondents' attitudes toward the research topic and can also introduce bias (Cooper, 2011). Careful attention must be paid during the design phase of research because the design ultimately affects the end results.

Results may be inconclusive or of no value to the researcher if they are biased or cannot be interpreted. Therefore, it is important for the researcher to choose an appropriate collection method for the sample and to implement it without bias and in a way that yields conclusive data.

Chapter 5 – Discussion Questions (Terms in Review) # 1-3

Question #1

Sometimes companies, and the managers within them, are plagued by a dilemma that no one can quite put their finger on. Managers and researchers can often pin down the problem through exploratory research. Exploratory research draws on both internal and external data, which can be sourced in a number of ways, including literature reviews, internal report reviews, and even interviews.

By gathering such information, a researcher can turn a management question into a research question. However, not all sources are equal, and therefore managers and researchers alike must apply proper evaluation factors to determine whether the information is valuable to the management decision process. Sometimes exploratory research using secondary data sources is all a manager needs to make a management decision. Encyclopedias, textbooks, handbooks, magazine and newspaper articles, and most newscasts are considered secondary information sources (Cooper, 2011). However, these sources are of little value to the researcher if the information comes from an untrustworthy source.

First, the researcher must evaluate the purpose, or hidden agenda, of the information source. The first question to ask is why the information exists and, second, what its purpose is. The researcher should check for bias in the information presented (Cooper, 2011). It is not uncommon for the author of the data to have their own opinion and to present the data in a biased way, as is often the case when a member of the Republican party presents information about President Obama.

The second evaluating factor is scope, the depth of the topic coverage. The information should be neither too broad nor too thin; it should include factual details such as the time period covered and any geographical limitations (Cooper, 2011). The more information the researcher has to go on, and the more up to date it is, the more worth the information has. The third evaluating factor is authority: the level of the data (primary, secondary, tertiary) and the credentials of the source (Cooper, 2011).

Authority relates to the credentials of the source and whether the information is even factual. Data that lacks credentials cannot be verified; the information may therefore be poor or distorted and should not be used in the management decision-making process. The last two evaluating factors are audience and format. The audience is who the material caters to in terms of age group, beliefs, attitudes, racial profile, or any other specific group of people. Managers should be careful to avoid information that is too biased, such as material aimed specifically at a religious or political group. Other sources to avoid include those written to entertain school-aged children.

That type of information will be tainted and will not prove useful to the manager in making a decision. Lastly, the format of the source should be evaluated. For example, is the information easy to find, is the design appealing, and is the help section helpful (Cooper, 2011)? Researchers should also avoid sources that present information in a confusing manner, make the information hard to find, or offer no help. This type of source wastes the researcher's and manager's time.
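Taken together, the five factors (purpose, scope, authority, audience, and format) amount to a simple checklist. The sketch below is one illustrative way to record that checklist; the wording of each question and the pass/fail scoring are assumptions for illustration, not part of Cooper's text.

    # A minimal sketch of the five-factor source evaluation as a checklist.
    # The factor names follow the essay; the question wording is illustrative.
    EVALUATION_FACTORS = {
        "purpose": "Why does the information exist, and is it free of a hidden agenda?",
        "scope": "Is coverage deep enough, with time period and geographic limits stated?",
        "authority": "Is the source credentialed, and is the data primary, secondary, or tertiary?",
        "audience": "Is the material aimed at a general audience rather than a narrow group?",
        "format": "Is the information easy to find and clearly presented?",
    }

    def evaluate_source(answers):
        """Return the factors a source fails, given {factor: True/False} answers."""
        return [factor for factor, passed in answers.items() if not passed]

    # Example: a source with a clear agenda and shallow coverage fails two factors.
    failed = evaluate_source({
        "purpose": False,
        "scope": False,
        "authority": True,
        "audience": True,
        "format": True,
    })
    print("Failed factors:", failed)  # Failed factors: ['purpose', 'scope']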

Evaluating information in this way eliminates sources that are full of bogus or biased information of no use to the manager. Information that is too one-sided, lacking in detail, or unverified does not help the manager make a decision. Instead, managers need the most accurate, timely, and trustworthy information they can find. Sources that pass the five evaluation factors help managers make decisions because the information can be trusted, analyzed, and used to make the best decision possible. When managers are faced with dilemmas, they may find themselves scouring the endless web of information found on the internet.

The internet is full of information on nearly any topic imaginable, but that information is not useful if it cannot be verified as having worth to the decision-making process. Therefore, managers and researchers must evaluate the worth of the information using an evaluation process that eliminates worthless sources. Researchers must evaluate everything about a source, from its credentials to the purpose of the information. Doing so aids the decision-making process by eliminating faulty sources.

Question #2 – Define the distinctions between primary, secondary, and tertiary sources in a secondary search.

Depending on the research question, researchers will use either a primary or a secondary research approach. The first collects data that does not already exist, while the other looks for information that does. Using the primary approach the researcher creates the data; using the secondary approach the researcher searches for existing data. Researchers must first understand the different types of sources and how to collect them.

From there, the researcher is armed with the knowledge of how to get the information they are looking for based on the type of research they are conducting. Primary sources are original works of research such as court rulings, regulations, laws, memos, letters, complete interviews or speeches, and most government data (Cooper, 2011). This information remains in its raw and original state; in other words, it is original and not another party's interpretation. Secondary data is the interpretation of primary data (Cooper, 2011). Some sources of secondary data include encyclopedias, textbooks, magazines, newspaper articles, and newscasts (Cooper, 2011).

The information is secondary because it is an interpretation of the original information by another party; it is no longer original but presented through a secondary source. Tertiary sources are interpretations of secondary research, as well as indexes, bibliographies, and other finding aids (Cooper, 2011). Tertiary sources are collections built from primary and secondary research. Other examples include guidebooks, manuals, and directories, all of which tell researchers where to find primary and secondary research.

Under the secondary search approach, researchers search for existing primary, secondary, and tertiary information. Through this approach the researcher obtains primary information that already exists, along with secondary information. The researcher will not be conducting any surveys, interviews, or other original research to obtain primary information, but rather gathering the results of such research from existing sources. Under the primary approach, all primary information would be created rather than derived from existing data. Sometimes the research question forces researchers to create their own data, called primary data, through a primary research approach. Other times the researcher can obtain existing primary and secondary data through a secondary approach.

The latter is more budget- and time-friendly, saving the researcher both money and time. However, the research method chosen depends on the research question at hand.

Question #3 – What problems of secondary data quality must researchers face? How can they deal with them?

Researchers face many problems when using secondary data. The quality of secondary data must be verified, and the researcher must be trained to detect when the data is untrustworthy.

Some of the issues researchers face are verifying and authenticating the data. The researcher must check the credentials of the source and be able to detect what the purpose of the information is. Finding secondary research is not difficult, but verifying its worth and value is. The internet is full of information sources such as blogs, news articles, and copies of published work, as well as rogue authors who may have the expertise to write an article but wrote it with a specific intent. Since nearly anyone can publish information online, it is difficult for the manager to determine whether the information was written by a ninth grader as a school project or whether it has any scholarly value at all.

Aside from authenticating the validity of the information, the researcher also faces the challenge of finding quality information in terms of depth and format. At times the information may be verified but not in-depth enough, or it may not provide enough quality detail for the researcher to use. Other times the source might not be up to date and leaves gaps of unanswered questions. Then there is the issue of data being presented in a confusing or hard-to-find manner. Because of these issues, the five-factor evaluation process described above helps researchers separate quality data from meaningless data. Finding quality secondary research is not always simple if one does not have access to private libraries of scholarly information.

When left to their own devices, researchers are faced with a mass of secondary information stemming from multiple sources, all of which needs to be verified. Trained researchers can quickly skim through information and are keener at spotting quality information.

On companion website
* Read the case study, State Farm: Dangerous Intersections. Answer discussion questions 1 through 5.

1) Identify the various constructs and concepts involved in the study.

The theory driving State Farm to conduct studies identifying the most dangerous intersections in the United States was risk reduction. Through such studies, the auto insurance company was able to identify the most dangerous intersections in the hope that cities would address the issue by constructing safer intersections. The idea was to identify these dangerous intersections and then offer grants to the cities with the most dangerous ones in an effort to alleviate engineering costs. Using its own internal data, State Farm compiled and published a list of the top 10 most dangerous intersections in America.

2) What hypothesis might drive the research of one of the cities on the top 10 dangerous intersection list?

Landing on the top 10 most dangerous intersections list might prompt researchers to take notice of a particular city and want to research it further. There are many hypotheses and intriguing questions a researcher might want answered about a particular city on the list. Any number of hypotheses can be drawn from the study:

* The volume of cars on the road leads to more accidents.
* Poorer cities experience more accidents due to the lack of funding for new engineering.
* Cities with a large population of aging adults are more dangerous.
* Cities with a larger population of young adults are more dangerous.

3) Evaluate the methodology of State Farm's research.

To ensure the quality of the research, State Farm used internal accident reports. The company excluded volume reports and police reports, since volume reports were insignificant to the study and police reports are inconsistent. Accident rate data was also excluded, because an accident rate is merely the number of accidents divided by traffic volume.

State Farm further created a classification system to classify each accident based on severity, where severity depended on the amount of bodily injury paid out for the accident. Additionally, only reports in which the State Farm driver was at fault were used for the study. Using internal reports supported the accuracy of the information and was more reliable than other sources. The exclusion of accident rate data was also significant because a rate does not show exactly how many accidents occurred or how severe they were.
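A severity classification keyed to bodily injury payouts could look something like the sketch below. The dollar thresholds and category labels are hypothetical, since the case study does not specify them.

    # A hypothetical sketch of a payout-based severity classification.
    # The thresholds and labels are illustrative; the case study does not give them.
    def classify_severity(bodily_injury_payout):
        """Map a bodily injury payout (in dollars) to a severity category."""
        if bodily_injury_payout == 0:
            return "property damage only"
        elif bodily_injury_payout < 10_000:
            return "minor injury"
        elif bodily_injury_payout < 100_000:
            return "serious injury"
        else:
            return "severe or fatal"

    # Example: summarize severity counts for a set of at-fault claims.
    claims = [0, 2_500, 45_000, 250_000, 0, 8_000]
    counts = {}
    for payout in claims:
        label = classify_severity(payout)
        counts[label] = counts.get(label, 0) + 1
    print(counts)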

However, excluding accidents that were not the fault of a State Farm driver could be affecting the actual results in a negative way. In my opinion, all accidents on file should be included in the report to provide the most accurate picture. Evaluating accidents based on severity was a significant factor because it distinguished mere fender benders from major accidents, including fatalities, which mark the intersections that are the most dangerous and need the most help.

4) If you were State Farm, how would you address the concerns of transportation engineers?

The concerns of the transportation engineers involve cost and determining what is actually causing the dilemma. To address the engineers' concerns, additional surveys would be conducted of surrounding residents, public drivers, and the engineers themselves. Experts would also have to be consulted to provide an analysis of the situation and the intersection, as well as recommendations for changes.

Immediate resolutions could involve the engineers replacing signs or placing new signs on the road until a more permanent resolution is reached.

5) If you were State Farm, would you use traffic volume counts as part of the 2003 study? What concerns, other than those expressed by Nepomuceno, do you have?

If I were State Farm, I would not use traffic volume counts as part of the 2003 study because I believe they cloud the results. Traffic volume counts dilute the results: the research question is which intersections are the most dangerous in America, not which are the most dangerous per volume of travelers. The end result of such a study could portray an intersection as being less dangerous than it really is. For example, if an intersection reported 2,000 car accidents in a year but also reported two million travelers that same year, the results would appear less concerning because the accidents are a small amount per volume, as the calculation below illustrates. The study should be based solely on the number of accidents at any given intersection.
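The dilution effect in that example can be made concrete with a quick calculation. Both intersections and their figures below are hypothetical.

    # A quick illustration of how volume normalization can mask raw accident counts.
    # Both intersections and their figures are hypothetical.
    intersections = {
        "Intersection A": {"accidents": 2_000, "travelers": 2_000_000},
        "Intersection B": {"accidents": 500, "travelers": 100_000},
    }

    for name, data in intersections.items():
        rate = data["accidents"] / data["travelers"]
        print(f"{name}: {data['accidents']} accidents, "
              f"rate = {rate:.4f} accidents per traveler ({rate * 100:.2f}%)")

    # Intersection A has four times as many accidents as B (2,000 vs. 500),
    # yet its per-traveler rate (0.10%) is lower than B's (0.50%), so a
    # volume-normalized ranking would call A the "safer" intersection.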

My only other concern is that the study does not include at-fault drivers who are not insured by State Farm. The study should include anyone State Farm has data on, regardless of insurance coverage.