
Essay: How Theory, Research and Evaluation Serve Public Folklore Projects

Published: 1 April 2019



Historically, there was a tension between “pure research” and “applied practice”. Sixty years ago, folklorist Richard Dorson (1950) famously dubbed public folklore “fakelore”, fueling heated debates over public folklore and its place and deepening the division between theory and practice (Baron, 1992, p.309). Dorson directed his criticism at Benjamin Botkin, whom he saw as a folklore “popularizer” who stood in the way of research by simplifying tradition for mass consumption. Botkin answered by pointing out the weakness of theorizing folklore for its own sake, stating that while “the study of folklore belongs to the folklorist, the lore itself belongs to the people who make it or enjoy it” (Botkin, 1953, p.199). By the mid-1980s, the division between “academic” and “public” folklorists prompted Barbara Kirshenblatt-Gimblett to ask whether all of this arguing comes down to “a mistaken dichotomy.” In perpetuating this dichotomy, academic programs in folklore have “consistently refused to examine their own essentially and inescapably applied character” (1988, p.141). Public folklorists, for their part, have contributed to the chasm because while “folklore in the public sector . . . has its own intellectual tradition,” much of it “remains to be written” (p.149).

Some academics still believe that the work of public folklorists lacks theoretical depth, which undermines public folklore. The division is not what it used to be, and attitudes towards public folklore have changed over the years; still, it is important to show that public folklore engages with the same theoretical discussions as academic folklore.

Some parts of public folklore receive too little attention. In this paper I look at two of them, evaluation and research methods, and argue for their importance. Public folklore can and should have a strong theoretical foundation; the methods we use are crucial, and the problematic aspects that come with public folklore need to be discussed.

No matter what project you create, whether a public folklore project or a business project, evaluation is a crucial element, often overlooked or underestimated. “Evaluation is a process that takes place before, during and after an activity. It includes looking at the quality of the content, the delivery process and the impact of the activity or programme on the audience(s) or participants” (Research Councils UK, 2011, p.2).

Evaluation is crucial for determining whether a project achieved what it set out to do, how well it was implemented, and what impact it has had. It is an opportunity to reflect critically on activities and processes, their impact and value. This knowledge can be used internally by the team to drive improvement and externally to demonstrate achievements.

Evaluation is more than just information gathering. It is a reflection on what the activity or project means, how it can be interpreted, and who contributes to your data, from participants to stakeholders. Analyzing that information allows you and your team to improve the project, achieve its goals, and work efficiently. Data collection in itself is pointless without thorough analysis and reflection (Research Councils UK, 2011, p. 3).

Evaluation is often seen as something needed only for submission to funders, as proof of success. Most funding organizations and agencies require evaluation reports to see the impact of the project. However, evaluation should not be treated as an unwanted nuisance you are forced into; rather, it should be used as an ongoing management and learning tool to improve an organization's effectiveness: not a check mark at the end of the project to satisfy requirements, but a useful method of tracking progress and success that can yield new perspectives on activities and potentially improve the project. It is imperative to conduct internal evaluations to gather information about programs and make better decisions about their implementation. The crucial part is to make internal evaluation an ongoing process at every level of an organization and in all program areas. It is also important to involve all of the program's participants: staff and sponsors will have different perspectives on the same issue. While it is often impossible to make everyone happy due to limitations, doing so at least opens a conversation that can lead to a compromise.

Evaluations should be done not only internally, but on a larger scale as well. External evaluations conducted by someone from outside the project or organization can provide a different point of view, surface findings overlooked by the staff, and emphasize aspects the team did not pay enough attention to. Usually these are done for funding purposes; however, it might be a sound idea to invite an external evaluation, especially for a big project.

There are two types of evaluation: formative and summative. Formative evaluations are ongoing and are conducted during program development and implementation; they help direct the project toward its goals and improve the program. Summative evaluations are conducted once programs are well established and provide information on how well the program is achieving its goals. Within the categories of formative and summative, there are different types of evaluation and methods (Rossi et al., 2004).

Common types of formative evaluation (Spaulding, 2008):

1) Implementation evaluation, which monitors the fidelity of program delivery.

2) Needs assessment, which identifies who needs the program, how much they need it, and what possible ways the need can be met.

3) Structured conceptualization, which assists stakeholders with defining the program, the target audience, and the possible outcomes.

4) Process evaluation, which investigates the process of delivering and implementing the program, including the possible alternatives.

Common types of summative evaluation:

1) Goal-based evaluation, which determines if the intended goals of a program were achieved.

2) Outcome evaluation, which scrutinizes whether the program caused verifiable effects on specifically defined target outcomes.

3) Cost-effectiveness and cost-benefit analysis, which determine the efficiency of a project by standardizing outcomes in terms of their dollar costs and values.

4) Impact evaluation, which encompasses the overall effects of a program, whether they were intended or unintended, by asking what impact the program had on a larger scale (for example, on a community).

An evaluation can use quantitative or qualitative data, and often will include both types. Both methods provide important information for evaluation, and since each has its own limitations, combining them generally provides the best overview of the project and allows more flexibility.

Quantitative data answers such questions as “How many?”, “Who was involved?”, “What were the outcomes?”, and “How much did it cost?” There are numerous ways to collect quantitative data, including questionnaires, surveys, observation, and reviews of databases, and each method has different ways of implementation. For example, surveys can be distributed online or in person, and can involve writing responses on a form or answering questions over the phone (Holland et al., 2005; Garbarino et al., 2009). While gathering data, it is crucial to choose the right method; the choice will depend on resources, circumstances, and the goal of your evaluation. What type of information do you want to get out of the evaluation? That question will drive the choice of method.

“Quantitative data measure the depth and breadth of an implementation (e.g., the number of people who participated, the number of people who completed the program). The strengths of quantitative data for evaluation purposes include their generalizability (if the sample represents the population), the ease of analysis, and their consistency and precision (if collected reliably)” (Clinical and Translational Science Awards Consortium & Community Engagement Key Function Committee Task Force on the Principles of Community Engagement, 2011, p. 175). There are obvious limitations to quantitative data, such as a lack of context, poor response rates, and difficulties in measurement. They provide an overview of what happened, but usually not why.
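To make the “How many?” side of evaluation concrete, the sketch below tallies a handful of survey responses in plain Python; the data and field names are invented for illustration:

```python
# A minimal sketch of tallying quantitative survey data; the responses
# and field names below are hypothetical.
from collections import Counter

responses = [
    {"attended": True, "rating": 4},
    {"attended": True, "rating": 5},
    {"attended": False, "rating": 3},
    {"attended": True, "rating": 4},
]

# "How many?" -- count participants who attended.
attended = sum(1 for r in responses if r["attended"])

# "What were the outcomes?" -- mean rating and its distribution.
mean_rating = sum(r["rating"] for r in responses) / len(responses)
rating_counts = Counter(r["rating"] for r in responses)

print(f"{attended} of {len(responses)} attended; mean rating {mean_rating:.1f}")
# prints: 3 of 4 attended; mean rating 4.0
```

Counts like these show what happened, but, as noted above, not why; that is where qualitative data comes in.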

Qualitative data answer such questions as “What is the value added?”, “Who was responsible?”, and “When did something happen?” Qualitative data can also be collected using various methods, the most common being interviews and participant observation (Patton, 2002).

One of the strengths of observation is the context it provides, which allows for better results and a fuller understanding of a situation. It helps explain behaviors and motivations, asking for reasons, not just facts. The researcher sees what is actually happening, which can change the results tremendously, as participants often lie or omit the truth in simple questionnaires and surveys (Ericsson et al., 1993). While it is a useful method, it also has limitations: it requires resources, time, and willingness from the people you want to observe.

Context is, in general, a strength of qualitative data: it explains complex issues and answers the “why” and “how” behind the “what.” However, it also complicates analyzing and interpreting the data, requiring more substantial human and financial resources (Patton, 2002).

Choosing an appropriate evaluation can be tricky, as there is a great variety of them, each with its own advantages and disadvantages. For example, an email survey can gather data quickly and present it in a legible format; however, it will not reach those without technological access, and people are less motivated to respond than to a survey physically given to them. A pen and paper questionnaire needs less technical support and is more accessible, but can be harder to distribute and collect, and analyzing the responses takes more time and requires a person to do it, whereas online survey software can analyze responses for the researcher (betterevaluation.org). There is also a question of motivation. Often the team behind a project does not want to spend time and energy on thorough evaluation and provides only a superficial one. However, if you really want your project to succeed, or to know whether it was successful, evaluation becomes an integral part. It improves not only the project but the people working on it as well: the lessons learned from a well-done evaluation can help in future projects and endeavors.

There are some situations where evaluation may not be a good idea: when the program is unstable, unpredictable, and has no consistent routine; when those involved cannot agree on what the program is trying to achieve; or when a funder or manager refuses to include important and central issues in the evaluation (Thomson & Hoffman, 2003). In general, however, evaluation is important and should not be overlooked.

It is not so simple to just mandate evaluation. There are many challenges that come with it, which is possibly why not enough attention is paid to it. From the outset it should be acknowledged that seeking to place a value on the work public folklorists undertake is a difficult and frequently controversial task. There are challenges in defining and understanding certain terms, as well as cultural and political issues. When we talk about “cultural” or “social” impact, how do we measure it?

There are many different definitions and understandings of value, and these can mean very different things to different people. Some of them include: educational value; cultural value; intrinsic value; option value; heritage value; economic value; public value; social value; financial value; blended value; instrumental value, etc. (Kelly & McNicoll, 2011, p.6). One of the big discussions is whether ‘value’ can be assessed objectively or is subjective in nature, and whether something can have an ‘intrinsic value’ that is beyond quantification.

The lack of a common terminology across disciplines creates a problem, especially since folklore is itself highly interdisciplinary and often works with other fields. A project frequently involves people from various disciplines with different backgrounds. Two things can help avoid the problems that arise in such situations: developing a common terminology across disciplines, or establishing terminology and concepts before a particular project begins. The latter is not ideal, as it works internally but encounters the same problems externally (Kelly & McNicoll, 2011, p.7).

There is no single recognized approach to assessing value; each method comes with its own advantages and limitations. Moreover, there is no consensus on defining ‘value’. What exists is a variety of tools and methods, sometimes very similar, but not necessarily consistent in approach or possessed of solid theoretical foundations. This has been noted by members of the SROI network in their blog commentary on the subject of ‘lack of consistency’:

“…The biggest problem that is faced by all of us interested in social value, impact, returns – whatever language you prefer – is the lack of consistency. And yet I still keep hearing ‘we can’t support one approach’ or ‘organisations should be able to choose methods that are most appropriate to them’ or ‘small and start up organisations should be able to do something simple’.”

A similar message is found in a report about value measurement in the cultural sector:

“The cultural sector faces the conundrum of proving its value in a way that can be understood by decision-makers …it will not be enough for arts and culture to resort to claiming to be a unique or special case compared with other government sectors, the cultural sector will need to use the tools and concepts of economics to fully state their benefits in the prevailing language of policy appraisal and evaluation….” (O’Brien, 2010, p. 37)

So, what is the verdict? What is the best approach? As a folklorist, my first impulse is to go with qualitative methods and data. However, public folklore exists in a reality of funding organizations, governments, and businesses: entities that prefer quantitative data. In all my research I did not find one method or approach that would work for all types of projects. The best advice is to combine qualitative and quantitative methods. Sally Fort, in her book “Public Engagement: Evaluation Guide” (2011, p.3), offers these guidelines:

“• Events and projects are all unique.

• Be sensitive and use common sense.

• Use the most appropriate methods for your work and audiences, participants or visitors.

• Check which information you need to collect for your event or project.

• Only collect information you will be able to analyse.

• Explain to participants why you are collecting the information.

• And that contributing information is optional for them.

• Use a combination of visual, auditory and kinaesthetic methods.

• Keep your aims and objectives in mind when using or creating evaluation tools.

• Plan to collect both qualitative and quantitative information; use a combination of open and closed questions to help this process.

• Put yourself in the shoes of your participants, make it easy for them to complete your evaluation requirements.

• Consider how the learning from your evaluation will be shared with those involved and more widely”.

This is broad advice; when it comes to a particular project, you need to assess which methods are best in that specific situation. It requires a public folklorist to be familiar with a wide range of approaches and methods and to stay up to date with current discussions. It also means that this is a topic public folklorists need to engage with: there should be more articles, more discussions, more concern. However, not much has been done in that regard. I myself did not think of evaluation or research methods when contemplating my future as a public folklorist or potential ideas.

As I started to think these topics over and research them, I came to a few realizations I want to point out. One of them is the lack of use of Social Network Analysis in folklore. I was introduced to it back in my undergraduate studies, in a Social Sciences Research Methods class, but I did not encounter it while studying in the Folklore program. I did not think of it again until my work term, where I suggested using it to illustrate the impact of the Office of Public Engagement. Only then did I realize that we never talked about it in Folklore classes; and that, in my opinion, should change.

Social network analysis (SNA) “is the process of investigating social structures through the use of networks and graph theory” (Otte & Rousseau, 2002, p.441). It analyzes the connections between individuals, groups, or institutions, who are represented as nodes (actors), with their connections represented as edges (links). Social network analysis emphasizes interaction, the relationships between actors, rather than individual behavior. By examining a network, researchers can determine how its structure influences the way actors/nodes function.

One social network can include actors of different types, with different attributes, associated with diverse types of interactions of varying intensity. Social actors can be individuals, social groups, organizations, events, cities, or countries. Links are understood not only as communication links between actors, but also as links for the exchange of various resources and activities, including conflict relations. Thus, network models treat various social, political, and economic structures as stable patterns of interaction between actors. A special place is occupied by cognitive social networks, which reflect each actor's opinion about the relationships of other actors in the network (D'Andrea, 2009).

Dr. Daning Hu (2012, slide 24) explains in their presentation the differences between a social network analysis and a non-network explanation. According to them, Social Network Analysis:

– refers to the set of actors and the ties among them

– views characteristics of the social units as arising out of structural or relational processes, or focuses on properties of the relational system itself

– includes concepts and information on relationships among units in a study

– takes as its task understanding the properties of the social (economic or political) structural environment, and how these structural properties influence observed characteristics and associations among characteristics

– treats relational ties among actors as primary and attributes of actors as secondary

– recognizes that each individual has ties to other individuals, each of whom in turn is tied to a few, some, or many others, and so on.

Social Network Analysis uses several concepts and terms, such as density, centrality, in-degree, out-degree, and sociogram (Laat et al., 2007, pp.87-103).

Density refers to the overall level of connection in a network: the number of connections present divided by the total number of connections possible (Laat et al., 2007, pp.87-103).

Centrality refers to the behavior of individual actors/nodes within a network. It measures the extent to which an actor/node interacts with others in the network: the more an actor/node connects to others, the greater its centrality. Centrality has variants such as in-degree and out-degree (Laat et al., 2007, pp.87-103).

In-degree centrality takes a specific actor/node as the focal point and counts incoming interactions: how many times other actors/nodes direct interaction toward it (Laat et al., 2007, pp.87-103).

Out-degree centrality also takes a specific actor/node as the focal point, but analyzes its outgoing interactions: how many times the focal actor/node initiates interaction with others (Laat et al., 2007, pp.87-103).
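To make these measures concrete, here is a minimal sketch in plain Python (no SNA package required) computing density, in-degree, and out-degree on a small directed network; the actor names and ties are invented for illustration:

```python
# A minimal sketch of the SNA measures described above on a tiny
# hypothetical network. A directed edge (a, b) means "a interacted with b".

edges = [
    ("Ana", "Ben"), ("Ana", "Cara"), ("Ben", "Cara"),
    ("Cara", "Ana"), ("Dev", "Cara"),
]
nodes = {n for edge in edges for n in edge}

# Density: ties present divided by possible directed ties, n * (n - 1).
n = len(nodes)
density = len(edges) / (n * (n - 1))

# Out-degree: how many times an actor initiates interaction;
# in-degree: how many times an actor is the target of interaction.
out_degree = {v: sum(1 for a, _ in edges if a == v) for v in nodes}
in_degree = {v: sum(1 for _, b in edges if b == v) for v in nodes}

print(f"density = {density:.2f}")  # 5 ties out of 12 possible
print("in-degree:", in_degree)     # Cara receives the most interaction
```

Tools like Gephi or UCINET compute these same measures (and many more) automatically, but the underlying arithmetic is exactly this.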

Researchers can choose from a large number of software packages developed for Social Network Analysis and network visualization. Some of them are: NetForm, IKNOW, KrackPlot, Gephi, UCINET, FATCAT, MultiNet, GLAD, SNAPS, NEGOPY, GRADAP, InFlow, gem3Ddraw, Moviemol, STRUCTURE, daVinci, GraphEd, GraphViz, MatMan, PermNet, etc.

As folklore, whether public or academic, works extensively with people and their interactions, Social Network Analysis can be a great tool. Aside from providing analysis, it can also illustrate the impact of projects and activities by showing what and whom the projects affected and to what degree. SNA is particularly useful when dealing with large numbers of people: it helps find the key people and relationships and shows how information spreads and interactions are made.

There is also a new way to interact with people, create projects, research, and evaluate: something that will change all the sciences and that should already be part of the conversation, as in recent years it has become a reality. I want to talk about VR, or virtual reality.

The term “virtual reality” (VR) refers to an immersive simulation. “In general … the term virtual reality refers to an immersive, interactive experience based on real-time 3-D graphic images generated by a computer” (Pimental & Teixeira, 1995, p.15). “Our preferred definition is an immersive experience in which participants … view stereoscopic or biocular images, listen to 3-D sounds, and  are free to explore and interact within a 3-D world” (Pimental & Teixeira, 1995, p. 91).

It would be a lie to say that virtual reality technologies have not been used in the social sciences. However, they were mostly used in large, expensive labs that had the resources to develop or buy them. Nowadays VR technology has moved to another level and is becoming more and more popular among researchers. Thanks to enhanced realism, affordable costs, and increased possibilities for application, VR may well be on its way to becoming a staple research method for social scientists (van Gelder, 2017).

A person using virtual reality equipment is able to look around and/or move within the artificial world and to interact with its features and items. Virtual reality is accessed by wearing a VR headset, usually accompanied by controllers. VR headsets are goggles with a screen in front of the eyes; programs may include audio through speakers in the headset or external headphones. Some controllers also provide vibrations or other sensations.

Virtual reality is used not only for gaming or scientific experiments, but also as a way to communicate by creating a virtual presence: you are able to see people in 3D in real time and interact with them.

According to Kelly (2016), around 230 companies were developing VR-related products (headsets, software, games, etc.) by 2016, with Facebook employing 400 people for VR development. Other companies (Google, Apple, Amazon, Microsoft, Sony, and Samsung) also financed and established departments and groups dedicated to VR technologies. On April 5, 2016, HTC shipped the first units of its HTC Vive SteamVR headset. This was the first major commercial release of a virtual reality technology, allowing ordinary people to access a technology previously utilized only by a few (Prasuethsut, 2016).

Now VR is a technology available not just to the most high-tech labs but to ordinary people around the world. This changes a lot: it opens a way to communicate and interact with audiences through VR technology. The cheapest VR headsets sell for about $20 (Google Cardboard), which makes the technology very accessible.

In my opinion, public folklorists should be more open to new technologies and approaches. There are already virtual museums that use VR technology, but the possibilities are endless. It is a way to preserve culture and to reach out to people; it allows people with disabilities to participate in conversations and activities, and it offers a creative space with no limitations.

Folklorists are often stuck in the past, but folklore is present and future. We should not be afraid of these new technologies; we should be the first to embrace them. Instead of using the same methods and approaches, we should look for new ideas and dare to experiment with them. Folklorists look at so many different topics but forget that how you look at them matters as much, if not more. The search for the best ways and practices should be at the front line of our discipline. I do not have an answer as to which evaluation method is best or what new technologies we will soon have access to, but I do know that I am going to look for them. And that is one of the biggest lessons of my work term.
