On 11th October NSMNSS and the SRA co-ran an event looking at social media research tools. Speakers came from a range of backgrounds and discussed a mix of qualitative and quantitative methodologies, including text, image, network, and geographical analysis. All slides can be found here: http://the-sra.org.uk/events/archive/ and presenters will be contributing to a blog series about social research tools, due to be released later this year, so keep your eyes peeled!
Steven McDermott kicked things off by discussing the idea that ‘data is an ideology, a fiction that we can’t take at face value’. In his session Steven not only discussed which tools he used, but urged researchers to critically engage with the information we get from these tools and with the biases it may carry. He concluded that social media data should be used as an ‘indicator’ (rather than a fact) alongside other methods, such as ethnography, in order to get the ‘full picture’.
Next, Wasim Ahmed talked about NodeXL, a free Microsoft Excel plug-in which he uses for Twitter analytics but which can also be used with Facebook, Instagram and more! The main focus of this session was the graph function of NodeXL, which allows the mapping of networks. The tool also has a graph gallery, which gives users access to historic data stored there. According to Wasim, NodeXL is very user-friendly, so he recommends downloading it and having a go at mapping your own data.
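NodeXL itself is point-and-click, but for the curious, the core idea behind its graph function (turning tweets into a network of who mentions whom) can be sketched in a few lines of Python with the networkx library. The tweets here are invented for illustration; real data would come from a Twitter export:

```python
# A minimal sketch of the kind of mention network NodeXL draws.
# The data below is hypothetical stand-in data, not a real export.
import networkx as nx
import matplotlib.pyplot as plt

# Each tuple is (author, mentioned_user) taken from one tweet.
mentions = [
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("dave", "alice"),
]

G = nx.DiGraph()
G.add_edges_from(mentions)

# Scale node size by in-degree: who gets mentioned most?
sizes = [300 + 600 * G.in_degree(n) for n in G.nodes]
nx.draw(G, with_labels=True, node_size=sizes)
plt.show()
```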
Moving on to developing tools for social media analysis, Luke Sloan from the COSMOS team introduced their analysis tool. Luke started off by saying that the programme was created for researchers who ‘don’t understand technology’, meaning that complex computing language is not required to use it. Like NodeXL, COSMOS is good at mapping, and can break down tweets by geography, gender and time, as well as identify popular words and phrases in tweets; this is particularly useful for content and sentiment analysis.
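COSMOS needs no coding, but the ‘popular words and phrases’ part of what it does boils down to something like this minimal Python sketch (the tweets are again made up):

```python
# Count word frequencies across a small, hypothetical tweet collection.
from collections import Counter

tweets = [
    "Loving the new transport plan for Cardiff",
    "The transport plan is a disaster",
    "Cardiff weather is a disaster today",
]

words = Counter()
for tweet in tweets:
    # Lowercase and skip very short words to reduce noise.
    words.update(w.lower() for w in tweet.split() if len(w) > 3)

# The most common terms are a starting point for content
# and sentiment analysis.
print(words.most_common(5))
```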
Philip Brooker then discussed social media analytics using Chorus. The majority of the session was interactive, with Philip demonstrating how to use Chorus with Twitter data. Chorus allows users to retrieve data from Twitter by searching for hashtags and phrases. A strength of this tool is that users can continually add data, building longitudinal datasets over time. It also has a timeline function, which can be used to see the frequency of tweets alongside different metrics (again, very useful for sentiment analysis), and a cluster explorer function, which allows users to see how different tweets and topics interact with each other. A function for gathering demographic information from Twitter profiles is also in development.
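As a rough illustration of the longitudinal and timeline ideas, here is a hypothetical Python/pandas sketch: each new batch of retrieved tweets is appended to one growing file, and tweet frequency is then counted per day. The file name and columns are assumptions for the sketch, not Chorus's own format:

```python
# Append newly retrieved tweets to a longitudinal dataset,
# then count tweets per day for a simple timeline view.
import pandas as pd

new_batch = pd.DataFrame({
    "created_at": ["2016-10-01 09:00", "2016-10-01 17:30", "2016-10-02 11:15"],
    "text": ["first tweet", "second tweet", "third tweet"],
})
new_batch["created_at"] = pd.to_datetime(new_batch["created_at"])

# Append to the growing dataset (created on the first run).
try:
    corpus = pd.read_csv("tweets.csv", parse_dates=["created_at"])
    corpus = pd.concat([corpus, new_batch], ignore_index=True)
except FileNotFoundError:
    corpus = new_batch
corpus.to_csv("tweets.csv", index=False)

# Tweet frequency per day: the backbone of a timeline.
print(corpus.set_index("created_at").resample("D").size())
```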
There were a couple of sessions on using social media for qualitative analysis; the first, from Gillian Mooney, was on using Facebook to recruit for and conduct research. Gillian emphasised that Facebook is good for stimulating discussion and debate, but she also identified a few practical and ethical drawbacks. Recruitment via Facebook seemed to be slow-moving, and Gillian suggested that Twitter may be a better way of recruiting participants. She also noted that Facebook research carries wider ethical implications because the researcher often actively participates on the platform, which blurs the line between researcher and participant. While this makes ethical research more difficult to conduct, she believes it makes for more vibrant research. She ended with a call for ethics boards to be more understanding of social media research, and for a clear and consistent research ethics framework across all platforms.
Sarah Lewthwaite continued with qualitative analysis by talking about developing inclusive ‘digital toolboxes’ so that research is accessible to all. Sarah stated that online research must be made accessible to everyone in order to get a better sample and richer data. While web accessibility is becoming more of a legal requirement for social media companies, there are still gaps in accessibility across platforms, so we need greater technological innovation in social media and research tools. Sarah used the ‘over the shoulder’ method (a remote desktop with screen sharing) to observe how some people with disabilities access and use social media.
Our final group of sessions was on image analysis.
Francesco D’Orazio discussed image (and more specifically, brand) coding and analysis using Pulsar, which works across a range of social media platforms, including Twitter and Instagram. To conduct the analysis, an algorithm is built alongside human coders to define certain concepts (i.e. image topics), search the images, and tag them with those concepts before clustering them. Francesco believes that this form of image analysis can do more for a brand than simple logo detection.
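Pulsar's pipeline is proprietary, so purely as an illustration of the tag-then-cluster idea, here is a hypothetical Python sketch using scikit-learn: a handful of human-coded examples train a simple classifier that tags new images with concepts, and images sharing a concept are then clustered into visual themes. The feature vectors are random stand-ins for real image features:

```python
# Hypothetical tag-then-cluster sketch; not Pulsar's actual method.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
coded_features = rng.random((20, 64))    # human-coded example images
coded_concepts = rng.integers(0, 3, 20)  # e.g. 0=logo, 1=product, 2=lifestyle
new_features = rng.random((100, 64))     # uncoded images to tag

# Tag new images with the nearest human-defined concept...
tags = KNeighborsClassifier(n_neighbors=3).fit(
    coded_features, coded_concepts).predict(new_features)

# ...then cluster within one concept to surface visual themes.
product_imgs = new_features[tags == 1]
clusters = KMeans(n_clusters=4, n_init=10).fit_predict(product_imgs)
print(np.bincount(clusters))  # how many images per theme
```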
Finally, Yeran Sun discussed using images to map tourist hotspots. Yeran used Flickr (an often-overlooked platform for research) and geo-clustered images via their metadata using R and QGIS (both free to use) to show popular tourist destinations. Often, images carry longitude and latitude tags, which allow for precise mapping. Used effectively, geo-tagging like this can provide the ‘best’ route for tourists to see all the popular hotspots or, inversely, ‘alternative’ routes for those who wish to stay away from popular tourist sites!
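Yeran worked in R and QGIS, but the geo-clustering step translates naturally into a short Python sketch with scikit-learn's DBSCAN, which groups nearby geotags into hotspots and marks isolated photos as noise. The coordinates below are invented for illustration:

```python
# Cluster photo geotags (latitude, longitude) into hotspots.
import numpy as np
from sklearn.cluster import DBSCAN

coords = np.array([
    [51.5007, -0.1246],  # hypothetical photos near Westminster
    [51.5010, -0.1240],
    [51.5081, -0.0759],  # hypothetical photos near the Tower of London
    [51.5079, -0.0762],
    [51.4700, -0.4543],  # an isolated photo, treated as noise
])

# eps is in degrees here (~100 m); haversine distance on radians
# would be the more careful choice for real data.
labels = DBSCAN(eps=0.001, min_samples=2).fit_predict(coords)
print(labels)  # cluster ids = hotspots, -1 = noise
```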