Some challenges of collecting data from jobseekers on social media

Part of my survey data collection strategy involved targeting users of careers services via Skills Development Scotland’s (SDS) various social media accounts. Rather naively, I thought this method would be a cheap, quick, and easy way to gather potentially hundreds of responses from young jobseekers.

There was some logic to my misplaced assumption. On Facebook alone, for example, there are at least twenty-one local SDS pages. Some of these have hundreds of likes, and some have thousands. Additionally, SDS have a centralised Twitter page with over 16,000 followers, and My World of Work Twitter and Facebook accounts with 6,000+ followers and 13,500+ likes, respectively.

Now, there is likely to be some crossover as far as the followers and “likers” of these pages are concerned (i.e. some of them will have liked more than one of the pages). Even so, I thought that my survey had the potential to reach at least 20,000 jobseekers. On that basis, even a 2% response rate would have given me about 400 responses – which I thought was a cautious estimate.
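This kind of back-of-the-envelope estimate can be sketched in a few lines of Python. The follower figures come from the accounts mentioned above; the 30% overlap discount is an arbitrary illustrative assumption on my part, since the true crossover between the pages is unknown:

```python
def expected_responses(page_audiences, overlap=0.3, response_rate=0.02):
    """Rough estimate of survey responses from social media dissemination.

    page_audiences: follower/like counts for each page
    overlap: assumed fraction of duplicate follows across pages (a guess)
    response_rate: assumed fraction of reached users who respond
    """
    total_follows = sum(page_audiences)
    unique_reach = total_follows * (1 - overlap)
    return unique_reach * response_rate

# Figures mentioned above (central Twitter account plus the two
# My World of Work accounts); the local Facebook pages are omitted here.
print(expected_responses([16_000, 6_000, 13_500]))  # roughly 500 on these assumptions
```

The point of parameterising the overlap and response rate is that both are guesses, and the estimate is very sensitive to them: halving the response rate halves the expected responses.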


The reality was very different from the one I’d anticipated, and most of the challenges stemmed from the fact that I was not personally administering the survey. Scattergun dissemination was one issue: the survey was not posted on all of the accounts, or the link was retweeted/shared rather than actually posted. This was understandable given the number (possibly dozens) of very busy people involved in operating these social media pages.

In the cases where the survey was posted to a social media account, it was often buried under other user content. That is to say, because the survey was not pinned on any of the pages, it soon disappeared down Facebook Walls/Twitter feeds and into obscurity. If a user wasn’t online within an hour or so of it being posted, it is unlikely they ever saw it. The window of opportunity would have been even smaller for users with busy individual newsfeeds.

Not all of those who saw the survey link would have been active jobseekers. This is a pertinent point. Even if, in theory, my survey reached thousands of users who had “liked” or followed the pages where it was posted, it would be almost impossible to determine how many of those users were actively seeking work at that time. The period of job search can vary wildly between individuals, before the person either a) finds employment, b) gives up searching, c) takes a break from searching, or d) postpones searching due to other pursuits (e.g. full-time education).

In summary, before I even get to general survey concerns such as response fatigue (i.e. people who are eligible to participate but, for whatever reason, can’t be bothered), potentially thousands of participants would not have been reached in the first place.





Job search, information literacy, and the need for an information science perspective

A colleague sent me this article, which is a summary of research findings suggesting that less than half of young people in Scotland are successful at finding a job online. Perhaps unsurprisingly, those from disadvantaged backgrounds have fared the worst, and are less likely to have asked a professional for assistance.

For those of us with a background in library and information science, this will not come as a surprise. The issue here, it seems, is that being able to operate ICTs does not necessarily equate to being information literate:

Information literacy: “Knowing when and why you need information, where to find it, and how to evaluate, use and communicate it in an ethical manner” (CILIP, 2016).

My own research findings, at this preliminary stage, also point in the same direction. That is to say, insofar as using social media to find job search information is concerned, people aged 16-24 are not necessarily confident in their abilities to do so.

This is why I would argue that more job search research needs to be conducted from an information science perspective – perhaps now more than ever. As stated by Wanberg (2012, p.3), “job search has become so pervasive and frequent that it is now considered to be an integral part of work”.

People have almost innumerable information sources from which they can access job search information (not only job adverts, but CV help, interview advice, company profiles etc.). This information can be critical to their career development, and the choices they make – now, and in future job searches. As such, understanding job search information behaviours should be a priority.


Post-survey data considerations

I have finally gathered a substantial number of responses to my survey questionnaire on job search and social media. The survey was aimed at 16-24 year olds living in Scotland who are currently looking for a job. It has taken me literally months to get there, and there have been plenty of trials and tribulations that should make for a good blog post or two in the near future. Maybe somebody out there can learn from my experiences.


In the meantime though, here are a few things I need to think about as I proceed to quantitative data analysis:

1. What is the consequence of having a heterogeneous sample? One of my supervisors questioned me about this on numerous occasions when I was building the survey questionnaire. Had I considered the nature of the group I was sampling? Did I really want to include all 16-24 year old jobseekers?

Well, for me, the answer is still the same now as it was then (which I suppose is a good thing) – yes and no. Yes, because I want my research to tackle the concept of networking at a broader level, and to provide a platform of knowledge from which to probe the networking behaviours of specific categories of jobseekers. No, because it means having to analyse a complex and multi-layered sample.

Within the 16-24 age range, I have picked up responses from people with hugely varying education levels (from no qualifications to PhD level), employment statuses, and job search goals. The latter is quite significant – some people have multiple goals, e.g. “I would like a job in my field, but am also willing to settle for any job that pays”. Making sense of these nuances, and of how they have impacted upon networking behaviours during job search, is likely to be time consuming.

2. How representative is my survey sample? It is almost certain that there will be some bias. However, with regard to the general population of young jobseekers in Scotland (or anywhere), I would argue that sampling bias is unavoidable. I will present my argument in a future blog post, because I think the topic merits some discussion in isolation.

There are a few basics to consider, though, in terms of demographics. For example, is there a proportionate representation of unemployed respondents? What about people who consider themselves to have a disability, or who were born outside of Scotland – do they have a representative voice? To answer these questions, I need to compare my figures with those in general population surveys.
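One simple way to make that comparison – a minimal sketch, assuming a published population proportion is available for the group in question – is a one-sample z-test of the sample proportion against the population figure. The counts below are entirely hypothetical illustrations, not my actual survey data:

```python
import math

def proportion_z(group_count, sample_n, population_prop):
    """One-sample z-statistic: is a group's share of the sample plausibly
    the same as its share of the general population?"""
    p_hat = group_count / sample_n
    se = math.sqrt(population_prop * (1 - population_prop) / sample_n)
    return (p_hat - population_prop) / se

# Hypothetical illustration: 60 of 200 respondents unemployed, against an
# assumed population figure of 25% (not a real statistic).
z = proportion_z(60, 200, 0.25)
print(round(z, 2))  # 1.63 -- below 1.96, so no clear over-representation at the 5% level
```

A |z| above roughly 1.96 would suggest the group is over- or under-represented at the 5% significance level; anything smaller is consistent with a representative sample on that dimension.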

3. What is the data telling me? I suppose this is the fundamental question that any researcher would have to consider at this stage. Does my data help me to answer my research questions? What trends are emerging from the data? Do they require further attention in the final qualitative element of my fieldwork? To a large extent, the quality of the data I have gathered will depend on the quality of the questions I have asked.