So, you want to recruit participants online?

When it comes to recruiting participants, it’s easy to feel confused by all the different options available to you. That’s why we’ve written this post comparing five popular participant recruitment tools: Amazon MTurk, Prolific (us 🙂), Qualtrics, SurveyMonkey and TurkPrime. We hope you find it helpful.

Over the last 15 years, human data collection has broken free from the confines of the laboratory and spilled onto the internet [1]. Compared to traditional sample collection methods (in-person, postal, or phone), online data collection offers several distinct advantages: faster data collection, larger samples, lower costs, and, importantly, more diverse populations. Arguably, online samples are more like the national population than a typical WEIRD (Western, Educated, Industrialised, Rich, and Democratic) sample of university students [2].

More researchers than ever before now conduct studies online: running scientific experiments, surveying potential customers, predicting elections, probing the population’s health, and more. As demand has grown, a new ecosystem of work marketplaces and participant recruitment tools has sprung up, helping to put researchers in touch with thousands of eager survey participants and microtaskers all around the world. Let’s take a look at the different options for recruiting participants.

Disclaimer: Before we start, we’re going to state the obvious. This is a Prolific blog, written by Prolific, so we’re undeniably biased. BUT, our aim is to be useful to anyone wanting to recruit participants online. We accept that different recruitment platforms will be right for different people and we’ve sought to make that clear.

Amazon Mechanical Turk (MTurk)

Founded in 2005, MTurk’s primary purpose is to provide companies with an on-demand workforce which can be used to complete tasks that computers cannot do, known as Human Intelligence Tasks (HITs). Many of the tasks on MTurk are designed to train machine learning algorithms (e.g., categorising images), but researchers also use MTurk to collect survey data [3].

On MTurk, there are no restrictions on how researchers can collect their data. MTurk does not set a minimum reward for completion of tasks, so researchers can pay participants whatever they like (even nothing), and it’s up to participants whether they think the HIT is worth completing. For an extra fee per participant recruited, researchers can restrict their task to certain demographics of interest, such as participants who own cars, are aged 35-45, or have Reddit accounts. There's good evidence that common psychological effects can be replicated using an MTurk sample [3]. But MTurk is not known for its data quality. Earlier this year, members of the research community found evidence of bot-like responses to their surveys (more here).

There’s also evidence that MTurkers are expert survey takers: they're often familiar with common experimental paradigms and adept at spotting attention checks. This raises questions about the suitability of MTurk’s pool for research. For example, some research has found that using non-naive participants can reduce effect sizes. Further, concerns have been raised about the ethics of collecting research data in exchange for very low pay (see here and here). It’s been argued that low rewards encourage low-effort responses [4], and that low-reward HITs are often completed by those participants most desperate for work. This is likely to create undue pressure to take part in the research.

Prolific (us 😉)

At Prolific we are dedicated to delivering the highest quality survey results at fair and ethical prices. Our mission is to empower great research, and we care about good research practices. We are proud of our engaged, diverse and fast-growing participant pool (see this scientific paper). We weed out bad apples by actively monitoring and banning bots and malingerers, and encourage high quality responding by ensuring a minimum reward of £5/hour. We further support honest answering with rigorous rules regarding data quality.

Our prescreeners allow researchers to target any demographic at no extra cost, and we have participants from OECD countries all over the world. This means you can select anything from participants in a certain country of residence, to mothers only, to teachers who smoke, or all of the above combined. We’re continuously expanding our prescreening options, including our soon-to-be-released representative samples feature! 😃

Because Prolific is self-service, it gives you full control over how you collect your data. We transparently connect you and your participants – you can direct participants to any software you like, from apps, to surveys, to games, to quizzes. You can message participants directly through Prolific without compromising anonymity. Excitingly, we support a diverse range of study designs, such as longitudinal studies, economic gambling tasks, dyadic studies, Skype interviews and more. So if you want to pioneer a complex study design, then Prolific is the perfect place to conduct your research. Our top-notch customer support team will always be there to help you set up your study. Plus, for researchers in Europe: did you know that Prolific is fully GDPR-compliant?

  • Wondering who our participants are? Meet Hollie, Aaron, Chetta, and Mike. These are some of the research participants powering Prolific.


Qualtrics

Qualtrics is primarily a survey software company, but we’re specifically going to focus on their participant recruitment service: Qualtrics Online Sample.

Qualtrics’ participants are recruited from multiple market research panels. Qualtrics say they can recruit pretty much any participant demographic, from nurses, to students, to CMOs, to mothers, and we don’t doubt this. But it might be neither the fastest nor the cheapest approach.

One issue with Qualtrics is its opaque pricing. Qualtrics doesn’t provide any pricing information on their website – everything is handled via personalised quotes. It takes quite some time to get a quote and set up the survey, meaning it’s tricky to run pilot studies, or get results quickly. This makes it difficult to work out whether you can afford their recruitment service until you’ve invested substantial time on the phone and by email.

Another potential problem is the lack of direct control over the data collection process, since there’s someone else collecting your data on your behalf. Plus, Qualtrics Online Sample requires a subscription to their survey software. This means that you need to pay for a Qualtrics subscription in addition to the participant recruitment fee.


SurveyMonkey

Like Qualtrics, SurveyMonkey is mostly known for its online survey software. But again, since we’re talking about participant recruitment, we’re going to focus on their recruitment tool: SurveyMonkey Audience.

SurveyMonkey Audience gathers responses predominantly from its pool of US participants (via SurveyMonkey Rewards and SurveyMonkey Contribute) and it provides options for demographic targeting. It’s worth knowing that researchers can only recruit participants using SurveyMonkey Audience if their survey is made and hosted using SurveyMonkey’s survey software. This may limit your options: for example, you can’t run longitudinal studies or communicate with participants.

Another potential downside is that participant rewards are unclear and uncontrolled. As far as we can tell (and this is part of the problem), researchers have very little control over participant rewards using SurveyMonkey Audience. For example, SurveyMonkey Rewards pays a fixed reward of $0.35 for completing a survey. Given the evidence that data quality is connected to reward level [4][5], this raises questions about the suitability of the SurveyMonkey pool for research.


TurkPrime

Because MTurk’s user interface is notoriously difficult to navigate, TurkPrime has built a product on top of it: MTurkToolkit, which aims to make it easier to run research studies on Amazon’s MTurk. They also offer PrimePanels, a separate paid service which aggregates several survey panels.

Due to concerns about the quality of participants on MTurk, TurkPrime provides a suite of tools (e.g., IP location verification, duplicate IP-address blocking, and controls to manage participant naivety) to help filter out the worst of the bunch. Unfortunately, these data quality features cost extra, making this a potentially pricey pursuit. PrimePanels samples are delivered by requesting respondents from other market research platforms, but because these panels aren’t directly controlled by TurkPrime, researchers are unable to run longitudinal studies, recruit samples of fewer than 50, message participants, or fully control how much their participants are paid. This limits the service to more traditional, “one-shot” surveys.


At the end of the day, you have to decide for yourself what works best for you. Our Prolific team is certainly here for you, whether you need guidance setting up your experiment or help finding difficult-to-reach demographics. Any questions, let us know!


[1] Birnbaum, M. H. (2004). Human research and data collection via the Internet. Annual Review of Psychology, 55, 803-832.
[2] Casler, K., Bickel, L., & Hackett, E. (2013). Separate but equal? A comparison of participants and data gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior, 29(6), 2156-2160.
[3] Crump, M. J., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research. PloS one, 8(3), e57410.
[4] Lovett, M., Bajaba, S., Lovett, M., & Simmering, M. J. (2018). Data Quality from Crowdsourced Surveys: A Mixed Method Inquiry into Perceptions of Amazon's Mechanical Turk Masters. Applied Psychology, 67(2), 339-366.
[5] Aker, A., El-Haj, M., Albakour, M. D., & Kruschwitz, U. (2012). Assessing crowdsourcing quality through objective tasks. LREC, 1456-1461.
