Hello everyone!

I am a psychology student at the University of Luxembourg and I am currently writing my Master's thesis. For this purpose, I would like to conduct a study to determine the relationship between different aspects of social skills and the perception of virtual characters.

I am currently looking for a few people to "test" my survey and to give me short feedback if any problems occur during the procedure. It won't be boring!

The survey takes about 30 minutes and must be done on a PC or laptop. Participants must be native English speakers.

I am grateful for anyone who takes the time to help me here - the data will of course be treated confidentially!

Best regards,
Paul Hommel

Link: P.S.: On the first page you will be asked for your Prolific ID - you can ignore this and enter any characters.
Post edited March 28, 2021 by Worm1natoR
Not sure if it's me, but I cannot open the link.
The link doesn't work without https:// at the beginning.

The "native English speakers only" requirement excludes a lot of potential participants.
Post edited March 28, 2021 by InkPanther
Yes, apparently posting https links is disabled for new users - at least I had some trouble.

But it should work now, thx for the reply!

---
InkPanther: The "native English speakers only" requirement excludes a lot of potential participants.
I know, but otherwise there may be problems of understanding, which I definitely want to avoid.
Post edited March 28, 2021 by Worm1natoR
Ok, I briefly clicked through the test. I'm not a native English speaker, but my spouse is from the US and I'm doing my PhD in political science in English.

There are some aspects in the survey design that I would maybe change a bit:

1. Frankly, I did find the survey a bit tiring and would probably not complete it voluntarily. You often get a lot of questions on the same page, except for the last third. Somehow breaking these up into smaller packages could up your completion rate.

2. Related to this, it would help to have some indication of progress if possible, e.g. a completion bar. I found myself wondering just how many clips I had to click through, and knowing that could help keep survey takers engaged. You could also consider small messages of encouragement between clips, e.g. "You are doing great. We are already at the halfway point." It doesn't matter much if it's fellow students and they are forced to complete it, but if it's strangers online, I could see many dropping out.

3. Sometimes I wondered if the questions asked are a bit redundant. You might not get more information out of the survey just by asking more questions. If possible, I would go over the questions and ask whether you really need all of them, and whether an answer is basically already covered by another question.
Robette: There are some aspects in the survey design that I would maybe change a bit: [...]
Now that's some nice feedback, thank you!!

For the first point, you're right - I think it's not a problem to split up the questionnaires into smaller packages, I'll definitely consider that.

Same goes for a progress bar, as I totally agree with you that it might be frustrating not to know how many questions will follow (especially in the last video part).

For the last point, some questions are a bit redundant, yes. Then again, these are standardised tests, so unfortunately it is hardly possible for me to change individual questions... I guess/hope, this is still bearable.

Again, thank you so much for your feedback!
That native speakers requirement is a problem, and doesn't quite make sense if the idea is to avoid misunderstandings. There are plenty of non-native speakers who have a great grasp of the language, and plenty of native speakers who don't...

But yes, as Robette said, not knowing how far along you are will make many give up. Some will quit quickly, once they spot the lack of progress indicators, because they don't know whether they'll have time for it (regardless of the estimate provided). Others will quit after a while, once they start to get bored or doubt they have the attention span for it, not knowing that they may be closer to the end than they think, or because they just don't feel like they're making progress, or start to wonder whether there's any progress to be made at all. (I recall an intelligence test back in the day that scored you not on your answers but on how long it took you to realize it was generating endless questions.)

Also, if there are many parts, keep in mind that results will be altered by fatigue or boredom, especially if the parts are repetitive. Many will get to a point where they just click through, or at least no longer really think about what they're clicking, even if they meant to take it seriously until then.

Plus, of course, when you have such answer scales, you'll want to correct for the tendency to pick the extremes.
I'm a native English speaker (or more accurately natively-bilingual, including English, which I assumed to be equivalent) and began doing the survey. However, I have always found it uncomfortable to watch videos (films, games, etc.) in any of the languages I speak without subtitles, as I lose mental focus (i.e. my mind wanders) without the visual feedback. If you ever add subtitles, please reply and I will retake the survey. Otherwise, good luck.
Worm1natoR: [...]
For the last point, some questions are a bit redundant, yes. Then again, these are standardised tests, so unfortunately it is hardly possible for me to change individual questions... I guess/hope, this is still bearable.

Again, thank you so much for your feedback!
Yeah, but what does standardised mean here? That your supervisor determined the questions? With fewer questions it's still standardised.

I briefly clicked on the link again and right on the first clip there are examples like:

Uninspiring/Spinetingling, Boring/Shocking, Predictable/Thrilling - three variables that essentially describe the same perception.

The same goes for Unattractive/Attractive, Ugly/Beautiful and some others. It seems you always get three questions per item that could just be condensed into one, and it would not change your data at all.

There are various questions that are just repetitions. If anyone genuinely answers the test, these items should come out the same or at most one point apart. However, by including them all you very much risk tiring your survey takers.
Post edited March 29, 2021 by Robette
...So my perspective is a bit more akin to this: I don't know how many videos there were, but I feel that presenting them as a series, or consolidating them into one panel with a dropdown list, would help keep me focused. I dropped out after the second video question upon realizing I'd have to keep punching these dots with nothing in between.


"Select a video from this list; upon viewing, enter your thoughts according to the presented criteria" or "Below is a panel of videos. Upon watching them all, enter your thoughts [et al]"

But that's a matter of how I work vs how others work, and I'm already a test case. I have papers to prove it. ;)
Worm1natoR: .
Can we get some sort of evaluation in comparison with others? For example, I got 67% right. Is that good? Is that bad? Am I a psychotic maniac or just some misfit? :D
Btw, the instructions should be kept visible at all times, imho.
Post edited March 29, 2021 by blotunga