
TBS

Member
  • Posts: 322
  • Joined

  • Last visited

Reputation Activity

  1. Like
    TBS got a reaction from EasyPutter in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Background 
    After taking a few months off from MGS for some work and life things that needed my attention, I finally got time to log back on and post a long-overdue review. After posting, I found myself curious about how I was selected and how likely I am to be selected again, especially since my product was so niche and did not receive many interactions. So I took it upon myself to do a not-so-scientific study of how reviewers are selected and to compare it to the Mods' post on how testers are chosen (https://forum.mygolfspy.com/faq/testing/how-are-equipment-testers-chosen-r11/). While there are no bombshells, I think we Spys understand the value of data in telling our story. So I did my best to pull all the data from the past 18 months of reviews.
     
    Methods 
    First off, I am no data scientist. From DJ to Bryson, I’m somewhere in the middle: I mainly do very basic data analysis in my role through Excel. I know, I know, SQL is much more efficient and I’ll get around to learning it. For this study, I focused on 18 months of product reviews, a timeframe chosen to get some data points from outside the pandemic. As I started going through the review pages, I noticed they were not as organized as I had thought, so I limited the study to the 18 months of reviews already archived on the “Past Opportunities” page in the dropdown menu. There may be a few more reviews that have not been archived to that page yet, but I don’t think the number is significant.
    I then did the manual labor of data entry and collection. For each “TESTERS ANNOUNCED” post within the time frame, I recorded the following data points:
    -       Date the review was published
    -       Product
    -       Rough product MSRP
    -       MGS username
    -       Whether the user has a Donor badge on their profile
    -       Content count
    -       Total reputation
    -       Month and year the MGS account was created
    -       City and state
    Once all of this information was entered into the spreadsheet, I added a note for anyone with multiple reviews, and I created an additional metric to measure post quality (a schema sketch follows below). All analysis comes from these data points.
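    For anyone who wants to picture the dataset, here is a minimal sketch of one row rebuilt in code. This is purely illustrative: I worked in Excel, and the field names and sample values below are hypothetical, not a real tester.
```python
from dataclasses import dataclass

@dataclass
class TesterRecord:
    """One hand-entered row: one tester for one review opportunity."""
    review_date: str       # date the tester announcement was published
    product: str
    msrp: float            # rough MSRP from a quick Google search
    username: str
    has_donor_badge: bool  # is the Donor badge visible on the profile?
    content_count: int     # posts at collection time, not at selection
    total_reputation: int  # reputation at collection time
    joined: str            # month/year the MGS account was created
    city_state: str

# Hypothetical example row (not a real member):
row = TesterRecord("2021-03-01", "Example Putter", 249.0, "SomeSpy",
                   True, 513, 824, "2019-06", "Birmingham, AL")
print(row)
```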
     
    Limitations
    Overall, the main limitation is that this dataset was manually curated, so there could be mistakes, but any mistake would be minor. I have checked that there is no extra 0 added anywhere, though reputation figures could be off by +/- 5 from fat fingers on a keyboard. Additionally, the MSRP came from a quick Google search; there could have been sales, or the price could have been $25 higher at the time of testing. Finally, the content count and total reputation numbers reflect the time of collection, not the time the member was selected. There is no way for me to go back and see that information, but I proceeded anyway because I believe most people’s post counts grow roughly linearly, so the post quality metric should not be too far off. I will note where this ratio gets skewed in bigger reviews (CobraConnect). Enough with the small talk, let’s take a look at the data.
     
    Data
    By the numbers (a quick tallying sketch follows this list):
    -       48 total products
    -       202 unique testers
    -       ~$119,625 total cost of products tested
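    If the spreadsheet were exported to CSV and loaded into pandas, these headline numbers would fall out of three one-line aggregations. The file name and column names are hypothetical placeholders for my actual spreadsheet.
```python
import pandas as pd

# Hypothetical CSV export of the hand-entered spreadsheet.
df = pd.read_csv("mgs_testers_18mo.csv")

n_products = df["product"].nunique()   # 48 total products
n_testers = df["username"].nunique()   # 202 unique testers

# One row per tester per product, so summing MSRP across rows
# approximates the total retail value of the gear sent out.
total_cost = df["msrp"].sum()          # ~$119,625

print(f"{n_products} products, {n_testers} testers, ~${total_cost:,.0f} of product")
```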
     
    As I said in the introduction, I will use the MGS selection criteria as the framework for this analysis.
    Picture – I did not go through and collect this data, but it is the easiest criterion of them all. Put something up there! I’ve seen pictures of the members (I assume), cartoons, their children, memes, etc. Just put something up. (Inconclusive, but probably true)
    Prove review chops – 71 testers (35%) had more than one review. If you get chosen, even for the smallest thing (a pushcart distance tracker, a dozen balls, etc.), take it seriously and write a good review. That means good pictures, concise statements that clearly articulate your opinion of the product, timely submissions, and good follow-ups. A good first review goes a long way toward getting your second, and the data clearly backs this up. One anecdotal note: it was very common to see a progression of review items, starting small and growing to more expensive products. (True)
    Start posting – Keep in mind this category is not perfect because I could not pull post and reputation numbers from the time each tester was selected; I am assuming people post at a roughly linear rate. I looked at three measures for this section: number of posts, total reputation, and a third metric I created, the Quality of Post Score, which is the number of interactions per post. A higher score speaks to producing higher-quality posts for the forum. The summary table is below; a pandas sketch for reproducing it follows the table.
     
              Total number of posts   Total reputation   Quality of Post Score*
    Range     5 - 14,307              3 - 26,497         0.09 - 5.8
    Mean      1,412                   2,866              1.92
    Median    513                     824                1.85
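    These summaries are easy to reproduce. Below is a minimal pandas sketch using the same hypothetical file and column names as earlier; the Quality of Post Score is simply total reputation divided by content count.
```python
import pandas as pd

df = pd.read_csv("mgs_testers_18mo.csv")  # same hypothetical export as above

# Quality of Post Score: interactions (reputation) earned per post written.
df["quality_score"] = df["total_reputation"] / df["content_count"]

for col in ["content_count", "total_reputation", "quality_score"]:
    s = df[col]
    print(f"{col}: range {s.min():,.2f} - {s.max():,.2f}, "
          f"mean {s.mean():,.2f}, median {s.median():,.2f}")
```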
    Because of the large ranges, with bunching at the high end of each category, I would suggest forum members focus on the medians. Your best chance of being chosen comes with over 500 posts and over 800 total reputation, with a Quality of Post Score around the 1.85 median. As I said earlier, a number of members were chosen with less than this, but most selections under 500 posts were in the 100-200 range. The biggest takeaway is to post frequently and to create posts that others want to engage with. I think an average user can get to 200+ posts in 2-3 months.
    There might be an issue here that leads to inequity in who gets chosen. As I reviewed posts and interactions, it became clear that if you are chosen for a higher-profile test, your posts and interactions grow at a faster rate than other members’. That boost to your profile makes it easier to be selected again. I don’t see a way to smooth out this distinct increase, but it is something I hope the mods take notice of. (True)
    Become an MGS Donor
    This category comes with major limitations. The only public piece of data I could collect was whether the member displays the MGS Donor badge on their profile. By that measure, 59.9% of testers had a donor badge. I suspect the true number is closer to 70-80%, with the remainder being donors who have not reached out to get the badge added. (Inconclusive, but probably true)
    Overall, MGS is clear about how testers are selected. For the most part their criteria are clear, and the data shows that they follow them.
    Common misconceptions
    I wanted to add a section debunking some of the common misconceptions I had when I first joined the forum.
    -       I have to be a member for a long time before I get anything (FALSE)
    o   Over the past 18 months, members who were selected had profiles created from March 2010 – November 2020. 42 testers (21%) joined in 2020. If you’re new and are contributing to the forum, you have a decent chance of being selected.
    -       Once I get one review, I won’t ever get another (FALSE)
    o   71 members (35%) had more than one review in the 18 months, and I suspect that if I went back further, they would have earlier reviews too. As I said in the section above, if you get chosen and deliver a good review, you actually put yourself at an advantage to be selected again.
    o   No reviewer had more than 3 reviews in the past 18 months. That appears to be the practical limit; it looks to be extremely rare for a member to land more than one review every 6 months.
    -       I have to be good at golf to be selected for a review (FALSE)
    o   This idea is very much false; MGS wants members of all skill levels involved in testing. To check, I pulled the handicaps listed on members’ profiles. This is a limitation because it is self-reported information that is most likely not updated regularly. Regardless, here is what I found:
    Range – 33 to +5; Mean – 10.09; Median – 9.2
    o   If anything, having an accurate handicap is beneficial to being selected, because MGS appears to want a broad range of testers for each product, specifically balls and clubs.
     
    Conclusion
    For the most part, MGS sticks to what they say when choosing testers, but it appears there are some non-measurables in play when selecting first-time testers. There does not appear to be a formula, nor is it completely random. There is a human element, which makes it hard to say, from the outside looking in, exactly why specific members were chosen over others.
    Shoutout to @Brian A, @jlukes, @B.Boston, @txgolfjunkie, @gavinski91, @Mtbryant01, @daviddvm for each being selected to review over $3,000 worth of equipment! These members were selected for this year’s or last year’s CobraConnect Challenge, so each of them had at least one product review in addition to CobraConnect in the past 18 months.
     
    TL;DR - Post quality content frequently; if you are given a chance to test an item, take it seriously; and keep posting. If you’re new and you stay involved, your time will come.
     
    Okay, I’m tired. Let me know your thoughts!
  2. Like
    TBS got a reaction from GolfSpy_KFT in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  3. Like
    TBS got a reaction from cksurfdude in How'd you play?   
    Late recap - 5/14/21 RTJ Oxmoor Valley, Ridge
    86 (42/44); 5 FIR; 7 GIR; 35 putts; 10 chips; 1 Bird; 3 penalties
    First round in a month where I couldn't find my driver swing. Felt like I just lost the feel, but it made me go with a fairway-finder hybrid off the tee a few times, which turned out really well. I'll be playing that shot more. The putter finally got hot on the back 9 and I started making some putts. Short game is slowly getting there. Irons were pretty good; 1 hosel rocket led to a penalty, but the other 2 penalties were just dumb mistakes, not necessarily bad shots.
    Biggest goal was achieved - have fun. A very enjoyable walk with the sun setting.
  4. Like
    TBS got a reaction from edingc in How'd you play?   
  5. Like
    TBS got a reaction from Rickp in How'd you play?   
  6. Like
    TBS got a reaction from CarlH in How'd you play?   
  7. Like
    TBS got a reaction from MattF in How'd you play?   
  8. Like
    TBS reacted to GolfSpy_APH in Readers, which books are you picking up these days?   
    Not necessarily a book, but I still have several Golfer's Journal issues to go through. I have a nice display set up for them at home, but I need to bring a few of them in to work and read in my downtime.

  9. Like
    TBS reacted to bens197 in Readers, which books are you picking up these days?   
    Several titles caught my eye today and I walked out of Barnes & Noble with these four. Ryan Holiday is a great storyteller in case you’re looking for something new.
     
    This year I’ve read The Courage to Be Disliked; Kindness & Wonder; Thinking, Fast and Slow; and The Righteous Mind: Why Good People Are Divided by Politics and Religion.
     
    So what about you guys, what are you picking up?

  10. Like
    TBS got a reaction from russtopherb in How'd you play?   
  11. Like
    TBS got a reaction from Kenny B in How'd you play?   
  12. Like
    TBS got a reaction from cnosil in How'd you play?   
  13. Like
    TBS got a reaction from Albatrass in Southeast: KY, TN, NC, SC, AL, GA, FL   
    Yes! We need to get a Rapsodo video out on the forum for all to love.
  14. Like
    TBS got a reaction from cksurfdude in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Go ahead and share... Isn't MGS about providing "the truth and nothing but" and shouldn't their forum fall under the same mission?
  15. Like
    TBS reacted to Kenny B in Got Problems?   
    My dad passed away this morning. Yesterday morning he had a heart attack.  At the hospital the doc said he had congestive heart failure, fluid on the lungs, low BP, low oxygen level, and failing kidneys.  It was somewhat unexpected, but he had had problems for several years that got worse during COVID.  He had a stroke in 2014 at age 83, but during COVID his assisted living facility was shut down and we couldn't visit.  He was basically shut up in his apartment for 9 months; body and mind deteriorated.  He fell on Christmas morning, cracked his pelvis and two bones in his neck, and spent 3 months in rehab.  We moved him 300 miles to a care facility here in Richland, WA on March 19 so he was close to us.  
    At age 90 he has had a very full and happy life.  An Army veteran of the Korean War, dad went back to college and graduated from Oregon State University with a BS in Horticulture in 1962, three years before I graduated from H.S.  He worked in hazelnut research until he retired from the USDA in 1987 and raised cows on his farm.  He and my mom, who passed away in 2017, traveled around the west in their RV having fun with the many friends they had met over the years.  He was a great dad and a great person, and I enjoyed my time with him as his son.  
  16. Like
    TBS reacted to Tom the Golf Nut in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Great work on gathering all the data @TBS it must have taken quite some time.
    I could have been called a stalker when I first joined. All I was interested in was the reviews and Most Wanted testing results. They helped me narrow down some equipment choices. I'd go buy and then go into sleep mode on the site. It wasn't until someone had a question about something I was very knowledgeable about that I got involved in forum posting.
    My story is probably similar to others': stalking and reading but not contributing much in the beginning. But once you get involved, you're hooked. The first time I was selected to test something, it was a full set of irons. Probably not the norm, but my handicap probably fit the need for the type of clubs being tested. Like others who have said they don't apply for everything, I follow the same thought: it has to be something I want to try. If it is something I wouldn't use, why would I want to test it? If it was something I wanted to try and I was selected and it just didn't work for me, I would have no problem giving it to another Spy to try. I wouldn't try to sell it to make a buck. That's not what we are about, or at least we shouldn't be.
    Testing is a privilege and should be treated as such. There is a lot of work involved, but it can be a lot of fun along the way. Whenever I test, I get other golfers' opinions as well. I know some pretty good golfers, and I let them hit the clubs or try the ball. I would never have given a thought to a direct-to-consumer brand if it weren't for that opportunity. Thanks, MGS. They stayed in the bag. My second opportunity was for a dozen balls from a brand I had written off years ago, but doing a full, honest test was enlightening, and I was impressed with how the model performed for the price.
    In closing, thanks to everyone who makes this forum what it is. And a special thank you to the Moderators. We all know sometimes they have their hands full. 👏
     
     
  17. Like
    TBS got a reaction from cksurfdude in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  18. Like
    TBS got a reaction from GolfSpy_APH in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  19. Like
    TBS reacted to tony@CIC in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    @TBS great thread! I'm a bit late and would only reiterate the comments many have made, including the commitment to give timely, honest, and thorough feedback to the Forum members. The work clearly falls under my category of 'careful what you ask for'.
    As someone who was first selected as a tester for the C-300 in 2019, I can say I had no idea of the work involved. Most first-time testers probably end up feeling the same, because while we think we know what's involved, we're too pumped up from being selected to think about the testing details and the time commitment. And it doesn't end with the final (stage) write-up; one only has to look at the ongoing SuperSpeed thread to see that we still get posts a year later.
    Would I do it again? Of course. I think it's a great experience, and a way to give unique, unfiltered insight on products back to the Forum.
  20. Like
    TBS got a reaction from cksurfdude in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Yes - That should be added somewhere. In all of those cases, these "special" selections were clearly stated in the Tester announcement if I remember correctly. 
  21. Like
    TBS got a reaction from pakman92 in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Background 
    After I took a few months off of MGS due to some work and life things that needed my attention, I finally got some time to log back on and post a long overdue review. After posting, I found myself curious as to how I was selected and the likelihood of being selected again, especially since my product was so niche and did not receive many interactions. So I took the challenge upon myself to do a not so scientific study of how reviewers are selected and compare it to the post (https://forum.mygolfspy.com/faq/testing/how-are-equipment-testers-chosen-r11/) by the Mods on how testers are selected. While there are no bombshells, I think we Spys understand the value of data in telling our story. So, I did my best at pulling all data from the past 18 months of reviews
     
    Methods 
    First off, I am no data scientist. From DJ to Bryson, I’m somewhere in the middle. I mainly do very basic data analysis in my role through excel. I know, I know, SQL is much more efficient and I’ll get around to learning it. For this study, I focused on 18 months of product reviews. I chose this timeframe to get some data points outside of the pandemic. I started going through the review pages and noticed it was not as organized as I thought, so I chose to do only 18 months of reviews that were already on the “Past Opportunities” page in the dropdown menu. Like I said, I believe that there may have been a few more reviews that have not been archived to this page, but I don’t think it was significant.
    I then did the manual labor of data entry and collection. I clicked on each individual “TESTERS ANNOUNCED” within the time frame, recorded the following data points 
    -       date of review published
    -       product
    -       rough product MSRP 
    -       MGS username
    -       If the User has a Donor badge on their profile
    -       Content count
    -       Total reputation
    -       Month and year MGS account was created
    -       City and State
    Once all of this information was input into the spreadsheet, I then added a note if they had multiple reviews and I also created an additional metric to measure the quality of the posts. All analysis comes from these data points.
     
    Limitations
    Overall, the main limitation is that this dataset was manually curated, so there could be a mistake, but any mistake would be minor. I have checked that there is not an additional 0 added somewhere but there could be +/- 5 units off when it comes to reputation due to fat fingers on a keyboard. Additionally, the MSRP was calculated off a quick Google search. I get that there could be sales, or the price was actually $25 higher at the time of testing. Finally, the content created, and total reputation numbers are not accurate to the time the member was selected. There is no way for me to go back and see this information, but I proceeded with collecting this because for the most part, I believe that most people’s posts grow linearly. So, the quality of posts metric should not be too far off. I will address that in bigger reviews (Cobraconnect), this ratio can get skewed. Enough with the small talk, let’s take a look at the data.
     
    Data:
    Let’s take a look at the data!
    By the numbers-
    -       48 total products
    -       202 unique testers
    -       ~$119,625 total costs of products tested
     
    As I said in the introduction, I will be using the MGS selection criteria as a framework of analysis.
    Picture – I did not go through and collect this data. This is also the easiest of all of them. Put something up there! I’ve seen pictures of the people (I assume), cartoons, their children, memes, etc. Just put something up. (Inclusive, but probably true)
    Prove review chops - 71 (35%) of people had more than 1 review. If you get chosen, even for the smallest thing, a pushcart distance tracker, a dozen balls, etc. take it seriously and write a good review. This means good pictures, concise statements that clearly articulate your opinion on a product, timely submissions, and good follow ups. A good first review will go a long way to getting your second review and the data clearly backs this up. Additionally, one anecdotal note, it was very common for there to be a progression of items to review, starting with a smaller item and then growing to more expensive items. (True)
    Start posting – Please remember that this category is not perfect because I was not able to pull the post and reputation numbers at the time the testers were selected. I am assuming that people post at a linear rate. I looked at three areas for this section. Number of posts, Total reputation, and third metric that I created, Quality of post score. I wanted to look at how many interactions per post the reviewer gets. This metric speaks to higher quality of posts for the forum.
     
    Total number of posts
    Total reputation
    Quality of Post Score*
    Range
    5 - 14,307
    3 - 26,497
    0.09 - 5.8
    Mean
    1,412
    2866
    1.92
    Median
    513
    824
    1.85
    Because of such a large range with bunching happening at the higher end of each category, I would suggest forum members look at the Median. The best chance for you to be chosen is to have over 500 posts with over 800 total reputation which comes out to 1.85 interactions per post. Like I said earlier, there were a number of members chosen with less than this number but most under 500 posts were in the 100-200 range. I think the biggest takeaway, is to post frequently and create posts that others want to engage in. I think an average user can get to 200+ posts in 2-3 months. 
    There might be an issue that leads to inequity when it comes to being chosen. As I started reviewing some of the posts and interactions, if you are chosen for a more high-profile test, your posts and interactions will increase at a higher rate than other members. This easily leads to a boost in your profile making it easier to be selected. I don’t think there is a way around smoothing out this distinct increase, but it is something that I hope mods take notice of. (True)
    Become an MGS Donor – This category comes with major limitations. The only public data point I could collect was whether a member displays the MGS Donor badge on their profile. By that measure, 59.9% of testers had a donor badge. I suspect the true share is closer to 70-80%, since some donors may simply never have reached out to get the badge added.
    (Inconclusive but probably true)
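    For completeness, the donor share above is a one-line proportion over the badge flag; a sketch assuming a hypothetical True/False donor_badge column recorded per row:

```python
import pandas as pd

df = pd.read_csv("reviews.csv")

# donor_badge is assumed to be a True/False column; dedupe so
# each tester counts once even if they had multiple reviews.
donor_share = df.drop_duplicates("username")["donor_badge"].mean()
print(f"{donor_share:.1%}")   # ~59.9% in this dataset
```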
    Overall, MGS is clear about how testers are selected. For the most part their criteria are clear, and the data shows they follow them.
    Common misconceptions
    I wanted to add a section that debunked some of the common misconceptions that I had when I first joined the forum. 
    -       I have to be a member for a long time before I get anything (FALSE)
    o   Over the past 18 months, members who were selected had profiles created from March 2010 – November 2020. 42 testers (21%) joined in 2020. If you’re new and are contributing to the forum, you have a decent chance of being selected.
    -       Once I get one review, I won’t ever get another (FALSE)
    o   71 members (35%) had more than one review in 18 months, and I suspect that going back further would show even more. As I said above, if you get chosen and do a good review, you put yourself at an advantage to be selected again.
    o   No reviewer had more than 3 reviews in the past 18 months. That appears to be the ceiling; it looks extremely rare for a member to get more than one review every 6 months.
    -       I have to be good at golf to be selected for a review (FALSE)
    o   MGS wants members of all skill levels involved in testing. To check this, I pulled the handicaps listed on members' profiles. This has its own limitation – it is member-provided information that most likely isn't updated regularly – but here is what I found.
    Range – 33 to +5; Mean – 10.09; Median – 9.2
    o   If anything, listing an accurate handicap helps your chances, because MGS appears to want a broad range of testers for each product, especially balls and clubs.
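    One wrinkle in summarizing handicaps is that plus handicaps (like the +5 above) are better than scratch, so they have to be converted to negative numbers before computing a mean or median. A minimal sketch, assuming handicaps were recorded as strings in a hypothetical handicap column:

```python
import pandas as pd

def parse_handicap(raw) -> float:
    """Convert a displayed handicap to a signed number.

    Plus handicaps ('+5') are better than scratch, so '+5' maps
    to -5.0; ordinary handicaps ('33') stay positive.
    """
    raw = str(raw).strip()
    if raw.startswith("+"):
        return -float(raw[1:])
    return float(raw)

df = pd.read_csv("reviews.csv")  # 'handicap' column is an assumption
hcp = df["handicap"].dropna().map(parse_handicap)
print(hcp.max(), hcp.min(), round(hcp.mean(), 2), hcp.median())
# Worst 33, best +5, mean ~10.09, median ~9.2 in this dataset.
```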
     
    Conclusion
    For the most part, MGS sticks to what they say when choosing testers, but it appears there are some non-measurables in play when selecting first-time testers. It doesn't appear to be a strict formula, nor is it completely random. There's a human element, which makes it hard to say from the outside exactly why specific members were chosen over others.
    Shoutout to @Brian A, @jlukes, @B.Boston, @txgolfjunkie, @gavinski91, @Mtbryant01, @daviddvm for having been selected to review over $3,000 worth of equipment! Each of them was selected for this year's or last year's CobraConnect challenge, so each had at least one product review in addition to CobraConnect in the past 18 months.
     
    TL;DR – Post quality content frequently; if you're given a chance to test an item, take it seriously; and keep posting. If you're new and you stay involved, your time will come.
     
    Okay, I'm tired. Let me know your thoughts!
  22. Love
    TBS got a reaction from TR1PTIK in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  23. Like
    TBS got a reaction from Chevelle in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  24. Like
    TBS reacted to xOldBenKenobiX in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    @TBS That is some awesome work.
    I appreciate the data insight you've compiled.
    I'm guilty of initially coming to this forum just for the testing opportunities; now I really enjoy being around and reading all the great content that members like yourself put out.
  25. Like
    TBS got a reaction from gavinski91 in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   