
TBS

Member
  • Posts

    322
  • Joined

  • Last visited

Reputation Activity

  1. Like
    TBS got a reaction from EasyPutter in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Background 
After I took a few months off of MGS due to some work and life things that needed my attention, I finally got time to log back on and post a long-overdue review. After posting, I found myself curious about how I was selected and the likelihood of being selected again, especially since my product was so niche and did not receive many interactions. So I took the challenge upon myself to do a not-so-scientific study of how reviewers are selected and to compare it to the post by the Mods on how testers are chosen (https://forum.mygolfspy.com/faq/testing/how-are-equipment-testers-chosen-r11/). While there are no bombshells, I think we Spys understand the value of data in telling our story. So, I did my best at pulling all the data from the past 18 months of reviews.
     
    Methods 
First off, I am no data scientist. From DJ to Bryson, I'm somewhere in the middle. I mainly do very basic data analysis in my role through Excel. I know, I know, SQL is much more efficient, and I'll get around to learning it. For this study, I focused on 18 months of product reviews; I chose this timeframe to get some data points outside of the pandemic. I started going through the review pages and noticed they were not as organized as I expected, so I limited the study to the 18 months of reviews already archived on the "Past Opportunities" page in the dropdown menu. There may be a few more reviews that have not yet been archived to that page, but I don't think the omission is significant.
I then did the manual labor of data entry and collection. I clicked on each individual "TESTERS ANNOUNCED" post within the time frame and recorded the following data points:
-       date the review was published
-       product
-       rough product MSRP
-       MGS username
-       whether the user has a Donor badge on their profile
-       content count
-       total reputation
-       month and year the MGS account was created
-       city and state
Once all of this information was entered into the spreadsheet, I added a note for anyone with multiple reviews, and I created an additional metric to measure the quality of their posts. All analysis comes from these data points.
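For anyone who wants to replicate this, here is a minimal sketch of how the two derived fields could be computed once the hand-entered spreadsheet is exported. The file name and column names are hypothetical (the real data lives in an Excel sheet); this is just to make the derived fields concrete.

    import pandas as pd

    # Hypothetical export of the hand-entered spreadsheet: one row per tester per review.
    df = pd.read_csv("mgs_testers.csv")

    # Note if a member shows up in more than one review in the 18-month window.
    df["multiple_reviews"] = df.groupby("username")["product"].transform("count") > 1

    # Quality of Post Score: total reputation earned per post (interactions per post).
    df["quality_of_post"] = df["total_reputation"] / df["content_count"]

All of the summary numbers below are just sorts and aggregates over columns like these.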
     
    Limitations
Overall, the main limitation is that this dataset was manually curated, so there could be mistakes, though any mistake would be minor. I checked that no extra 0 slipped in anywhere, but reputation figures could be off by +/- 5 due to fat fingers on a keyboard. Additionally, the MSRPs came from quick Google searches; there could have been sales, or a price may have been $25 higher at the time of testing. Finally, the content count and total reputation numbers are not snapshots from the time each member was selected. There is no way for me to go back and see that information, but I collected it anyway because, for the most part, I believe people's post counts grow roughly linearly, so the quality-of-posts metric should not be too far off. I will note that in bigger reviews (CobraConnect), this ratio can get skewed. Enough with the small talk, let's take a look at the data.
     
Data
By the numbers –
    -       48 total products
    -       202 unique testers
-       ~$119,625 total cost of products tested
     
    As I said in the introduction, I will be using the MGS selection criteria as a framework of analysis.
Picture – I did not go through and collect this data point. This is also the easiest criterion of them all. Put something up there! I've seen pictures of the people themselves (I assume), cartoons, their children, memes, etc. Just put something up. (Inconclusive, but probably true)
Prove review chops – 71 people (35%) had more than 1 review. If you get chosen, even for the smallest thing (a pushcart distance tracker, a dozen balls, etc.), take it seriously and write a good review. That means good pictures, concise statements that clearly articulate your opinion of the product, timely submissions, and good follow-ups. A good first review will go a long way toward getting your second, and the data clearly backs this up. One anecdotal note: it was very common to see a progression of review items, starting with something small and growing to more expensive items. (True)
Start posting – Please remember that this category is not perfect because I was not able to pull post and reputation numbers from the time the testers were selected; I am assuming that people post at a roughly linear rate. I looked at three areas for this section: number of posts, total reputation, and a third metric I created, the Quality of Post Score, which measures how many interactions per post a reviewer gets. This metric speaks to the quality of a member's posts for the forum.
     
            Total number of posts   Total reputation   Quality of Post Score*
Range       5 – 14,307              3 – 26,497         0.09 – 5.8
Mean        1,412                   2,866              1.92
Median      513                     824                1.85
*Quality of Post Score = total reputation divided by total number of posts (interactions per post).
Because of such large ranges, with bunching at the higher end of each category, I would suggest forum members look at the median. The best chance of being chosen comes with over 500 posts and over 800 total reputation, which works out to about 1.85 interactions per post. Like I said earlier, a number of members were chosen with less than this, but most selections under 500 posts were in the 100-200 range. The biggest takeaway is to post frequently and to create posts that others want to engage with. I think an average user can get to 200+ posts in 2-3 months.
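To make the mean-versus-median point concrete, a short continuation of the hypothetical sketch from the Methods section reproduces the table's summary rows. With a long right tail (a few members with 10,000+ posts), the mean gets dragged far above the median, which is why the median is the better "typical tester" guide.

    # Continuing the hypothetical dataframe from the Methods sketch.
    cols = ["content_count", "total_reputation", "quality_of_post"]
    print(df[cols].agg(["min", "max", "mean", "median"]).round(2))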
There might be an issue here that leads to inequity in selection. As I started reviewing posts and interactions, I noticed that if you are chosen for a higher-profile test, your posts and interactions increase at a faster rate than other members'. This easily boosts your profile, making it easier to be selected again. I don't see a way to smooth out this distinct increase, but it is something I hope the mods take notice of. (True)
Become an MGS Donor
This category comes with major limitations. The only public piece of data I could collect was whether the member displays the MGS Donor badge on their profile. From that, I found 59.9% of testers had a Donor badge on their profile. I suspect the true number is closer to 70-80%, since some donors may simply not have reached out to get the badge added.
    (Inconclusive but probably true)
Overall, MGS is clear about how testers are selected. For the most part their criteria are clear, and the data shows that they follow those criteria in selection.
    Common misconceptions
I wanted to add a section debunking some of the common misconceptions I had when I first joined the forum.
-       I have to be a member for a long time before I get anything (FALSE)
o   Over the past 18 months, selected members had profiles created anywhere from March 2010 to November 2020. 42 testers (21%) joined in 2020. If you're new and contributing to the forum, you have a decent chance of being selected.
-       Once I get one review, I won't ever get another (FALSE)
o   71 members (35%) had more than 1 review in 18 months; I suspect that if I went back further, they had others too. As I said in the section above, if you get chosen and do a good review, you actually put yourself at an advantage to be selected again.
o   No reviewer had more than 3 reviews in the past 18 months. This appears to be the limit; it looks to be extremely rare for a member to have more than 1 review every 6 months.
-       I have to be good at golf to be selected for a review (FALSE)
o   This idea is very much false; MGS wants members of all skill levels involved in testing. To check this, I pulled the handicaps listed on members' profiles. This comes with a limitation, since it is member-provided information that is most likely not updated regularly. Regardless, here is what I found (see the quick note after this list on counting the "+" handicaps).
o   Range – 33 to +5; Mean – 10.09; Median – 9.2
o   If anything, having an accurate handicap listed is beneficial to being selected, because it appears that MGS wants a broad range of testers for each product, specifically balls and clubs.
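One small gotcha if you repeat the handicap pull: golf uses "+" for better-than-scratch players, so "+5" is lower (better) than 0 and has to be converted before computing the range and mean. A tiny hypothetical helper, in the same Python as the sketches above:

    def parse_handicap(text: str) -> float:
        # "+" handicaps are better than scratch: "+5" counts as -5.0.
        text = text.strip()
        return -float(text[1:]) if text.startswith("+") else float(text)

    # parse_handicap("33") -> 33.0; parse_handicap("+5") -> -5.0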
     
    Conclusion
For the most part, MGS sticks to what they say when it comes to choosing testers, but it appears there are some non-measurables in play when it comes to selecting first-time testers. It does not appear that there is a formula, nor that selection is completely random. There seems to be a human element, which makes it hard to say, from the outside looking in, exactly why specific members were chosen over others.
Shoutout to @Brian A, @jlukes, @B.Boston, @txgolfjunkie, @gavinski91, @Mtbryant01, @daviddvm for having been selected to review over $3,000 worth of equipment! These individuals were selected for this year's or last year's CobraConnect challenge, so each of them had at least one product review in addition to CobraConnect in the past 18 months.
     
TL;DR – Post quality content frequently; if you are given a chance to test an item, take it seriously; and keep posting. If you're new and you stay involved, your time will come.
     
Okay, I'm tired. Let me know your thoughts!
  2. Like
    TBS got a reaction from GolfSpy_KFT in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  3. Like
    TBS reacted to Golfspy_CG2 in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
As someone who has had a hand in selecting testers for the past four years, I found this research and post very interesting. I think all of the MODS had to take a deep breath when we saw the title and the beginning of the thread. While I am confident that the system handed down from the moderators before us, and continued under our management, works, you never know what the numbers may say.
I think MPR said most everything above better than I can say it. But one thing I will add on to his post, in reference to some of the stats that @TBS mentioned: I think the median number of posts is a bit high. While he did a very good job of explaining that he has no way to know the number of posts at the time testers were selected, I can tell you from 4 years of history that the number of posts increases much faster after being selected for a first test than the numbers indicate. There are many cases where the number is in the 100s, and more than a handful with less than 100 – granted, those are the smaller tests that he mentioned.
Where he was probably close to spot on is the interactions-per-post mean of 1.92. This is a number we look at, as it separates out members who have a high volume of posts simply from the number of years they've been here (applying for 30 tests a year will add up). The reputation points earned by providing quality posts show up in this number. A score of 2.0 or higher will go a long way toward breaking any possible ties when it comes down to the last spot in a test.
     
  4. Like
    TBS got a reaction from cksurfdude in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  5. Like
    TBS reacted to golfish! in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
Definitely agree on the topic of getting selected for small things first, then getting selected for bigger items. I started with a test for CBD oil, then got selected for a Ben Hogan 3W test, and now I am testing the Motocaddy electric push cart. It makes sense that the admins bring you in on testing with a relatively lower-value item to make sure you are reliable and can write a coherent review, then put you in the hat for bigger-ticket items.
I also agree that as I have been selected for testing, I have become more selective with my applications. Before my first test, I applied for everything! After actually completing a test, I realized how much work goes into a real review, and now I only apply for items I am actually interested in and willing to put the work into. Plus, I am a perfectionist, so I end up writing this huge review and then have to pare it down to keep it within the guidelines – extra work for me xp.
I also approach testing opportunities from the perspective that sharing is caring. There are only so many testing opportunities and many times more eligible forum members. As a courtesy, if you have already been chosen for some testing opportunities (as @GolfSpy MPR mentioned, if you are currently testing you won't be selected, but that should be a given!), you should give others an opportunity (and give the admins fewer applications to go through, lol).
    Overall great post @TBS!
  6. Like
    TBS got a reaction from GolfSpy_APH in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  7. Like
    TBS got a reaction from cksurfdude in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Go ahead and share... Isn't MGS about providing "the truth and nothing but" and shouldn't their forum fall under the same mission?
  8. Like
    TBS got a reaction from cksurfdude in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Yes - That should be added somewhere. In all of those cases, these "special" selections were clearly stated in the Tester announcement if I remember correctly. 
  9. Like
    TBS got a reaction from pakman92 in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  10. Like
    TBS got a reaction from silver & black in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  11. Like
    TBS got a reaction from sirchunksalot in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  12. Love
    TBS got a reaction from TR1PTIK in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Background 
    After I took a few months off of MGS due to some work and life things that needed my attention, I finally got some time to log back on and post a long overdue review. After posting, I found myself curious as to how I was selected and the likelihood of being selected again, especially since my product was so niche and did not receive many interactions. So I took the challenge upon myself to do a not so scientific study of how reviewers are selected and compare it to the post (https://forum.mygolfspy.com/faq/testing/how-are-equipment-testers-chosen-r11/) by the Mods on how testers are selected. While there are no bombshells, I think we Spys understand the value of data in telling our story. So, I did my best at pulling all data from the past 18 months of reviews
     
    Methods 
    First off, I am no data scientist. From DJ to Bryson, I’m somewhere in the middle. I mainly do very basic data analysis in my role through excel. I know, I know, SQL is much more efficient and I’ll get around to learning it. For this study, I focused on 18 months of product reviews. I chose this timeframe to get some data points outside of the pandemic. I started going through the review pages and noticed it was not as organized as I thought, so I chose to do only 18 months of reviews that were already on the “Past Opportunities” page in the dropdown menu. Like I said, I believe that there may have been a few more reviews that have not been archived to this page, but I don’t think it was significant.
    I then did the manual labor of data entry and collection. I clicked on each individual “TESTERS ANNOUNCED” within the time frame, recorded the following data points 
    -       date of review published
    -       product
    -       rough product MSRP 
    -       MGS username
    -       If the User has a Donor badge on their profile
    -       Content count
    -       Total reputation
    -       Month and year MGS account was created
    -       City and State
    Once all of this information was input into the spreadsheet, I then added a note if they had multiple reviews and I also created an additional metric to measure the quality of the posts. All analysis comes from these data points.
     
    Limitations
    Overall, the main limitation is that this dataset was manually curated, so there could be a mistake, but any mistake would be minor. I have checked that there is not an additional 0 added somewhere but there could be +/- 5 units off when it comes to reputation due to fat fingers on a keyboard. Additionally, the MSRP was calculated off a quick Google search. I get that there could be sales, or the price was actually $25 higher at the time of testing. Finally, the content created, and total reputation numbers are not accurate to the time the member was selected. There is no way for me to go back and see this information, but I proceeded with collecting this because for the most part, I believe that most people’s posts grow linearly. So, the quality of posts metric should not be too far off. I will address that in bigger reviews (Cobraconnect), this ratio can get skewed. Enough with the small talk, let’s take a look at the data.
     
    Data:
    Let’s take a look at the data!
    By the numbers-
    -       48 total products
    -       202 unique testers
    -       ~$119,625 total costs of products tested
     
    As I said in the introduction, I will be using the MGS selection criteria as a framework of analysis.
    Picture – I did not go through and collect this data. This is also the easiest of all of them. Put something up there! I’ve seen pictures of the people (I assume), cartoons, their children, memes, etc. Just put something up. (Inclusive, but probably true)
    Prove review chops - 71 (35%) of people had more than 1 review. If you get chosen, even for the smallest thing, a pushcart distance tracker, a dozen balls, etc. take it seriously and write a good review. This means good pictures, concise statements that clearly articulate your opinion on a product, timely submissions, and good follow ups. A good first review will go a long way to getting your second review and the data clearly backs this up. Additionally, one anecdotal note, it was very common for there to be a progression of items to review, starting with a smaller item and then growing to more expensive items. (True)
Start posting – Please remember that this category is not perfect because I was not able to pull the post and reputation numbers as of the time each tester was selected; I am assuming that people post at a roughly linear rate. I looked at three areas for this section: number of posts, total reputation, and a third metric that I created, the Quality of Post Score, which measures how many interactions per post a reviewer gets. This metric speaks to the quality of a member’s posts for the forum. (A small sketch of how these numbers could be computed follows the table below.)
     
              Total number of posts   Total reputation   Quality of Post Score*
Range         5 - 14,307              3 - 26,497         0.09 - 5.8
Mean          1,412                   2,866              1.92
Median        513                     824                1.85

*Quality of Post Score = total reputation ÷ total number of posts, computed per tester.
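Here is a minimal sketch of how these summary statistics, including the per-tester Quality of Post Score, could be computed from the same hypothetical testers.csv (column names carried over from my earlier sketch, not the author’s actual file):

```python
import csv
import statistics

# Load the manually collected data (testers.csv and its column names
# are my own illustration, carried over from the Methods sketch).
with open("testers.csv", newline="") as f:
    rows = list(csv.DictReader(f))

posts = [int(r["content_count"]) for r in rows]
reputation = [int(r["total_reputation"]) for r in rows]

# Quality of Post Score: reputation earned per post, computed per tester.
qops = [rep / p for rep, p in zip(reputation, posts) if p > 0]

for label, values in [("Total number of posts", posts),
                      ("Total reputation", reputation),
                      ("Quality of Post Score", qops)]:
    print(f"{label}: range {min(values)} - {max(values)}, "
          f"mean {statistics.mean(values):.2f}, "
          f"median {statistics.median(values):.2f}")
```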
Because of the large ranges, with bunching at the higher end of each category, I would suggest forum members look at the median. Your best chance of being chosen is to have over 500 posts and over 800 total reputation, which is roughly in line with the median of 1.85 interactions per post. Like I said earlier, a number of members were chosen with less than this, but most of those under 500 posts were in the 100-200 range. The biggest takeaway is to post frequently and create posts that others want to engage with. I think an average user can get to 200+ posts in 2-3 months.
There might be an issue that leads to inequity when it comes to being chosen. As I started reviewing posts and interactions, I noticed that if you are chosen for a more high-profile test, your posts and interactions increase at a faster rate than other members’. This easily leads to a boost in your profile, making it easier to be selected again. I don’t see a way to smooth out this distinct increase, but it is something that I hope the mods take notice of. (True)
Become an MGS Donor
This category comes with major limitations. The only public piece of data I was able to collect was whether the member displays the MGS Donor badge on their profile. From this information, I found that 59.9% of testers had a Donor badge on their profile. I suspect the true number is closer to 70-80%, since some donors have not reached out to get the badge added. (Inconclusive, but probably true)
Overall, MGS is open about how testers are selected. For the most part the criteria are clear, and the data shows that they follow them.
    Common misconceptions
    I wanted to add a section that debunked some of the common misconceptions that I had when I first joined the forum. 
    -       I have to be a member for a long time before I get anything (FALSE)
    o   Over the past 18 months, members who were selected had profiles created from March 2010 – November 2020. 42 testers (21%) joined in 2020. If you’re new and are contributing to the forum, you have a decent chance of being selected.
    -       Once I get one review, I won’t ever get another (FALSE)
o   71 members (35%) had more than one review in 18 months. I suspect that if I went back even further, they had other reviews too. As I said in the section above, if you get chosen and do a good review, you actually put yourself at an advantage to be selected again.
o   No reviewer had more than 3 reviews in the past 18 months. It appears that this is the limit; it looks to be extremely rare for a member to get more than one review every six months.
    -       I have to be good at golf to be selected for a review (FALSE)
o   This idea is very much false: MGS wants members of all skill levels to be involved in testing. To check this, I pulled the handicaps listed on members’ profiles. This is a limitation because it is member-provided information that is most likely not updated regularly. Regardless, here is what I found.
Range – 33 to +5; Mean – 10.09; Median – 9.2 (a small sketch of this handicap math follows below)
o   If anything, having an accurate handicap is beneficial to being selected, because it appears that MGS wants a broad range of testers for each product, specifically balls and clubs.
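Here is a minimal sketch of how those handicap numbers could be computed, assuming handicaps are recorded as text the way profiles display them, where a leading “+” means better than scratch (the sample values and parsing are my illustration, not the author’s actual spreadsheet):

```python
import statistics

# Hypothetical handicap strings as they might appear on profiles.
# A leading "+" means better than scratch, so "+5" is numerically -5.
raw = ["33", "12.4", "+5", "9.2", "18"]

def to_number(h: str) -> float:
    # "+5" -> -5.0 so better golfers sort below scratch (0)
    return -float(h[1:]) if h.startswith("+") else float(h)

values = [to_number(h) for h in raw]
print("Range:", max(values), "to", min(values))  # worst to best
print("Mean:", round(statistics.mean(values), 2))
print("Median:", round(statistics.median(values), 2))
```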
     
    Conclusion
For the most part, MGS sticks to what they say when it comes to choosing testers, but it appears that there are some non-measurables in play when selecting first-time testers. It does not appear that there is a formula, or that it is completely random. There seems to be a human element, which makes it hard to say exactly why specific members were chosen over others from the outside looking in.
Shoutout to @Brian A, @jlukes, @B.Boston, @txgolfjunkie, @gavinski91, @Mtbryant01, and @daviddvm for having been selected to review over $3,000 worth of equipment! These individuals were selected for this year’s or last year’s CobraConnect challenge, so each of them had at least one product review in addition to CobraConnect in the past 18 months.
     
TL;DR - Overall: post quality content frequently; if you are given a chance to test an item, take it seriously; and keep posting. If you’re new and you stay involved, your time will come.
     
Okay, I’m tired. Let me know your thoughts!
  13. Haha
    TBS got a reaction from JohnSmalls in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Go ahead and share... Isn't MGS about providing "the truth and nothing but" and shouldn't their forum fall under the same mission?
  14. Like
    TBS got a reaction from kardboard.kid in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  15. Like
    TBS got a reaction from JohnSmalls in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Yes - That should be added somewhere. In all of those cases, these "special" selections were clearly stated in the Tester announcement if I remember correctly. 
  16. Like
    TBS got a reaction from Chevelle in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  17. Like
    TBS reacted to mpatrickriley in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
    Whoa! This is amazing work.
As the one who wrote most of the FAQ post and who has a hand in choosing testers, I find it reassuring to see data saying that we're doing what we say we're doing. To be honest, I'd rather see the number of new testers be even higher than it is. We are always looking to get new members involved in testing. I know we have a ton of people who join the Forum just to apply for tests: if you're that guy and you're reading this, please start posting, even a little. It's not feasible for us to pick people who have no posting history; we just don't know if we're going to get a review, and ghosting reviews jeopardizes relationships with the OEMs that make the reviews possible. But if you post, even a little bit, we're going to notice, and we will try to get you involved.
On the Donor thing: here's a reminder that the Donor tags aren't automatically applied, and we Forum mods/admins don't have access to the donation history for the blog. If you want to be recognized as a donor to MGS on the Forum, shoot one of us a DM with your transaction ID. The stats bear out what we've said before: donations don't guarantee reviews (this isn't pay for play), but it's a component that we take into account, sometimes to break ties.
As others have mentioned: being selective in applications will often help your cause. There are some who apply for everything, even when they've just been picked for something and haven't completed that review yet (hint: you'll never be picked for a review while you're still in the process of another review). Selective application guy has a leg up on apply-for-everything guy.
    As others have said, we aim to run a pretty open shop here. If you've got any other questions about the selection process, I'm happy to answer them.
  18. Like
    TBS got a reaction from MattF in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  19. Like
    TBS got a reaction from gavinski91 in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  20. Like
    TBS got a reaction from hartrick11 in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  21. Like
    TBS got a reaction from edingc in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  22. Like
    TBS got a reaction from Brian A in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
  23. Like
    TBS reacted to GolfSpy_SHARK in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
Your original post hit on how, once selected, you become a little more selective about which tests you apply for. I know this is true for me: once I got the first one done and had the lay of how reviews go (I was not selected initially, but someone had to back out due to the pandemic, which gave me the opportunity), I became comfortable with the process and have really tried to focus on things that truly interest me. The site has given me insight into things I never really looked at before. It has prompted a few of my purchases over the past year (push cart and PlaneMate) and has given me some great information along the way. Also some great private chit-chats about life in general, of note with @Shankster and @bens197, and getting out with some great members, @StrokerAce and @Wedgie, and of course my main group player @B.Boston.
    With the new process of getting folks involved to join from social media I hope a lot realize the benefits of the site and are able to enjoy it as much as I do!
  24. Like
    TBS reacted to GolfSpy_APH in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   
Hopefully without repeating what I said in my other post: we know there is a human element to it, and I think MGS, the mods, and the forum community are pretty open about what it generally takes to be selected. You have mentioned several of those points in your post, and there are several threads on what it takes to be selected as well.
    The forum's mission statement: “MyGolfSpy's Community Forum aspires to continued growth while adhering to its core goal of being a place for serious, honest and fun discussion on both golf and life, and a place to go for product reviews - a true online ‘19th Hole’ where all are welcome and all members are treated with respect and friendship.”
For some, the opportunities come quickly; for others, they come much further down the line. I’m pretty sure it took a couple of years before I was selected for one, and it was well worth the wait. There is so much that the forum has to offer, and testing is only a small part of it. Yes, most people come here at first because of testing; however, as has been discussed in many threads, it is the community and everything else the site has to offer that keeps us here.
Again, a lot of the general guidelines are laid out, as you have pointed out; from there, provide value in the content you produce. It is one thing to have 1,000 comments across various threads; it is another to have valuable content for the members here to read and react to. I still believe that as a tester you have a big responsibility to the members, to MGS, and of course to the OEM to provide a quality review. Whether you liked the product or not, an honest, quality review is what you sign up for, not just a piece of free gear (I want to be clear, I don’t believe you think that either, at all). With that there needs to be a level of trust, and it is why I mentioned in my first post that a good intro or unofficial review can go a long way in helping members get noticed and gain that trust.
I love seeing new people get selected, as it gives me a chance to get to know them a little better. I have built some great friendships through this site and enjoy the community. I hope that all who are selected enjoy the process and can assist other new members, guiding them in the right direction so that they can hopefully be selected in the future as well.
  25. Like
    TBS got a reaction from GolfSpy_BNG in Unofficial Review – How MGS chooses their reviewers, a non-scientific study of 18 months of member reviews.   