Sunday, January 31, 2016

Hillary Clinton's Biggest 2016 Rival: Herself

In a recent post I noted that, despite Bernie Sanders doing better on many important indicators, Obama in 2008 received 3x more media coverage (relative to Clinton) than Sanders has in 2016.

Reasonably, a reader of my blog noted that not all coverage is equal: a presidential hopeful might prefer no coverage to negative coverage. So I decided to do some textual analysis of the headlines, comparing Sanders and Clinton in 2016 and Obama and Clinton in 2008.

I looked at 4200 headlines mentioning Obama in 2007/08, Sanders in 2015/16, or Clinton in 2007/08 or 2015/16, scraped from major news sources: Google News, Yahoo News, Fox, New York Times, Huffington Post, and NPR (from January 1st, 2007 to January, 2008 and January 1st, 2015 to January, 2016).

First I constructed word clouds for the Clinton and Sanders race.
Figure 1: Hillary Clinton's 2015/2016 headline word cloud. Excluding "hillary" and "clinton" as terms when constructing the cloud.
Figure 2: Bernie Sanders' 2015/2016 headline word cloud. Excluding "bernie" when constructing the cloud.
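For readers who want to reproduce something like these figures, here is a minimal sketch of how such a cloud might be built in Python with the wordcloud package. The file name and column names are assumptions, not the actual pipeline used for this post.

```python
# Minimal sketch: build a headline word cloud, excluding the candidate's own name.
# "headlines.csv" and its columns ("headline", "candidate") are hypothetical.
import pandas as pd
from wordcloud import WordCloud, STOPWORDS

df = pd.read_csv("headlines.csv")
text = " ".join(df.loc[df["candidate"] == "clinton", "headline"].str.lower())

# Drop the candidate's own name so it does not dominate the cloud (as in Figure 1).
stopwords = STOPWORDS | {"hillary", "clinton"}

cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords=stopwords).generate(text)
cloud.to_file("clinton_2016_cloud.png")
```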
Comparing Figure 1 and Figure 2, there appear to be some pretty significant differences. First off, the most frequent term in Figure 2 is "Clinton", followed mostly by general campaign terms. "Black" likely reflects coverage of the black vote, since there is some concern that Bernie can't win it, perhaps combined with some high-profile black political activists endorsing him.

Figure 1, though, is a world of difference. Almost every major word references a scandal: email and emails, Benghazi, private server, and foundation. These refer to the email scandal, in which Clinton set up a potentially illegal private server to house her official emails while Secretary of State; Benghazi, the affair in which diplomats died as a result of terrorist action, which many have blamed on Hillary Clinton; and the alleged unethical misuse of Clinton Foundation funds as a slush fund for the Clinton family's luxurious tastes. Interestingly, "Bruni", as in Frank Bruni, a New York Times columnist who has taken some heat for his critical coverage of Hillary Clinton, also appears in the cloud.

But is this really so bad? How do these word clouds compare with those of 2007/2008?

Figure 3: The word cloud from 2007/2008 for Hillary Clinton excluding "hillary" and "clinton".
Figure 4: The word cloud from 2007/2008 for Barack Obama excluding "obama".
From Figures 2, 3, and 4 we can see a significant and substantive difference from Figure 1. In those figures the most newsworthy thing to report is the rivalry for the nomination; all other issues are dwarfed. In Figure 1, scandals and criticism of Hillary Clinton abound. Looking at these word clouds, I would suspect that the Clinton camp would be happier to have the news coverage it had in the 2008 campaign than the coverage it is receiving now.

But are these frequency word graphs really a reasonable assessment of the media? What of the overall tone of these many articles?
Figure 5: Sentiment analysis of the news coverage of Clinton 2008 and 2016, Obama 2008, and Sanders 2016. Scales have been standardized so that a positive rating indicates a higher-than-average likelihood of an emotion being displayed and a negative rating a lower-than-average likelihood.
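For the curious, here is a rough sketch of the kind of lexicon-based emotion scoring and standardization that could produce a figure like this. The lexicon file, its format, and the column names are all assumptions; the actual analysis may well have used a different toolchain.

```python
# Sketch of NRC-style emotion scoring of headlines, standardized across groups.
# "nrc_lexicon.csv" (columns: word, emotion) and "headlines.csv" (columns:
# headline, group) are hypothetical files used only for illustration.
import pandas as pd

lex = pd.read_csv("nrc_lexicon.csv")
lex_map = lex.groupby("word")["emotion"].apply(set).to_dict()

def emotion_counts(headline: str) -> pd.Series:
    # Count how many lexicon words of each emotion appear in the headline.
    counts = {}
    for w in headline.lower().split():
        for emo in lex_map.get(w, ()):
            counts[emo] = counts.get(emo, 0) + 1
    return pd.Series(counts, dtype=float)

df = pd.read_csv("headlines.csv")               # group e.g. "Sanders 2016"
scores = df["headline"].apply(emotion_counts).fillna(0)
scores["group"] = df["group"]

# Average emotion per group, then standardize each emotion across the groups.
group_means = scores.groupby("group").mean()
standardized = (group_means - group_means.mean()) / group_means.std()
print(standardized.round(2))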
From Figure 5 we can see that headlines mentioning Sanders score the highest on the emotions anticipation, joy, surprise, trust, and positivity, and the lowest on anger, fear, sadness, and negativity, while Clinton (2016 and 2008) scores the highest on anger, disgust, fear, sadness, and negativity and the lowest on anticipation, joy, trust, and positivity.

Compared with 2008, Clinton 2016 articles appear to have less anger, anticipation, joy, trust, and fear while also having more disgust, sadness, surprise, and negativity, as well as slightly more positivity. Overall, the prospects as gauged from the emotions engendered by the media appear pretty bleak for Hillary Clinton.

It is interesting to note that articles about Sanders score very similarly in emotional direction to those about Obama, except that Sanders seems to be outperforming Obama with higher anticipation, joy, trust, and positivity while also getting lower scores for anger, fear, sadness, and negativity. In only one indicator does Obama do better than Sanders: disgust. The largest emotional difference between Obama 2008 and Sanders 2016 is that Obama articles scored the lowest on surprise while Sanders articles have scored the highest.

Overall, we must conclude that, at least in the emotional tone of articles if not the volume of coverage, Sanders is doing significantly better than Hillary Clinton and even better than Obama was at this point in the 2008 presidential race.

Thursday, January 28, 2016

Obama 2008 received 3x more media coverage than Sanders 2016

Many supporters of presidential hopeful Bernie Sanders have claimed that there is a media blackout in which Bernie Sanders has, for whatever reason, been blocked from communicating his campaign message. Combined with a dramatically cut Democratic debate schedule (from 18 debates in 2008 with Obama to 4 in 2016 with Sanders), scheduled on days of the week least likely to draw a wide audience, this is seen as a significant attempt to rig the primary to ensure Clinton gets the nomination.

Despite strongly supported petitions demanding more debates, with nearly 120 thousand and 30 thousand signatories respectively, Debbie Wasserman Schultz, chair of the Democratic National Committee (DNC) and former co-chair of Hillary Clinton's 2008 campaign, has repeatedly denied the possibility of considering more debates.

There was also a complex fiasco earlier in the year dubbed "DataGate", in which the DNC temporarily cut off the Sanders campaign's access to critical voter information two days before the third debate, based on information presented by Schultz and refuted by the vendor. Access to the data was quickly restored after a petition demanding action gathered 285 thousand signatures in less than 48 hours.

With these two scandals in mind, Sanders supporters have become increasingly paranoid about what they view as the "establishment" acting to protect its candidate, Hillary Clinton. In this light, they have been very frustrated by the lack of media coverage of Sanders. Supporters claim that he and his views are almost entirely unrepresented by the news media.

I have been wary of jumping on this bandwagon. It seems natural that the Democratic front-runner would get more coverage than a lesser-known rival. Clinton naturally attracts media attention, as she seems to have a new scandal every day, while Sanders seems to be a boy scout who, apart from being jailed for protesting segregation in the 60s, not enriching himself from private speaking fees and book deals, adamantly defending the rights of the downtrodden, and standing up to the most powerful people in the world, really has little "newsworthy" about him.

Setting aside the difficult question of what the media considers "newsworthy", I would like to ask the question, "Is Sanders getting more or less media coverage than Obama got in 2007/2008?"

In order to answer this question, I looked back at the front pages of online news sources from 2015 and 2007. Starting on January 1st and going up until yesterday, I scraped the headlines of Google News, Yahoo News, Huffington Post, Fox News, NPR, and the New York Times.


Table 1: This table shows the frequency with which the names "Sanders", "Obama", or "Clinton" (or "Bernie", "Barack", or "Hillary") come up in each of the news sources for which headlines were recorded, in the current race compared with the 2008 race. The Sanders/Clinton and Obama/Clinton columns show the relative frequency, with numbers less than 1 indicating that the challenger received fewer headlines than Clinton.


Race  Web  N  Sanders  Obama  Clinton  Sanders/Clinton  Obama/Clinton
2008 NYT 25902 1 100 138 0.01 0.72
2008 Fox 39132 10 167 357 0.03 0.47
2008 Google 8452 0 103 131 0.00 0.79
2008 HuffPost 1281 0 40 60 0.00 0.67
2008 NPR 20878 0 90 94 0.00 0.96
2008 Yahoo 27308 3 266 334 0.01 0.80
2016 NYT 36703 142 592 531 0.27 1.11
2016 Fox 32971 78 1284 898 0.09 1.43
2016 Google 21036 67 378 253 0.26 1.49
2016 HuffPost 45131 236 925 549 0.43 1.68
2016 NPR 9216 52 259 106 0.49 2.44
2016 Yahoo 19844 44 346 206 0.21 1.68

From Table 1, we can see that NPR is the news source with the most balanced coverage of Obama in 2008 and Sanders in 2016. Fox is the least balanced, with almost no coverage of Sanders. It is worth noting that the coverage of Sanders is abysmal in general, with no agency reporting on Sanders even half as much as on Clinton. This is a significant deviation from Obama's race against Clinton, in which only Fox reported on him at slightly less than half Clinton's rate.
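As a reference, here is a minimal sketch of how counts and ratios like those in Table 1 might be computed from a scraped headline archive. The file and column names are assumptions, not the actual scraping pipeline.

```python
# Sketch: count headlines mentioning each candidate per source and compute ratios.
# "headlines.csv" with columns "source", "race", "headline" is hypothetical.
import pandas as pd

df = pd.read_csv("headlines.csv")
names = {"Sanders": "sanders|bernie",
         "Obama":   "obama|barack",
         "Clinton": "clinton|hillary"}

# Flag each headline that mentions each candidate (case-insensitive).
for cand, pattern in names.items():
    df[cand] = df["headline"].str.contains(pattern, case=False, na=False)

counts = df.groupby(["race", "source"])[list(names)].sum()
counts["Sanders/Clinton"] = (counts["Sanders"] / counts["Clinton"]).round(2)
counts["Obama/Clinton"] = (counts["Obama"] / counts["Clinton"]).round(2)
print(counts)
```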

Table 2: This table shows the total number of news reports across all agencies for each candidate in each race. 

Race  N  Sanders  Obama  Clinton  Sanders/Clinton  Obama/Clinton
2016  164901 619 3784 2543 0.24 1.49
2008  122953 14 766 1114 0.01 0.69

From Table 2 we can see that neither Sanders nor Obama received nearly as much coverage as their rival Hillary Clinton. Sanders, however, seems to be at a significant disadvantage compared with Obama at the same point in the previous race: Obama on average had about two articles written about him for every three written about Clinton, while Sanders has only one article written about him for every four written about Clinton.

By this time in the 2008 primary race, Senator Obama had received 2.8 times as much coverage relative to his rival Hillary Clinton as Senator Sanders has (.69/.24 ≈ 2.8). This is despite Sanders doing better than Obama on many key metrics (crowd sizes, donations, and polling).

With Sanders taking the lead in New Hampshire and running neck and neck with Clinton in Iowa, we might wonder if coverage is improving for the Sanders campaign.

Figure 1: The top curve is the frequency of Obama coverage relative to that of Clinton, while the bottom curve is that of Senator Sanders relative to Clinton. A 1 on the y-axis represents equal coverage of the challenger and Clinton.
From Figure 1 we can see that, despite a remarkable performance in energizing large crowds, doing well in polls, and collecting an immense quantity of donations, media coverage appears dreadful for Sanders: even at the current peak, for every two stories about Clinton there is only one story about Sanders.
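A sketch of how the coverage ratio plotted in Figure 1 might be computed month by month, under the same hypothetical file layout as above:

```python
# Sketch: monthly Sanders/Clinton headline ratio over time, as in Figure 1.
# "headlines.csv" with a parseable "date" column is hypothetical.
import pandas as pd

df = pd.read_csv("headlines.csv", parse_dates=["date"])
df["sanders"] = df["headline"].str.contains("sanders|bernie", case=False, na=False)
df["clinton"] = df["headline"].str.contains("clinton|hillary", case=False, na=False)

# Count mentions per calendar month and take the ratio.
monthly = df.groupby(df["date"].dt.to_period("M"))[["sanders", "clinton"]].sum()
monthly["ratio"] = monthly["sanders"] / monthly["clinton"]
print(monthly["ratio"].round(2))
```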

This is probably in part due to how the DNC and the Clinton camp (doubtful there exists any difference) appear to have whitewashed the primary, restricting the debate structure and constantly adjusting Clinton's positions so that they appear indistinguishable from those of Sanders.
Figure 2: A popular Twitter meme which conveys the frustration many have with the media.

In the number of written articles, Bernie has suffered from an apparent media blackout. He has also suffered from a lack of airtime, which we can see from Figure 2 in the number of minutes of coverage aired as of the 20th of December.

The criticisms of the DNC rigging the debate process and of bias in which candidates the media chooses to follow are significant concerns for any democracy. This all fits well within a "systemic" corruption framework of thinking. However, this framework might not accurately fit what is actually happening with the media and within the DNC. Additional investigation is required before further conclusions can be drawn.

But even in the presence of uncertainty as to the true nature of the presidential campaign, accusations such as these and others levied against Hillary Clinton and the DNC should be investigated with due diligence, as they represent a threat to the existence of our democracy far more pernicious and dangerous than anything Middle Eastern terrorists can muster.

Tuesday, January 19, 2016

Who are Turkopticon's Top Contributors?

In my most recent post, "Turkopticon: Defender of Amazon's Anonymous Workforce", I introduced Turkopticon, the social art project designed to provide basic tools for Amazon's massive Mechanical Turk workforce to share information about employers (requesters).

Turkopticon has been a runaway success, with nearly 285 thousand reviews submitted by over 17 thousand reviewers since its inception in 2009. Collectively these reviews contain 53 million characters, which maps to about 7.6 million words (assuming an average word of 5 letters plus two characters of spacing). At 100 words every 7 minutes, this represents approximately 371 days collectively spent just writing reviews. It is probably safe to consider this an underestimate.
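A quick back-of-the-envelope check of that estimate, using the rounded figures above:

```python
# Verify the rough character -> word -> time conversion quoted above.
chars = 53_000_000
words = chars / 7                     # ~5 letters per word plus 2 characters of spacing
minutes = words / 100 * 7             # 100 words written every 7 minutes
print(round(words / 1e6, 1), "million words")       # 7.6
print(round(minutes / 60 / 24), "days of writing")  # ~368, close to the ~371 quoted above
```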

So given this massive investment of individuals in writing these reviews, I find myself wanting to ask, "who is investing this kind of energy producing this public good?"

In general, while there are many contributors, the top 500 account for 54% of the reviews written, with the top 100 reviewers making up 30% of the reviews and the top 15 representing 11% of all reviews written.

Figure 1: From this graph we can find the Gini coefficient for number of submissions, at around 82%, indicating that a very few individuals are doing nearly all of the work.
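For those curious, a Gini coefficient like the one quoted above can be computed directly from the reviews-per-reviewer counts. A minimal sketch, assuming a hypothetical reviews.csv with one row per review:

```python
# Sketch: Gini coefficient of reviews-per-reviewer, the statistic quoted in Figure 1.
# "reviews.csv" with a "reviewer" column is hypothetical.
import numpy as np
import pandas as pd

reviews = pd.read_csv("reviews.csv")
counts = np.sort(reviews["reviewer"].value_counts().to_numpy())  # ascending

# Standard formula for the Gini coefficient of a sorted sample.
n = counts.size
ranks = np.arange(1, n + 1)
gini = 2 * np.sum(ranks * counts) / (n * counts.sum()) - (n + 1) / n
print(f"Gini coefficient: {gini:.2f}")   # reported as roughly 0.82 in Figure 1
```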
Within Turkopticon there is no ranking system for reviewer quality, so it is not obvious who the top contributors are or what their reviewing patterns look like. In this article we will examine some general features of the top contributors.

Table 1: A list of the Top 15 Turkopticon review contributors. Rank is the reviewer's rank by number of reviews written. Name is the reviewer's name. Nrev is the number of reviews written. DaysTO is the number of days between the oldest review and the most recent review. Nchar is the average number of characters written in each review. FAIR, FAST, PAY, and COMM are the quantitative scales on which Turkopticon asks reviewers to rate requesters. FAIR indicates how fair the requester was in rejecting or approving work. FAST indicates how quickly the requester approved or rejected work. PAY indicates how the reviewer perceived the payment for the work. And COMM refers to communication: if the worker attempted to communicate with the requester, how well that requester addressed the worker's concerns.

Rank  Name  Nrev  DaysTO  Nchar  FAIR  FAST  PAY  COMM
1  bigbytes  5236  294  219  4.89  4.99  2.74  3.26
2  kimadagem  3732  327  490  4.98  4.95  3.23  2.45
3  worry  2637  649  186  4.97  4.87  3.29  3.84
4  jmbus...@h...  2539  538  110  3.10  3.05  3.08  1.55
5  surve...@h...  2488  177  344  4.85  4.77  4.13  4.27
6  jaso...@h...  2100  260  78  4.98  4.90  4.78  4.73
7  shiver  1721  303  139  4.94  4.89  4.44  3.81
8  Thom Burr  1594  434  288  4.69  4.81  4.54  3.52
9  jessema...@g...  1539  467  157  4.96  4.70  3.64  4.00
10  absin...@y...  1320  309  75  4.97  4.91  3.99  3.78
11  Rosey  1313  634  101  4.80  4.76  4.35  4.18
12  CaliBboy  1281  83  201  4.02  4.07  2.71  3.84
13  ptosis  1278  367  110  3.00  3.04  2.89  3.29
14  NurseRachet (moderator)  1274  669  351  4.76  4.70  3.91  3.72
15  TdgEsaka  1234  523  258  4.75  4.81  3.73  3.00

Find the full list as a google document here (First Tab).

From Table 1 we can see that all of the top 15 reviewers have contributed over 1,200 reviews, with bigbytes being the most prolific reviewer, contributing over 5,200. In terms of time active on Turkopticon, NurseRachet (a forum moderator) has been on the longest, followed by worry and Rosey. In terms of the longest winded, kimadagem has the longest average character count per review at 490 characters, or approximately 70 words per review, while absin...@y... has the shortest reviews at only 75 characters, or around 10 words.

In terms of the averages of the four rating scales, there is a fair bit of diversity among the top reviewers, with jaso...@h... having the highest average score across the four scales at about 4.8 and jmbus...@h... having the lowest, around 2.7, followed by ptosis with an average a tiny bit higher than 3.

So now we have a pretty good idea of what in general the top contributors to Turkopticon look like.

But what of the quality of the contributions?

In order to understand what a quality contribution in Turkopticon looks like we must consider the standards that the community has come up with after years of trial and error.
1. The four different scales should be treated as distinct categories. That is, a high pay rate should not cause someone to automatically give a high Fairness rating, or vice versa.
2. To this end, what are referred to as 1-Bombs, attempts to artificially drop a requester's score by ranking all scales 1, should be avoided. Similarly, 5-Bombs should also be avoided.
3. Within Turkopticon there is also the ability to flag reviews as problematic. If one of your reviews is flagged, it means someone has a problem with it.
4. In general we would like reviews to be approached with a level head so that reviewers write independent reviews rather than ones based on their current mood.
5. Finally, in general we would like reviewers to review as many categories as they can when writing reviews.

Variables
From these 5 guidelines, I will attempt to generate variables that measure each of these targets.
1. For the distinct-scales guideline, I will focus on the relationship between Pay and the other three scales within each reviewer's set of reviews (FairPay, FastPay, and CommPay for the correlations of Fair, Fast, and Comm with Pay respectively). The reason I focus on Pay is that it seems to be the scale that most often concerns MTurk workers.

Table 2: For reviewers the average correlation between Pay and other scales.
         ALL   Top 100  Top 15
FAIRPAY  0.80  0.56     0.44
FASTPAY  0.73  0.48     0.37
COMMPAY  0.81  0.67     0.61

From Table 2 we can see that the average reviewer has a very strong positive correlation between Pay and the other scales, with FAIR, FAST, and COMM in the .73-.81 range. In contrast, the Top 100, and especially the Top 15, have much lower correlations. We should not necessarily hope for a zero correlation between these factors, since one might expect that a requester who pays too little might also act unfairly, not respond quickly to submissions, or have poor communication habits.
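A sketch of how these per-reviewer correlations might be computed, assuming a hypothetical reviews.csv with lowercase scale columns:

```python
# Sketch: for each reviewer, correlate PAY with the other three scales
# across that reviewer's reviews, then average over reviewers.
# "reviews.csv" with columns reviewer, fair, fast, pay, comm is hypothetical.
import pandas as pd

reviews = pd.read_csv("reviews.csv")

def pay_correlations(g: pd.DataFrame) -> pd.Series:
    return pd.Series({
        "FairPay": g["fair"].corr(g["pay"]),
        "FastPay": g["fast"].corr(g["pay"]),
        "CommPay": g["comm"].corr(g["pay"]),
    })

per_reviewer = reviews.groupby("reviewer").apply(pay_correlations)
print(per_reviewer.mean().round(2))   # compare with the "ALL" column of Table 2
```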

2. 1-Bombs and 5-Bombs are easy to observe in the data as reviews with all 1s or all 5s. However, it is worth noting that a review of all 1s or all 5s might actually be valid given the circumstances. The variables 1Bomb and 5Bomb measure the likelihood that an individual reviewer's review falls into either of these two categories.

3. Flags are also directly observable. Multiple flags can be attached to a single review; the most-flagged review in my data has 17 flags. The variable FLAG is the average (expected) number of flags on an individual reviewer's reviews.

Table 3:  The prevalence rates of 1-Bombs, 5-Bombs, and Flags.
       ALL    Top 100  Top 15
1BOMB  0.192  0.038    0.019
5BOMB  0.179  0.049    0.025
FLAGS  0.014  0.005    0.005

From Table 3 we can see that the prevalence rates of 1-Bombs, 5-Bombs, and Flags are much higher among general reviewers than among the Top 100, and especially the Top 15.
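A sketch of how these prevalence rates might be computed, under the same assumed data layout as before:

```python
# Sketch: per-reviewer rates of 1-Bombs, 5-Bombs, and flags.
# "reviews.csv" with columns reviewer, fair, fast, pay, comm, n_flags is hypothetical.
import pandas as pd

reviews = pd.read_csv("reviews.csv")
scales = ["fair", "fast", "pay", "comm"]

# A bomb is a review where every completed scale equals 1 (or 5).
reviews["is_1bomb"] = reviews[scales].eq(1).all(axis=1)
reviews["is_5bomb"] = reviews[scales].eq(5).all(axis=1)

rates = reviews.groupby("reviewer")[["is_1bomb", "is_5bomb", "n_flags"]].mean()
print(rates.mean().round(3))   # compare with the "ALL" column of Table 3
```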

4. In order to attempt to measure "level-headedness" I will just look at how reviews trend from a rating perspective. That is, is the value of the current review correlated (either positively or negatively) with the value of the next review?

Table 4: The auto-regressive one step correlation between review levels. In this case the "ALL" category only includes the 3,700 reviewers who have written more than 10 reviews.

         ALL    Top 100  Top 15
FAIRar1   0.00  0.10     0.10
FASTar1   0.00  0.09     0.12
PAYar1    0.02  0.10     0.07
COMMar1  -0.07  0.04     0.04


From Table 4 we can see that inter-review correlation is pretty small, especially when compared with the correlation between Pay and the other scales within the same review (Table 2). Interestingly, for the average reviewer there is almost no correlation across reviews. This might be a result of less prolific reviewers writing fewer reviews in general, thus spacing them more widely in time and making them less likely to be sequentially influenced by personal psychological trends.
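A sketch of how the lag-one correlations in Table 4 might be computed, again under the assumed data layout (including an assumed "date" column to order each reviewer's reviews):

```python
# Sketch: lag-one autocorrelation of each scale within a reviewer's review history.
import pandas as pd

reviews = pd.read_csv("reviews.csv", parse_dates=["date"])  # column names assumed

def ar1(g: pd.DataFrame) -> pd.Series:
    g = g.sort_values("date")
    return pd.Series({col: g[col].autocorr(lag=1)
                      for col in ["fair", "fast", "pay", "comm"]})

# Only reviewers with more than 10 reviews, as in the "ALL" column of Table 4.
eligible = reviews.groupby("reviewer").filter(lambda g: len(g) > 10)
print(eligible.groupby("reviewer").apply(ar1).mean().round(2))
```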

5. Finally, completeness can be measured directly in terms of how frequently each individual scale was left uncompleted (or, conversely, completed).

Table 5: The completion rates of individual scales.

       ALL    Top 100  Top 15
FAIRC  0.849  0.665    0.705
FASTC  0.825  0.651    0.695
PAYC   0.901  0.916    0.918
COMMC  0.605  0.147    0.081

From Table 5 we can see that the completion rates of all scales are more or less equivalent between that of the general reviewers and that of the Top 100 and Top 15 except in the case of COMM. In this case we can see that the top reviewers are much less likely to rate communication.
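A sketch of how these completion rates might be computed under the same assumptions:

```python
# Sketch: completion rate of each scale per reviewer (share of reviews where it was rated).
import pandas as pd

reviews = pd.read_csv("reviews.csv")   # columns reviewer, fair, fast, pay, comm assumed
completion = (reviews.groupby("reviewer")[["fair", "fast", "pay", "comm"]]
              .apply(lambda g: g.notna().mean()))
print(completion.mean().round(3))      # compare with the "ALL" column of Table 5
```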

Constructing A Quality Scale

In order to construct the best scale given our data, we will choose variables and values that seem to be typical of the top 15 most prolific reviewers. From Tables 2 and 3 we can see very distinct differences between the average reviewer and the top reviewers. However, for the auto-correlation and completeness rates we see very little difference in general, except that the top reviewers are much less likely to rate communication. I can't know exactly why this is the case, but I suspect it is a combination of top reviewers avoiding 1-Bombs and 5-Bombs and top reviewers not typically finding it worth their time to communicate directly with requesters.

So here is my proposed index using standardized coefficients (x/sd(x)):
ReviewerProblemIndex = 3*Flag + 3*1Bomb + (1/2)*5Bomb + 1*FairPay + 1*FastPay + 1*CommPay

Because we have standardized the variables, we can read the scalars in front as directly representing the weight of each variable. Flags I weight the most heavily, as they indicate that someone in the community has a problem with the review. Next most heavily weighted are 1Bombs, which are widely regarded as a serious problem and frequently discussed on the Turkopticon forum.

5Bombs, FairPay, FastPay, and CommPay are also discussed but not considered as important (Turkopticon Discuss). I have weighted 5Bombs half as heavily as the FairPay, FastPay, and CommPay variables, as it seems cruel to penalize someone for being generous with reviews.
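A sketch of how the index might be computed once the per-reviewer components sketched above have been assembled into one table. The file and column names here are assumptions:

```python
# Sketch: the Reviewer Problem Index with standardized components (x / sd(x)).
import pandas as pd

# "reviewer_stats.csv" is assumed to hold one row per reviewer with the columns
# below, built from the per-reviewer calculations sketched earlier.
reviewer_stats = pd.read_csv("reviewer_stats.csv")
weights = {"Flag": 3, "1Bomb": 3, "5Bomb": 0.5,
           "FairPay": 1, "FastPay": 1, "CommPay": 1}

# Standardize each component by its standard deviation, then take the weighted sum.
rpi = sum(w * reviewer_stats[col] / reviewer_stats[col].std()
          for col, w in weights.items())
reviewer_stats["RPI"] = rpi
print(reviewer_stats.sort_values("RPI")[["Name", "RPI"]].head(15))  # lower is better
```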

So let's apply our index and see how our top 15 reviewers score!

Table 6: The top 15 most prolific contributors ranked based on the ReviewerProblemIndex (Index, RPI). IRank is the ranking of reviewers in terms of the RPI. Name is the reviewer's name. Nrev is the number of reviews written. Rank is the reviewer's rank in terms of number of reviews written. The other variables are described above.

IRank  Index   Name Nrev  Rank  Flag  1Bomb  5Bomb  FairPay  FastPay  CommPay
1 1.9 jessema...@g... 1539 9 0.001 0.001 0.016 0.12 0.09 0.20
2 2.1 kimadagem 3732 2 0.002 0.000 0.014 0.05 -0.01 0.27
3 3.2 worry 2637 3 0.000 0.003 0.006 0.11 0.11 0.53
4 3.5 absin...@y... 1320 10 0.000 0.000 0.007 0.24 0.13 0.55
5 3.5 bigbytes 5236 1 0.001 0.000 0.007 0.20 0.04 0.54
6 4.0 surve...@h... 2488 5 0.001 0.001 0.008 0.32 0.29 0.34
7 6.4 shiver 1721 7 0.001 0.005 0.015 0.50 0.33 0.76
8 6.6 jaso...@h... 2100 6 0.001 0.004 0.070 0.41 0.27 0.83
9 10.9 Thom Burr 1594 8 0.002 0.013 0.030 0.87 0.84 0.92
10 11.0 Rosey 1313 11 0.004 0.009 0.022 0.81 0.81 0.85
11 12.4 NurseRachet (moderator) 1274 14 0.016 0.022 0.078 0.39 0.32 0.46
12 12.7 CaliBboy 1281 12 0.022 0.004 0.005 0.20 0.21 0.47
13 13.1 TdgEsaka 1234 15 0.015 0.016 0.029 0.57 0.40 0.73
14 13.4 ptosis 1278 13 0.009 0.039 0.034 0.80 0.78 0.73
15 17.2 jmbus...@h... 2539 4 0.003 0.170 0.020 0.99 0.98 0.92

From Table 6 we can see that, in general, the more prolific reviewers also tend to be ranked higher on the RPI, with a few exceptions. One exception is "jmbus": despite being the fourth most prolific contributor, he/she is ranked at the bottom of the top 15 list. This is likely due to having the highest 1-Bomb rate in the index, with 17% of reviews being 1-Bombs. His/her ratings also seem to be almost entirely driven by Pay, as the FairPay, FastPay, and CommPay correlations are all upwards of 0.90.

Similarly, "jessema" though only being the 9th most prolific reviewer seems to have the highest quality of reviews (slightly ahead of "kimadagem") with very low Flag, 1Bomb, and 5Bomb rates as well as very low correlation between the scales Fair, Fast, and Comm with that of Pay. Interestingly, though both "Thom Burr" and "Rosey" have very high correlation rates between Pay and the other scales, because the have relatively low Flag, 1Bomb, and 5Bomb rates they are ranked near the middle.

Overall, with a few exceptions, I am very impressed that the top contributors score so well on the RPI.

Table 7: The Top 100 most prolific contributors ranked based on the Reviewer Problem Index (RPI). The best 10 and worst 11 are shown here.
Rank  Index   Name Nrev  Rrank  Flag  1Bomb  5Bomb  FairPay  FastPay  CommPay
1 -0.13 seri...@g... 488 64 0.000 0.000 0.006 0.00 -0.05 0.00
2 1.67 james...@y... 365 98 0.000 0.000 0.000 0.29 0.00 0.18
3 1.72 donn...@o... 1064 23 0.001 0.000 0.006 0.04 0.04 0.27
4 1.85 jessema...@g... 1539 9 0.001 0.001 0.016 0.12 0.09 0.20
5 1.94 iwashere 689 44 0.003 0.000 0.017 0.00 0.05 0.12
6 2.03 kimadagem 3732 2 0.002 0.000 0.014 0.05 -0.01 0.27
7 2.06 mmhb...@y... 422 79 0.005 0.000 0.009 0.00 0.00 0.00
8 2.21 aristotle...@g... 579 51 0.002 0.000 0.010 0.10 0.11 0.19
9 2.90 Kafei 561 55 0.002 0.000 0.027 0.16 0.13 0.27
10 2.93 turtledove 1188 19 0.001 0.000 0.012 0.32 0.04 0.34
90 15.28 Anthony99 571 53 0.005 0.014 0.391 1.00 1.00 1.00
91 15.83 cwwi...@g... 543 57 0.011 0.070 0.026 0.84 0.85 0.84
92 16.25 rand...@g... 490 63 0.002 0.157 0.051 0.97 0.97 0.99
93 16.76 trudyh...@c... 378 95 0.008 0.140 0.056 0.87 0.84 0.80
94 16.79 jmbus...@h... 2539 4 0.003 0.170 0.020 0.99 0.98 0.92
95 17.30 hs 945 28 0.010 0.115 0.098 0.87 0.86 0.89
96 17.94 ChiefSweetums 691 43 0.010 0.185 0.054 0.68 0.68 0.81
97 21.49 Playa 414 85 0.010 0.239 0.014 0.93 0.90 1.00
98 31.56 Tribune 360 99 0.053 0.011 0.108 0.76 0.61 0.97
99 35.74 taintturk. (moderator) 1176 21 0.027 0.499 0.014 0.89 0.87 0.73
100 40.53 Taskmistress 698 42 0.017 0.755 0.020 0.91 0.91 0.96


Find the full list of Top 100 ranked here (Second Tab).

In Table 7 we can see how reviewers score on the RPI across the Top 100. The top 10 have great scores, with seri...@g... having the best score, with 488 reviews written, no Flags or 1-Bombs, and only three 5-Bombs. For seri there is also essentially no correlation of Pay with Fair or Comm, and even a slightly negative correlation with Fast.

The worst 10 reviewers are much more interesting, mostly because taintturk, a Turkopticon moderator, and Tribune, a former moderator, are on the list. Everybody on the worst 10 list suffers from very high correlations between the other scales and Pay. Taintturk also suffers from having 50% of his/her reviews being 1-Bombs (among reviews in which all of the scales were completed). This is not the worst, as Taskmistress has 75% 1-Bombs, but it was surprising. Looking back at the early reviews, I see that 1-Bombs seem to have been common earlier in Turkopticon's history and were intended to reflect an Amazon Terms of Service violation, something that has since been implemented.

Similarly, Tribune has one of the highest flag rates on the entire list, with an expected 0.05 flags per review. However, as Tribune was invited to be a moderator despite this spotted history, we can only assume that my rating system has some serious flaws.

Overall, I would therefore take the RPI ranking with a grain of salt. Perhaps some of the longer-term contributors to Turkopticon are suffering from standards that have changed over time. If I have time I will revisit the rating system, looking only at reviews from the last year or two.