Tuesday, May 2, 2023

The Counter Intelligence Of The US’s Counter Intelligence Apparatus

Yet Another Leak!

Another national security leak seems to be viewed by many in the US and around the world as merely another instance of incompetence or self-righteousness by a young person who lacks a proper understanding of ethics or national security. More insightful observers, however, might begin to ask crucial questions: are there any controls at all over who gains access to national security secrets? And do we really have no tools or methods to identify and screen out individuals who lack the ethics, maturity, or judgment to handle sensitive information?

The answer is, surprisingly, that there are 18 National Intelligence Agencies! These agencies regularly employ polygraphers and psychologists trained in detecting abnormal behavior and mental states. However, the issue with using secret polygraphers and many other counterintelligence efforts is the age-old question: “Who watches the watcher?” Worryingly, the answer seems to be no one.

As a result, the system is deeply corrupt, and as long as it is impervious to public scrutiny, its corruption will continue to grow. Based on personal observations and conversations with numerous friends and acquaintances who have shared their experiences, it appears that many polygraphers hired by the CIA, NSA, and other organizations routinely act in unethical ways, physically, sexually, or emotionally abusing those they evaluate. To prevent their abuses from being exposed, they often falsify “confessions” and test results.

But why? What is gained from being unethical? I dare say there is a multitude of potential benefits for an unscrupulous polygrapher. These range from the intensely personal rush of “getting off” (psychologically or physically) on dominating another human being, to being rewarded by a system that lauds the generation of confessions, whether true, coerced, or entirely fabricated by the polygrapher. Of course, there might be more nefarious reasons for the corruption of the US’s counterintelligence apparatus. Not all nations are friendly to democracy or personal liberty…

But, we can’t blame everything on Putin or Xi Jinping. We give them too much credit.

Instead, let’s imagine that these agencies are typical human organizations that tend toward extremes when secrecy is enforced and accountability is largely absent (think cults, secret police, child protective services, etc.). Why do such organizations drift toward extremes?

You guessed it!

It is all about the incentives.

Incentives Drive Outcomes

In order to illustrate this drift, I have built a small multistage simulation. Let’s model the polygraph procedure and see whether, under very mild conditions (which arguably describe observed features of the system), we end up with a system in which not only does the selection of polygraphers tend toward corruption, but that of their supervisors even more so, and unfortunately even that of the employees they “successfully” screen.

library("dplyr")
library("ggplot2")

Simulation

Setup

Three different agents:

  1. Potential employee (candidate)
  2. Polygrapher
  3. Polygrapher’s supervisor

We will call the unethical spectrum that polygraphers draw from “unethics”: the most ethical polygrapher (unethics = 0) always follows ethical rules and never falsifies confessions, while the most unethical polygrapher (unethics = 1) takes every opportunity to falsify confessions and otherwise break ethical rules.

N = 1000  # number of simulated polygraphers (matches the counts reported below)
unethics = runif(N)^3

hist(unethics, breaks = 20)

We can see that the simulated polygrapher population is drawn from a distribution in which “unethics” is initially quite low.
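
As a quick numeric check (a sketch, not part of the original write-up), we can summarize the simulated scores:

summary(unethics)      # cubing runif() concentrates most polygraphers near unethics = 0
mean(unethics < 0.25)  # share of the pool with fairly low unethics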

Now let’s say that when a candidate comes in for a polygraph, there is a small possibility that they will admit to some real concern. If they do, the polygrapher gets his payoff R (think probability of promotion).

First we will simulate a single day, starting with a data frame of polygraphers and their unethics scores.

day1 = data.frame(polygrapher = 1:N, unethics= unethics)

The polygrapher meets their candidate for the day, and the candidate either confesses to a real concern or does not.

actual_confession_probability = .05  # chance a candidate has a real concern to admit
day1$real_confession = 1*(runif(N) < actual_confession_probability)

If the candidate does not confess to a real concern, the polygrapher may choose to deploy unethical interrogation procedures; there is no reason to do so if the candidate has already confessed to something horrible.

day1$unethical = 1*(runif(N) < unethics) * (1-day1$real_confession)

If the polygrapher employs unethical interrogation procedures, there is a decent likelihood they can solicit a fake confession. Among the four of us I know who have gone through similar procedures, two were coerced into confessing to something they had not done in order to satisfy their polygrapher and get passed. The other two, myself included, refused to confess to the things the polygrapher alleged. Let’s give each candidate a personal moral flexibility score between 0 and 1, and say that under severe/abusive pressure their likelihood of making a false confession lies between 50% and 100% (defined as .5 + flexibility/2). The reason this likelihood is so high is that naive candidates are pitted against seasoned polygraphers who are expert manipulators.
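
To make the assumed mapping concrete, here is a small illustration (a sketch; the data frame is purely illustrative):

# Sketch: the assumed mapping from a candidate's moral flexibility to their
# probability of producing a false confession under abusive pressure
flex = seq(0, 1, by = 0.25)
data.frame(flexibility = flex, p_false_confession = 0.5 + 0.5 * flex)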

day1$candidate_flexibility = runif(N)

So if a polygrapher applies unethical procedures, there is a random chance they will solicit a fake confession.

day1$fake_confession = day1$unethical * (runif(N) < .5 + .5*day1$candidate_flexibility)

Many candidates will confess to something just to get the pressure off and get their clearance. But what happens to those who experience unethical abuse by the polygrapher and refuse to “confess”?

Now the polygrapher is in a very tight spot. If the polygrapher lets the candidate get through the system, that candidate is now a risk to the polygrapher; i.e., the candidate can file complaints and otherwise flag the polygrapher’s unethical or criminal behavior.

However, the option exists for the polygrapher to falsify a confession, thus defanging any ethics complaint from a person who has already “admitted” to something terrible. Polygraphers who have already chosen to deploy unethical practices would find the next step of lying about a confession natural and self-protective.

day1$confession_lie = (day1$unethical == 1) * (day1$fake_confession == 0) * (runif(N) < day1$unethics)

Finally, let’s imagine some magical world in which there is actually ethical oversight of the CIA’s polygraph/security apparatus. Let’s say that if a polygrapher acts unethically without obtaining a confession, there is some possibility of being caught.

probabilty_of_caught = .01

However, if there is a confession (fake or coerced), then this probability is reduced even further (here, halved).

probabilty_of_caught_with_confession = probabilty_of_caught / 2
day1$disciplinary = day1$unethical * ( (runif(N) < ((probabilty_of_caught_with_confession) * (day1$fake_confession + day1$confession_lie) + 
                                    (probabilty_of_caught * ((day1$fake_confession + day1$confession_lie) == 0) ))))

So let’s see the results of our simulation.

Test takers pass if they confess to something real (real_confession), if they melt under the pressure and produce a fake confession, or if the polygrapher never deployed unethical procedures in the first place.

day1$pass = 1*((day1$real_confession + day1$fake_confession + (1 - day1$unethical)) > 0)

Given the initial parameters we have a roughly 94% pass rate: about 76% of polygraphers acted ethically on this first day (so their candidates pass by default), about 18% of candidates produced a fake confession under unethical pressure, and about 6% confessed to a real concern. The only candidates who fail are those who face an unethical polygrapher and still refuse to confess.

mean(day1$pass)
## [1] 0.936
mean(day1$fake_confession); mean(1 - day1$unethical); mean(day1$real_confession)
## [1] 0.177
## [1] 0.759
## [1] 0.059

We have 59 real confessions.

sum(day1$real_confession)
## [1] 59

We have 241 polygraphers acting unethically.

sum(day1$unethical)
## [1] 241

And we have 177 fake confessions.

sum(day1$fake_confession)
## [1] 177

Resulting in 59 disciplinary actions.

sum(day1$disciplinary)
## [1] 59

Is this interesting at this point? Probably not very.

We basically get what we put in. What gets a little more interesting is seeing how the system develops over time based on incentives. Let us first design a function which will run our day simulation.

daySimulate = function(active_polygraphers, 
                       actual_confession_probability = .05, 
                       probabilty_of_caught          = .01, 
                       probabilty_of_caught_with_confession = .005) {
    
    N = nrow(active_polygraphers)

    dayN = active_polygraphers
    dayN$real_confession = 1*(runif(N) < actual_confession_probability)  
    dayN$unethical = 1*(runif(N) < active_polygraphers$unethics) * (1-dayN$real_confession)    
    dayN$candidate_flexibility = runif(N)
    dayN$fake_confession = dayN$unethical * (runif(N) < .5 + .5 * dayN$candidate_flexibility)  
    dayN$confession_lie = (dayN$unethical == 1) * (dayN$fake_confession == 0) * (runif(N) < dayN$unethics)    
    dayN$disciplinary = dayN$unethical * ( (runif(N) < (
            (probabilty_of_caught_with_confession) * (dayN$fake_confession + dayN$confession_lie) + 
            (probabilty_of_caught * ((dayN$fake_confession + dayN$confession_lie) == 0) ))))

    dayN$pass = 1*( (dayN$real_confession + dayN$fake_confession + (1 - dayN$unethical)) > 0)
    
    return(dayN)
}
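
As a quick sanity check (a sketch; example_day is my own name, not part of the original analysis), we can run one day through the function and look at the average rates it produces:

# Sketch: run a single simulated day on the original polygrapher pool and inspect average rates
example_day = daySimulate(data.frame(polygrapher = 1:N, unethics = unethics))
colMeans(example_day[, c("real_confession", "unethical", "fake_confession",
                         "confession_lie", "disciplinary", "pass")])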

Now let us add some additional rules for the system:

  1. Every thirty days polygraphers are reevaluated. Those with two or more disciplinary actions against them in the last six months are assigned to different roles in the organization.
  2. Also every thirty days, the 10 polygraphers with the fewest confessions are replaced with a new set of polygraphers.

polygraph_pruning = function(active_polygraphers,
                            number_of_disciplinary_actions_allowed = 2,
                            number_of_polygraphers_replaced        = 10,
                            number_of_supervisors                  = 10,
                            current_count = 100) {
    
    N = nrow(active_polygraphers)
    nDisciplined = sum(active_polygraphers$disciplinary >= number_of_disciplinary_actions_allowed)
    
    # Assign supervisor status to the polygraphers with the most confessions;
    # any disciplinary action effectively pushes them out of consideration
    active_polygraphers$supervisors = 1*(row_number(-active_polygraphers$nConfessions * .000000001^active_polygraphers$disciplinary) <= number_of_supervisors)
    
    # If a polygrapher has been disciplined too many times then they are replaced.
    if (nDisciplined>0) {
        #print(c(nDisciplined, 'polygraphers replaced'))

        selector = active_polygraphers$disciplinary >= number_of_disciplinary_actions_allowed

        active_polygraphers$polygrapher[selector] = current_count + (1:nDisciplined)
        active_polygraphers$unethics[selector] = runif(nDisciplined)^2

        current_count = current_count + nDisciplined
    }

    # If a polygrapher is in the bottom N they are replaced.
    if (number_of_polygraphers_replaced>0) {
        selector = row_number(active_polygraphers$nConfessions) <= number_of_polygraphers_replaced

        active_polygraphers$polygrapher[selector] = current_count + (1:number_of_polygraphers_replaced)
        active_polygraphers$unethics[selector] = runif(number_of_polygraphers_replaced)
        active_polygraphers$ndays[selector] = 0

        current_count = current_count + number_of_polygraphers_replaced

        active_polygraphers$nConfessions = 0
    }

    
    return( list(active_polygraphers , current_count) )
    
}
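
A quick usage sketch (the toy_pool data frame below is mine, just to show the columns polygraph_pruning expects and returns):

# Sketch: prune a small toy pool of 20 polygraphers
toy_pool = data.frame(polygrapher  = 1:20,
                      unethics     = runif(20)^3,
                      nConfessions = rpois(20, 5),
                      disciplinary = rbinom(20, 3, 0.1),
                      ndays        = 30)
pruned = polygraph_pruning(toy_pool, number_of_polygraphers_replaced = 5, current_count = 20)
head(pruned[[1]])   # updated pool, now including a 'supervisors' column
pruned[[2]]         # updated running count of polygrapher ids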

number_of_activate_polygraphers = 100
actual_confession_probability = .05
probabilty_of_caught = .002
probabilty_of_caught_with_confession = .001

frequency_of_reevaluation = 30
number_of_disciplinary_actions_allowed = 2
number_of_polygraphers_replaced = 5
discipline_reset = 180

number_of_days = 720
simulateSystem <- function(number_of_activate_polygraphers = 100,
                           actual_confession_probability  = .05,
                           probabilty_of_caught = .002,
                           probabilty_of_caught_with_confession = .001,
                           frequency_of_reevaluation = 30,
                           number_of_disciplinary_actions_allowed = 2,
                           number_of_polygraphers_replaced = 5,
                           discipline_reset = 180,
                           number_of_days = 720) {
  
active_polygraphers = data.frame(polygrapher = 1:number_of_activate_polygraphers, 
                                 unethics     = runif(number_of_activate_polygraphers)^6, 
                                 nConfessions = 0, 
                                 disciplinary = 0, 
                                 ndays = 0)

day_dataframe = data.frame(day = 1:number_of_days, 
                          mean_unethics     = NA,
                          mean_unethics_top = NA,
                          real_confession = NA, 
                          fake_confession = NA,
                          confession_lie  = NA,
                          disciplinary    = NA,
                          candidate_flexibility_pass = NA,
                          candidate_flexibility_fail = NA)

  current_count = number_of_activate_polygraphers
  
  N = number_of_activate_polygraphers
  
  for (i in 1:number_of_days) {
  
      # Run results through the day simulator
      dayResults = daySimulate(active_polygraphers, 
                                actual_confession_probability = actual_confession_probability, 
                                probabilty_of_caught = probabilty_of_caught, 
                                probabilty_of_caught_with_confession = probabilty_of_caught_with_confession)
      
      # Pass the results to a dataframe
      day_dataframe[i,] = c(i,
               mean(active_polygraphers$unethics),
               mean(active_polygraphers$unethics[active_polygraphers$supervisors == 1]),
               sum(dayResults$real_confession) ,
               sum(dayResults$fake_confession) ,
               sum(dayResults$confession_lie)  ,
               sum(dayResults$disciplinary)    ,
               mean(dayResults$candidate_flexibility[dayResults$pass == 1]),
               mean(dayResults$candidate_flexibility[dayResults$pass == 0])   
                   )
      
      active_polygraphers$ndays = active_polygraphers$ndays + 1
      
      active_polygraphers$nConfessions = active_polygraphers$nConfessions + (
          dayResults$real_confession + dayResults$fake_confession + dayResults$confession_lie)
      
      active_polygraphers$disciplinary = active_polygraphers$disciplinary + dayResults$disciplinary
      
      # If the evaluation period has been hit assign active supervisors
      if ( (i %% frequency_of_reevaluation) == 0) {
          pruned = polygraph_pruning(active_polygraphers, current_count = current_count)
          active_polygraphers = pruned[[1]]
          current_count       = pruned[[2]]
      }
      
      if ( (i %% discipline_reset) == 0 ) { active_polygraphers$disciplinary = 0 }
  }
  return(list(day_dataframe = day_dataframe, 
              dayResults    = dayResults,
              active_polygraphers = active_polygraphers))
}
x = simulateSystem(number_of_days = 1000)
day_dataframe       = x$day_dataframe
dayResults          = x$dayResults
active_polygraphers = x$active_polygraphers

Simulation results:

In the following table we can see the effects of the incentive scheme on polygrapher ethics. On day 1, the randomly sampled polygraphers have an average unethics of about .15, meaning the initial likelihood of a polygrapher breaking ethical rules is only about 15%. Candidates who pass the polygraph (whether because they confessed, faked a confession, or simply faced an ethical polygrapher) have an average “flexibility” of about 55%, while those who fail are more rigid, with an average flexibility of only about 27%. We can also see that the polygraphers who generate the most confessions (whether real or fake) are already much more likely to be unethical, with an average unethics of about 73% by day 31 of the simulation.

Table 1:

knitr::kable(head(day_dataframe[day_dataframe$day %% 15 == 1,]))

|    | day | mean_unethics | mean_unethics_top | real_confession | fake_confession | confession_lie | disciplinary | candidate_flexibility_pass | candidate_flexibility_fail |
|----|-----|---------------|-------------------|-----------------|-----------------|----------------|--------------|----------------------------|----------------------------|
| 1  | 1   | 0.1535725 | NaN       | 4 | 14 | 1 | 0 | 0.5508659 | 0.2719333 |
| 16 | 16  | 0.1535725 | NaN       | 5 | 6  | 3 | 0 | 0.5349045 | 0.3429499 |
| 31 | 31  | 0.2033327 | 0.7353261 | 6 | 16 | 3 | 0 | 0.5319780 | 0.3302434 |
| 46 | 46  | 0.2033327 | 0.7353261 | 4 | 12 | 1 | 0 | 0.5288585 | 0.2464654 |
| 61 | 61  | 0.2370944 | 0.7608060 | 3 | 13 | 3 | 0 | 0.4898947 | 0.2933348 |
| 76 | 76  | 0.2370944 | 0.7608060 | 6 | 17 | 2 | 0 | 0.4642052 | 0.2116843 |

Over time the simulation ends up replacing the least “effective” polygraphers (generally those who follow ethical rules) with random draws of new polygraphers. By day 991, as a result of this selection process, the average unethics of the polygraphers in the system has drifted to about 80%, with the “top” polygraphers having an unethics rating of about 96%. Interestingly, the adverse incentive system has also shaped the candidates: those who “pass” the polygraph now have an average ethical flexibility of about 54%, while those who fail have a flexibility of about 29%. That is, statistically, those who refuse to sign fake confessions and thus fail actually have higher ethical standards, on average, than those who pass.

This is an interesting result and consistent with intuition: corrupt systems, such as those involving bribery or kickbacks, end up corrupting both the corrupter (i.e., the polygrapher) and the victim (the candidate).
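
We can check the pass/fail flexibility claim directly from the final simulated day returned above (a quick sketch):

# Sketch: average candidate flexibility among those who pass vs. those who fail on the last day
aggregate(candidate_flexibility ~ pass, data = dayResults, FUN = mean)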

Table 2: Snapshot of the end of the simulation.

knitr::kable(tail(day_dataframe[day_dataframe$day %% 15 == 1,]))

|     | day | mean_unethics | mean_unethics_top | real_confession | fake_confession | confession_lie | disciplinary | candidate_flexibility_pass | candidate_flexibility_fail |
|-----|-----|---------------|-------------------|-----------------|-----------------|----------------|--------------|----------------------------|----------------------------|
| 916 | 916 | 0.7997596 | 0.9428243 | 4 | 64 | 11 | 1 | 0.5497275 | 0.3783560 |
| 931 | 931 | 0.8066865 | 0.9356297 | 3 | 57 | 21 | 0 | 0.5096309 | 0.3062531 |
| 946 | 946 | 0.8066865 | 0.9356297 | 9 | 62 | 11 | 0 | 0.5244614 | 0.3196479 |
| 961 | 961 | 0.8195815 | 0.9610793 | 7 | 58 | 17 | 0 | 0.5313487 | 0.3139352 |
| 976 | 976 | 0.8195815 | 0.9610793 | 6 | 56 | 16 | 0 | 0.5354786 | 0.4062887 |
| 991 | 991 | 0.8034161 | 0.9582148 | 2 | 61 | 12 | 2 | 0.5421242 | 0.2860935 |

If we look at the supervisors, who are selected from the top 10 most “effective” polygraphers in the system, we find a truly unethical bunch. The “most ethical” supervisor is still much more likely to deploy unethical practices than the typical polygrapher, while the most unethical supervisors practice systematic and consistent abuse, choosing not to act unethically only about 1% of the time.

Table 3:

knitr::kable(active_polygraphers[active_polygraphers$supervisors == 1,])

|    | polygrapher | unethics  | nConfessions | disciplinary | ndays | supervisors |
|----|-------------|-----------|--------------|--------------|-------|-------------|
| 14 | 365 | 0.9796909 | 10 | 0 | 190  | 1 |
| 15 | 163 | 0.9964339 | 10 | 0 | 790  | 1 |
| 30 | 30  | 0.9211820 | 9  | 0 | 1000 | 1 |
| 34 | 412 | 0.9687165 | 10 | 0 | 70   | 1 |
| 35 | 145 | 0.9080019 | 10 | 0 | 850  | 1 |
| 46 | 166 | 0.9179333 | 8  | 0 | 790  | 1 |
| 56 | 283 | 0.9691554 | 10 | 0 | 430  | 1 |
| 57 | 415 | 0.9596243 | 9  | 0 | 70   | 1 |
| 62 | 197 | 0.9724210 | 10 | 1 | 700  | 1 |
| 92 | 279 | 0.9889890 | 10 | 0 | 460  | 1 |
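
For a direct numeric comparison (again a sketch), we can contrast the supervisors with the rest of the pool at the end of the run:

# Sketch: average unethics of non-supervisors (0) vs. supervisors (1)
tapply(active_polygraphers$unethics, active_polygraphers$supervisors, mean)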

Now let’s look graphically at how ethics drift over time due to the incentives in the system.

In the figure below we can see the average unethics of the polygraphers move dramatically over time, while that of the “top” polygraphers does not move that much (it starts high and stays high). The cause of this movement is based entirely on the incentives in the system.

Figure 1:

ggplot(day_dataframe[day_dataframe$day > 30, ], aes(x=day, y=mean_unethics)) + 
  theme_bw() +
  geom_line(aes(colour = mean_unethics), linewidth = 2) + 
  ylab('Polygrapher Unethics') +
  scale_color_gradient(low="gray", high="red") + 
  geom_line(aes(y = mean_unethics_top), color="steelblue", linetype="twodash", linewidth = 2) + 
  theme(legend.position = "none")

We can see this by running the simulation again, but changing the probability of being caught when deploying unethical practices. Even a change from 0.2% to 10% (and from 0.1% to 5% when a confession has been obtained) results in dramatically different behavior, with polygrapher ethics drifting very little over time.

Figure 2:

x = simulateSystem(number_of_days = 1000,
                   probabilty_of_caught = .1,
                   probabilty_of_caught_with_confession = .05)

day_dataframe       = x$day_dataframe
dayResults          = x$dayResults
active_polygraphers = x$active_polygraphers

ggplot(day_dataframe[day_dataframe$day > 30, ], aes(x=day, y=mean_unethics)) + 
  theme_bw() +
  geom_line(aes(colour = mean_unethics), linewidth = 2) + 
  ylab('Polygrapher Unethics') +
  scale_color_gradient(low="gray", high="red") + 
  geom_line(aes(y = mean_unethics_top), color="steelblue", linetype="twodash", linewidth = 2) + 
  theme(legend.position = "none")

Pushing the rates of checking for and catching polygrapher abuse even higher (30% and 15%) leaves the population of polygraphers with essentially no ethical drift over time. Shockingly, under these conditions, the polygraphers most likely to be recognized and promoted to supervisor are actually the most ethical ones.

x = simulateSystem(number_of_days = 1000,
                   probabilty_of_caught = .3,
                   probabilty_of_caught_with_confession = .15)

day_dataframe       = x$day_dataframe
dayResults          = x$dayResults
active_polygraphers = x$active_polygraphers

knitr::kable(head(day_dataframe[day_dataframe$day %% 15 == 1,]))

|    | day | mean_unethics | mean_unethics_top | real_confession | fake_confession | confession_lie | disciplinary | candidate_flexibility_pass | candidate_flexibility_fail |
|----|-----|---------------|-------------------|-----------------|-----------------|----------------|--------------|----------------------------|----------------------------|
| 1  | 1  | 0.1328231 | NaN       | 5 | 10 | 2 | 2 | 0.5178134 | 0.2941226 |
| 16 | 16 | 0.1328231 | NaN       | 7 | 10 | 0 | 3 | 0.5277926 | 0.5738695 |
| 31 | 31 | 0.1454454 | 0.1459775 | 5 | 10 | 3 | 2 | 0.4450546 | 0.4120254 |
| 46 | 46 | 0.1454454 | 0.1459775 | 3 | 15 | 3 | 4 | 0.4825806 | 0.3049907 |
| 61 | 61 | 0.1635972 | 0.0903043 | 3 | 13 | 1 | 1 | 0.4681765 | 0.0847121 |
| 76 | 76 | 0.1635972 | 0.0903043 | 3 | 14 | 1 | 4 | 0.4783364 | 0.3117455 |

knitr::kable(tail(day_dataframe[day_dataframe$day %% 15 == 1,]))

|     | day | mean_unethics | mean_unethics_top | real_confession | fake_confession | confession_lie | disciplinary | candidate_flexibility_pass | candidate_flexibility_fail |
|-----|-----|---------------|-------------------|-----------------|-----------------|----------------|--------------|----------------------------|----------------------------|
| 916 | 916 | 0.3078299 | 0.1116442 | 8  | 21 | 3 | 1  | 0.4898941 | 0.3763426 |
| 931 | 931 | 0.3042549 | 0.2595712 | 4  | 24 | 3 | 3  | 0.4696140 | 0.4861778 |
| 946 | 946 | 0.3042549 | 0.2595712 | 10 | 18 | 3 | 10 | 0.5315689 | 0.2665043 |
| 961 | 961 | 0.2764454 | 0.0726653 | 3  | 18 | 1 | 1  | 0.5027710 | 0.2665816 |
| 976 | 976 | 0.2764454 | 0.0726653 | 3  | 19 | 3 | 3  | 0.4694358 | 0.3804199 |
| 991 | 991 | 0.3399286 | 0.0695872 | 5  | 19 | 6 | 3  | 0.4977805 | 0.3616711 |

ggplot(day_dataframe[day_dataframe$day > 30, ], aes(x=day, y=mean_unethics)) + 
  theme_bw() +
  geom_line(aes(colour = mean_unethics), linewidth = 2) + 
  ylab('Polygrapher Unethics') +
  scale_color_gradient(low="gray", high="red") + 
  geom_line(aes(y = mean_unethics_top), color="steelblue", linetype="twodash", linewidth = 2) + 
  theme(legend.position = "none")

In other words, when the rate at which abuse is caught and corrected is high enough, the pool of polygraphers stops drifting toward corruption altogether, and the polygraphers who rise to supervisor are among the most ethical in the system.
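
To see all three oversight regimes side by side, here is a sketch (not in the original post) that re-runs the simulation at each level and overlays the drift in mean unethics; each run re-draws its random polygrapher pool, so the exact curves will differ from the figures above:

# Sketch: re-run the simulation at the three oversight levels used above
# (probability of being caught without / with a confession) and compare the drift
oversight_levels = list(low  = c(.002, .001),
                        mid  = c(.1,   .05),
                        high = c(.3,   .15))

runs = lapply(names(oversight_levels), function(nm) {
  p   = oversight_levels[[nm]]
  out = simulateSystem(number_of_days = 1000,
                       probabilty_of_caught                 = p[1],
                       probabilty_of_caught_with_confession = p[2])$day_dataframe
  out$oversight = nm
  out
})

comparison = bind_rows(runs)

ggplot(comparison[comparison$day > 30, ],
       aes(x = day, y = mean_unethics, colour = oversight)) +
  theme_bw() +
  geom_line(linewidth = 1) +
  ylab('Polygrapher Unethics')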

Conclusions

It is well known that polygraphers deployed by the CIA, NSA, and similar organizations engage in widespread abusive, illegal, and unethical practices when conducting their jobs. Those who have not experienced the sheer shit show of the process often find such claims unbelievable. Those who have experienced it seem to find it uniformly traumatic, embarrassing, and shameful. Nobody I have told about my own terrible experience has been surprised by it; rather, each has in turn felt compelled to tell me of their own unfortunate experience.

How has such a system developed? Likely through a lack of proper incentives. The simulation above rests on minimal assumptions: (1) there are few consequences and poor oversight, (2) there is some incentive to cheat (act unethically), and (3) the pool of potential polygraphers is drawn from people with a wide range of ethical standards.

Yet under these assumptions we see a massive shift from a pool of generally ethical polygraphers to a highly unethical pool of criminal polygraphers: pushed toward hellish extremes by otherwise reasonable incentives (getting results), and above all by the failure to impose reasonable consequences for unethical and criminal behavior.

It is no surprise to me, under these circumstances, that the US is replete with people willing to violate their oaths and share secret or top secret information with newspapers, foreign governments, even Discord channels. If the “counter-intelligence” security apparatus is so myopic and flawed as to allow polygraphers such as Jeremy free rein, one can hardly expect it to screen out potential national security threats.

There is Hope

The extent of unethical misconduct within the security process at the CIA, NSA, and other government agencies is so bad that it would not take much effort to track down and eliminate the worst of the worst. Simply reviewing the recordings of polygraph interviews and following up with exit surveys of participants would likely yield tremendous results.

There is a painful but simple fix: get rid of bad rubbish. Fire those who systematically betray and embarrass their country, their fellow citizens, and indeed all citizens and defenders of democracy by acting unethically for their own gain. Those who have acted criminally, by physically or sexually assaulting their victims, falsifying federal documents, and conspiring with others to cover up their crimes, should be prosecuted to the fullest extent of the law.

Only by purging the rot can one hope to establish a system that does its job, i.e., blocking those who are unethical, immature, or motivated by foreign loyalties from gaining access to highly sensitive information.