September 29, 2015

Reconsidering the Model as a Unit of Regulation: cybernetics and the adaptive outcome

Here is a preview of an essay Robert Stone and I have been working on over the past year as part of the Orthogonal Research initiative. The formal title is: "The Foundations of Control and Cognition: The Every Good Regulator Theorem". The essay takes a classical tool from the cybernetics literature and applies it to game-theoretic and other problems of interest to us.

Robert Stone, cybernetics enthusiast

Robert Stone and I, bringing cybernetics back to the "soft" but immensely complex (social, brain, and biological) sciences. The full version (with notes, definitions, and additional references) can be found here.

A seemingly simple discrete system with feedback (which makes it not so simple over future iterations). COURTESY: intgr, Wikimedia Commons.

I. Introduction
            In the history of scientific discovery, there are examples of people, or facets of their work, considered ‘out of step’ with the dominant scientific or philosophical trends of their time. As such, they risk falling down a deep well in our cultural landscape, their work’s efficacy lost to subsequent generations. If the work has merit, it may be considered ahead of its time by future generations. The timing of a given theory or great idea is largely determined by cultural and cognitive biases that favor the dominant paradigm [1]. In other cases, ideas at the paradigmatic vanguard end up resurrected in a more pragmatic form, their acceptance occurring either gradually or in one fell swoop at a later point in time. Let us keep this in mind as we discuss Roger C. Conant and W. Ross Ashby’s seminal work, the “Every Good Regulator Theorem” [2] (EGRT):

“[In the EGRT] …a theorem is presented which shows, under very broad conditions, that any regulator that is maximally both successful and simple must be isomorphic with the system being regulated… Making a model is thus necessary.” [2]

The EGRT characterizes regulation with respect to cybernated control systems. In the case of Ashby and Conant [3], the EGRT developed within the context of several intersecting traditional fields: algorithmics, information theory, systems theory, and behavioral science. In such a context, models are exceedingly important; given the EGRT's reliance on inference and propositional thinking, models are essential to it. In fact, the EGRT exists at such a high level of abstraction that, even with a high degree of specification, it may not be directly applicable in the real world [4]. However, certain advantages of cybernetic modeling make its cross-contextual application useful.

Ashby's graphical formulation of the EGRT, with original notation. COURTESY: [2].


II. Background
Let us return to the notion of modeling as phenomenology. Systems engage in modeling not simply to purposely regulate their environments, but rather to reactively respond to input stimuli in a way that maintains higher-level states [5]. This ability to model becomes part of their structure at the most basic of levels, though it is fair to say that most modeling (in the sense we will use the word) is the result of cognitive processes. A constructivist might argue that such metacognitive dynamics [6] would influence one’s proposed scientific model. Like Shakespeare’s Hamlet, however, the question of whether or not to model (or to be) is one of survival, whether that survival is genetic or memetic. Rather than reviewing the proof step-by-step, let us discuss its potential significance in a variety of use-cases. In the process, we will transcend the traditional boundaries between autonomic, ‘choice’-driven, and even cognitive behavior.

          Simply put, the Every Good Regulator Theorem says that regulators operate on approximations (i.e. models) of the thing they are regulating. This requires a mapping of the natural world to the model. While one might consider the activities of encoding and translation to be inherently cognitive, genomic systems also perform biological control functions in the absence of cognition [7, 8]. In the biological control example, what matters is not intent but accuracy. Rather than an actively goal-oriented criterion, what we observe here is passively goal-oriented system output. The accuracy of the approximated model influences the quality of regulation, so there need not be agency on the part of any single system component. Indeed, to survive as a unit in an interrelated system, a regulating machine must construct an interactive model that includes inputs, outputs, and feedback.

Let us consider a couple of cases of regulatory dynamics that may be valuable in understanding the importance of this theorem. We can then move on to what this could mean for both further theoretical development and practical application. A good place to begin in cognitive science is game theory [9]. One of the simplest, most effective, and most explanatory strategies in the Prisoner's Dilemma game is the tit-for-tat strategy [10]. In this 2-player, 2x2 game, the tit-for-tat strategy is simple: after an initial good-faith move of cooperation, ‘do unto others as they have done unto you’. The strategy is simply to copy your opponent's behavior: if the opposing agent cooperates, so does the tit-for-tat agent; if they defect, the tit-for-tat strategist follows suit. The intended outcome of the strategy is to move the exchange towards an equilibrium (though this is not the only possible outcome, nor is the strategy perfect).
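The strategy described above fits in a few lines of code. This is a minimal sketch; the payoff values and function names are our own illustrative choices, not part of the original formulation:

```python
# A minimal iterated Prisoner's Dilemma (payoffs are the standard
# illustrative values, assumed here). 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Open with cooperation, then mirror the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

# Two tit-for-tat players settle into mutual cooperation.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# Against an unconditional defector, tit-for-tat loses only round one.
always_defect = lambda mine, theirs: 'D'
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The one-element read of `their_history[-1]` is exactly the 1-bit cooperate/defect model discussed next.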

Of specific interest here is that the mechanics of the strategy require a model to be held in memory by the agent employing tit-for-tat (a 1-bit cooperate/defect model), regardless of the strategy employed by the other agent (whether a more sophisticated maximizing strategy or random selection). While an economist might view this as free-riding behavior by one of the two agents, the selection of tit-for-tat by both players can produce a cooperative equilibrium, as in the evolution of reciprocal altruism in biological systems. The EGRT suggests that the greater an agent's memory, and the longer it has the opportunity to observe and integrate the moves of its opponent, the greater its potential for effective regulation.

Over time, this can lead to greater accuracy in the agent’s cognitive model and a more stable equilibrium game outcome. Further, this equilibrium state can be long-lasting, given extended memory capacity for more detailed models, and may evolve towards ‘a conspiracy of doves’ within a game of homo homini lupus. An agent with greater memory capacity can also employ more elaborate (or deeper) strategies over time, and this development of deeper strategies may in turn feed back into its model of the external world [11]. Overall, the capability to regulate the behavior of other players depends on the inferential and predictive capacities of each player’s model: in a highly complex competitive game environment, a good regulator has a superior model, or it will find itself regulated by a competing agent in the game, especially as behaviors become more complex.
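One way to see the memory point concretely is a toy predictor (our own illustrative construction, not from [2]) that models an opponent from only its last few observed moves. Against a patterned opponent, a longer memory yields a steadier, more accurate model:

```python
from collections import Counter

# Predict the opponent's next move from the last `memory` moves seen
# (an illustrative sketch; the memory window is the agent's "model").
def predict_next(their_history, memory):
    window = their_history[-memory:]
    if not window:
        return 'C'
    return Counter(window).most_common(1)[0][0]

def prediction_accuracy(opponent_moves, memory):
    """Fraction of moves correctly predicted from the preceding history."""
    hits = 0
    for t in range(1, len(opponent_moves)):
        if predict_next(opponent_moves[:t], memory) == opponent_moves[t]:
            hits += 1
    return hits / (len(opponent_moves) - 1)

# An opponent that mostly cooperates but defects every fourth round.
moves = ['C', 'C', 'C', 'D'] * 25

# A longer memory smooths out the defections and predicts better here.
print(prediction_accuracy(moves, memory=1))  # ~0.51
print(prediction_accuracy(moves, memory=8))  # ~0.75
```

A still larger memory could capture the period-4 pattern itself, illustrating how added memory capacity supports progressively deeper models of the opponent.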

“The theorem has the interesting corollary that the living brain, so far as it is to be successful and efficient as a regulator for survival, must proceed, in learning, by the formation of a model (or models) of its environment.” [2]

An example of a basic 2x2 payoff matrix characterizing the Prisoner's Dilemma. COURTESY: "Extortion in Prisoner's Dilemma", Blank on the Map blog, September 19 (2012).

III. Further Considerations
Let us now consider a more complicated scenario in which we might uncover the universal components of the EGRT phenomenology. The context will be two people on a blind date (this can indeed be a complicated scenario). If one has been in one of these (terrifying) contexts, then one can already see where we are going. The cognitive agents continually compete to increase the efficacy of their models of the other agent, while also attempting to constrain the other agent's modeling towards a compact image they prefer. Although rarely implemented successfully, winning strategies include accurately modeling the other actor and influencing the state of their mental model. These can include both elaborate, multi-step strategies and simpler ones, the complexity of which does not indicate their effectiveness. If the goal is a continuation of relations, the acquisition and intentional obfuscation of information occurs at appropriate times and in appropriate ways; furthermore, this information has contextual value. As in most scenarios involving imperfect or asymmetric information [12], your model must be superior if you are to become the leader of the interaction [13] and thus take control of regulation.

Does regulation even require what we would call cognition? This, of course, depends on our definitions of cognition and regulation. However, consider that a bacterium does not have a “cognitive” or mental model of its environment, yet appears to have little trouble getting around and controlling some aspects of its landscape. The similarities between chemotactic sensation and mental models built upon multisensory stimuli serve as evidence for the universal character of the EGRT. In fact, Heylighen [14] has proposed that cybernetic regulation is a highly generalized form of cognition. Yet do thermostats or other mechanical systems possess anything approaching what we consider cognition? While none of these has the cognitive capacity of a brain, they do have information-processing capabilities derived from their physical or electronic structure, memory states, and crude models of how things ‘should’ be, towards which they regulate conditions. Non-cognitive systems possessing these characteristics are still capable of rudimentary communication, control, decision-making, and regulation, at least abstractly. We should also expect some degree of continuity across the boundary between the cognitive and the non-cognitive, since cognitive systems evolved from less intentional ones with more rudimentary forms of behavioral control.
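The thermostat case can be made concrete with a minimal sketch (the names and physical constants below are arbitrary illustrative assumptions): the regulator's entire "model" is a setpoint plus a comparison rule, yet that is enough to hold a simulated room near its target:

```python
# A thermostat has no cognition, just a one-number model of how things
# "should" be (the setpoint) and a hysteresis band around it.
def thermostat_step(temperature, setpoint, hysteresis=0.5):
    """Return a heater command from the regulator's crude world-model."""
    if temperature < setpoint - hysteresis:
        return 'HEAT_ON'
    if temperature > setpoint + hysteresis:
        return 'HEAT_OFF'
    return 'HOLD'

def simulate(start_temp, setpoint, steps=50):
    """Crude room dynamics: heating adds 0.8 degrees/step, cooling loses 0.3."""
    temp, heating = start_temp, False
    for _ in range(steps):
        command = thermostat_step(temp, setpoint)
        if command == 'HEAT_ON':
            heating = True
        elif command == 'HEAT_OFF':
            heating = False
        temp += 0.8 if heating else -0.3
    return temp

# The room settles into a narrow band around the setpoint:
# regulation from a one-number model and a memory bit (heater state).
print(round(simulate(start_temp=15.0, setpoint=20.0), 1))
```

The single boolean `heating` is the system's entire memory state, which is all the continuity this regulator needs.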

“...success in regulation implies that a sufficiently similar model must have been built, whether it was done explicitly, or simply developed as the regulator was improved.” [2]


IV. Conclusion
Earlier, we touched upon the history of scientific discovery and contextual model-building. A scientific theory is simply a model, and its value lies in its efficacy and repeatability (thus its trustworthiness and ability to aid in regulation). Theoretical models have tended, historically, to shift from informal, conceptual models towards formal mathematical ones (consider Comte’s philosophy of science). As a given model acquires more data, those data drive ever more accurate, higher-fidelity revisions; the overall capacity to aid regulation increases via feedback, and thus the model’s value to humans increases. However, as the example of ahead-of-their-time thinkers shows, scientific thought does not exist in a vacuum, and landscape conditions need to be aligned for the model to prove fruitful. Consider how we are witnessing an explosion of robust formal mathematical and/or computer models either aiding or besting human cognitive efforts [15, 16]. Informational revisions of a model often occur faster than landscape conditions change, so adaptive cross-contextual models may prove more successful in dynamic situations, such as those developed by human thought and human cultural systems.

            This ability of models to cross the boundary between the cognitive and the non-cognitive may challenge either our informal, colloquial conception of cognition or the universality criterion of the formal EGRT. As both features of cognition and more universal mechanisms, information processing, memory, communication, and selection can occur without any kind of cognitive superstructure. Perhaps the context of what we call “cognition” is too limiting. What about human cognition, then, is truly universal, and what is unique to a certain set of mechanisms and representational models? For example, are models of so-called cellular decision-making [17] an unduly anthropomorphic representation of cellular differentiation and metabolism, or do they draw upon a common set of universal properties that can only be abstracted from the system by an appropriate model?

Rather than trying to solve this philosophical puzzle now, let us take leave to consider that a deep truth like the one perhaps contained within the formalism of the EGRT should make us question scientific knowledge in a manner akin to reconsidering our firmly-held beliefs. It should make us reconsider how well we understand the relationship between nature and our own conceptual models. In that, it kindles the same spark from which all great scientific theories alight: It leads us to more questions, new ways of thinking about things, and guides us towards more accurate, repeatable, and otherwise ‘good’ models.

“Now that we know that any regulator (if it conforms to the qualifications given) must model what it regulates, we can proceed to measure how efficiently the brain carries out this process. There can no longer be question about whether the brain models its environment: it must.” [2]

References:
[1] Kuhn, T.   The Structure of Scientific Revolutions. University of Chicago Press (1962).

[2] Conant, R.C. and Ashby, W.R.   Every good regulator of a system must be a model of that system. International Journal of Systems Science, 1(2), 89–97 (1970).

[3] Ashby, W.R.   An Introduction to Cybernetics. Chapman and Hall (1956).

[4] Fishwick, P.   The Role of Process Abstraction in Simulation. IEEE Transactions on Systems, Man, and Cybernetics, 18(1), 18-39 (1988).

[5] Brooks, R.   Intelligence Without Representation. Artificial Intelligence, 47, 139-159 (1991).

[6] Kornell, N. Metacognition in Humans and Animals. Current Directions in Psychological Science, 18(1), 11-15 (2009).

[7] Ertel, A. and Tozeren, A.   Human and mouse switch-like genes share common transcriptional regulatory mechanisms for bimodality. BMC Genomics, 9, 628 (2008).

[8] Gormley, M. and Tozeren, A.   Expression profiles of switch-like genes accurately classify tissue and infectious disease phenotypes in model-based classification. BMC Bioinformatics, 9, 486 (2008).

[9] Gintis, H.   Game Theory Evolving. Princeton University Press (2000).

[10] Imhof, L.A., Fudenberg, D., and Nowak, M.A.   Tit-for-tat or Win-stay, Lose-shift? Journal of Theoretical Biology, 247(3), 574–580 (2007).

[11] Liberatore, P. and Schaerf, M.   Belief Revision and Update: Complexity of Model Checking. Journal of Computer and System Sciences, 62(1), 43–72 (2001).

[12] Rasmussen, E.   Games and Information: an introduction to game theory. Blackwell Publishing (2006).

[13] Simaan, M. and Cruz, J.B.   On the Stackelberg Strategy in Nonzero-Sum Games. Journal of Optimization Theory and Applications, 11(5), 533-555 (1973).

[14] Heylighen, F.   Principles of Systems and Cybernetics: an evolutionary perspective. CiteSeerX, doi:10.1.1.32.7220 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.7220 (1992).

[15] LeCun, Y., Bengio, Y., and Hinton, G. Deep Learning. Nature, 521, 436-444 (2015).

[16] Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A.A., Lally, A., Murdock, J.W., Nyberg, E., Prager, J., Schlaefer, N., and Welty, C.   Building Watson: an overview of the DeepQA project. AI Magazine, Fall (2010).

[17] Kobayashi, T.J., Kamimura, A.   Theoretical aspects of cellular decision-making and information-processing. Advances in Experimental Medicine and Biology, 736, 275-291 (2012).


UPDATE (9/30): During the editorial process, Rob and I had a discussion about using the word "alight" (in the final paragraph). I was not sure about the correct word usage, but Rob assured me that it was being used correctly in this context. But to back this up even further (and to gratuitously insert an informatics Easter Egg), here is the Google Ngram history of "alight" usage since 1800. 




September 14, 2015

Hodgepodge Blogpost, September 2015

Welcome to the blogging hodgepodge for this month. I wanted to clear out my reading queue and present some of these ideas and articles in an entertaining way. The topics include modeling, significant results, and hidden variables (though perhaps not discussed in a conventional manner). As a bonus, we get career advice for scientific researchers and related discussion.

Mutant phenotypes from the Fukushima area of Japan. COURTESY: National Geographic.


Flawed Models Cannot Be Made Idealistic

"Essentially, all models are wrong, but some are useful" -- George Box. What makes for a bad model? Poor assumptions, oversimplification/vagueness, or underfitting with respect to the available data? These articles address some of these issues, with particular relevance to societal consequences.

Kirchner, L.   When Big Data Becomes Bad. ProPublica, September 2 (2015).

O'Neil, C.   Big Data, Disparate Impact, and the Neoliberal Mindset. Mathbabe blog, September 7 (2015).

Schuster, P.   Models: From Exploration to Prediction -- Bad Reputation of Modeling in Some Disciplines Results from Nebulous Goals. Complexity, doi:10.1002/cplx.21729 (2015).

Rickert, J.   How do you know if your model is going to work? Part 2: In-training set measures. R-bloggers, September 8 (2015).


Once upon a time, this was a viable model of how nature worked. COURTESY: Geocentric Model, Redorbit.


The Real World is Complex, Idealized Methods Notwithstanding

The debate over replicability in Psychology (and by extension sciences that are not particle physics) rages on. This month, a shot was fired from the "Psychology is not very replicable" camp. The Open Science Collaboration published a paper in Science showing that many replications of experiments fail to reproduce the same levels of statistical significance and power as the original studies.

Critics have blamed this lack of replicability on a number of culprits, including shortcomings of the NHST approach itself. Two potential culprits I have pointed to previously include complexity and cultural context, the latter of which we will return to in a bit.
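One of the statistical culprits can be illustrated with a toy simulation (a sketch of the familiar "file drawer plus low power" mechanism, assumed here rather than taken from the OSC paper): if original studies are published only when they reach p < 0.05, then same-sized replications of a real but modest effect will frequently fail to reach significance.

```python
import math
import random
import statistics

def one_study(true_effect, n, rng):
    """One toy study: sample n observations, run a crude z-style test."""
    sample = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return mean, abs(mean / se) > 1.96

def replication_rate(true_effect=0.3, n=30, trials=2000, seed=1):
    """Replicate only 'published' (significant) originals at the same n."""
    rng = random.Random(seed)
    replicated = total = 0
    while total < trials:
        _, significant = one_study(true_effect, n, rng)
        if not significant:
            continue            # "file drawer": never published
        total += 1
        _, rep_significant = one_study(true_effect, n, rng)
        replicated += rep_significant
    return replicated / total

print(replication_rate())  # well below 1.0, despite a real effect
```

The effect here is genuinely real; the low replication rate falls out of underpowered designs plus selective publication, with no questionable research practices required.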



What explains these replicated results? COURTESY: Figure 1, Science, 349, doi:10.1126/science.aac4716 (2015) AND Loria, TechInsider.

Open Science Collaboration.   Estimating the reproducibility of psychological science. Science, doi:10.1126/science.aac4716 (2015).

Loria, K.   Everything that's wrong with psychology studies in 2 simple charts. TechInsider, August 28 (2015).

Barrett, L.F.   Psychology is not in Crisis. NYTimes Opinion, September 1 (2015).


The Unreasonable Effectiveness of Cultural Context*

* a play on: Wigner, E.   The Unreasonable Effectiveness of Mathematics in the Natural Sciences.


Vanderbilt, T.   Why Futurism Has a Cultural Blindspot. We predicted cell phones, but not women in the workplace. Nautil.us blog, September 10 (2015).

* the latest critique of futurism, this time from a sociological perspective.



Yau, N.   Bourdieu’s Food Space chart, from fast food to French Laundry. Flowing Data blog, June 21 (2012).

* our contemporary Economic World, according to Pierre Bourdieu (as told by Leigh Wells).


Career Advice (Not Avarice):

Hossenfelder, S.   How to publish your first scientific paper. Backreaction blog, September 11 (2015).

* this blog post not only provides advice on how to get started as a published researcher, but also gives advice on how to formulate research ideas and structure manuscripts that will garner the interest of editors and reviewers.

Curry, S.   Peer review, preprints and the speed of science. Guardian, September 7 (2015).

* yet another article in favor of the open-science movement, in this case advocating for mechanisms (e.g. preprint servers, open peer review) that have the potential to speed up and otherwise improve the research enterprise.

McDonnell, J.J.   Creating a Research Brand. Science, 349, 758 (2015).

This author uses a marketing metaphor to help improve the efficiency of a researcher's efforts. The advice boils down to the following:

* promote results, publications, and lectures all around a central theme.

* find the right breadth of research. This should be greater than a hyper-specialized topic, but narrow enough to constitute a unique niche.
