July 28, 2013

Argument from Non-Optimality: what does it mean to be optimal?

One of my theoretical interests, which has been previously featured on this blog, is something called non-optimality [1]. Non-optimality, as you might surmise, is the tendency for systems to not behave optimally or result in optimal outcomes. This can be challenging to wrap one's head around, given that entire library shelves are devoted to optimization methods. Still, there have been several attempts to characterize the nature of non-optimal outcomes in biology and human behavior [2].

Example of the perfect mousetrap, or rather of one of the least optimal biological systems: the recurrent laryngeal nerve in Giraffa camelopardalis (giraffe). COURTESY: NatGeo YouTube video.

Optimization, whether through approximation or through the use of multiple criteria, is the standard world-view in fields as diverse as economics, physics, engineering, and computer science. However, as one moves toward the social and natural sciences, an interesting phenomenon emerges: while optimality criteria can be applied to specific outcomes using theoretical models, they may only sporadically describe general trends [3].

This is an example of ant colony optimization (ACO), an engineering technique derived from insect ethology (e.g. collective behavior in ants, sometimes referred to as stigmergy). In nature, ants find an optimal (e.g. the shortest) path to a food source by integrating multiple environmental signals (e.g. a network of pheromones). The engineering endeavor is shown in A) an optimal path-finding demonstration, B) an algorithm to discover shortest paths, and C) the amount of time it takes to discover a series of shortest paths (e.g. potential optima). COURTESY [3] -- A) Figure 1.7, B) Box 2.4, C) Figure 1.10.
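The pheromone-following mechanism just described can be sketched in a few lines of code. This is a minimal illustration, not the full ACO algorithm from [3]: the toy graph, distances, ant count, and evaporation rate are all invented for the example.

```python
import random

# Toy foraging graph: two routes from nest to food (distances are invented).
graph = {
    'nest': {'a': 2.0, 'b': 1.0},
    'a': {'food': 4.0},   # long route: total length 6
    'b': {'food': 1.0},   # short route: total length 2
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def choose(node, rng):
    # Pick the next edge with probability proportional to
    # pheromone * (1 / distance) -- the basic ACO decision rule.
    edges = list(graph[node].items())
    weights = [pheromone[(node, v)] / d for v, d in edges]
    r = rng.random() * sum(weights)
    for (v, d), w in zip(edges, weights):
        r -= w
        if r <= 0:
            return v
    return edges[-1][0]

def run(n_ants=200, evaporation=0.1, seed=42):
    rng = random.Random(seed)
    for _ in range(n_ants):
        path, node = [], 'nest'
        while node != 'food':          # walk from nest to food
            nxt = choose(node, rng)
            path.append((node, nxt))
            node = nxt
        length = sum(graph[u][v] for u, v in path)
        for edge in pheromone:         # evaporation weakens all trails...
            pheromone[edge] *= (1.0 - evaporation)
        for edge in path:              # ...while ants reinforce the path walked,
            pheromone[edge] += 1.0 / length   # more strongly for shorter paths

run()
# The shorter nest -> b -> food route ends up carrying more pheromone
# than the longer nest -> a -> food route.
```

After enough ants, reinforcement plus evaporation concentrates pheromone on the shortest path, which is the sense in which the colony "finds" an optimum.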

So why is this state of affairs the case? Are social and natural systems too complex to be optimized, or are the prevailing models simply wrong? To get at this issue in a systematic fashion, I will conduct a point-by-point critique of optimality in evolutionary biology to demonstrate where optimality may truly exist and where the hypothesis may fall short.


We can begin with the classics. In 1990, Parker and Maynard-Smith [4] reviewed the use and usefulness of optimality models in evolutionary biology. While the application of optimality criteria to evolutionary systems is highly diverse, they boil down to five basic components:

1) A model of adaptation must be constructed. Adaptation (through natural selection) is assumed to occur in an optimal fashion. In this sense, optimization as an outcome of evolution is implicitly adaptationist [5]. This adaptationist model is also implicitly probabilistic. For example, what fitness values for x are most likely to result from natural selection? If those values are maximized in a systematic fashion, then it is assumed that natural selection is responsible.

2) Potential strategies related to obtaining an outcome must be defined, such as discrete behaviors or phenotypic variants. This component is biased towards behavioral ecology, but "strategy" can be thought of as a general tendency rather than an intentional behavior. In either case, there is an assumed causal relationship between the employed strategy and the outcome.

3) There must be a maximization (e.g. fitness) or minimization (e.g. energetic expenditure) criterion. In both cases, there is an expectation of a directional process. This process (the route to optimization) is adaptive by definition. Less clear is what constitutes asymptotic convergence of the optimization process. In other words, while a system might tend towards optimization, it does not follow that this automatically results in a given evolutionary system settling into an optimal equilibrium.

A graphical representation of the Prisoner's Dilemma (PD) game-theoretic model of cooperation.

4) Payoffs for pursuing various strategies must be defined (units of max/min criterion). For game-theoretic applications (e.g. PD), the payoff structure is intuitive. However, any direct consequence of the strategies discussed in point 2 has a payoff. This in turn drives adaptation -- the assumption being that equilibrium behavior is the natural outcome of long-term interaction [6].
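To make the payoff structure concrete, here is a minimal sketch of the PD payoff matrix and its equilibrium logic (the numeric payoffs are the conventional illustrative values, not taken from any particular study):

```python
C, D = 'cooperate', 'defect'
# (row strategy, column strategy) -> (row payoff, column payoff),
# using the canonical PD ordering T > R > P > S.
payoff = {
    (C, C): (3, 3),   # R: reward for mutual cooperation
    (C, D): (0, 5),   # S and T: sucker's payoff vs. temptation
    (D, C): (5, 0),
    (D, D): (1, 1),   # P: punishment for mutual defection
}

def best_response(opponent):
    # The strategy that maximizes my payoff given the opponent's choice.
    return max([C, D], key=lambda me: payoff[(me, opponent)][0])

def is_nash(profile):
    # A profile is an equilibrium if neither player gains by deviating
    # (the game is symmetric, so best_response serves both players).
    row, col = profile
    return best_response(col) == row and best_response(row) == col

print(is_nash((D, D)))   # True: mutual defection is the equilibrium...
print(is_nash((C, C)))   # False: ...even though mutual cooperation pays more
```

The equilibrium-as-natural-outcome assumption in point 4 is exactly what is_nash checks: a strategy profile from which no unilateral deviation is profitable.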

5) This optimum can either be frequency-independent (individualistic) or frequency-dependent (population context). This means that either individual performance can become increasingly better over time, or that differential reproduction occurs in a population based on the trait in question.


Now that we have reduced optimality models to their basic components, I will conduct a point-by-point critique of optimality approaches. Hopefully, this will bring us closer to a theory of non-optimality. For now, I will present off-the-cuff critical observations.

a) Not all evolutionary change is adaptive. For example, models of exaptation [7] and neutrality have been proposed that account for non-adaptive evolutionary changes. Traits that arise by these mechanisms are not likely to be optimized. In fact, much like cultural traits [8], they may often be maladaptive. Alternately, highly adaptive traits may be built upon latent abilities that would be lost through strict optimization [7].

In [7], in silico metabolic reaction networks that evolved to metabolize glucose also allowed for viability on other carbon sources. The evolution of environmental generalization undercuts the argument that this system evolved towards an optimum. COURTESY: Figure 1 in [7].

b) What if the "strategies" that result in optimal behavior occur at different levels of organization and conflict [9] with each other? For example, different strategies may be pursued within the same organism at the level of behavioral ecology and at the level of gene expression. This might be understood in terms of the relatively clear mapping between gene expression and behavior in insects [10] versus the unyielding brain-to-behavior complexity observed in mammals.

c) What if the strength of selection is weak, or if selection is alternating across the evolutionary process? In cases of uniformly strong selection, we might expect strong optimization. In cases of sporadic or weak selection, however, this outcome may be highly nonlinear (e.g. small selective advantages can lead to large changes in fitness). This may or may not lead to optimal phenotypes. Such a claim assumes that we even know what the effects of optimization look like at the phenotypic level. It might be useful to review the concept of optimizing selection [11] and its effects on phenotype.
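The claim that small selective advantages can compound into large fitness-frequency changes can be made concrete with a textbook haploid-selection recursion (this is a generic sketch, not a model from [11]; the initial frequency and selection coefficients are arbitrary):

```python
def allele_frequency(p0, s, generations):
    # Deterministic haploid selection: each generation,
    # p' = p(1 + s) / (p(1 + s) + (1 - p))
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
    return p

# Starting from a rare variant (p = 0.01), a five-fold difference in the
# strength of selection produces a far-more-than-five-fold difference in
# frequency after 100 generations.
weak = allele_frequency(0.01, 0.01, 100)     # variant stays rare
strong = allele_frequency(0.01, 0.05, 100)   # variant passes the halfway mark
```

The relationship between s and the resulting frequency is highly nonlinear, which is why weak or sporadic selection gives little guarantee of an optimized phenotype.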

See this HHMI video on pocket mouse evolution for more information on the relationship between the strength of selection and resulting fitness dynamics.

d) Some strategies may have conditional or compound payoffs which may not translate into a global optimum. For example, there may be clear benefits to niche specialization or functional partitioning. Whether or not this constitutes a globally (e.g. whole-organism) optimal outcome is an open issue. In the case of niche specialization, niche construction [12] involves the modification of the environment, which leads to cultural and natural selective feedbacks. Achievement of the optimal outcome may depend on whether this feedback leads to environmental stability or further fluctuations. In addition, recent research suggests that theoretical predictions of the PD model do not match the situational behavior of humans and other animals [13].

e) While the model of Parker and Maynard-Smith is a heuristic model for biological optimality [14], it is worth noting that in engineering, multiobjective criteria are often used to approximate the optimal properties of a complex system. Since determining biological optimality is also an exercise in approximation, we need to incorporate better ways of finding dynamic equilibrium using multiple attributes. One example of this involves the application of maximization principles to ecological simulations [15]. In this case, even though a system might evolve to maximize one thing, this may not translate into global optimization or even a maximization of fitness.
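One way to see the gap between single-criterion maximization and multi-attribute optimality is with weighted-sum scalarization, the simplest multiobjective technique from engineering (the phenotypes, criteria, and scores below are entirely invented for illustration):

```python
# phenotype -> (fecundity, efficiency, robustness); values are illustrative.
candidates = {
    'specialist': (10, 2, 1),
    'generalist': (6, 6, 6),
}

def scalarize(scores, weights):
    # Weighted-sum scalarization: collapse several criteria into one number.
    return sum(s * w for s, w in zip(scores, weights))

# Maximizing fecundity alone picks the specialist...
best_single = max(candidates, key=lambda k: candidates[k][0])
# ...but weighting all three criteria equally picks the generalist.
best_multi = max(candidates, key=lambda k: scalarize(candidates[k], (1, 1, 1)))
```

A system that evolves to maximize one attribute need not sit at the optimum of any multi-attribute criterion, which is the point being made about fitness above.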

Or perhaps we can take a lesson from some quarters of economics [16], where Grossman and Stiglitz found that a competitive economy cannot always be in equilibrium, but rather exhibits an equilibrium degree of disequilibrium. This has been used as evidence against the efficient markets hypothesis, which relies (sans natural selection) on the concept of market optimization over time.

While I have not focused on more traditional critiques of optimization in the evolutionary biology literature, I hope that this exercise actually leads to a series of non-optimal mathematical models that can go toe-to-toe with traditional optimization models. But I'll leave that development to future posts.

NOTES: 

[1] see the Synthetic Daisies #non-optimality tag for more information. Not all are relevant to biology, but all feature various takes on this concept.

[2] Boyd, R.   Cultural Adaptation and Maladaptation: of kayaks and commissars. In The Evolution of the Mind: fundamental questions and controversies. S.W. Gangestad and J.A. Simpson eds. Guilford Press (2007). In this reference, maladaptation is defined in contrast to adaptation.

Crespi, B.   The evolution of maladaptation. Heredity, 84, 623–629 (2000). In this reference, maladaptation is defined as a deviation from adaptive peaks.

[3] Dorigo, M. and Stutzle, T.   Ant Colony Optimization. MIT Press, Cambridge, MA (2004).

[4] Parker, G.A. and Maynard-Smith, J.   Optimality theory in evolutionary biology. Nature 348, 27-33 (1990).

For a different take (with perspective from the evolutionary computation community and the cognition-as-computation debate), please see: Harvey, I.   Cognition is not Computation: evolution is not optimisation. ICANN97, 685-690 (1997).

Another perspective (from a discrete mathematician) can be found here: Kelk, S.   What mathematical optimization can, and cannot, do for biologists. Lorentz Center presentations.

[5] For more reading on this idea, please see: Orzack, S.H. and Sober, E.   Adaptationism and optimality. Cambridge University Press, Cambridge, UK (2001).

[6] For a take on this idea using human societies as an example, please see: Cremer, H., Marchand, M., Pestieau, P.   Investment in local public services: Nash equilibrium and social optimum. Journal of Public Economics, 65(1), 23–35 (1997).

[7] Barve, A. and Wagner, A.   A latent capacity for evolutionary innovation through exaptation in metabolic systems. Nature, doi:10.1038/nature12301 (2013).

[8] For an adaptationist perspective on this, please see: Logan, M.H. and Qirko, H.N.   An evolutionary perspective on maladaptive traits and cultural conformity. American Journal of Human Biology, 8(5), 615–629 (1996).

[9] For the role of genomic conflict and how it may potentially undercut optimization, please see: Werren, J.H.   Selfish genetic elements, genetic conflict, and evolutionary innovation. PNAS, 108(2), 10863-10870 (2011).

[10] For two takes on this, please see: Robinson, G.E., Grozinger, C.M., and Whitfield,C.W. Sociogenomics: social life in molecular terms. Nature Reviews Genetics, 6(4), 257-270 (2005) AND Boguski, M.S. and Jones, A.R. Neurogenomics: at the intersection of neurobiology and genome sciences. Nature Neuroscience, 7(5), 429-433 (2004).

[11] For more information on optimizing selection (which is similar to but distinct from normalizing or stabilizing selection), please see: Travis, J.   The Role of Optimizing Selection in Natural Populations. Annual Review of Ecology and Systematics, 20(1), 279-296 (1989).

[12] Odling-Smee, F.J., Laland, K.N., and Feldman, M.W.   Niche construction: the neglected process in evolution. Princeton University Press, Princeton, NJ (2003).

[13] Khadjavi, M. and Lange, A.   Prisoners and their dilemma. Journal of Economic Behavior and Organization, 92, 163-175 (2013).

[14] Sometimes, the application of heuristics does not translate into optimal behavior or tradeoffs. For an example from information-seeking behavior in an in silico model (ACT-R cognitive architecture), please see: Fu, W-T., Gray, W.D.   Suboptimal tradeoffs in information seeking. Cognitive Psychology, 52, 195-242 (2006).

[15] Ackland, G.   Maximization principles and daisyworld. Journal of Theoretical Biology, 227(1), 121-128 (2004).

[16] Grossman, S.J. and Stiglitz, J.E.   On the impossibility of informationally efficient markets. American Economic Review, 70(3), 393-408 (1980).

July 23, 2013

WARNING: Data and Narratives may lead to Bias

This post contains three features cross-posted to my micro-blog, Tumbld Thoughts. I am carving something at its joints here (see note #5 for reference), but am not sure exactly what. I guess it is the role of belief and human nature in things we usually (from a common-sense perspective, at least) consider to be objective and/or conceptually attractive (e.g. data analysis, scientific theory, and intuitions about complexity).

Featured are three loosely-related topics: the uncritical interpretation of big data (I), shortcomings of the narrative explanation (II), and subtle but important biases in scientific thinking (III). Nothing is sacred (or at least reverent) here, but then again it shouldn't be.

I. Uncritical Interpretations of "Big" Data


This is what a religion based on big data would look like. Or rather, this is what the uncritical interpretation of big data [1] currently looks like. A few readings on the topic:

1) a news article [2] on how the mis-interpretation of big data (and over-reliance on its correlative relationships) threatens to undermine effective decision-making. Of greatest importance is distinguishing between correlation and causation, a data analysis (and logical reasoning) issue that predates big data.


2) an op-ed [3] featuring the work of Seth Stephens-Davidowitz, an economist from Google who is debunking the idea that child abuse and neglect decreased in the aftermath of the 2008 economic crisis. He accomplishes this using a novel methodology, demonstrating how applying different methods to the same problem yields different results. In this case, aggregate Google searches (based on unobserved, online activity) were used -- an example of how we might better extract causal relationships from "big" datasets.



II. Going beyond "the narrative" explanation

Is "the narrative" the best way to convey complex ideas, especially when it comes to social or scientific explanation? Barry Ritholtz and Cullen Roche [4] remind us that in contemporary economics, the prevailing narratives often fail to capture the complexity or even the outcomes of real-world situations. 

This failure is one of explanations that cannot accommodate conflicting evidence. The result is cognitive dissonance, which becomes more pronounced as the narrative explanations continue to fail.

That being said, using a narrative structure to describe the function and complexity of scientific and social concepts is not always to be avoided. Here is my list of reasons why narratives "work" in this context:

1) narratives contain common structures that are shared across cultures.

2) narrative structures are consistent with naive models [5], which represent an intuitive view of the world [6].

3) narratives are compact ways to encode information, such as oral traditions or feature films. Compare the amount of potential information contained in these with a Wiki or a blog post.

Why narratives do not work in this context:

1) narratives can perpetuate naive models of the world in the face of contradictory evidence. Examples of this are given in [4].

2) naive models (and thus narratives) are often conservative, and do not encode nonlinear effects or the parallel progression of events. Narrative thinking favors simple cause-and-effect mechanisms over mechanisms that favor multiple causes or long-term, delayed outcomes [7].

3) like most specialized information, they often require intersubjectivity. Unlike most specialized information, they require a moral logic. This may or may not obfuscate the interpretation of events.

4) narratives often utilize allegorical arguments, in which a single string of text can be interpreted in many ways. For example, if a message is passed around a circle, it often changes due to intrasubjectivity (e.g. individual interpretations). While this is good for cultural diversity, it is not so good for scientific replication.

Whereas the narrative is valuable to a communicator, it may be less valuable from a technical standpoint. Overall, using "plain language" and "narrative structure" can actually undercut the scientific content. 


III. Bias in Scientific Thinking


Is it bias, or is it good science? A series of recent papers/talks may give us some insight [8]. The first is a Skepticon IV talk [9] and Measure of Doubt blog post by Julia Galef on the "Straw Vulcan" phenomenon. A Straw Vulcan is shorthand for the popular misconceptions surrounding logical decision-making and how what may seem logical may be co-opted by emotionally-driven biases.


Interestingly enough, but what does this have to do with science? Well, recently published papers in PLoS Biology [10] suggest that bias is a natural feature of scientific thinking, which results from a tension between the need to shift paradigms (Kuhnian science) and the need to falsify hypotheses (Popperian science).


NOTES:

[1] Whitehorn, M.   The Parable of Beer and Diapers. The Register, August 15 (2006).

[2] Asay, M.   Big Data's Dehumanizing Impact on Public Policy. ReadWrite content aggregator, July 12 (2013).

[3] Stephens-Davidowitz, S.   How Googling Unmasks Child Abuse. NYT Opinion, July 13 (2013).

[4] Examples from so-called "common sense knowledge" in economic issues includes:

a) Ritholtz, B.   The Narrative Fails. The Big Picture blog, July 19 (2013).

b) Roche, C.   The Fear Trade has been Demolished. Pragmatic Capitalism blog, July 19 (2013).

[5] For a structure learning perspective, please see: Gershman, S.J. and Niv, Y.   Learning latent structure: carving nature at its joints. Current Opinion in Neurobiology, 20, 251-256 (2010).

[6] for more information in the role of narratives vs. data in the wealth inequality debates, see the following:

a) Norton, M.I. and Ariely, D.   Building a better America -- one wealth quintile at a time. Perspectives on Psychological Science, 6(1), 9-12 (2011). Bottom image is taken from Figure 2.

b) Noah, T.   Theoretical Egalitarians. Slate.com, September 27 (2010).

[7] Wexler, M.   Invisible Hands: intelligent design and free markets. Journal of Ideology, 33 (2011).


[9] Here is video of Julia Galef's lecture at Skepticon IV, and here is her post on the topic at the Measure of Doubt blog.

[10] relevant papers from this issue include the following:

a) Chase, J.M.   The Shadow of Bias. PLoS Biology, 11(7), e1001608 (2013).

b) Tsilidis, K.K., Panagiotou, O.A., Sena, E.S., Aretouli, E., Evangelou, E., Howells, D.W., Al-Shahi Salman, R., Macleod, M.R., and Ioannidis, J.P.A.   Evaluation of Excess Significance Bias in Animal Studies of Neurological Diseases. PLoS Biology, 11(7), e1001609 (2013).

July 20, 2013

Human Augmentation Short Course -- Part III

The next two #human-augmentation flash lectures from my micro-blog, Tumbld Thoughts, will feature several potential implementations of Intelligence Augmentation (IA), Augmented Cognition (AugCog), and their integration with smart devices. This includes two topical areas: I (Bio-machine Symbiosis and Allostasis), and II (Augmentation of Touch).

I. Bio-machine Symbiosis and Allostasis



The book "The Symbiotic Man" by Joel DeRosnay can be used to frame a graphical discussion on bio-machine symbiosis (e.g. human-smart home interaction) and the concept of mixed allostatic networks. In this case, the symbiotic relationship is between a biological system and a technical one. While there are fundamentally different dynamics between these two types of systems, the fusion of their interactions is not only possible but essential.


As discussed in previous slides, measurements from a human can be used to provide intelligence to the house (in this case, scheduling and other use information). A mitigation strategy can be used to extract information from the collected data and to provide instructions for machine learning.

Measurements of the human can be taken of physiological state (e.g. measurements of brain activity or state monitoring of other organs). This can be done using microelectronics, and the measurements must cross a semi-permeable boundary that is selective with respect to available information. Nevertheless, this network allows us to construct a consensus approximation of the body's homeostatic control mechanisms.


This allows us to construct mixed allostatic networks. A mixed allostatic network includes elements from both the house (e.g. appliances) and the human body (e.g. organs). This has already been done by integrating body area networks and domotic networks.

The key innovation here is to unite the function of both networks under global, allostatic control. When the allostatic load of this network becomes too great, this information can be used to modify the mitigation strategy. This may be done in a manner similar to DeRosnay's Symbionomic Laws of Equilibrium.



II. Applications related to the Augmentation of Touch

In this installment of the #human-augmentation tag, we will discuss an assortment of applications that have the potential to augment the sense of touch and upper body mobility. 


The first technology was recently featured in IEEE Spectrum's startup spotlight. The Italian startup Prensilia [1] is working on a robotic hand called Azzurra. The fully artificial hand mimics human grip by using underactuated movements. Inside the hand, the rotary motion generated by a motor is translated to linear actuation to produce biological (e.g. muscle generated) types of motion.

The second technology features the DARPA initiative to create better prosthetic arms. In this video from IEEE Spectrum, the work of Dean Kamen and his group at DEKA Research is profiled. This type of prosthetic arm uses bioelectric signals from chest muscles in combination with servo motors to enable both fine motor and ballistic movements.


The third technology is simulated touch, which unlike the last two does not explicitly involve artifacts. Touch is a physical phenomenon, as contemplated in this Minute Physics video. However, touch also involves human perception, as discussed previously on Tumbld Thoughts. A thorough understanding of this sense allows us to build better ways to interact with virtual environments and robots using touch [2].

COURTESY: Chapter 4 from [2b].

NOTES:

[1] Cipriani, C.   Startup Spotlight: Prensilia developing robot hands for research, prosthetics. IEEE Spectrum, July 18 (2013).

[2] The second image from bottom is a LilyPad Arduino project. For more information on the engineering of touch, please see these two books:

a) McLaughlin, M.L., Hespanha, J.P., and Sukhatme, G.S.   Touch in Virtual Environments: haptics and the design of interactive systems. Prentice-Hall, Upper Saddle River, NJ (2002).

b) Bicchi, A., Buss, M., Ernst, M.O., and Peer, A.   The Sense of Touch and its Rendering. Springer, Berlin (2008).

July 13, 2013

Maps, Models, and Concepts, July edition

Here are some maps, models, and concepts reposted from my micro-blog, Tumbld Thoughts. This is a mash-up of recent books and articles in my reading queue, plus recent features from around the web. The order is: I (Nearly-decomposable Systems), II (Rube Goldberg Mechanisms), III (a profile of Elon Musk's Hyperloop), and IV (Maps + Metadata).


I. Nearly-decomposable Systems

This concept was originally proposed in Chapter 4 of "Sciences of the Artificial" by Herbert A. Simon (Chapter 4 is entitled "Architecture of Complexity"). The focus is on a concept called "nearly-decomposable systems".

In the modern practice of algorithmic representation, decomposability enables one to represent a complex system as a strict hierarchy. By contrast, nearly-decomposable systems can be found in systems where short-run behavior is statistically independent but long-run behavior is dependent in an aggregate fashion. While discrete states appear to exist at static intervals, examining the dynamics reveals interactions (or overlap) between these states.

One example of this can be seen in the above image, which is adapted from Figure 7. A system is partitioned into 8 spatially non-overlapping components (A1, A2, A3, B1, B2, C1, C2, C3). A sparse matrix (top left) can then be constructed to model the selective functional interactivity between these components (discrete states). In this case, short-run behavior is restricted to interactions within each state, while long-run behavior characterizes the interactions between states. Overall, intra-component linkages (e.g. interactions within A1) are greater than inter-component linkages (e.g. interactions between A1 and B2).
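The intra- versus inter-component structure just described can be sketched directly (the coupling values are invented; only the fact that intra-component coupling greatly exceeds inter-component coupling matters):

```python
# Simon-style nearly-decomposable system: 8 elements in 3 components.
components = {
    'A': ['A1', 'A2', 'A3'],
    'B': ['B1', 'B2'],
    'C': ['C1', 'C2', 'C3'],
}
INTRA, INTER = 0.9, 0.01   # strong within-block, weak between-block coupling

block = {e: c for c, members in components.items() for e in members}
elements = list(block)

# Sparse interaction matrix as a dict of (row, col) -> coupling strength.
interaction = {
    (i, j): INTRA if block[i] == block[j] else INTER
    for i in elements for j in elements if i != j
}
```

Short-run dynamics are dominated by the strong couplings inside each block; the weak couplings only matter in aggregate, over the long run -- which is the sense of Simon's near-decomposability.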

In Chapter 4, Simon applies this concept to the behavior of diffusing particles in physiochemical systems. However, in systems with autonomous intelligence (e.g. social systems), agents (the equivalent of diffusing particles) can influence and communicate with each other. This concept can also be applied to hierarchical systems (e.g. social and biological complexity). In such cases, the distinction between "broad" vs. "narrow" hierarchies (e.g. hierarchical span) becomes important. 

For a related concept as applied to systems biology, please see: 

Alicea, B.   The Curse of Orthogonality. Synthetic Daisies blog, October 3 (2011).

Further Reading:

Agre, P.E.   Hierarchy and History in Simon's "Architecture of Complexity". Journal of the Learning Sciences, 12(3) (2003).

Bentley, J.L. and Saxe, J.B.   Decomposable searching problems I. Static-to-dynamic transformation. Journal of Algorithms, 1(4), 301-358 (1980).

Feigenbaum, E.A.   Retrospective: Herbert A. Simon, 1916-2001. Science, 291(5511), 2107 (2001).

Simon, H.A.   Near-decomposability and the speed of evolution. Industrial and Corporate Change, 11(3), 587-599 (2002).

Simon, H.A.   Sciences of the Artificial. MIT Press, Cambridge, MA (1969).


II. Rube Goldberg (e.g. convoluted, non-optimal) Mechanisms

Happy (posthumous) Birthday (July 4th) to Rube Goldberg, the father of the Rube Goldberg machine [1]. Rube Goldberg machines provide a convoluted way to accomplish something that is otherwise simple. For example, to get sand out of a pair of shoes, one could take their shoes off, turn them upside down, and tap them.

In the Rube Goldberg universe, however, you would have to build a complex machine with many degrees of freedom to accomplish the same feat. His creations were massively inefficient, and that's the whole point -- if your worldview is one of parsimony, you will find his comics humorously absurd.

Given that his birthday falls on July 4 (US Independence Day), the associated Google Doodle (from 2010) features a seven-step machine that lights a firecracker. But can Rube Goldberg machines be useful? For more on this, check out a few Synthetic Daisies blog posts [2] on the application of Rube Goldberg-like machines to systems biology and evolution.

In this case, we are evaluating viable function in the context of maximal convolution -- in other words, the more steps taken to accomplish a task, the better. Perhaps this [3] is a biologically-plausible alternative to the view of evolution as parsimony.


III. Conceptual Porn for Technology Visionaries

Tech visionaries unite! I have run across a critical mass of articles on Elon Musk's proposal to build a high-speed transportation system called the "Hyperloop" [4]. Musk has described it as "a cross between the Concorde, a railgun, and an air hockey table". Supposedly better than high-speed rail, airplanes, or electric vehicles.

The hyperloop concept seems to build off of a number of existing technologies [5], some more developed for commercial use than others. It is not a vacuum tube nor a conventional rail system, although it incorporates both of these design elements. A version of what Musk is envisioning is similar to the Evacuated Tube Transport system, patented by Daryl Oster (founder of ET3 technologies) [6].

How does it work and is it simply hype? See these popular news features from Gizmag.com, Autoblog Green, and The Atlantic Wire for more information. Is this fringe science? Read the book "Physics on the Fringe" [7] and decide for yourself.


IV. Maps + Metadata Tell Interesting Tales

The first set of layered maps are from the NASA/NOAA Green project. The goal of this joint project is to make a remotely-sensed map of the earth's vegetation (land mass vegetation only) using data from the Suomi NPP satellite.

In this YouTube video, it is explained that land cover and vegetation changes can have an effect on weather variability. Viewing these processes over the course of a year (animated in the video) can help us understand interactions between the biosphere and atmosphere. 

Examples of this are shown above. The pictures (from top) represent: the red deserts of Australia (shown as white expanses), snow cover in North America, the grasslands of the Florida Everglades, and deforestation in East Africa.


The second set of layered maps are brought to us in the form of a really cool video animation from Cube Cities and Google Earth. Specifically, this is a time lapse of Chicago's skyline and its growth from 1865 (perspective view) to 2014 (birds-eye view). To illustrate the differences, I have screen-captured selected years and placed them in series. The buildings are 3-D models superimposed on a 2-D map of the city. Quite impressive.

NOTES: 

[1] Leibach, J.   Rube Goldberg Mashup. Science Friday blog, July 4 (2013).

[2] Alicea, B.   Machinery of Biocomplexity, new arXiv paper. Synthetic Daisies blog, April 19 (2011) AND Non-razors, unite! Synthetic Daisies blog, January 30 (2009).

[3] Bottom figure (minimal biological model) is from: Alicea, B.   The "Machinery" of Biocomplexity: understanding non-optimal architectures in biological systems. arXiv: 1104.3559 [q-bio.QM] (2011).

[4] Basulto, D.   Is the Hyperloop the Future of Transportation? BigLoop blog, June 12 (2013).

[5] For more information on these technologies, please see the following list:




d) VHST technology. For more information, please see: Salter, R.M. The Very High-speed Transit (VHST) System. Rand Corporation (1972).

See also the following patent document: Oster, D.   Evacuated tube transport. US5950543. USPTO (1999).

July 5, 2013

Thought (Memetic) Soup, July edition

From time to time, I like to reflect on reoccurring themes in society that I find curious and/or strange. I usually post these to Tumbld Thoughts, but I thought I'd post some here as well in a four-part feature called thought (memetic) soup.

I. Soft Paternalism (risk vs. safety, not risk vs. reward)


Do what I tell you with love. And take your own car. Or whatever... Image courtesy [1]. Here is an actual quote from George Will on the risks (vs. rewards) of high-speed trains [2]:
"the real reason for progressives’ passion for trains is their goal of diminishing Americans’ individualism in order to make them more amenable to collectivism"
Out-of-mind experience? Perhaps. But the more likely explanation is that when congestion pricing (considering the true costs of transportation individualism) is taken into account [3], train transportation is cheaper than driving. Nothing really nefarious here.

However, the concept of "nudging" (or soft paternalism) really does seem nefarious to me [4]. Is soft paternalism simply a set of smart policies crafted "for the good of the many", or a form of intentionally dishonest signaling?

It's worth distinguishing covert propaganda efforts intended to "do good" from policies that actually benefit society [5], or technological investments by government (such as high-speed rail or other public goods) that provide the potential for immense future benefit.  


II. Data Aggregation and Mental Bundling (rise of the paranocracy?)


Information is power. The power to surveil, and sell products. Or perhaps grist for paranoia. Or perhaps all of the above. There are two memes in this intellectual soup that I would like to explore in my inimitable style.

The first trend involves people displaying an irrational fear of metadata collection by the government. While this is certainly a slippery slope (in the direction of the panopticon [6] or a repressive dictatorship), this is also not greatly different from the mining of survey data or the collection of census data. In their de-identified form, these data are within the scope of ethical informatics practice (although this is open for debate). Collecting metadata aggregated from the internet is a bit like tracking people's footprints in the sand -- which is a violation of privacy only in the most general sense.

While there are legitimate criticisms to be made of this practice, there also seems to be a certain "anti-vaxxer" [7] logic to the recent uproar. Perhaps, as Jaron Lanier argues in his new book "Who Owns the Future?" [8], people should be paid both for the data they create and for access to these data. Or perhaps the entire apparatus of internet technology and automation has destroyed society, a view I am calling "Synthetic Dystopia".

According to Lanier's worldview, the rise of free content has upended the creative and intellectual economy and concentrated the returns for these products in the hands of a few large content aggregators [9]. This brings us to the second trend of the post: bigness phobia (with a technological twist). By now, I'm sure you've heard the phrase "too big to fail" and the general sentiment that bigness is bad. In my view, Lanier exhibits some of the same mental confounds and irrational fears as a bigness conspiracy theorist.

The main confound in Lanier's argument revolves around resolving a central question: are big data and big commerce the cause of bad-for-society behavior, or a business tool for characterizing the underlying problems related to over-optimizing capital and society [10]? Whatever the role of data and the internet in our current economic discontents, drawing analogies to past economic eras (e.g. the robber barons of the industrial age) may not help us as much as people like Lanier might like to believe [11]. And in an informational vacuum such as this (e.g. where there are few historical precedents and many unknowns), people will turn to irrational mental bundling.


III. Gilbert's Mysterious Grape


I am not sure what to make of the Prudential "Stickers" ad. Originally a Super Bowl ad (2013), it features social psychologist Dan Gilbert (author of "Stumbling on Happiness") asking people "How old is the oldest person you know?" The point is to demonstrate that people live longer now than in the past (when the current retirement age of 65 was established). I don't quite understand this experiment, for two reasons:

1) if you ask 1,000 people how old the oldest person they know is, how many individuals do those answers accurately represent? One might assume that each observation is independent. However, let us reconsider that assumption. For example, if we were to overlay a social network topology over this group of people, what would we find? We might find, for example, that a number of individual answers represent the same very old person (a so-called "social hub").

Can this be interpreted from the data presented in the ad? Since all of the people in the sample were from the same community (Austin, TX), this is very likely. While the number of long-lived people (e.g. more than 2 standard deviations above the mean) may have increased in the past half-century, they are still relatively rare. What's more, the underlying distribution has likely not changed as much as we'd like to think it has.

2) the question begs a response that states the maximum age of the population rather than the underlying distribution. While it does uncover what people know about old age, it does so in a way that perpetuates a naive view of an aging population (which has its own complexities that a TV ad for retirement planning just won't be able to counter).
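The over-counting concern in point 1 can be sketched with a small simulation. All of the numbers below are hypothetical: it simply assumes a community in which a handful of "hub" elders are known by many more respondents than the rest.

```python
import random

random.seed(42)

# Hypothetical community: 50 elderly people, of whom 5 are social "hubs"
# known by far more respondents than the others.
n_respondents = 1000
n_elders = 50
hub_weights = [20 if i < 5 else 1 for i in range(n_elders)]

# Each respondent names the oldest person they know, drawn in proportion
# to how widely known each elder is.
answers = random.choices(range(n_elders), weights=hub_weights, k=n_respondents)

# What share of 1,000 answers point at the same 5 well-connected people?
hub_share = sum(1 for a in answers if a < 5) / n_respondents
print(f"{hub_share:.0%} of {n_respondents} answers name one of just 5 hub elders")
```

Under these made-up weights, roughly two-thirds of the answers collapse onto the same five individuals, which is exactly why 1,000 responses need not represent anywhere near 1,000 distinct very old people.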

Indeed, the MEAN age of the population has increased in the past 50 years, and the likelihood of someone living longer has also increased. But the manner of presentation exaggerates this trend. The rise in mean life expectancy reflects a combination of statistical subtleties: 1) a subset of the population living longer, and 2) fewer people dying in their youth.
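A toy calculation (with made-up numbers) shows how these two subtleties combine: the mean age at death can jump by nearly twenty years even though the oldest individuals are only modestly older.

```python
# Hypothetical cohorts illustrating the two statistical subtleties above.
# "Then": 20% of the cohort dies at age 5, the rest at age 70.
mean_then = 0.20 * 5 + 0.80 * 70

# "Now": only 2% die at age 5, and survivors live somewhat longer (78).
mean_now = 0.02 * 5 + 0.98 * 78

print(f"mean age at death, then: {mean_then:.1f}")  # 57.0
print(f"mean age at death, now:  {mean_now:.1f}")   # 76.5
# Most of the ~20-year gain comes from fewer deaths in youth, not from
# a dramatic change in how old the oldest people get.
```

Keeping the old survivor age (70) with the new youth mortality (2%) already gives a mean of 68.7, so in this sketch fewer early deaths account for the majority of the improvement.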

Additionally, consider that there are other ways to interpret these data. I have my own thoughts about what the data actually mean. What we might be observing in the image at the bottom is a "centrality-oriented frontier". "Centrality" refers to the added information from the social network consideration (many observations representing the same people, or over-represented "hubs"), while "frontier" refers to the maximum extent of values for a specific regime. In economic modeling, such curves are used to represent efficiencies and a range of possibilities in a bivariate space.

This is just one of many possibilities, yet none of them really get at the heart of what's bothering me about this ad. Essentially, what we get with this ad is the promotion of innumeracy on a very large (and high profile) scale. Not only is this bad science, this is also the worst type of entertainment: a reality show called "Gilbert's Puzzling Grape".

IV. The Mental "Reality" of Fairness and False Abundance


Here are two academic papers on primate brain and behavior that I have recently read. I am thinking that they fit together in some fashion. Top graphic is from the Scientific American blog "Beautiful Minds" and a post entitled "Gorillas Agree: Human Frontal Cortex is Nothing Special". Please discuss:

Steckenfinger, S.A. and Ghazanfar, A.A.   Monkey visual behavior falls into the uncanny valley. PNAS, 106, 18362-18366 (2009).

In this paper, monkeys interact with virtual avatars and their responses are interpreted in the context of the "uncanny valley".

Brosnan, S.F. and de Waal, F.B.M.   Monkeys reject unequal pay. Nature, 425, 297-299 (2003).

In this classic paper, it is demonstrated that capuchin monkeys show a strong dislike for unequal rewards for the same task. Here is a lecture by Frans de Waal that demonstrates this effect.

Now read the following two articles in succession. Discuss. Bottom graphic is the album art from Robert Plant's "Fate of Nations" album. Inset is from a recent article in "The Atlantic" by Charles C. Mann entitled "We will never run out of oil".

The first is an article from Bloomberg on James Hansen's critique of the new era of oil abundance and its negative consequences with respect to climate change. A bit poorly written for my taste, but it gets across the main idea: that the exploitation of tar sands and other marginal fossil fuel resources is a significant gamble with consequences that are not often publicized in the media.

The second article is a bit more critical of the new era of oil abundance and the rise of "Saudi America". The focus of this article is on the practice of fracking and its negative consequences.


I'm sorry that I ended the feature on such a negative note. However, we will return to the topic of false abundance in a future post.

NOTES:
[1] The State is Looking After You. Economist, April 6 (2006) AND High-speed Rail in the United States. Wikipedia (2013).

[2] Will, G.   High speed to insolvency. Newsweek, February 27 (2011). via Krugman, P.  Subways Pay. The Conscience of a Liberal blog, March 27 (2013).

See also the following article: Weigel, D.   Off the Rails: why do conservatives hate trains so much? Slate Magazine, March 8 (2011).

[3] Congestion Pricing. Wikipedia (2013).

[4] Thaler, R.H. and Sunstein, C.R.   Nudge. Yale University Press (2008) AND Paternalism. Stanford Encyclopedia of Philosophy (2010).

[5] Soft paternalism is dishonest in the sense that it is the covert imposition of will. Just because trickery is used rather than fists and guns doesn't make it any more right.

According to "Nudge" [4], one outcome of soft paternalism is the creation of "choice architectures". But are there other ways to do this (e.g. can they be decoupled from moralistic crusades and propaganda campaigns)?

[6] For a privacy advocate perspective, please see: Rule, J.B.   The Price of the Panopticon. New York Times, June 11 (2013).

[7] The "anti-vaxxer" argument is technically a form of denying science (or more generally, facts and the occurrence of events) to support one's beliefs, but it is also apt here. In this case, big bad big data is conspiring with big brother to do something that you should not trust, regardless of the context or mitigating circumstances (e.g. likelihood of abuse). 

[8] Lanier, J.   Who owns the future? Simon and Schuster (2013). Also see the recent PBS Newshour interview with Paul Solman.

[9] I'm not sure that I am properly representing his argument, but if so there is an alternate hypothesis: digital and free access to information requires a fundamentally different economic model.

While Lanier offers a possible solution (micropayments for internet content), it seems to be a fundamentally unworkable one. This is because we are trying to attach value to things which are both novel and intangible, rather than extend a model of payment and profit to a new informational model.

This is problematic for a number of reasons -- for example, there are caveats to be raised in charging rents on intangible goods. For more information on this, please see: Krugman, P.   Profits Without Production. New York Times, June 20 (2013).

[10] For example, has the internet created a "winner-take-all" economy, or is the rise of the internet concurrent with the rise of zero-sum capitalism?

[11] For a fundamentally different view on this, please see the following Moneybox blog post: Yglesias, M.   What Makes Apple, Google, and Microsoft Different From the Corporate Titans of Yesteryear. Moneybox blog, June 21 (2013).
