March 26, 2013

Upcoming Second Life Lecture and Summary Talk

This is cross-posted from my micro-blog, Tumbld Thoughts:


Here are slides from a lecture [1] to be given this Wednesday (March 27) to the Embryo Physics group (in Second Life) at 2pm PST. Slides posted to Figshare. A shorter version of the talk was originally part of HTDE 2012, a workshop in association with the Artificial Life XIII conference.

Selected slides from talk


Also, I am currently on the job market. Here is a short slideshow [2] that profiles my personal research expertise and interests (and the current version of my CV, which can be found here). Please take a look at both, and comments are welcome.


NOTES:

[1] Alicea, B.   Multiscale Integration and Heuristics of Complex Physiological Phenomena. Figshare, doi: 10.6084/m9.figshare.657992 (2013).

[2] Alicea, B.   Short Job Talk. Figshare, doi:10.6084/m9.figshare.639185.

UPDATE:
The talk went well, with about six avatars in attendance (I am the Tron lightcycle avatar). Below are some images from the proceedings, with a transcript of the talk also available.


March 21, 2013

Plausibility and de navitus models of complex systems

What are the plausible configurations of a living system? Let us take animal phenotypes as an example [1]. The number of plausible configurations is somewhere between all forms known to have lived and all possible configurations allowable by genotype and environment. Given the appropriate model, we should be able to constrain this a bit more to produce a concrete set of plausible phenotypes (see Figure 1).

Figure 1. Example of defining plausible phenotypes from a developmental (e.g. epigenetic) landscape. In this case, "plausibility" is defined as the interaction between environmental influence and coherent physiological (e.g. normal and disease) states. Image is from Figure 1 in [2].

Instead of an inferential or predictive model, we will turn to an agent-based approach. Each agent will produce a series of naive (or de navitus) models, and a consensus among these models can produce a result comparable to an informed synthesis [3]. Naive models are based on naive theories (see Figure 2), which people formulate as heuristics for interpreting the natural world. When the level of theoretical synthesis is weak or the amount of observed data is limited, naive theories tend to dominate. In fact, the early stages of informed synthesis tend to resemble naive theories to some degree, as the ideal conditions for the development of theoretical synthesis are contingent upon both trial-and-error and a high degree of information [4].

Figure 2. Two popular naive theories from the 19th century. INSET: The canals of Mars as envisioned by Giovanni Schiaparelli. BELOW: Expanding earth theory, which predates the notion of tectonic activity. COURTESY: Smashing Lists.

In general, naive theories are clearly inferior to informed synthesis. Naive theories suffer not only from a lack of informational definition, but are also susceptible to logical fallacies and cognitive biases (for an example, see Figure 3). Two such examples involve naive theories of evolutionary biology and macroeconomic processes that are based on false equivalence [5]. In the case of evolutionary biology, naive theories tend to confuse Lamarckian (e.g. the propagation of acquired adaptations) with Darwinian (germ-line inheritance) mechanisms [6]. In the case of naive economic theory, the notion that governments should be run like a household involves confusing micro- and macro-economic principles [7].

Figure 3. The potential cognitive substrate (and cognitive biases) of naive theories, at work in the process of superstitious beliefs. COURTESY: screenshot from [8].

The selective nature of historical recall [9] is also strongly influenced by cognitive bias. This can lead to both intentional and unintentional logical inconsistencies, which are often patched over through conceptual blending and/or a logical fallacy known as the appeal to authority [10]. A better remedy might be the construction of an unbiased consensus that would at least approximate more formal models trained on large amounts of data. But how do we accomplish this? To arrive at an answer, a thought experiment is in order.

What if we could construct a population of agents, each with the capacity to construct novel models de navitus, but also with the capacity for logical consistency? Furthermore, the diversity of these de navitus models would be much higher than is found among human populations, so that popular but false memes never dominate a population. In most cases, the resulting consensus would yield a much more refined theory than would otherwise be the case. This also allows us to study the nature of event or ideational contingency, and create opportunities for new historical contingencies to arise.

The quasi-evolutionary, agent-based conception works by deriving a consensus model state from multiple agents sampled at the densest concentration of agents in the population. As the collective state moves through a geometric space representing relative plausibility, information from agents within a certain radius is sampled, and this information is incorporated into the consensus (see Figure 4). The brain of each agent consists of two mechanisms: bottom-up and top-down. The bottom-up process allows each agent to generate naive models based on a set of constant environmental inputs. This can be accomplished through a cognitive architecture that allows for significant variation, but still results in models that are capable of logical reasoning [11]. The top-down mechanism selects for the least biased de navitus model based on conflicting features and cognitive biases. This does not ensure bias-free models, but the selection mechanism does learn over time, which favors some level of convergence of the collective state [12].

Figure 4. Example of an evolving de navitus model with contributions of individual agents using the historical model geometry elaborated on in Figure 5.
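To make the sampling scheme concrete, below is a minimal MATLAB sketch of the consensus mechanism. Everything here (variable names, parameter values, the bias-weighting rule) is a hypothetical illustration of the idea rather than the actual model: agents occupy a 2-D plausibility space, the neighborhood around the densest concentration of agents is sampled within a fixed radius, and a bias-weighted average of the sampled agents becomes the consensus state.

% Minimal sketch: consensus-by-sampling over a population of agents.
% All parameters are illustrative assumptions, not the actual model.
nAgents = 50; nSteps = 100; radius = 0.2; mutation = 0.05;

pos  = rand(nAgents, 2);    % each agent's position in a 2-D "plausibility space"
bias = rand(nAgents, 1);    % each agent's cognitive bias score (lower = less biased)

consensus = zeros(nSteps, 2);
for t = 1:nSteps
    % pairwise distances between agents
    dx = bsxfun(@minus, pos(:,1), pos(:,1)');
    dy = bsxfun(@minus, pos(:,2), pos(:,2)');
    D  = sqrt(dx.^2 + dy.^2);

    % find the densest concentration (the agent with the most neighbors in range)
    [~, centerIdx] = max(sum(D < radius, 2));
    inRange = D(centerIdx, :) < radius;

    % top-down selection: weight sampled agents by (1 - bias), so the least
    % biased de navitus models contribute most to the consensus state
    w = 1 - bias(inRange);
    w = w / sum(w);
    consensus(t, :) = w' * pos(inRange, :);

    % agents drift toward the consensus; mutation preserves model diversity
    pos = pos + 0.1 * bsxfun(@minus, consensus(t, :), pos) + mutation * randn(nAgents, 2);
end

plot(consensus(:,1), consensus(:,2), '-o');
title('Trajectory of the consensus model state');

The drift-plus-mutation step is what makes the process quasi-evolutionary: diversity is continually re-injected, so the consensus converges only to the degree that the top-down selection mechanism learns to discount biased agents.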

To better understand how naive models are related to formal theories, consider two components of naive theories that make them both powerful and potentially misleading. The first is an intuitive sense of how the world works. Without knowledge of the mechanism itself, this can make relatively simple processes highly ambiguous. We can see how this plays out by comparing two mechanisms: the movement of cars in and out of a tunnel, and the movement of toast in and out of a toaster. In both cases, the brain must infer movement relative to an internal mechanism. In one case, there is transformation, while in the other case there is disappearance into one opening and re-emergence through another. Intuition can provide an answer to this mental challenge, but with a high potential for creative but inaccurate solutions [13].

The second component is a bit more subtle, and involves the mechanism of induction itself. In the case of our cars and bread, the key to explanation and prediction in a naive model is figuring out what happens in the toaster or tunnel. If one has a good grasp of the internal process, then representing this in theoretical terms is not much of a problem. The problem arises when the internal mechanism is not well known by the thinker. This can be seen in human development among small children, who often make up fanciful stories for questions like "Why is the sky blue?" or "Why does the sun set?" [14].

We need only to look at the state of theory in neuroscience to see how what was once considered a good theory of brain function (e.g. phrenology) falls apart given more data. Figure 5 shows how theories from a variety of scientific fields can gain or lose acceptance over time given more data or informed theoretical synthesis work. We can construct a similar graph for naive theories and de navitus models. Figure 4 shows this as an evolutionary process that iteratively incorporates knowledge from agents exploring their worlds.

The future direction for this work is to refine the agent-based architecture and test performance of these computational models against human-derived naive theories and formal scientific theories. Hopefully, de navitus models will allow us to build sophisticated theories and machine learning tutors in situations of sparse sampling and limited opportunities for data acquisition.

Figure 5. The landscape of scientific theoretical synthesis. CENTER: widely accepted ideas. EDGES: ideas in obscurity or disrepute. COURTESY: Slide from [15].

NOTES:

[1] Kirschner, M. and Gerhart, J. (2005). The Plausibility of Life. Yale University Press, New Haven.

[2] Grabiec, A.M. and Reedquist, K.A.   The ascent of acetylation in the epigenetics of rheumatoid arthritis. Nature Reviews Rheumatology, doi:10.1038/nrrheum.2013.17 (2013).

[3] An informed synthesis is a prediction or solution based on large amounts of data and theoretical synthesis by experts (e.g. academics, specialists).

For more information about model-based reasoning, please see: Magnani, L.   Abduction, Reason, and Science: processes of discovery and explanation. Kluwer, New York (2001); Magnani, L., Nersessian, N.J., and Thagard, P.   Model-based Reasoning in Scientific Discovery. Kluwer, New York (1999); AND Gooding, D.   Creative Rationality: towards an abductive model of scientific change. Philosophica, 58, 73-102.

[4] "Naive" theories include 1) folk theories such as Groundhog Day, which is an intuitive set of predictions about the weather embedded in ritual, 2) early theoretical syntheses such as Greek natural philosophy, and 3) mental models that are consistent with observations of the world such as intuitive physics.

[5] My definition of false equivalence: the assumption of equivalence between A and B which is based on mischaracterized attributes or misleading conclusions.

[6] Shtulman, A.   Qualitative differences between naive and scientific theories of evolution. Cognitive Psychology, 52, 170-194 (2006). See also: Thagard, P.   Conceptual revolutions. Princeton University Press (1992).

[7] Krugman, P.   Running Government Like A Business or Family. Conscience of a Liberal blog, March 14. (2013)

[8] QualiaSoup.   Superstition. March 14, YouTube (2013).

[9] Clues to what this part of the architecture should look like might come from the concept of selective memory and social amnesia: Stickgold, R. and Walker, M.P.   Sleep-dependent memory triage: evolving generalization through selective processing. Nature Neuroscience, 16, 139-145 (2013) AND Ferguson, J.N., Young, L.J., Hearn, E.F., Matzuk, M.M., Insel, T.R., Winslow, J.T.   Social amnesia in mice lacking the oxytocin gene. Nature Genetics, 25(3), 284-288 (2000).

[10] My definition of argument from authority: induction based on the existence of an absolute or divine authority, similar in tone to the god of the gaps argument.

[11] Clues to what this part of the architecture should look like might also come from the concept of a "straw vulcan": Galef, J.   The Straw Vulcan: Hollywood's illogical approach to logical decisionmaking. Measure of Doubt blog. November 26 (2011).


[12] The assumption here is that forced diversity, either through a very high mutation rate or a very large conceptual universe, would lead us to richer descriptions. However, there are caveats to this assumption, including the production of nonsense theories and the lack of convergence to any one consensus over time.

[13] This can be seen in terms of the creative license taken in conceptions of scientific processes in motion pictures (something I call movie science). Since these models are partially plausible, movie science requires one to suspend their disbelief. In this context, the depiction of a scientific process can be enjoyable, but perhaps not to experts in the topic.

See also the following classic paper about the use of "naive" models of physics among non-physicists: Chi, M.T.H., Feltovich, P.J., and Glaser, R.   Categorization and Representation of Physics Problems by Experts and Novices. Cognitive Science, 5, 121-152 (1981).

[14] Clues to what this part of the architecture should look like might also come from the study of creativity: Carlsson, I., Wendt, P.E., and Risberg, J.   On the neurobiology of creativity. Differences in frontal activity between high and low creative subjects. Neuropsychologia, 38(6), 873-885 (2000).

[15] Alicea, B.   If your results are unpredictable, does it make them any less true? HTDE 2013.1. doi:10.6084/m9.figshare.157087 (2013).

March 16, 2013

But will they pay for "jibberish" (or jabberwocky)?

Here are more reflections on alternative funding mechanisms for science. Recently, a new model [1] for funding early-stage innovation came to my attention.


The investors at Allied Minds want to fill the "gap" between basic research and commercial development by partnering with University labs, providing everything from seed money to management expertise. This "gap" is a major reason why most basic research is funded via government initiative. Indeed, their basic research areas of interest tend to be skewed towards topics that can be most easily brought to commercial success.

The Allied Minds people view research that falls into this gap as an underdeveloped asset class [2]. However, it is not clear what exactly the assets are and how they can be monetized (or whether this is even the appropriate way to look at things, a point we will return to later).

Consider the goal of an Allied Minds-sponsored project. They will tend to fund only "breakout" innovation with potential for outsized returns (e.g. profits). However, if this is no different from (or perhaps even more selective than) existing venture capital initiatives, have they really solved the problem they set out to solve?

Perhaps the idea of bridging the gap between discovery and innovation is the wrong way to approach this problem. In fact, the problem may not be the level of development or degree of practicality, but a natural feature of innovation. That is, by using an economic model that only rewards large and tangible returns, only a small fraction of research can ever be monetized.



Mark Changizi wrote a guest blog post for the Discover blog "The Crux" in which he coined the phrase the "jibberish" of discovery [3]. The jibberish involves the intellectual jumble, open-endedness, and circuitous route to benchmarks from which the core innovations of scientific research emerge. He has some good insights on this process. In particular, he notes that there is a mismatch between the reporting of scientific results, the discovery process, and the funding mechanisms that currently exist [4]. When you go outside of the traditional funding mechanisms (e.g. private funding, venture capital), it is hard to propose to make a discovery. This observation is obviously rooted in the realities of his experiences with 2AI Labs, but from here on out I would like to think bigger, more in terms of how to tailor economic models to science (rather than the other way around).

Is discovery a process of jibberish? Or waste?

By framing the problem as one of jibberish (or, alternatively, waste minimization [5]), I feel he helps to perpetuate a number of stereotypes and conceptual problems that plague basic science as a self-sustaining economic concern. The consensus in the investment/business community seems to be that this jibberish (or other "waste") is clearly unnecessary, or at the very least can be minimized. However, three converging misconceptions/economic shortcomings govern this assumption:

1) The cult of efficiency governs much of the discussion on funding science and the process of discovery. It is a commonly-held assumption that the short, bullet-pointed processes are the most focused, the best worked out, and thus the "safest" investments. This assumption also holds that innovation proceeds in a straight line. Sadly for the optimizers among us, innovation tends not to proceed in this manner [6]. Funding only a small part of this process is efficient, indeed (sarcasm intended).

An alternative to this gap (which reflects the gap between early innovation and marketable product) is to invest in the arcanocracy [7]. The arcanocracy implies a jumble or kluge of ideas and expertise leading to breakthroughs (intangible goods fill the "gap"). Monetizing and formalizing a system of exchange within the arcanocracy would go a long way towards finding the true value of a basic scientific enterprise.

2) There is also a problem I like to call the jabberwocky principle [8]. Investors and other outsiders do not understand the process of discovery because they do not engage in it. In a similar manner, a visitor to another culture cannot truly understand all of its practices unless they become enculturated [9]. This is fundamentally different from the so-called epistemic closure that plagues many dysfunctional organizational cultures -- rather, I am suggesting that decisions about what is waste and what is not should be made from the perspective of the scientific community rather than from that of outsiders (what cultural anthropologists would term an etic perspective).

Value from the outside, or value by understanding the practice?

Part of evaluating scientific discovery from the outside, or from a classical economic perspective, involves not only viewing the process as jabberwocky, but also overcoming popular misconceptions of scientists and their practices. Would everyday science be more highly valued if it were not so alien to the everyday activities of society-at-large (in particular the business and commercial worlds)? The classic portrayal of scientists as transmogrifiers or evil geniuses in science fiction has an influence on our value systems whether we like to admit it or not. This is particularly true of emerging sciences such as genetic engineering or artificial intelligence. We pay large amounts of money for circuses and other forms of entertainment, but do not pay for basic research with the same enthusiasm.

3) There is also an economies-of-scale problem, which is a practical concern but can be overcome. When basic research occurs, it often occurs at small scales/scopes, in what equates to hyperspecialized niche markets. Of course, big science exists. But often, this enterprise (even when supported by the government) is winner-take-all, with elite institutions and groups all too often taking it all. And with the state of science funding in crisis, new solutions and approaches are needed.

This is a significant barrier to new innovation, particularly in light of the current job market for scientists [10]. So why engage in supply-side economics, when new models for the economic value of intangible goods can be developed and employed?


NOTES:

[1] Allied Minds, a firm that takes early concepts from seed funding to start-up.

[2] underdeveloped in the sense that very little of it is monetized or gets rewarded based on conventional capitalist market models.

[3] Changizi, M.   The Colossal Pile of Jibberish Behind Discovery, and Its Implications for Science Funding. The Crux blog, November 14 (2012).

[4] While he only alludes to this in his blog post, the idea of stable allocations may be applied to the mismatch between what funding agencies/private investors want, and what the scientific enterprise (discovery) often provides.

[5] This content is cross-posted to my micro-blog, Tumbld Thoughts:


Here are a few articles on the limitations of academic science. I post them because, as critiques, they are beset with their own biases about "the system". The first article [A] is about Carver Mead and his critique of mainstream science, including how prevailing thought holds back the course of innovation. The second article [B] is about the existence of "waste" in academic institutions.

Now the question is: even though the authors bring up valid criticisms of the current academic culture, are their suggestions the correct (or even workable) solutions? For example, while the mainstream tends to reject offbeat ideas, this may be due to valid concerns about viability and other factors. When can bias be said to be good or bad, and when does it serve us well? Furthermore, can we optimize when and where we employ these biases in a way that is best for scientific discovery?

[A] Myslewski, R.   Chip daddy Mead: 'A bunch of big egos' are strangling science. The Register, February 20 (2013). Here, bias is defined as framing problems in terms of obscure mathematics, invested ideas, and lack of vision.

If you are interested in reading more about the topic, please see this book: "Physics on the Fringe" by Margaret Wertheim.

[B] Marty, T.   Clean up the waste. Nature, 484, 27-28 (2012). Here, waste is defined as big egos, the lack of modern management practices, and other things that stand in the way of "optimizing" academia. A recent letter to the Washington Post ("It's time to get serious about science") argues that we (the United States) must get serious about funding science in the face of foreign competition. But should we pour money blindly into the system, or make sure that we do not reinforce clear instances of bias?

[6] Innovation, like evolution and creativity, does not proceed in a straight line -- in fact, the branching bush metaphor works very well here (even by using a somewhat tangential example from horse evolution). So the question is: how do we monetize and create a self-sustaining economy that values as many main branches and offshoots of this branching bush as possible?

[7] This content is cross-posted to my micro-blog, Tumbld Thoughts:


Ah, meritocracy [A]. Supposedly fair, meritocracies throughout history have often produced a strange mix of outcomes [B] that are far from "the optimal outcome". Here is a comparison between the current system of meritocracy and an alternative model of power and reward, something I am calling "arcanocracy". An arcanocracy is an economy based on the rule of, or ascendance of, arcane [C] skills and ideas.

[A] By virtue of picking and rewarding the "best of the best", the meritocracy is supposed to offer an optimal system that transcends religious, crony, and ethnic bias. However, in practice it can also be quite brittle, with a number of faults that cannot be easily remedied.

[B] Here are two sets of opinion on the subject: "Down with Meritocracy" from The Guardian and  "Meritocracy and its Discontents" from The Economist. The second picture features covers from the following books: "The Rise of the Meritocracy" and "Twilight of the Elites".

[C] Arcane skills and ideas exist when a person is very skilled in a single oddball activity, but average overall. Averaged together across an economy of scale, these arcania converge to spread reward more evenly across society.

[8] Jabberwocky refers to a nonsense poem in Lewis Carroll's "Through the Looking-Glass", a book in which the protagonist (Alice) tries to navigate a strange and often nonsensical culture. Perhaps much of the jibberish that Changizi refers to in his post is actually Jabberwocky that needs to be viewed through an alternate looking glass.

[9] There is a significant literature on cultural practices related to the scientific method. The works of Bruno Latour ("Science in Action") and Roy Bhaskar ("A Realist Theory of Science") are good starting points.

[10] According to the conventional rules of supply-and-demand, we are in a bust period for academic (and perhaps also industry) science. The conventional (or conservative) wisdom is that we have trained too many people, or have too many degree-granting outlets. But this ignores the simple fact that we have produced an enormous amount of expertise. While the conventional labor market might be unable to absorb this labor, these experts may still contribute to overall productivity with the proper kind of economic model underpinning their efforts.

March 11, 2013

Makin' Pha-ses

Makin' Pha-ses, indeed [1]. Here are some "interesting" traces from two forms of noise that demonstrate a phenomenon called 1-D stochastic resonance [2]. The following three steps generate a noisy signal with three distinct phases from two sources of noise (white and black):


1) generate white noise (a randomized sine wave over a 1,000 point interval). White noise is the kind of static you may have encountered while tuning an analog radio or television [3].


2) generate black noise (a 1/f^3 deterministic signal over a 1,000 point interval). Black noise is the kind of noise you might encounter when modeling the frequency of natural disasters or blackbody radiation, and consists of mostly silence.


3) convolve both sources. This results in a signal with three distinct phases (over a roughly 2,000 point interval). Demo conducted and plots made in Matlab.

The resulting plots are shown below. The associated Matlab code is located in my Github repository [4].
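For readers who want to reproduce the flavor of the demo without visiting the repository, here is a hedged MATLAB sketch of the three steps above. The exact signal definitions and parameters in the repository code may differ; treat the "randomized sine wave" and 1/f^3 constructions below as one plausible reading of those descriptions.

% Sketch of the three-step demo; signal definitions and parameters are guesses.
n = 1000;
t = linspace(0, 2*pi, n);

% 1) white noise: a sine wave randomized point-by-point
white = sin(t) .* randn(1, n);

% 2) black noise: a deterministic 1/f^3 signal (mostly near-silence)
f = 1:n;
black = 1 ./ (f .^ 3);

% 3) convolve the two sources; the result spans ~2,000 points
mixed = conv(white, black);

subplot(3,1,1); plot(white); title('white noise');
subplot(3,1,2); plot(black); title('black noise (1/f^3)');
subplot(3,1,3); plot(mixed); title('convolved signal');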


Also, even though this is tangentially related (e.g. making independent components), I will nevertheless post it here. I have a new tutorial up on Figshare [5] concerning how to conduct independent component analysis (ICA) and how to extract ICs from real-time thermocycling curves (e.g. qRT-PCR).



In keeping with the theme of the post, however, the FastICA algorithm does rely on whitening (applying a whitening matrix that decorrelates the data) to derive the ICs.
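As a pointer for anyone following along, a minimal FastICA call in MATLAB looks roughly like the sketch below. The data matrix here is a random placeholder (one curve per row, one thermal cycle per column) and the component count is arbitrary; see the Figshare tutorial [5] for the actual analysis.

% Hedged sketch: extracting independent components from qRT-PCR curves
% using the FastICA package for MATLAB. 'curves' is placeholder data.
curves = rand(8, 40);              % 8 hypothetical curves, 40 cycles each

% fastica() whitens (decorrelates) the mixed signals internally before
% estimating the independent components.
[icasig, A, W] = fastica(curves, 'numOfIC', 3);

plot(icasig');                     % each row of icasig is one component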

NOTES:

[1] an obscure reference from Saturday Night Live ("Makin' Copies" with Rob Schneider).

[2] Wellens, T., Shatokhin, V., and Buchleitner, A.   Stochastic Resonance. Reports on Progress in Physics, 67(1), 45 (2004) AND Balenzuela, P., Braun, H., and Chialvo, D.R.   The Ghost of Stochastic Resonance: An Introductory Review. arXiv, 1110.0136 (2011).

[3] For an artistic take on television static, please see TV Static Photos by Tom Moody and Ray Rapp. Examples of white noise in 2-D.

[4] this is part of work I am doing on an idea I am currently calling "noise strategies", a hybrid approach that merges game theory with the physics of noise. More on this in future posts.

[5] Alicea, B.   Independent features of quantified thermocycling reactions (qRT-PCR). Figshare, doi:10.6084/m9.figshare.649432 (2013).

The analysis was done using FastICA 2.5 for MATLAB. For details, see the following paper: Hyvarinen, A. and Oja, E.   Independent Component Analysis: Algorithms and Applications. Neural Networks, 13(4-5), 411-430 (2000).

March 3, 2013

On worms, cities, brains, and behavior

These features are being cross-posted from my micro-blog, Tumbld Thoughts.



Here is a link to an excellent review in the blog Neuroanthropology [1] on recent work by Cornelia Bargmann [2] using C. elegans [3] as a model for uncovering the "building blocks" of sensorimotor behavior -- and its relationship with experience [4]. This work advances the idea that neuropeptides (e.g. oxytocin and vasopressin) organize behavioral complexity in a manner similar to how Hox genes organize phenotypic development [5].


This is a brand new paper (as of 2/28/2013) from the Nicolelis Lab at Duke on brain-to-brain interfaces (BTBI). BTBIs [6] are similar to BCI/BMI technology [7], but instead of neural signals driving a computer interface or machine, they are used to stimulate the brain of a conspecific (in this case, another rat on another continent [8], as shown in the above image and described in the paper).


Finally, here is an image and linkfest related to the pre-release hype surrounding SimCity5. Unique features of the updated release include the use of tilt-shift photography and the Glassbox game engine. The Sims (humanoid agents) are also a bit more human-like.

So to provide context, here are four Wired Science blog posts by Sam Arbesman. The first post [9] is about the connection between LEGO and SimCity as modeling tools (e.g. modeling cities at small scales). The second and third posts [10, 11] involve understanding the scale of cities in the context of their importance, and how Kolmogorov complexity [12] can characterize this scaling relationship.

The fourth post describes a paper by Mark Changizi [13] in which the number and frequency of distinct types of LEGO pieces contribute to the overall complexity of objects being built [14]. These mathematical principles (known as scaling laws) can be applied to understanding both the top-down design and bottom-up emergent complexity of cities.

NOTES:

[1] Lende, D.   Cornelia Bargmann and the Building Blocks of Behavior. Neuroanthropology blog. February 20 (2013).


[2] Also see an article from 2012, in which she weighs in on the idea of building a connectome: Bargmann, C.I.   Beyond the connectome: how neuromodulators shape neural circuits. Bioessays, 34(6), 458-465 (2012).

[3] C. elegans has a complexity of just 959 somatic cells (302 of which are neurons). This makes this nematode a potentially tractable model for whole-organism understanding and emulation of simple behaviors (but see article [2] for difficulties in achieving this).

See the following review article for more information: Ankeny, R.A.   The natural history of Caenorhabditis elegans research. Nature Reviews Genetics, 2(6), 474-479 (2001).

[4] Hall, S.S.   As the worm turns. Nature News. February 20 (2013).

[5] For more information on how neuropeptides may organize social and other behaviors, please see the following review in which neuropeptides are called the "dark matter" of social neuroscience: Insel, T.R.   The Challenge of Translation in Social Neuroscience: a review of oxytocin, vasopressin, and affiliative behavior. Neuron, 65(6), 768–779 (2010).

[6] Paper: Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., and Nicolelis, M.A.L.   A brain-to-brain interface for real-time sharing of sensorimotor information. Scientific Reports, 3, 1319 (2013).

In addition, here is a video of the experiment and an article in The Scientist.


[8] Rat-to-rat communication (North America to South America and back), referred to in the paper as a "rat dyad". Map courtesy Google Earth.

[9] Arbesman, S.   LEGO Meets SimCity. Wired Science, February 20 (2013).

[10] Arbesman, S.   The Scale and Context of Cities. Wired Science, March 12 (2012).

[11] Arbesman, S.   The Complexity of Cities and SimCity. Wired Science, June 14 (2012).

[12] Hutter, M.   Algorithmic Complexity. Scholarpedia, 3(1), 2573 (2008).

[13] Changizi, M.A., McDannald, M.A., and Widders, D.   Scaling of differentiation in networks: nervous systems, organisms, ant colonies, ecosystems, businesses, universities, cities, electronic circuits, and Legos. Journal of Theoretical Biology, 218(2), 215-237 (2002).

[14] Arbesman, S.   The Mathematics of Lego. Wired Science, January 6 (2012).
