Main Page

From Jonathan D. Lettvin


Contents

  • Jonathan D. Lettvin
      • professional links: resumé, LinkedIn, github repos, google+, facebook, email, wikipedia
      • Writing Samples: Unix/linux/BSD; Python, PHP, C++; jquery, HTML5, CSS; Node, Socket; iAPX86 assembler; OpenCL, OpenCV
      • preferred dev tools: small fast correct complete tested
      • patents: antivirus (5559960, 5826012); neuron modeling (7796173, 7952626)
      • new directions: Annotation, Visualization, framework, concurrency, gpgpu, monkey tests, traceability
  • Earlier material

Home page for a 愚公移山 ("foolish old man who moves mountains", i.e. dogged perseverance) who tries to 见义勇为 (act bravely for what is right).

Motto: Small, Fast, Correct, Complete, Tested

This motto describes my favored development goal. I have achieved zero bugs by proof, and zero bugs reported after five years of use by millions of customers. I like to:

  • remove excess code and shrink code to fit the solution
  • prefer documented complexity over sluggish simplicity
  • illustrate correct operation at extremes
  • solve stated problems with no missing cases
  • use TDD feeding traceability matrices to prove it

Simple Sample of Scalable Code Quality

Professional Skills

See my resumé. I search for new persisting (not always leading edge) technologies where I can develop useful lasting programming skills. I love bare-metal programming (machine and assembly code). I have found useful paradigms in existing languages and created new languages to suit my needs. I love back-end high-speed ingest and researching/discovering algorithms to solve difficult problems. I avoid being crippled by language constraints. Perhaps I can do some problem solving for you.

Annotation

I like to clarify scientific intent by producing audio/visual emulations of described experiments in neuroscience. These use javascript/HTML5/CSS3 and whatever other tools I learn on an as-needed basis, such as gnuplot, graphviz, and more.

Scientific Visualization

I like to produce both structural and functional displays of mathematical ideas sufficiently clearly to obviate the need for verbal/written explanation. Where necessary, I hand-draw GIF animations. Where possible, I compute visual displays with technologies like Three.js.

Frameworks

I use existing content frameworks like mediawiki where they are effective. When they are not, I produce my own wiki translation modules such as what I use in this project.

I find many frameworks have either insufficient support for my intended use or demand too much learning before they become effective. I try to avoid wasting time: I choose existing frameworks where they are adequate, or I produce frameworks of my own to encapsulate the needs-based capabilities I seek.

My favorite stage of development is early prototyping. The rules of prototyping are different from those for producing production-grade code.

  • The code is to be refactored
  • The language is not necessarily the target language
  • Proof of functionality is more important than user experience

I see development as a six stage sequence (6P):

  1. possibility (an idea for a solution is conceived for a problem)
  2. proof (the solution is proven to work)
  3. popularity (the solution is made available to internal users)
  4. practice (the solution is refined using internal feedback)
  5. private (the solution is refined using trusted customer feedback)
  6. public (the solution is maintained/repaired using anonymous customer feedback)

I see prototyping as stages 1-4 and maybe 5.

  • Python is good for rapid prototyping.
  • C++ is often good for high-quality backends.
  • jquery/HTML5/CSS3 appear to be good for high-quality front-ends.

I am still evaluating languages and resources for concurrency.

Concurrency

For my hobby, I am intensely interested in distributed non-parallel concurrency. Parallel programming is useful but limited when emulating neuron populations. Concurrent programming is much more consistent with the independence of neurons, each processing exactly its own inputs and producing exactly its own outputs.

gpgpu

Parallel programming is very useful for processes like image processing and neuron emulation. I wrote a kernel in OpenCL to implement a fairly mature RPN (Reverse Polish Notation) language including access to the entire math library.
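The OpenCL kernel itself is not reproduced here; the following is a minimal Python sketch of the RPN idea with math-library access (the function rpn and its details are illustrative assumptions, not the kernel's actual interface):

    import math
    import operator

    # Minimal RPN evaluator: binary operators plus the whole math library
    # as unary functions. A sketch of the idea only; the production version
    # was an OpenCL kernel, not Python.
    OPS = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}

    def rpn(tokens):
        stack = []
        for token in tokens.split():
            if token in OPS:                      # binary operator
                b, a = stack.pop(), stack.pop()
                stack.append(OPS[token](a, b))
            elif hasattr(math, token):            # unary math-library call
                stack.append(getattr(math, token)(stack.pop()))
            else:                                 # numeric literal
                stack.append(float(token))
        return stack.pop()

    assert rpn('2 3 + sqrt') == math.sqrt(5.0)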

Monkey Tests

Netflix developed the "Chaos Monkey", a test harness that deliberately disrupts machines in a distributed system while they serve network traffic. The goal was to keep delivering to the end user even in the presence of significant hardware failures.

The Chaos Monkey was just one approach to Monkey Testing. Numerous others have been produced, such as bad-data injection. A traceability matrix can include contributions from monkey tests to illustrate the robustness of a distributed system.
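A minimal Python sketch of bad-data injection (the consumer here is an invented stand-in, not any particular product):

    import random

    def consumer(record: bytes) -> bool:
        """Stand-in for the system under test: accept only digit records."""
        return record.isdigit()

    def monkey(record: bytes, trials: int = 1000) -> None:
        """Bad-data injection: flip random bytes, require graceful rejection."""
        for _ in range(trials):
            corrupt = bytearray(record)
            corrupt[random.randrange(len(corrupt))] = random.randrange(256)
            try:
                consumer(bytes(corrupt))      # any return value is fine...
            except Exception as error:        # ...an uncaught crash is a FAIL
                raise AssertionError(f'consumer crashed on {corrupt!r}') from error

    monkey(b'0123456789')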

I enjoy producing both monkeys and traceability matrix programs.

Traceability

I like TDD (test-driven development) with complete unit tests where possible, using existing or custom unit-test frameworks as needed to suit the requirements.

A Traceability Matrix is a visualization of which unit tests PASS/FAIL during an Agile iteration. As the version of a product advances, a traceability matrix displays when a test finally gets to PASS and when a change of code causes a test to FAIL. This helps pinpoint the origin of a failure. If displayed on a "hot board" for all to see, a FAIL can be identified and addressed the moment it occurs.
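A minimal Python sketch of such a matrix (test names, versions, and results are invented for illustration):

    # Rows are unit tests, columns are product versions; each cell records
    # whether that test passed in that version. Invented data for illustration.
    matrix = {
        'test_login':  {'v1.0': 'FAIL', 'v1.1': 'PASS', 'v1.2': 'PASS'},
        'test_ingest': {'v1.0': 'PASS', 'v1.1': 'PASS', 'v1.2': 'FAIL'},
    }
    versions = ['v1.0', 'v1.1', 'v1.2']

    print('%-12s %s' % ('test', ' '.join(versions)))
    for test, results in sorted(matrix.items()):
        print('%-12s %s' % (test, ' '.join(results[v] for v in versions)))

    # A regression is any PASS followed later by a FAIL;
    # here test_ingest broke in v1.2.
    for test, results in matrix.items():
        row = [results[v] for v in versions]
        if 'FAIL' in row and row.index('FAIL') > row.index('PASS'):
            print('regression:', test)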

Motto: Small, Fast, Correct, Complete, Tested

From time to time I have succeeded in producing both theoretically secure and measurably secure products.

Lotus™ Metro™

My first measurably secure product was Lotus Metro. When I began working on Metro, the team had 40 employees. The team was then completely disbanded except for me. I proposed to finish the product on time and under budget (about 8 months of work). I was given a product manager, a project manager, three testers, and a documenter.

The pile of documented bugs numbered in the hundreds. I eliminated all of them and worked with the testers on a complete smoke test covering every conceivable thing a customer might do wrong. The product was packaged up and the project closed for good.

The product was finished early and under budget, and millions of units shipped to customers over the following 5 years with no further customer bug report.

ITG™ BATS/Pitch ingester

My first theoretically secure product was a high-speed lexer for ingesting a market feed. BATS/Pitch produced about 2 TB a day. ITG purchased copies of the data from two vendors, and the data from both always had a little corruption. The data specification had a peculiar but rigorous record structure.

The existing ingester used pattern matching and error detection, both of which were prone to completeness errors and programmer error.

I reviewed the specification in detail and determined that a pure LR(1) lexer was sufficient to ingest the data. Bad records were easy to identify (corrupt data), and re-anchoring the LR(1) lexer on the next good record was trivial. Using the computed goto in the GNU C++ compiler, I reduced the ingest time from 36 hours to 90 minutes, with disk throughput being the limiting factor. The LR(1) lexer identified every conceivable error and reported the record spans containing corrupt data. This code was mathematically pure with no potential for bugs.
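The production lexer used computed gotos in GNU C++ against the BATS/Pitch record structure; the following Python sketch only illustrates the re-anchoring idea, with an invented record format:

    def ingest(data: bytes):
        """Sketch of an LR(1)-style scan with re-anchoring.

        Invented record format: 'S', four payload bytes, then newline.
        On corrupt input, report the bad span and re-anchor at the next
        valid record start.
        """
        good, bad = [], []
        i = 0
        while i < len(data):
            if data[i:i+1] == b'S' and data[i+5:i+6] == b'\n':
                good.append(data[i+1:i+5])       # accept one whole record
                i += 6
            else:
                start = i                        # corrupt: scan for new anchor
                while i < len(data) and not (data[i:i+1] == b'S'
                                             and data[i+5:i+6] == b'\n'):
                    i += 1
                bad.append((start, i))           # report the corrupt span
        return good, bad

    good, bad = ingest(b'Sabcd\nXXSefgh\n')
    assert good == [b'abcd', b'efgh'] and bad == [(6, 8)]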

Exhaustive Testing

In my github repository I have a Python Roman Numeral to Arabic converter. It operates in all bases from 7 to 60 and has both edge-case testing and a separately executed exhaustive test. An exhaustive test handles every representable Roman number, which (unlike Arabic numbers) is a finite set.
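A minimal Python sketch of the exhaustive round-trip idea, restricted to ordinary base-10 Roman numerals (the repository converter generalizes to bases 7 through 60; these function names are illustrative):

    # Classic Roman numerals represent exactly 1..3999: a finite set, so an
    # exhaustive round-trip test is practical.
    PAIRS = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
             (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
             (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

    def to_roman(n: int) -> str:
        out = []
        for value, glyph in PAIRS:
            while n >= value:
                out.append(glyph)
                n -= value
        return ''.join(out)

    def from_roman(s: str) -> int:
        """Assumes canonical input, as produced by to_roman."""
        n, i = 0, 0
        while i < len(s):
            for value, glyph in PAIRS:          # greedy: largest value first
                if s.startswith(glyph, i):
                    n, i = n + value, i + len(glyph)
                    break
        return n

    # Exhaustive test: every representable value round-trips.
    for n in range(1, 4000):
        assert from_roman(to_roman(n)) == n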

Sufficient Testing

Also in my github repository I have a scoring program for tenpin bowling written in C++ with sufficient tests to prove functionality without being exhaustive.
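A minimal Python sketch of the same scoring rules with a few sufficient (deliberately non-exhaustive) tests; the repository version is C++ and differs in detail:

    def score(rolls):
        """Score one tenpin game from a flat list of pin counts."""
        total, i = 0, 0
        for _ in range(10):                         # ten frames
            if rolls[i] == 10:                      # strike: next two rolls bonus
                total += 10 + rolls[i+1] + rolls[i+2]
                i += 1
            elif rolls[i] + rolls[i+1] == 10:       # spare: next roll bonus
                total += 10 + rolls[i+2]
                i += 2
            else:                                   # open frame
                total += rolls[i] + rolls[i+1]
                i += 2
        return total

    # Sufficient, not exhaustive: gutter game, all spares, perfect game.
    assert score([0] * 20) == 0
    assert score([5] * 21) == 150
    assert score([10] * 12) == 300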

HOBBY: Modeling nervous systems

A project for a 愚公移山 (a foolish old man who moves mountains). My goal is to assemble a Brain Building Kit. Where there's a will, there's a way.

My personal goal is to answer Cajal's three questions about nervous systems (ISBN 0-19-507401-7 Histology of the Nervous System): "Practitioners will only be able to claim that a valid explanation of a histological observation has been provided if three questions can be answered satisfactorily: what is the functional role of the arrangement in the animal; what mechanisms underlie this function; and what sequence of chemical and mechanical events during evolution and development gave rise to these mechanisms." Santiago Ramón y Cajal

Principally, I work on the first two questions: I model observed groups of shaped neurons, I model observed signal propagation and expression, and I replicate observed functional roles. I have used C++, Python, OpenCV, OpenCL, gnuplot, a custom gpgpu language kernel, and hand-drawn animated GIFs to develop many models. I am now looking into concurrent and parallel programming using Clojure (Erlang, Go, Haskell... still undecided) to implement rudimentary model neuron populations, including multiple layers feeding multiple neuropils. I model neurons as discrete 3D convolution/correlation/coincidence kernels, currently achieving far sub-pixel image feature detection in retina models, using methods learned during my MIT Physics training and early experience in a wet neuroscience lab.
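As a generic sketch of one ingredient (not the retina models themselves), here is sub-pixel localization of a correlation peak by parabolic interpolation, in 1-D for brevity:

    import numpy as np

    def subpixel_peak(signal, kernel):
        """Correlate, then refine the peak by fitting a parabola through the
        maximum and its two neighbors. Generic sketch, not the author's model."""
        response = np.correlate(signal, kernel, mode='same')
        k = int(np.argmax(response))
        left, mid, right = response[k-1:k+2]
        # Vertex of the parabola through (-1, left), (0, mid), (1, right)
        offset = 0.5 * (left - right) / (left - 2.0 * mid + right)
        return k + offset

    x = np.arange(64, dtype=float)
    kernel = np.exp(-0.5 * np.arange(-3, 4) ** 2)   # small Gaussian kernel
    signal = np.exp(-0.5 * (x - 31.6) ** 2)         # feature centered at 31.6
    print(subpixel_peak(signal, kernel))            # close to 31.6, not 32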

Quora posting on Brain Information constraints

I am an amateur scientist (BS Physics) without official credentials in neuroscience. Yet I spent my entire grammar school through high school years apprenticing in a well-known wet neuroscience lab, so review the literature I reference if you do not believe what I say here. The views expressed here are entirely my own except where I make reference.

Let's say that we take 100% brain use seriously. All synapses of all neurons are activated at the same time. On the face of it, an undifferentiated global pulse in the brain has no value at all, informationally similar to no activity at all. So 100% must mean something else.

There must be patterns of activity. A pattern means a volume of activity next to a volume of inactivity. A naive view would choose 50%/50% active/inactive as 100% use of the brain. However, this fails horribly. Consider that an image filled with white noise is essentially as useful as no activity at all.

Then, what makes activity informationally rich? I propose that one should expect something like a volume diffraction pattern: activity identifying sharp boundaries between adjacent fields of undifferentiated inactivity. The ability of these boundaries to migrate as activity surfaces through a field of inactivity seems like a viable starting hypothesis for making best use of neural organs. If this hypothesis were reasonable, and we wanted an activity surface to be able to move over 10 times its surface depth as wiggle-room, that would already cut useful activity down to no more than 1/10th of the cells in the brain.

But this presupposes that a laminated layer-cake of activity is best use. That makes no sense either, since there is little informational richness in moving layers up and down relative to each other. A more informationally appropriate use would be bubbles of activity, where a surface of activity surrounds a volume of inactivity and each bubble is sufficiently distant from other bubbles. Still using the factor of 10 suggested before, with bubbles of radius 10 kept a distance 10 away from their nearest neighbors, the unit activity surface scales as r² while the occupied volume scales as 2 × (4/3)πr³, for an active/total population ratio of about 1/60th of the cells in the brain. So, under this second approximation of informationally useful activity, no more than roughly 1 in every 60 cells would be active.

But this, too, is inappropriate, because a box full of same-size bubbles is still informationally poor. This is where we come back to the idea of "3D diffraction patterns". The original holograms had a peculiar zebra-stripe appearance where shining a reference laser at an angle caused a 3D image of a scene to be visible. Those holograms were 2D diffraction patterns capable of storing 3D information. My third hypothesis is that 3D diffraction patterns supported by neural organs are capable of reproducing 3D images with acceptable time-transitions. Such a scheme would require the wiggle-room to be larger in places, so the likely population of active neurons could be cut by possibly another order of magnitude. So now we are down to 1 in every 600-1000 brain cells active at any given time.

My guess is that more "intelligent" people are the ones capable of performing more transforms on these activity surfaces to achieve a more varied outcome while less "intelligent" people use a more limited set of transforms. But this is rank speculation with no foundation in existing literature. I am conducting experiments to collect anecdotal evidence that the hypothesis is plausible. The experiments involve rote training to install alternative dissimilar reflex pathways for people presented with situations in which their prior reflexes were monotonous.

Sherrington, who won the Nobel Prize for his work in neuroscience, proposed that there is no neuron unaffected by every other neuron in the nervous system. All activity is as a contributing member of a community, and all neurons contribute. Since all neurons are autonomous cells, they perform normal cellular functions as well as providing signals to distant cells when necessary and sufficient conditions are met. The notion of using only 10% is difficult to understand. "They also serve who only stand and wait" (Milton).

The vast majority (perhaps 97% or more) of axons are fully insulated without nodes of Ranvier (no access to the external ions needed for Hodgkin & Huxley membrane pulse propagation) and are therefore not carriers of membrane pulses. To see how the ubiquitous "unmyelinated" axons are insulated, review the image in Gray's Anatomy, 35th British Edition, W.B. Saunders Company, Philadelphia, 1973, p. 782. To justify the 97%, see "Functional properties of regenerated optic axons terminating in the primary olfactory cortex", Scalia, Brain Research, 685 (1995), pp. 187-197 (speculation: Lissauer's tract in the spinal cord is vanishingly small yet may contain more axons than the entire rest of the spinal cord). Investigators prefer experiments on myelinated axons because they are larger and easier to investigate; which means only 3% of axons generate the "pulses" used for the modern practice of brain mapping. Measuring gross signals in the smaller axons is published in the Gasser and Erlanger 1944 Nobel lectures. Measuring them individually is published in "What the Frog's Eye Tells the Frog's Brain", Lettvin, Proceedings of the IRE, November 1959, pp. 1940-1951.

From what I have been able to interpret, much of the activity of the brain is an attempt to inhibit activity (the bulbar inhibitory system). The inhibitory system is extraordinarily powerful, with much global general inhibition preventing excitations and more focal inhibitions constraining excitations in more specific ways. Strychnine, apparently, increases activity in the central nervous system; an animal with strychnine poisoning was relieved of seizures by stimulating the bulbar inhibitory system (private communication with J. Y. Lettvin).

It gets even more interesting when you consider that, instead of the estimated 1e11 neurons, it is reasonable to consider information to be expressed over the estimated 1e15 synapses. My "activity surfaces" could be as thin as two synapses deep, separating a featureless "more" field from a featureless "less" field. That is on the order of 0.1 to 1.0 micron according to the http://book.bionumbers.org/how-big-is-a-synapse/ web page.

All this is purely my own personal interpretation of informational necessities in nervous systems. It may be wholly unsupportable when reviewed by a professional, but then again, it may not. Errors in factual material are my own and I invite clear non-ad-hominem corrections.

I welcome feedback and discussion on this material.

Philosophy

A quote from Murray Bookchin
"I have always tried to look beyond ideas that people freeze into dogmas".

A quote from Jerry Lettvin making the same point
"If it does not change everything, why waste your time doing the study?"

A poem translated by Jerry Lettvin from Christian Morgenstern, describing the typical response. (I have had similar experiences.)

Σ Ξ MAN MET A Π MAN

After many "if"s and "but"s,
emendations, notes, and cuts,

they bring their theory, complete,
to lay, for Science, at his feet.

But Science, sad to say it, he
seldom heeds the laity

abstractedly he flips his hand,
mutters "metaphysic" and

bends himself again to start
another curve on another chart.

"Come," says Pitts, "his line is laid;
the only points he'll miss, we've made."

--Jlettvin 17:30, 22 April 2013 (UTC)