Posted: Fri, March 08, 2013 | By: Franco Cortese
This is the 6th installment of a 9-part essay describing my work in conceptually developing technological approaches to indefinite-longevity throughout 2006-2010, from ages 14-18. Part 1 is here, Part 2 is here, Part 3 is here, Part 4 is here.
This part focuses on A.) possible physical bases for subjective-continuity through a gradual-uploading procedure and B.) design requirements for in-vivo brain-scanning and for systems to construct and integrate the prosthetic neurons with the existing biological brain.
The last installment described my continued work on cyber-immortality in 2008, focusing on A.) the design requirements for replicating the neural plasticity necessary for memory and subjectivity, B.) the active and conscious modulation and modification of neural operation, C.) ways to integrate new neural networks (i.e. mental amplification and augmentation) without disrupting the operation of existing neural networks and regions, and D.) a gradual transition from the physical (i.e. prosthetic) approach to the informational (i.e. computational, or mind-uploading proper) approach.
“Some believe what separates men from animals is our ability to reason. Others say it’s language or romantic love, or opposable thumbs. Living here in this lost world, I’ve come to believe it is more than our biology. What truly makes us human is our unending search, our abiding desire for immortality.”
Electromagnetic Theory of Mind:
One line of thought I explored during this period concerned whether it is not the material constituents of the brain, but rather the emergent electric or electromagnetic fields generated by the concerted operation of those material constituents, that instantiate mind. This work sprang from reading literature on Karl Pribram’s holonomic brain theory, in which he developed a “holographic” theory of brain function. A hologram can be cut in half, and if illuminated each piece will still retain the whole image, albeit at a loss of resolution. This is due to informational redundancy in the recording procedure (i.e. because it records phase as well as amplitude, as opposed to just amplitude in normal photography). Pribram’s theory sought to explain the results of experiments in which patients who had up to half their brain removed nonetheless retained levels of memory and intelligence comparable to what they possessed prior to the procedure, and of similar experiments in which the brain is sectioned and the relative organization of those sections rearranged without the drastic loss in memory or functionality one would anticipate. These experiments appear to show a holonomic principle at work in the brain. I immediately saw the implications for, and relation to, gradual uploading, particularly the brain’s ability to take over the function of parts recently damaged or destroyed beyond repair. I also saw the emergent electric fields produced by the brain as a much better candidate for exhibiting the material properties needed for such holonomic attributes. For one, electromagnetic fields (if considered as waves rather than particles) are continuous, rather than modular and discrete as in the case of atoms.
The electric field theory of mind also seemed to provide a hypothetical explanatory model for the existence of subjective-continuity through gradual replacement. [Remember that the existence and successful implementation of subjective-continuity is validated by our subjective sense of continuity through normative metabolic replacement of the molecular constituents of our biological neurons – a.k.a. molecular-turnover.] If the emergent electric or electromagnetic fields of the brain are indeed holonomic (i.e. possess the attribute of holographic redundancy), then we have a potential explanatory model for why the loss of a constituent module (i.e. neuron, neuron cluster, neural network, etc.) fails to cause subjective-discontinuity. Namely, subjective-continuity is retained because the loss of a constituent part doesn’t negate the emergent information (the big picture), but only a fraction of its original resolution. This looked like empirical support for the claim that it was the electric fields, rather than the material constituents of the brain, that facilitate subjective-continuity.
Another, more speculative aspect of this theory (i.e. one not supported by empirical research or literature) was the hypothesis that, even if electric fields turn out not to possess or retain the holographic-redundancy attributes mentioned above, the increased interaction among electric fields in the brain (i.e. interference via wave superposition, the result of which is determined by both phase and amplitude) might itself provide a physical basis for the holographic/holonomic property of “informational redundancy”.
A local electromagnetic field is produced by the electrochemical activity of each neuron. This field then undergoes interference with other local fields, and at each point up the scale we have more fields interfering and combining. The level of disorder here makes the claim that salient computation occurs at this level dubious, due to the lack of precision and the high level of variability, which provide an ample basis for dysfunction (including increased noise, the lack of a stable (i.e. static or material) means of information-storage, and poor signal transduction, or at least a high decay rate for signal-propagation). However, because the fields interfere at every scale, a local electric field contains not only information encoding the operational states and functional behavior of the neuron it originated from, but also information encoding the operational states of other neurons, acquired by interacting, interfering and combining with the fields those other neurons produce (electromagnetic fields interfere and combine in both amplitude and phase, as in holography; thus if one neuron dies, some of its properties could have been encoded in other EM waves). This appeared to provide a possible physical basis for the brain’s hypothesized holonomic properties.
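The superposition mechanism the holographic analogy rests on can be illustrated with a minimal sketch. Representing each local field as a complex phasor (an assumption made purely for illustration), the superposed field is the complex sum, so it carries information about both sources’ amplitudes and phases, and a contribution can in principle be recovered from the combination:

```python
import cmath

def phasor(amplitude, phase):
    """Represent a field oscillation as a complex phasor (amplitude and phase)."""
    return amplitude * cmath.exp(1j * phase)

a = phasor(1.0, 0.0)           # field contributed by neuron A
b = phasor(0.5, cmath.pi / 3)  # field contributed by neuron B

# Superposition combines both amplitude AND phase, as in holography:
combined = a + b

# Given knowledge of one source, the other's amplitude and phase
# survive in the combined field and can be recovered:
recovered_b = combined - a
print(abs(recovered_b), cmath.phase(recovered_b))  # 0.5 and pi/3
```

This is only a toy linear-superposition model; it shows why interference in amplitude and phase is information-preserving in a way that summing intensities alone would not be.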
If electric fields are the physically-continuous process that allows for continuity of consciousness (i.e. theories of emergence), then this suggests that computational substrates instantiating consciousness need to exhibit similar properties. This is not a form of vitalism, because I am not claiming that some extra-physical (i.e. metaphysical) process instantiates consciousness, but rather that a material aspect does, and that such an aspect may have to be incorporated in any attempt at gradual substrate replacement meant to retain subjective-continuity through the procedure. It is not a matter of simulating the emergent electric fields using normative computational hardware, because it is not that the electric fields provide some needed functionality, or implement some salient aspect of computation that would otherwise be left out; rather, the emergent EM fields form a physical basis for continuity and emergence that is unrelated to functionality but imperative to experiential-continuity or subjectivity. I distinguish this from the type of subjective-continuity thus far discussed (the feeling of being the same person through the process of gradual substrate replacement) via the term instantive-subjective-continuity.
Thus I explored variations of NRU-operational-modality that incorporate this, particularly for the informational-functionalist (computational) NRUs, as the physical-functionalist NRUs were presumed to instantiate these same emergent fields via their normative operation. The approach consisted of either a.) translating the informational output of the models into the generation of physical fields (either at the end of the process, or throughout, by providing the internal area or volume of the unit with a grid composed of electrically-conductive nodes, such that the voltage-patterns can be physically instantiated in temporal synchrony with the computational model), or b.) constructing the computational substrate instantiating the computational model so as to generate emergent electric fields in a manner as consistent with biological operation as possible. (For example, in the brain a given neuron is never in an electrically-neutral state, never completely off, but rather always in a range of values between on and off (see Part 2 of this essay), which means that there is never a break [i.e. spatiotemporal region of discontinuity] in its emergent electric fields; these operational properties would have to be replicated by any computational substrate used to replicate biological neurons via the informational-functionalist approach, if the premise that they facilitate instantive-subjective-continuity proves true.)
. . .
No, no, not night but death;
Was it needless death after all?
We know their dream; enough
To know they dreamed and are dead;
And what if excess of love
Bewildered them till they died?
Are changed, changed utterly:
A terrible beauty is born.
- W.B. Yeats
Neuronal Data Measurement and NRU Construction:
I was planning on using the NEMS already conceptually developed by Robert Freitas for nanosurgery applications (to be supplemented by the use of MEMS if the requisite technological infrastructure was unavailable at the time) to take in-vivo recordings of the salient neural metrics and properties needing to be replicated. One novel approach was to design the units with elongated, worm-like bodies, disposing the computational and electromechanical apparatus along the length of the unit. This sacrifices width for length so as to allow the units to fit inside the extracellular space between neurons and glial cells, as a postulated solution to a lack of sufficient miniaturization. Moreover, if a unit is too wide to be used in this way, extending its length by the same proportion would allow it to then operate in the extracellular space, provided that its means of data-measurement itself weren’t so large as to fail to fit inside the extracellular space (the span of ECF between two adjacent neurons for much of the brain is around 200 Angstroms).
I was planning on using the chemical and electrical sensing methodologies already in development for nanosurgery as the technological and methodological infrastructure for the neuronal data-measurement methodology. However, I also explored my own conceptual approaches to data-measurement. These focused on detecting variation in morphological features in particular, as the schemes for electrical and chemical sensing already extant seemed either sufficiently developed or to be receiving sufficient developmental support and/or funding. One was the use of laser-scanning or, more generally, reflection-based ranging (e.g. sonar) to measure and record morphological data. Another was a device which used a 2D array of depressible members (e.g. solid members attached to a spring or ratchet assembly, which is operatively connected to a means of detecting how much each individual member is depressed - such as, but not limited to, piezoelectric crystals which produce electricity in response and proportion to applied mechanical strain). The device would be run along the neuronal membrane and the topology of the membrane subsequently recorded by the pattern of depression recordings, which are then integrated to provide a topographic map of the neuron (e.g. relative location of integral membrane components to determine morphology - and magnitude of depression to determine emergent topology). This approach could also potentially be used to identify the integral membrane proteins, rather than using electrical or chemical sensing techniques, if the topologies of the respective proteins are sufficiently different as to be detectable by the unit (determined by its degree of precision, which typically is a function of its degree of miniaturization).
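The depressible-member scheme above amounts to turning an array of displacement readings into a height map. A minimal sketch, with entirely made-up units and calibration values (the `volts_per_nm` conversion factor and the protrusion threshold are illustrative assumptions, not measured figures):

```python
def depressions_to_topography(readings, volts_per_nm):
    """Convert raw piezo voltages (one per depressible member) into
    relative surface heights in nm, measured from the shallowest
    point in the frame."""
    depths = [[v / volts_per_nm for v in row] for row in readings]
    baseline = min(min(row) for row in depths)
    return [[d - baseline for d in row] for row in depths]

def detect_protrusions(topography, threshold_nm):
    """Flag grid cells whose height exceeds a threshold: a crude
    stand-in for identifying integral membrane proteins by their
    distinctive emergent topology."""
    return [(i, j)
            for i, row in enumerate(topography)
            for j, h in enumerate(row)
            if h > threshold_nm]

readings = [  # volts, one per member; a single bump in the middle
    [0.10, 0.10, 0.10],
    [0.10, 0.55, 0.10],
    [0.10, 0.10, 0.10],
]
topo = depressions_to_topography(readings, volts_per_nm=0.05)
print(detect_protrusions(topo, threshold_nm=5.0))  # flags the central cell
```

The precision caveat in the text shows up here directly: the grid pitch and the smallest resolvable depression set a floor on which protein topologies could be distinguished at all.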
The constructional and data-measurement units would also rely on the technological and methodological infrastructure for organization and locomotion that would be used in normative nanosurgery. I conceptually explored such techniques as propellers, pressure-based propulsion (i.e. a stream of water acting as jet exhaust would in a rocket), artificial cilia, and tracks that the unit attaches to so as to be moved electromechanically. The track approach decreases computational intensiveness (a measure of required computation per unit time): rather than having a unit compute its relative location so as to perform obstacle-avoidance and not, say, damage in-place biological neurons, obstacle avoidance and related concerns are negated through the use of tracks, which limit the unit’s degrees of freedom and thus prevent it from having to incorporate computational techniques of obstacle-avoidance (and their entailed sensing apparatus). This also decreases the necessary precision (and thus presumably the required degree of miniaturization) of the means of locomotion, which would need to be much greater if it were to perform real-time obstacle avoidance. Such tracks would be constructed in iterative fashion. The constructional system would analyze the space in front of it to determine whether it was occupied by a neuron terminal or soma, and extrude the track iteratively (i.e. add a segment) in spaces where it detects the absence of biological material. It would then move along the newly-extruded track, progressively extending it through the spaces between neurons as it moves forward.
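The sense-extrude-advance cycle just described can be sketched as a simple loop. This is a toy one-dimensional rendering of the idea, assuming a hypothetical `sense_ahead` predicate standing in for whatever sensing apparatus the unit carries; a real system would presumably route around occupied space rather than simply halt:

```python
def extrude_track(sense_ahead, max_segments):
    """Iteratively lay track through free extracellular space.

    sense_ahead(position) -> True if the space ahead is occupied by
    biological material (a neuron terminal or soma). Returns the list
    of positions at which track segments were extruded.
    """
    position = 0
    track = []
    for _ in range(max_segments):
        if sense_ahead(position):  # occupied: stop rather than damage tissue
            break
        track.append(position)     # extrude one segment into the free space
        position += 1              # advance along the newly laid segment
    return track

occupied = {3}  # invented scenario: a soma sits three spaces ahead
print(extrude_track(lambda p: p in occupied, max_segments=10))  # [0, 1, 2]
```

Note how the loop needs no model of its surroundings beyond the single binary sense reading per step, which is precisely the computational saving the track approach was meant to buy.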
A novel avenue of enquiry that occurred during this period involves counteracting or taking into account the distortions caused by the data-measurement units upon the elements or properties they are measuring, and subsequently applying corrections to the recorded data. A unit changes the very local environment it is supposed to be measuring and recording, which becomes problematic. My solution was to test which operations performed by the units have the potential to distort relevant attributes of the neuron or its environment, and to build units that compensate for this either physically or computationally.
If we reduce how a recording-unit’s operation distorts neuronal behavior into a list of mathematical rules, we can take the recordings and apply mathematical techniques to eliminate or “cancel out” those distortions post-measurement, thus arriving at what would have been the correct data. This approach would work only if the distortions are affecting the recorded data (i.e. changing it in predictable ways), and not if they are affecting the unit’s ability to actually access, measure or resolve such data.
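As a minimal sketch of this first approach, suppose the unit’s presence distorted a recorded membrane potential according to a known, invertible rule (the linear gain-and-offset model below is an illustrative assumption; a real distortion model would be empirically fitted and likely nonlinear):

```python
def distort(true_mv, gain=0.9, offset_mv=2.0):
    """Hypothetical model of how the recording unit's presence
    perturbs the membrane potential it is measuring (mV)."""
    return gain * true_mv + offset_mv

def cancel_distortion(recorded_mv, gain=0.9, offset_mv=2.0):
    """Invert the distortion model post-measurement to recover
    what the undisturbed value would have been."""
    return (recorded_mv - offset_mv) / gain

true_value = -65.0                 # e.g. a resting potential, mV
recorded = distort(true_value)     # what the unit actually measures
recovered = cancel_distortion(recorded)
print(abs(recovered - true_value) < 1e-9)  # True
```

The caveat in the text maps onto this sketch directly: inversion works only when the distortion rule is known and invertible; if the unit degrades its own ability to resolve the signal in the first place, there is nothing left to invert.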
The second approach applies the method underlying the first approach to the physical environment of the neuron. A unit senses and records the constituents of the area of space immediately adjacent to its edges and mathematically models that “layer”; i.e. if it is meant to detect ionic solutions (in the case of ECF or ICF) then it would measure their concentration and subsequently model ionic diffusion for that layer. It then moves forward, encountering another adjacent “layer” and integrating it with its extant model. By being able to sense iteratively what is immediately adjacent to it, it can model the space it occupies as it travels through it. It then uses electric or chemical stores to manipulate the electrical and chemical properties of the environment immediately adjacent to its surface, so as to produce the emergent effects of that model (i.e. the properties of the edges of that model and how such properties impact/causally-affect adjacent sections of the environment), thus producing the emergent effects that would have been present if the constructional/integrational or data-measuring unit hadn’t occupied that space.
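A toy rendering of this layer-by-layer modeling: each newly sensed layer’s ionic concentration is appended to the model, and a simple explicit diffusion step evolves the occupied region’s predicted state. The concentrations, layer geometry, and diffusion rate are all invented for illustration:

```python
def diffuse(layers, rate=0.1):
    """One explicit finite-difference diffusion step over the
    recorded layer concentrations (boundary layers held fixed)."""
    out = layers[:]
    for i in range(1, len(layers) - 1):
        out[i] = layers[i] + rate * (layers[i - 1] - 2 * layers[i] + layers[i + 1])
    return out

model = []  # concentrations (mM) of the layers swept through so far
for measured in [140.0, 150.0, 140.0, 145.0]:  # sensed at the leading edge
    model.append(measured)  # integrate the newly adjacent layer
    model = diffuse(model)  # evolve the displaced volume's predicted state

print(model)  # the emergent boundary values the unit would reproduce
```

In the scheme described above, the values at the model’s edges are what the unit’s electric and chemical stores would physically reproduce at its surface, so that the displaced volume still behaves, causally, as its model predicts.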
The third postulated solution was the use of a grid comprised of a series of hollow recesses placed in front of the sensing/measuring apparatus. The grid is impressed upon the surface of the membrane. Each compartment isolates a given section of the neuronal membrane from the rest. The constituents of each compartment are measured and recorded, most probably via uptake of its constituents and transport to a suitable measuring apparatus. A simple indexing system can keep track of which constituents came from which compartment (and thus which region of the membrane they came from). The unit has a chemical store operatively connected to the means of locomotion used to transport the isolated membrane-constituents to the measuring/sensing apparatus. After a given compartment’s constituents are measured and recorded, the system then marks them (their composition having been determined by measurement and already stored as recordings by this point of the process), takes an equivalent molecule or compound from a chemical inventory, and replaces the substance it removed for measurement with the equivalent substance from that inventory. Once this is accomplished for a given section of membrane, the grid then moves forward, farther along the membrane, leaving the replacement molecules/compounds from the biochemical inventory in the same respective spots as their original counterparts. It does this iteratively, making its way through a neuron and out the other side. This approach is the most speculative, and thus the least likely to be used. It would likely require NEMS, rather than MEMS, as a necessary technological infrastructure: for the compartment-constituents to be replaceable after measurement via chemical store, they would need to be simple molecules and compounds rather than sections of emergent protein or tissue, which are comparatively harder to artificially synthesize and store in working order, if the approach were to avoid becoming economically prohibitive.
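The bookkeeping core of this measure-and-replace workflow (isolate, record by grid index, swap in an equivalent stock compound, advance) can be sketched as follows. The species names and inventory counts are wholly invented for illustration:

```python
# Hypothetical stock of simple, synthesizable replacement compounds.
inventory = {"Na+": 1000, "K+": 1000, "phospholipid": 1000}

def process_compartment(index, contents, recordings):
    """Record one compartment's constituents against its grid index,
    then draw equivalent replacements from the chemical inventory to
    leave in the original spots."""
    recordings[index] = list(contents)  # indexed record of membrane region
    replacements = []
    for species in contents:
        inventory[species] -= 1         # withdraw an equivalent molecule
        replacements.append(species)    # to be deposited in place
    return replacements

recordings = {}
grid = {(0, 0): ["Na+", "Na+", "K+"], (0, 1): ["phospholipid"]}
for idx, contents in grid.items():
    process_compartment(idx, contents, recordings)

print(recordings[(0, 0)])  # ['Na+', 'Na+', 'K+']
```

The sketch also makes the NEMS-over-MEMS point concrete: the scheme only works if every recorded species has a storable, synthesizable equivalent in the inventory, which rules out replacing whole sections of emergent protein or tissue.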
In the next installment I will describe the work done throughout late 2009 on biological-non-biological NRU hybrids, and in early 2010 on one of two new approaches to retaining subjective-continuity through a gradual replacement procedure, both of which are unrelated to concerns of graduality or sufficient functional-equivalence between the biological original and the artificial replication-unit.
A sudden blow: the great wings beating still
And how can body, laid in that white rush,
But feel the strange heart beating where it lies?
- W.B. Yeats
Technological Infrastructure: This term denotes any technological systems required for something to exist, and which provide the foundation for its possibility. The technological infrastructures of computing include electricity and the semiconductor industry.
Methodological Infrastructure: This term denotes any methodology or procedural-knowledge that provides the foundation for another thing’s (i.e. another method or technology) existence. The methodological infrastructure of computing includes mathematical logic or Boolean Algebra. Any technological infrastructure almost always has a particular methodological infrastructure underlying it.
Operational Modality: A modality is a class or separate category within a larger class; different sensory modalities are the types of sense-experience corresponding with a different sense organ. While they all fall within the broader class of “sense organs” they are categorically different from one another, and so are distinguished as having or exhibiting different modalities. The operational modality of something is how it procedurally accomplishes something. Two things could have the same emergent effect (which I denote as a thing’s functional modality) while achieving it via very different ways, and thus have different operational modalities. Incandescent lighting and fluorescent lighting have the same functional modality while possessing different operational modalities.
Functional Modality: This term denotes the extent to which the emergent effects or end-products of multiple entities or processes are equivalent. It is wholly unconcerned with the individual operations and procedures used to create that emergent effect.
Subjective-continuity: this is what is sometimes called “stream-of-consciousness” by others. It denotes the feeling of “still being you”, despite being composed of different physical materials. The cells comprising our brain are not replaced wholesale, only their constituent molecules are. Thus, gradually all the physical material which once constituted our brains is replaced by different material, in the same organization and retaining the same pattern. Subjective-continuity is the sense of still being the same person, despite not having a single molecule from the ones that composed “you” seven years ago. If, instead of gradually replacing the neurons in the brain we replicated them and recreated the whole brain wholesale, at once, there would still be the original you, and now a second you. In such a case there would be no subjective-continuity through the procedure.
Graduality: This is a measure of how gradual the cumulative replacement of the components (e.g. neural regions, neural clusters, neural networks, neurons, sub-neuron components, etc.) comprising the brain is. Replacing whole populations of neurons has a lower degree of graduality than replacing individual neurons does. Replacing sub-sections of individual neurons has a higher degree of graduality than replacing individual neurons does.
Physicalist-functionalist: This class of NRU encompasses all types that use physically-embodied approaches to replicating the functional and/or operational modalities of biological neurons.
Informational-functionalist: This class of NRU encompasses all types that use computational approaches to replicating functional and/or operational modalities of biological neurons. This includes both computational simulation and emulation.
NRU: Neuron Replication Unit
1: My own.