Genetic information is stored in the structure of nucleic acid molecules, which occur in almost every living cell (mature red blood cells are an exception). The nucleotides are bonded together into a continuous sequence, "spelling" out a set of instructions for the biochemistry of the organism.
Genetic information is a form of semantic information: it has meaning. Bernd-Olaf Küppers wrote:
...biological information is associated with defined semantics. When in the rest of this chapter we refer to "biological" information, it will be precisely this semantic aspect of information that is meant. A theory of the origin of life must therefore necessarily include a theory of the origin of semantic information.
Genetic information as software
A number of people have likened the information in living things to computer software, including Paul Davies:
Instead, the living cell is best thought of as a supercomputer - an information processing and replicating system of astonishing complexity. DNA is not a special life-giving molecule, but a genetic databank that transmits its information using a mathematical code. Most of the workings of the cell are best described, not in terms of material stuff - hardware - but as information, or software. Trying to make life by mixing chemicals in a test tube is like soldering switches and wires in an attempt to produce Windows 98. It won't work because it addresses the problem at the wrong conceptual level.
David J D'Onofrio and others have also studied the operation of the genome and agree that it behaves like software:
Since all genes can be modeled using rules (be they grammar or logical) rather than physicodynamic determinism, we inductively assert that the operation and organization of the genome operate under the influence of a programming language.
They even say that 'there are "multiple programming languages" in the cell'.
Bill Gates wrote in The Road Ahead:
The understanding of life is a great subject. Biological information is the most important information we can discover, because over the next several decades it will revolutionize medicine. Human DNA is like a computer program but far, far more advanced than any software ever created.
"All living cells that we know of on this planet are 'DNA software'-driven biological machines comprised of hundreds of thousands of protein robots, coded for by the DNA, that carry out precise functions," said Venter. "We are now using computer software to design new DNA software."
Various components of the genetic code and its products are as follows.
The nucleic acid sequence is "read" by other molecules in groups of three, known as codons. With a few exceptions, each codon translates into one amino acid. There are 64 possible codons, since any of the four bases can occupy each of the three positions within the codon. Of these 64, three are "stop" codons, which tell the reading machinery to stop reading. Each of the remaining codons represents one of 20 different amino acids, with several different codons representing the same amino acid in most cases.
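As an illustration, the codon-reading process just described can be sketched in a few lines of Python. The codon table here is only a small, hand-picked subset of the standard genetic code (the full table has 64 entries); this is an explanatory sketch, not a bioinformatics tool.

```python
# Read a DNA sequence in groups of three ("codons") and translate each codon
# into an amino acid. The table below is a small illustrative subset of the
# standard genetic code; the full table has 4 * 4 * 4 = 64 entries.
CODON_TABLE = {
    "ATG": "Met",                # methionine, also the usual "start" codon
    "TTT": "Phe", "AAA": "Lys",  # phenylalanine, lysine
    "GGC": "Gly", "GAA": "Glu",  # glycine, glutamate
    "TAA": None, "TAG": None, "TGA": None,  # the three "stop" codons
}

def translate(dna):
    """Translate a DNA string codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3])
        if amino_acid is None:   # stop codon (or codon missing from this subset)
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTAAATAAGGC"))  # ['Met', 'Phe', 'Lys'] -- reading stops at TAA
```

Note that, unlike English words, every three-letter combination here is a valid codon in the real table; the sketch only omits most of them for brevity.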
Peptides and polypeptides
A peptide is a short sequence of amino acids, and a polypeptide is a long sequence of amino acids.
A gene is a string of codons that ultimately encodes a protein. The protein is constructed from a chain of amino acids, translated one codon at a time, which bond together to form a polypeptide.
The cell must know how much of a protein to produce at different times and under different conditions. This is known as gene regulation and involves a number of different mechanisms. For example, regulator genes may turn the transcription of neighboring genes on or off under certain conditions. The order of genes on a chromosome therefore carries information in addition to the information carried by each gene itself.
The simplest form of regulation may be the number of copies there are of a gene. Generally, having more copies of a gene results in higher production of the corresponding protein, so the number of copies of a gene is itself an important piece of information describing the amount of protein that should be produced. In addition, mutations on a second copy of a gene do not completely eliminate the functional protein, which is why most evolutionists consider gene duplication to be a frequent step in the development of new functions.
Genes, in eukaryotes, are grouped together into structures called chromosomes. Chromosomes are inherited from parent organisms via sexual reproduction. Individuals resulting from non-sexual reproduction (such as plants self-pollinating or self-cloning, or organisms dividing by mitosis) have genetic material very similar to their forebear's.
The English language can be used as a very loose analogy of genetic information, where the English letters are roughly analogous to DNA "letters", English words are roughly analogous to codons, and sentences or paragraphs are roughly analogous to genes.
But there are differences. In English, words have various lengths, and not every combination of letters makes a valid word. With DNA, codons have a fixed length (three "letters"), and every combination of letters makes a valid codon.
Also, in natural languages, there are subjects, verbs, and predicates. None of these occur in DNA.
However, just as English sentences have to be composed of a valid combination of words (they can't be just any words), genes have to be composed of a valid combination of codons. A mistake in the DNA might well produce a non-functioning protein.
Another limitation of the analogy is in the way that the information is parsed. English is read from left to right, top to bottom, from beginning to end. DNA can be read left to right and right to left (at the same time), as well as in other ways. This constrains how readily the letters can be changed, severely limiting the chance that a mutation will do anything but damage or destroy the information content of the genome.
Geneticist John Sanford writes:
Most DNA sequences are poly-functional, and so must also be poly-constrained. This means that DNA sequences have meaning on several different levels (poly-functional) and each level of meaning limits possible future change (poly-constrained). For example, imagine a sentence which has a very specific meaning in its normal form, but has an equally coherent message when read backwards. Now let's suppose that it has a third message when reading every other letter, and a fourth message when a simple encryption program is used to translate it. Such a message would be poly-functional and poly-constrained. We know that misspellings in a sentence would not normally improve the message, but at least this would be possible. However, a poly-constrained message is fascinating, in that it cannot be improved. It can only degenerate. Any misspellings which might possibly improve the normal sentence form will be disruptive to the other levels of information. Any change at all will diminish total information with absolute certainty.
There is abundant evidence that most DNA sequences are poly-functional, and are therefore poly-constrained. This fact has been extensively demonstrated by Trifonov (1989). For example, most human coding sequences encode for two different RNAs that read in opposite directions (i.e. both DNA strands are transcribed–Yelin et al., 2003). Some sequences encode for different proteins depending on where translation is initiated and where the reading frame begins (i.e. read-through proteins).
As well as the code for protein production found in the genes, there are several other known and suspected codes in the chromosomes.
The DNA is coiled around histone molecules, and this coiling carries a code recording the history of that cell's development in the organism, switching off certain parts of the DNA code.
Molecules attached to the "rungs" of the DNA "ladder" are used to disable or enable parts of the DNA code.
The DNA needs to be both read (to produce proteins) and replicated (for cell division), and reading cannot simply be stopped while replication occurs, so there are also codes to control which parts get read while other parts are replicated. To help with this, replication is done in sections rather than progressing from one end to the other, and the system must keep track of which parts have been replicated and which still need to be, and put it all back together at the end.
Scientists have also found what they describe as a "second code", or "second language", written "on top of" the first. Since several different codons can code for the same amino acid, the choice among those codons can carry a second meaning, influencing how genes are regulated as well as specifying how a protein is made.
David Klinghoffer sees in this evidence for intelligent design:
But think about how implausible this is. Let's say I write an article in English about Subject A. But ingeniously I choose my words in such a way that the article can also be read in a different language, providing information about a related but separate Subject B. Meditate for a moment on the ingenuity this would take -- not only that, but the forethought, with imagining a complex, distant goal being the first step in the process. The goal arises in a mind, then the mind translates it into reality.
When sexual species reproduce, each parent passes on half of its genetic material to its offspring, so each child gets a full set of genetic information, with half supplied by each parent.
However, which actual genes get passed on varies in every case, so that each child has a different set of genetic information than each of its siblings. In this way, no two children are alike, and no child is identical to its parents, although all the genetic information of the child is contained in the genetic information of its parents.
Further, each child has two sets of genes. This often provides redundancy so that if one gene is defective, the other, good, gene can be used instead.
These genes can be either dominant or recessive. This means that if a person has both a dominant and a recessive form of the gene, the dominant one will be expressed (used) and the recessive one ignored.
Differentiation and speciation
The creation account in the first chapter of Genesis repeatedly says that God created living things to reproduce "after their kind", and Biblical creationists take this to mean that each "kind" of living thing is genetically isolated from other "kinds". Early creationists generally equated a "kind" with a species, but present-day creationists usually believe that a "kind" is a larger group containing many species. (See Baraminology.) Although a positive evolution, in the sense of increasing complexity and improving function, would in principle be possible within the kinds, most creationists see this as incompatible with the general degradation of the universe since the Fall of Man and the introduction of death. Consequently, most creationists, while seeing many possibilities for adaptation and speciation within the Biblical model, believe that the direction of development is inevitably toward less genetic information and less functionality.
Even starting with a single pair of animals, as after the Great Flood, the potential for differentiation is very great. Humans, for example, have about 23,000 protein-coding genes, which is broadly typical for both plants and animals. If each of these had four variants, corresponding to two chromosomes in each of two organisms, then there would be an astronomical 10^23,000 possible combinations.[note 1] The potential for variation can also be seen in the fact that the differences between the human and the chimp genome are commonly estimated to be only 1-2%. If the 98-99% of the human genome that does not code for proteins is nonetheless capable of contributing to differentiation, then the potential is even larger. Creationists typically believe that God created the kinds with the ability to adapt to a wide range of environments, thus reconciling the small number of animals that survived the Flood with the large diversity of species observed today.
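The arithmetic behind this estimate is easy to check in Python. The snippet below simply re-derives the numbers quoted in the text and in note 1; it is not a population-genetics model.

```python
from itertools import combinations_with_replacement
from math import comb

# Ten unordered pairs can be drawn from the four gene variants A, B, C, D
# (note 1 lists them: AA, AB, AC, AD, BB, BC, BD, CC, CD, DD).
pairs = ["".join(p) for p in combinations_with_replacement("ABCD", 2)]
print(pairs)  # ['AA', 'AB', 'AC', 'AD', 'BB', 'BC', 'BD', 'CC', 'CD', 'DD']

# Equivalently, combinations with repetition: C(4 + 2 - 1, 2) = C(5, 2) = 10
assert comb(5, 2) == len(pairs) == 10

# With about 23,000 genes, each independently taking one of the 10
# combinations, the number of possible genomes is 10**23000 -- an integer
# with 23,001 digits.
print(len(str(10 ** 23000)))  # 23001
```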
A single population of organisms can split into two or more distinctly identifiable populations if environmental pressures, chance, and natural selection cause a shift in the genetic information passed down within each group. (The problem of defining and quantifying genetic information is discussed in more detail in the following section.) This can be illustrated, in a simplified way, as follows. Suppose there are two dogs, each with one gene for short fur and one gene for long fur ("S" and "L" respectively in the diagram at right). In this example, these two genes are "co-dominant"; that is, neither is dominant over the other. So these two dogs have medium-length fur.
When these two dogs mate and produce offspring, they each pass on one of these two genes to each pup. So (on average) one pup will get a long-fur gene from each parent, another will get a short-fur gene from each parent, one will get a long-fur gene from the father and a short-fur gene from the mother, and one will get a short-fur gene from the father and a long-fur gene from the mother. So two out of the four pups will be like their parents in that they have one long-fur gene and one short-fur gene, and consequently will have medium-length fur. But one pup will have two short-fur genes, with the result that it has short fur. This pup, although it has the same number of genes as its parents, has less genetic information than its parents, in the sense that it does not have a gene for long fur. The remaining pup will have two long-fur genes, with the result that this pup has long fur. Again, however, this pup is missing some genetic information that its parents had—a gene for short fur.
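The cross just described can be enumerated directly. This sketch uses the "S" and "L" labels from the example and lists every equally likely way a pup can inherit one fur gene from each parent.

```python
from itertools import product
from collections import Counter

# Each parent carries one short-fur gene "S" and one long-fur gene "L"
father = ("S", "L")
mother = ("S", "L")

# Each pup inherits one gene from each parent; all four outcomes are equally
# likely. Sorting the pair makes "SL" and "LS" count as the same genotype.
litter = Counter("".join(sorted(genes)) for genes in product(father, mother))

for genotype, count in sorted(litter.items()):
    print(genotype, count)
# LL 1  (long fur)
# LS 2  (medium fur, like the parents)
# SS 1  (short fur)
```

The 1:2:1 ratio matches the text: on average, half the pups keep both alleles, while the other half each lose one allele's worth of information.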
Now suppose that the climate turns very cold. In this colder environment, the dogs with the best chance of survival are those with longer fur. In fact, the dogs with short and medium length fur die off, because they are not as fit for the changed environment. This is what is referred to as natural selection. The long-fur dogs, though, survive quite well, and breed more dogs. But now, when passing on their fur genes, they can only pass on long-fur genes, because that is all they have. We now have a population of long-fur dogs, whereas we started with a population of medium-length-fur dogs.
If the original population of medium-length-fur dogs had been split into two separate groups, and one group was in an area which became very hot, the opposite could have happened. That is, natural selection might have eliminated all the dogs except those with short fur. So we could have ended up with two separate dog populations, one group with short fur and the other with long fur. Three separate dog populations might also have developed, with the medium-length-fur group surviving as a separate group. In a more realistic situation, involving many genes and many characteristics, the differences between the populations may become so great that they can no longer interbreed, and they would therefore be classified as new species.
Although the phenotype (external characteristics) of each population has changed, no new alleles (variant forms of genes) have been introduced. Indeed, the opposite has happened: The pure long-fur and the pure short-fur populations now have fewer alleles, and in that sense less genetic information than the parent population. The change may be called evolution in the broadest sense, but this process cannot account for the development of the complete evolutionary "family tree", which would require massive numbers of new alleles coding new information for features that did not exist before. The addition of genetic information required by large-scale evolution, if it is possible at all, would have to entail the creation of new alleles through mutations.
For more information, see Genetic mutation.
Mutations can also alter the genetic information. Most of these changes will be small, but some may result in a noticeable change in the appearance or function of an organism, and a number of such changes could possibly even result in an organism that is classified as a different species. Both evolutionists and creationists recognize that the large majority of mutations are harmful, although almost all of these are only very slightly so (nearly neutral).
Present-day creationists usually admit that mutations can sometimes be beneficial, but point out that even in those rare cases there is either no change or a net loss of genetic information in the biosphere. (See the following section.) Even if the genetic information could occasionally increase by a small amount, this would imply a steady net decrease in genetic information over time, which would rule out anything more than rare, trivial positive developments through evolution, even if natural selection worked perfectly. Evolutionists consider the extremely small number of information-gaining mutations, when subjected to natural selection, to be sufficient to account for the slow increase in complexity and function of living things over the course of millions and billions of years.
The argument for creation
The creationist argument is that as essentially no process has ever been observed to increase genetic information, and as living things are full of information, therefore the available evidence supports the view that living things did not arise spontaneously and did not evolve. This formulation invites controversy over the proper use of the terms "genetic information" and "evolution". An alternative statement of the argument that avoids these side discussions might be
- The development of complex, multi-cellular organisms from simple, single-celled precursors, and of those precursors from non-living matter, would have required certain types of processes, such as the origin of new gene sequences. If these processes occurred throughout the evolutionary history of life, then they should still be occurring and should be observed, at least occasionally, in the laboratory and the field. Since such processes are not observed, the claim that complex organisms developed from simple ones is without foundation.
This argument requires two questions to be addressed. First, what "types" of processes are required for evolution? Second, what types of process have been observed? There is implicitly a third question, namely, how often should one expect to be able to observe such processes? This quantitative question is very difficult to answer precisely and is predictably answered differently by creationists and evolutionists.
Discussions of this argument have usually focused on changes at a chemical level, such as metabolism of food sources, resistance to antibiotics, and recognition of proteins by the immune system. Among the reasons for this are
- that nobody proposes that large changes occurred in single steps, so it's the smaller changes that need to be investigated,
- that the existence and usefulness of chemical reactions can be measured quite sensitively,
- that new chemical pathways can in principle develop rather quickly, at least in a primitive form, and
- that the discussion of changes in the amount of information is comparatively straightforward.
Morphological changes, in contrast, are thought to usually proceed over many generations via regulatory processes that are poorly understood, and it is often not easy to decide whether a particular change in morphology involves an increase in information or not. Although a human being is very different from a mouse (compare a human foot with a mouse foot, for example), most of the biological differences are a matter of degree.
A gene pool may change as the result of natural selection or, in some cases, by transfer of genes from one species to another. While it will be debated in individual cases whether such a process constitutes evolution and whether or not it entails an increase in information, it is not controversial that evolution of current species from single-celled ancestors would require additional processes. Likewise, some mutations are rather straightforwardly degenerative, such as the loss of eyesight in animals that adapt to living in lightless caves. All of these processes might play an important role in the evolution of species, but they are certainly not sufficient by themselves.

It is not until new gene sequences result in the expression of novel proteins that the controversy between creationists and evolutionists begins in earnest. Finally, creationists argue that the specificity of the enzyme produced must increase. While evolutionists may argue that specificity is either hard to define unambiguously or is not a good surrogate for genetic information, the evolutionary paradigm holds that the first cells produced enzymes with less specificity than present organisms, and it is hard to imagine evolution occurring without specificity increasing at least sometimes. (An example of a novel mutation that was useful despite a decrease in specificity is the enhancement of the rudimentary ability of bacteria to metabolize xylitol.) Most creationists would also insist that the new enzyme be useful. The possibility of multi-step mutations and "pre-adaptations" may make this requirement difficult to apply in some cases, but most evolutionists would probably accept it as a reasonable condition.

Thus the argument for evolution based on genetic information stands or falls with the question of whether processes can be observed which
- introduce a new gene sequence coding for a new protein into the biosphere,
- where the new protein has a high specificity, and
- where it provides a fitness advantage to the organism in its environment.
Examples of changes claimed to have increased information
Despite evolution requiring many millions of mutations that increase genetic information, evolutionists have been able to offer no more than a handful of examples produced by natural processes, and none that unequivocally supports the evolutionary view.
In a controversial documentary, Richard Dawkins was asked "Can you give an example of a genetic mutation or an evolutionary process which can be seen to increase the information in the genome?", and was unable to provide one such example.
The few examples offered by others include the following.
Metabolism of nylon
A strain of bacteria was observed to gain the ability to eat an unnatural product, nylon. There is evidence that the mutation responsible for the new enzyme was a "frame shift" mutation. In contrast to simpler mutations which only change a single amino acid in a protein, frame shift mutations result in a completely new sequence, which would explain why the sequence of the new enzyme shows no similarity to that of other digestive enzymes and why the new enzyme shows no ability to digest traditional substrates. The question, however, is not yet settled.
The mutation was not entirely random in the sense that it occurred on a plasmid, a non-chromosomal piece of DNA which is known for having a much higher mutation rate than chromosomes. There is no evidence that the changes in the base pairs leading to the new DNA sequence were directed in any way. The high mutation rate of plasmids is seen by evolutionists and creationists alike as a mechanism to allow rapid adaptation to new environments while protecting the core genome from excessive mutations.
Creationist agricultural scientist Don Batten argues that "plasmids are designed features of bacteria that enable adaptation to new food sources or the degradation of toxins". Batten does not attempt to provide a specific mechanism; his argument is simply that because the change appears to be non-random, it must somehow be a designed mechanism. For example, it could be analogous to an electrical device intended for international use which might be designed to try different input voltages to adapt to different sources of power. In this case, the particular input voltage might be chosen randomly, but the position of the change in the instructions (the input voltage) is non-random.
For a similar example see Long-term E. coli evolution experiment.
Acquired immunity
It is common knowledge that it is possible to acquire immunity to certain virus diseases, like measles, by having the disease once, or by receiving a vaccine made from similar substances. This protection is highly specific, to the extent that a new influenza vaccine must be developed every year, because a vaccine effective against the strain present in one year will not be effective against the altered strain present the next year. In other words, the specificity of the interaction, which is one of the most important criteria for the information content of a gene in the model of Spetner, is extremely high. The genes responsible for producing the immunity are not present when the viruses are first encountered, but result from random mutations taking place over a period of merely days, at a rate perhaps one million times faster than in normal cells, such as the ones responsible for reproduction. This "hypermutation" occurs at a rate of roughly one per thousand replications per nucleotide base pair, whereas the normal mutation rate in eukaryotes is closer to one per billion replications per nucleotide pair. Hypermutation is restricted to particular regions in the genome of particular types of cells; otherwise the buildup of harmful mutations would be fatal. Accordingly, it is mediated by particular chemical mechanisms not normally active during replication.
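A minimal check of the rates quoted above; the figures are the rough order-of-magnitude values from the text, not measurements.

```python
# Mutation rates expressed as "one mutation per N replications per base pair"
normal = 1_000_000_000  # ~one per billion replications in ordinary eukaryotic cells
hyper = 1_000           # ~one per thousand replications under hypermutation

speedup = normal // hyper
print(speedup)  # 1000000 -- the "million times faster" figure quoted in the text
```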
There are many differences between the two systems, and it is not claimed that the specific mechanisms of acquired immunity are relevant to Darwinian evolution. However, in both cases random mutations in the base pairs of genes result in different proteins being produced, and there is a biological selection mechanism operating on the cells in each case. In the case of Darwinian evolution, natural selection operates by influencing the survival and reproductive success of a whole organism; in the case of acquired immunity, specific mechanisms lead to the death of cells with a weaker response to the antigen and the enhanced replication of cells with a stronger response. Evolutionists argue that acquired immunity is a model demonstrating that random mutations can and do produce new biological information in vivo.
The creationist response to this argument is usually to emphatically recount the many differences between the two systems, without specifically addressing the question of the creation of information. Spetner argues somewhat differently: he agrees "that these mutations add information to the B-cell genome [and] are random" in terms of the changes made in the base pairs, but then offers a mathematical argument that the time scales that would be required at normal mutation rates are much, much longer than even the evolutionary time scale, and therefore that the process cannot be relevant. He argues that Darwinian evolution would require larger changes than a single base pair, as such small changes would not provide any adaptive advantage and would be lost to the population. His calculation is therefore based on at least three changes occurring during the same replication. The result is that such a triple change in a population of 1000 bacteria would take about 100 trillion years.
In the examples of the previous section, the simple formulation of the argument for creation given here is difficult to maintain. Consequently, some creationists expand the argument to include additional factors, such as "randomness". Batten, for example, argues (2003):
- Thwaites claimed that the new enzyme arose through a frame shift mutation. ... If this were the case, the production of an enzyme would indeed be a fortuitous result, attributable to ‘pure chance’. However, there are good reasons to doubt the claim that this is an example of random mutations and natural selection generating new enzymes, quite aside from the extreme improbability of such coming about by chance.
In a similar direction, Spetner writes (2002):
- It's interesting, first of all, that the URL you pointed to picked the "nylon bug" as an example of a random mutation yielding a gain of information. (The short answer is, the mutation does yield an increase of information, but was it random?) It's interesting because the "nylon bug" is exactly what I used ... as a possible example of a nonrandom mutation triggered by the environment.
- I must point out that the debate here is whether random mutations in the nylon bug generated the information that permitted it to metabolize the nylon waste or was there something nonrandom about it. By the latter I mean that either the correct mutations were induced by the environment or else the new adaptation was already built into the organism so that random mutations that would be likely to occur within the population could trigger the change.
In discussions of evolution, mutations have been described as "non-random" in several different ways.
Spetner has proposed what he calls the "nonrandom evolutionary hypothesis" (NREH). He suggests (2004) that "either the correct mutations were induced by the environment or else the new adaptation was already built into the organism so that random mutations that would be likely to occur within the population could trigger the change". The first option, that the genetic machinery might detect which mutations would be advantageous for the current environment and induce these with a higher probability, has been discussed at various times by biologists, but there is no plausible mechanism by which that could occur, and, where biologists have been able to devise ways to test the hypothesis, they have not found any sign that it is true. Consequently it is not considered a viable alternative by the community of biologists.

When considering the second alternative, one should distinguish between relatively dramatic mutation mechanisms, like recombination and frame shifts, and a slow accumulation of single-amino-acid mutations. In the first case it is hard to rule out on observational grounds alone that certain sequences that will eventually be needed are waiting to be activated, for example in the so-called "junk DNA" of a genome. Spetner himself suggests that not the digestion of nylon but the digestion of some other, unspecified substance was anticipated.[note 2] It would be even harder to apply this idea to hypermutation in the immune response, both because it can be shown that the enzymes approach their final form as the result of many small mutations that gradually improve the specificity, and because the number of sequences that would have to be stored in anticipation of new and stochastic challenges year after year would be prohibitive.
Batten extends the argument in a different way. For Batten, "non-random" does not mean that the sequences obtained by activation of hidden DNA or by substitution of base pairs are in any way pre-ordained, but rather that mutations have a tendency to occur in particular parts of the DNA of a cell. These parts may be the plasmids of a bacterium or the hypermutating segments in some types of immune system cells. From there, the argument can take two paths.

One is to point to the complexity of the mediating mechanisms and assert that the mechanisms themselves must have been designed. This concedes that the existing genetic mechanisms in living cells can produce at least some new information, and proceeds to argue that these mechanisms themselves are irreducibly complex and therefore cannot have arisen by evolution.

The other direction the argument can take again concedes that new information is sometimes produced, but only when the mutation rate is extremely high, so high that the species would not survive if all its genes mutated so fast. For certain restricted functions, like digestion or immune response, a high mutation rate is tolerable and beneficial, but the genes coding for the fundamental functioning of the cell must be protected from such a high mutation rate. Therefore examples of new and useful proteins produced by hypermutations are not applicable to the normal mutations required for evolution.

One potential weak point in this argument is that a sequence generated anywhere can be transported into the germ line by various known mechanisms. However, this would also require that the new sequence was viable (made sense) in its new context, a highly questionable suggestion. If it did occur, it would be the flip side of the creationist argument that horizontal transfer of genes does not increase the information content of the gene pool. Also, it should not be surprising if the easiest place to observe evolution is where the mutation rate is especially high.
- Anon., Has evolution really been observed? (Summary article) (Creation Ministries International)
- Batten, Don, The adaptation of bacteria to feeding on nylon waste, Journal of Creation 17(3):3–5, December 2003.
- Batten, Don, Bacteria ‘evolving in the lab’?, 14 June 2008.
- Gitt, Werner, In the Beginning was Information, 2nd edition, CLV, 2000.
- Lamb, Andrew, How do we define information in biology?, 17 February 2007.
- Sarfati, Jonathan, Is the design explanation legitimate?, chapter 9 of Refuting Evolution, Master Books, 1999.
- Spetner, Lee, Not by Chance! Judaica Press, New York, 1998.
- Spetner, Lee, The Nylon Bug, 19 November 2002.
- Spetner, Lee, Nylon Bug 3. Spetner on Thomas, 27 October 2004.
- Wieland, Carl, Beetle bloopers; Flightless insects on windswept islands, Creation 19(3):30, June 1997.
- Wieland, Carl, Superbugs not super after all, Creation 20(1):10–13, December 1997.
- Wieland, Carl, Blind fish, island immigrants and hairy babies, Creation, 23(1):46–49, December 2000.
- ↑ With one of the gene variants A, B, C, and D on each of two chromosomes, the ten combinations are AA, AB, AC, AD, BB, BC, BD, CC, CD, and DD.
- ↑ Spetner wrote: "Now, why should there be a built-in capability to metabolize nylon, which did not exist until 1937 or so? The answer is there shouldn't be. But there could have been a built-in capability to metabolize some other substrate. Kinoshita et al. (1981) tested enzyme 2 against 50 possible substrates and found no activity, but that does not mean that it doesn't have activity on some substrate not tested. The activity of enzyme 2 was small, but enabled the bacteria to metabolize the nylon waste."
- ↑ Küppers, Bernd-Olaf, Information and the Origin of Life, MIT Press, 1990, ISBN 9780262111423, p.31
- ↑ Davies, Paul, How we could create life, The Guardian, 11 December 2002.
- ↑ David J D'Onofrio, David L Abel, and Donald E Johnson, Theoretical Biology and Medical Modelling, 14 March 2012.
- ↑ Gates, Bill, The Road Ahead, Penguin Group, New York, p. 188, 1995, quoted in Billionaire on biological information, Creation 23(3):22, June 2001.
- ↑ Claire O'Connell, Passing the baton of life - from Schrödinger to Venter, New Scientist, 13 July 2012.
- ↑ Kimball, John W., The Genetic Code
- ↑ What is a gene?, Genetics Home Reference.
- ↑ Sanford, John C., Genetic Entropy & the Mystery of the Genome, FMS Publications, Waterloo, New York, third edition, ISBN 978-0-9816316-0-8, pp. 131–132.
- ↑ Williams, Alex, Astonishing DNA complexity demolishes neo-Darwinism, Journal of Creation 21(3), December 2007, pp. 111-117.
- ↑ Stephanie Seiler, Scientists discover double meaning in genetic code, University of Washington, 12 December 2013.
- ↑ David Klinghoffer, Genome Uses Two Languages Simultaneously; Try That Yourself Sometime, Why Don't You, Evolution News and Views, 13 December 2013.
- ↑ Genesis 1:11-12,21,24-25
- ↑ S. A. Lerner, T. T. Wu and E. C. C. Lin, Evolution of a Catabolic Pathway in Bacteria, Science 146(3649):1313–1315, 4 December 1964.
- ↑ Spetner, Lee, A Scientific Critique Of Evolution (in an exchange with Dr. Edward E. Max). 2000.
- ↑ Musgrave, Ian, Information Theory and Creationism: Spetner and Biological Information, 14 July 2005.
- ↑ Kinoshita, S., Kageyama, S., Iba, K., Yamada, Y. and Okada, H., Utilization of a cyclic dimer and linear oligomers of ε-aminocaproic acid by Achromobacter guttatus, Agricultural & Biological Chemistry 39(6):1219–1223, 1975.
- ↑ Batten, 2003.
- ↑ Spetner, 2002.