Sunday, March 1, 2026

The Fluid Mosaic Membrane Model: A Dynamic Approach

    Biological membranes underpin all forms of life, playing vital roles as permeability barriers, transport mediators and structures enabling cellular complexity. Because of their prevalence and importance, membranes have been the subject of extensive scientific research, which has led to the development of a range of models over time. Proposed in 1972 by S. J. Singer and G. L. Nicolson, the Fluid Mosaic Model (FMM) has become the most widely used nanometre-scale description of membrane structure. The FMM depicts biological membranes as lipid bilayers containing a homogeneous yet asymmetrical array of amphipathic protein, lipid and glycoprotein components, which are generally free to move laterally within the fluid structure. The model has been supported by a variety of experimental evidence, although - much like biological membranes themselves - the FMM remains dynamic and subject to adaptation as understanding of membrane organisation advances. 


Biological Membranes are Lipid Bilayers


The central basis of the fluid mosaic model of membrane structure is the lipid bilayer, which acts as a matrix throughout which the membrane's lipid and protein components are arranged. Both in nature and in vitro, bilayers and micelles are observed to self-assemble due to the amphipathic nature of their constituent phospholipids. The spontaneity of this organisation indicates that a favourable minimisation of free energy (G) drives lipid bilayer formation, where ∆G depends on an entropic increase (+∆S) through the relationship ∆G = ∆H - T∆S. The structure of the bilayer shields hydrophobic fatty acid tails from the surrounding polar solvent, allowing water molecules to adopt a more disordered and entropically favourable arrangement. This driving force is known as the hydrophobic effect, and it is also observed in the folding of globular proteins in an aqueous environment. Although other classes of lipid can also self-assemble into ordered structures, phospholipids are used preferentially in biological systems. Phospholipids favour the formation of bilayers over micelles because their pair of fatty acid moieties is too bulky to pack into a micelle's interior without hindrance, whereas lipids with only one hydrophobic tail pack into micelles readily (Berg et al., 2015). As extensive, sheet-like structures, lipid bilayers also support the formation of larger biological compartments than size-restricted micelles allow. 
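The sign logic of ∆G = ∆H - T∆S can be sketched in a few lines of code. The numbers below are illustrative placeholders I have chosen to show an entropy-driven process, not measured values for any real lipid:

```python
# Sketch: entropy-driven bilayer assembly via ΔG = ΔH − T·ΔS.
# All numerical values are illustrative assumptions, not measurements.

def gibbs_free_energy(delta_h, delta_s, temperature):
    """Return ΔG (kJ/mol) given ΔH (kJ/mol), ΔS (kJ/(mol·K)) and T (K)."""
    return delta_h - temperature * delta_s

# Hypothetical values: a slightly unfavourable ΔH outweighed by the
# positive ΔS of water molecules released from ordered shells.
delta_h = 2.0    # kJ/mol, assumed slightly endothermic
delta_s = 0.05   # kJ/(mol·K), assumed solvent entropy gain
T = 310.0        # K, physiological temperature

dG = gibbs_free_energy(delta_h, delta_s, T)
print(f"ΔG = {dG:.1f} kJ/mol")  # → ΔG = -13.5 kJ/mol (negative → spontaneous)
```

Even with an unfavourable enthalpy term, the T∆S contribution makes ∆G negative, which is exactly why self-assembly needs no free energy input.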


Spontaneous lipid bilayer formation was demonstrated by the black lipid membrane (BLM) painting experiment of Mueller et al. in 1962 - even before the development of the FMM - and evidence from this procedure was later found to be consistent with Singer and Nicolson's propositions. A small aperture was prepared in a Teflon support and pre-painted with brain lipid solution to create a hydrophobic edge. The same brain lipid sample was then dissolved in an organic solvent and painted over the aperture. Over time, a black spot was observed to form over the aperture as the lipid film thinned down to a double layer and destructive interference of the reflected light masked it. Analysis of the resultant, self-assembled bilayer returned a thickness of 40-80 Å, consistent with a lipid bilayer. The BLM-painting experiment therefore provides evidence aligning with the FMM. 


Lipid bilayers have unique properties - as described by the FMM model - which make them well-suited to their roles in organisms. They are cooperative structures, held together by many strengthening noncovalent interactions. This gives them a tendency to be extensive and eventually close in on themselves to form cells and organelles without free energy input. Bilayers are also self-sealing due to the energetic unfavourability of exposing gaps to the surrounding polar medium. Solvent molecules are excluded from the core of the membrane, creating a hydrophobic central permeability barrier that restricts the movement of large and polar molecules. An ion passing through the bilayer's core must initially shed its hydration shell in a highly unfavourable process, significantly slowing the passage of charged/polar species between compartments separated by membranes (Berg et al., 2015).


Amphipathic Components Contribute to the Mosaic Structure of Biological Membranes


Another pertinent aspect of the FMM model related to the basic bilayer structure is its 'mosaic'-like arrangement. This refers to the random distribution of proteins, lipids and glycoproteins throughout the membrane bilayer. These molecules share a common amphipathic nature, despite having different potential means of interaction with the membrane, and are directed to the membrane by a specific sorting signal co-translationally. Singer and Nicolson outlined the distinction between two principal classes of membrane protein: integral and peripheral. Globular integral proteins undergo extensive stabilising interactions with the hydrophobic core and tend to span the membrane through one or multiple transmembrane helices. Further to the initial fluid mosaic model, it has been suggested that the lipid bilayer's varying degrees of thickness and curvature are able to allosterically modify the function of integral proteins (Stillwell, 2017). Peripheral proteins are attached to the membrane surface via weaker electrostatic interactions. In rarer cases, peripheral proteins are anchored to the membrane by a covalently-attached hydrophobic chain, such as a cysteine-linked palmitoyl group. Integral proteins are generally attached more strongly, requiring detergent to remove; peripheral proteins can be released without such disruption to the structure of the membrane. 


Proteins have a range of functions as components of biological membranes. Integral proteins may form channels lined with hydrophilic amino acid residues to mediate the controlled flow of water and polar solutes. These membrane-spanning proteins are also involved in signal transduction. Peripheral proteins act as sites of communication between the cell and its environment through an array of functions: signal particle reception, enzymatic reactions, cell-cell adhesion, stability and structural support. Glycoproteins are essential for cell identity and antigen recognition. The importance of these proteins, as initially outlined by the FMM, has been demonstrated by experimental analysis of the composition of various biological membranes, relating their structure to their function. For example, the myelin membranes surrounding nerve cell axons are only 18% protein, as a high lipid content more effectively insulates action potential transmission. In contrast, the energy-transducing inner mitochondrial and chloroplast membranes are around 75% protein. 


More modern analysis of membrane protein composition has required modification of the initial FMM. The original approach classified the vast majority of membrane proteins as amphipathic in nature. However, a new class of protein - membrane-associated proteins (Nicolson, 2023) - surfaced in research from 1976 onwards, leading to a revision of the original model. These proteins are functionally, but not structurally, integral to the membrane and are therefore not required to be amphipathic. Membrane-associated proteins play a role in controlling the stability and mobility of different membrane regions without participating significantly in hydrophobic or electrostatic interactions with the membrane itself. Instead, they regulate the membrane indirectly via links to integral and peripheral proteins. Examples of membrane-associated proteins are broad and varied; the category encompasses even the cell's cytoskeleton and the extracellular matrix. The importance of these proteins to membrane structure means that the first proposed fluid mosaic model no longer applies completely and accurately to current understanding. 


Biological Membranes are Fluid and Allow the Lateral Diffusion of their Components


A particularly interesting interpretation of the FMM is that biological membranes behave as two-dimensional solutions. This refers to the ability of the protein and lipid components to diffuse laterally throughout the fluid bilayer structure, while being simultaneously restricted in their rotation across the membrane as a result of their amphipathic quality. A third dimension of lipid movement is permissible, known as transverse diffusion (or 'flip-flopping'), in which lipids switch positions from one membrane leaflet to the other (Stillwell, 2017). This transition is catalysed by a range of lipid translocase enzymes, and requires transient unfavourable interactions between the hydrophilic head and the hydrophobic core, and between the hydrophobic tail and the polar medium. For proteins, whose polar regions are far larger, this energetic cost is prohibitive - which is why the transverse diffusion of proteins has never been experimentally observed. In essence, membrane component movement is restricted to the two lateral dimensions. Importantly, this allows for the preservation of membrane asymmetry: the key property, described by the FMM, that each opposing leaflet has a different composition. Cell polarity and the distinct microenvironments that support diverse functions are therefore established on either side of a biological membrane. Antiporters are notable examples of why this asymmetry matters: they must face in a defined direction, as random rotation perpendicular to the plane of the bilayer would prevent meaningful concentration gradients from being established. Because new membranes are synthesised as extensions of pre-existing structures, membrane asymmetry is preserved. 


Lateral mobility is quantified using the formula S = √(4Dt), where S is the average distance a component diffuses laterally in time t and D is its diffusion constant. D for lipids is consistently ~1 µm² s⁻¹, whereas the diffusion constant for proteins is more variable. The G-protein-coupled receptor rhodopsin has a D value around three orders of magnitude larger than that of fibronectin, a structural membrane-associated protein forming part of the extracellular matrix. This aligns with the different roles of the two proteins, and thus their differing mobilities in the membrane, although the relative rigidity of fibronectin to some extent contradicts the initial FMM statement that proteins are arranged, and free to move, at random. 
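The formula S = √(4Dt) is easy to explore numerically. The sketch below uses the lipid value D ≈ 1 µm² s⁻¹ quoted above; the time points are arbitrary examples:

```python
import math

# Sketch: mean lateral displacement S = √(4·D·t) for a membrane component.
# D ≈ 1 µm²/s is the lipid value from the text; time points are illustrative.

def lateral_displacement(D, t):
    """Mean 2D diffusion distance (µm) for diffusion constant D (µm²/s) over t (s)."""
    return math.sqrt(4 * D * t)

D_lipid = 1.0  # µm²/s
for t in (0.001, 1.0, 60.0):
    print(f"t = {t:>7.3f} s → S ≈ {lateral_displacement(D_lipid, t):.2f} µm")
```

At t = 1 s this gives S = 2 µm, i.e. a lipid can traverse a distance comparable to a small cell in about a second - a vivid illustration of just how fluid the bilayer is.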


Membrane fluidity is also quantified in the FMM model through the specific melting temperature, Tm, referring to the temperature above which the membrane undergoes the sharp transition from a rigid to a fluid arrangement. Tm depends on the properties of the constituent phospholipid fatty acid tails. Fatty acid cis-unsaturation introduces kinks into the hydrophobic tail which push phospholipids further apart. Shorter fatty acid tails experience weaker van der Waals interactions and are held less tightly together. Both of these properties increase membrane fluidity and lower Tm. Cholesterol, an amphipathic steroid lipid, regulates fluidity within a narrow range. At T < Tm, it pushes fatty acid tails apart to maintain fluidity; at T > Tm, cholesterol interacts with the tails to prevent them from drifting apart. 


The lateral mobility of membrane components was elucidated in 1976 through the technique of Fluorescence Recovery After Photobleaching (FRAP). Surface membrane components are fluorescently labelled before a laser beam photobleaches a small membrane window, preventing fluorescence in this region. The fluorescence of the photobleached section is then recorded as a function of time. The FRAP procedure shows that membrane components drift rapidly and eventually restore the fluorescence in the bleached area. The lateral diffusion constant of the labelled membrane component is calculated using the formula D = W²/(4t½), where W is the radius of the bleached region and t½ is the time taken for fluorescence to recover to 50% of its starting value. This experimental technique provides clear evidence for lateral diffusion in biological membranes acting as two-dimensional solutions. 
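The FRAP analysis above amounts to one line of arithmetic. In this sketch, the bleach radius W and half-recovery time t½ are hypothetical example measurements, not data from a real experiment:

```python
# Sketch: recovering a diffusion constant from FRAP via D = W² / (4·t½).
# W and t_half below are hypothetical example measurements.

def frap_diffusion_constant(W, t_half):
    """Diffusion constant (µm²/s) from bleach radius W (µm) and half-recovery time t½ (s)."""
    return W**2 / (4 * t_half)

W = 2.0       # µm, radius of the photobleached spot (assumed)
t_half = 1.0  # s, time to 50% fluorescence recovery (assumed)
print(f"D ≈ {frap_diffusion_constant(W, t_half):.2f} µm²/s")  # → D ≈ 1.00 µm²/s
```

With these example numbers the recovered D matches the ~1 µm² s⁻¹ typical of lipids, consistent with the value quoted earlier.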


When Singer and Nicolson initially proposed the FMM, it was believed that lipids and proteins were organised entirely homogeneously and randomly as a consequence of this lateral mobility. The later discovery of lipid rafts has contradicted this statement (Nicolson, 2023). Lipid rafts are complexes formed from sphingosine-based lipids and cholesterol, concentrated into dynamic, local regions. It is thought that they contribute to the control of fluidity by resisting phase transitions and are involved in cell signalling. Lipid rafts are more orderly and heterogeneous than predicted by the fluid mosaic model, although the model has not been completely overturned by their discovery. Instead, biological membranes can be divided into discrete membrane domains with varying degrees of structure and sorting of lipids and proteins. Lipid rafts are liquid-ordered (Lo) domains, while the bulk bilayer is liquid-disordered (Ld) and resembles the original FMM more closely. 


Conclusion


The fluid mosaic model remains arguably the most accurate and detailed description of membrane structure, in comparison with earlier, more static theories. The FMM model has been supported by experimental evidence focusing on a range of properties, from the self-assembly of lipids into bilayers to the lateral diffusion of components of the membrane. In the decades following the original proposition of the FMM model by Singer and Nicolson, additional discoveries about membrane structure have revealed new information - such as the existence of lipid rafts - that have altered understanding of key ideas and assumptions made by the model. Even Singer and Nicolson themselves have published revisions to their previous literature. Despite this, the model still stands to describe and explain the structure, properties and functions of biological membranes in most cases, with numerical approaches allowing comparison between different membrane types. Overall, it is not a case of the fluid mosaic model being 'inaccurate'; it is simply a dynamic model that adapts to match the precision of contemporary research techniques and discoveries. 


Bibliography


  1. Berg, J. M., Tymoczko, J. L., Gatto, G. J., Jr., & Stryer, L. (2015). Biochemistry (8th ed.). W. H. Freeman & Company.
  2. Nicolson, G. L. (2023). The Fluid–Mosaic model of cell membranes: A brief introduction, historical features, some general principles, and its adaptation to current information. Biochimica et Biophysica Acta (BBA) - Biomembranes, 1865(4). https://doi.org/10.1016/j.bbamem.2023.184135
  3. Stillwell, W. (2017). An Introduction to Biological Membranes. Elsevier Science.
  4. Mueller, P., Rudin, D. O., Tien, H. T., & Wescott, W. C. (1962). Reconstitution of Excitable Cell Membrane Structure in Vitro. Circulation, 26(5), 1167–1171. https://doi.org/10.1161/01.cir.26.5.1167

Wednesday, October 30, 2024

How Proteins Fold, and why you Should Never Eat Brains

Fold a paper aeroplane exactly according to the instructions: it flies, hits your (unsuspecting) target perfectly, lands. 

Folding a protein is like folding an aeroplane. In your ribosomes, a specific set of instructions is followed to translate your genes into an amino acid chain, which then undergoes more complex folding into a certain secondary and tertiary structure. It is then 'flown' by motor proteins along the cytoskeleton, possibly out of the cell itself, to reach its target location or substrate. However, this analogy does not exactly line up - make a mistake while folding a paper aeroplane and perhaps it flies slightly to the left, or lands slightly too early. You put it in the recycling bin and start again. But what if, instead of simply being discarded, the faulty plane began to make copies of itself? This happens again and again, overriding other perfectly-folded models until your entire room is filled with deformed and vengeful wads of paper. 

While this may seem like a ridiculous spiral caused by just one badly-folded plane, something similar can actually happen in your brain. Prion (misfolded protein) diseases are most commonly known because of scandals: burgers that cause hallucination or cannibals that die while laughing uncontrollably. What connects these incidents? Eating brains. 

Aside from upsetting vegans, eating brains is not very advisable because it provides the perfect way for misfolded proteins to end up in your own brain and cause these horrific diseases. In Britain in the 1980s and 1990s, the agricultural industry looked for a better way to maximise the output of meat from livestock by using as much of the animal as possible. Cow brain and nervous tissue - prions and all - therefore began to find their way into some commercial meat products, allowing the transmission of bovine spongiform encephalopathy from cow to human. Similarly, among the Fore tribe of Papua New Guinea, it was common practice for women and children to consume the brains of the deceased as a way of paying respect - and a fast way to transmit the prion disease kuru between members of the community. The outbreak was self-perpetuating: those who contracted kuru died soon after, other people would eat their brains and catch the disease in turn, and so the cycle continued until the practice was banned in the 1950s.

Outside of these events, misfolded proteins still have an immense impact upon our lives. Surprisingly, Alzheimer's disease is not caused by any external pathogen. The degeneration of brain matter is inflicted by a simple error that happens from within: the misfolding of amyloid-beta proteins, which then spread in a prion-like manner. These aggregate into malicious plaques - like the abnormal planes slowly filling the room - taking over the brain and leaving holes in the cerebrum. 

Prion diseases give us some insight into how important it is that proteins are folded correctly in the body (and how important it is not to eat brains). But the true mechanism by which proteins 'know' how to fold into their unique shapes still remains a mystery.

Like all other biochemical processes, the folding of proteins is governed by thermodynamics. Proteins tend towards the most energetically favourable form with the lowest possible free energy. We can call this the native state: the final product of folding a protein. The more tightly a protein folds, the lower its free energy will be. This is why bonds tend to form between the constituent amino acids, each of which has a different variable group that changes how it can interact with other parts of the chain. For example, cysteine is an amino acid containing a thiol (sulphur-hydrogen) group that allows it to form disulphide bridges, holding the protein more closely together.

So, why is there any issue in working out how proteins fold - surely they can just form any necessary bonds and go about their business as intended? The problem is that, when an amino acid chain has just been synthesised, it is an extended chain with no defined 3D structure. This means that the components of the final protein structure are often situated too far away from each other for these interactions to occur at all. As the protein gets closer to its native state, these interactions become increasingly easy as the parts of the chain are held closer and closer together, funnelling the chain down towards a free energy minimum until it is stable. This is why the largest barrier to protein folding is never the final state: it is the very beginning, when the amino acids are still far apart and the free energy is still high. 

Furthermore, every bond that is made means a decrease in entropy, or disorder - seemingly a sin against the laws of thermodynamics, since reactions tend towards higher disorder because, counterintuitively, this is more stable. Proteins must compensate for this entropic penalty with a greater release of enthalpy for folding to be feasible at all.
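This tug-of-war can be sketched with the same ∆G = ∆H - T∆S relationship from before. The numbers below are illustrative assumptions I have picked to show the balance, not measurements for a real protein:

```python
# Sketch: folding is feasible only while the |ΔH| released by new bonds
# outweighs the T·ΔS penalty of ordering the chain.
# All numerical values are illustrative assumptions.

def folding_dG(delta_h, delta_s, T):
    """ΔG (kJ/mol) of folding: ΔH < 0 (bonds form), ΔS < 0 (disorder is lost)."""
    return delta_h - T * delta_s

delta_h = -200.0  # kJ/mol, enthalpy released by bond formation (assumed)
delta_s = -0.55   # kJ/(mol·K), entropy lost on ordering the chain (assumed)

for T in (310.0, 400.0):
    dG = folding_dG(delta_h, delta_s, T)
    verdict = "feasible" if dG < 0 else "not feasible"
    print(f"T = {T:.0f} K → ΔG = {dG:+.1f} kJ/mol, folding {verdict}")
```

Note that the same protein that folds happily at body temperature fails at the higher temperature, where the -T∆S term dominates - which is, incidentally, why heat denatures proteins.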

There are three leading theories as to how proteins manage to overcome this barrier to reach their native state:

  • nucleation-condensation: an initial, central 'nucleus' forms in the chain, around which other amino acids are able to form bonds 
  • hydrophobic collapse: all hydrophobic bonds form first between amino acid residues, pulling certain parts of the chain closer until further interactions can occur
  • diffusion-collision: small nuclei form throughout the structure at first, which then interact among themselves to construct a larger tertiary structure
While we are busy choosing which of these hypotheses is most likely to be accurate, folding proteins also have to make a critical thermodynamic decision. We have already established that proteins are fighting to reach their stable native state, but now we must address yet another (another!) obstacle. The pathway of protein folding isn't always a simple funnel from unstable to stable. There are often other little 'dips' in the energy landscape, which offer a different low free energy state that is not the native conformation. A fraudulent energy minimum! These act as mini funnels that trip up the protein as it journeys towards its intended state - and because these dips are also stable, it is very hard to get back out of them. The protein ends up...misfolded. 

Considering the massive impact that misfolded proteins have on the body that we examined earlier, it would make sense for cells to employ their own protein 'bodyguards' designed to help with the folding process. These are known as chaperone proteins which:
  1. help the proteins to fold into their correct native state and not some other random energy minimum
  2. stop proteins from aggregating - meaning that they stick to each other and form nasty plaques, like we saw in the example of Alzheimer's disease
The helpful presence of chaperones is quite a relief, once you realise that misfolded proteins such as prions have the ability to deform other proteins as they fold, amassing troops of infectious conformations.

The significance of folding your proteins right - and what happens when you don't - is evidently food for thought. Although this should definitely not be confused with 'thought for food', which was the grave mistake that led us to consider this in the first place...

References:


Lapidus, L., et al. (2007) 'Protein Hydrophobic Collapse and Early Folding Steps Observed in a Microfluidic Mixer' PubMed Central. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC1914423/ (accessed 30/10/2024)

Beck, C., Siemens, X., Weaver, D. (2001) 'Diffusion-Collision Model Study of Protein Misfolding in a Four-Helix Bundle Protein' ScienceDirect. Available at: https://www.sciencedirect.com/science/article/pii/S0006349501759486 (accessed 30/10/2024)

Sunday, September 22, 2024

GEO-metry at the Brussels Institute of Natural Sciences

This July, I spent a week in Brussels - a capital well-known for its striking silver Atomium structure that stands out as a futuristic symbol on the city skyline. Beyond the Atomium, Brussels is home to a richer science scene: including its Institute of Natural Sciences. This museum is most visited for its impressive collection of dinosaur remains and early hominid ancestors, yet hidden in the basement is a dazzling exhibition of mineral specimens that revealed the complex molecular geometry behind the facets and vertices of crystal topology. 

One thing that I noticed as I walked around the exhibition was the fact that so many of the samples - disguised in a full array of colours, shapes and sizes - were all ultimately the same mineral: quartz. Quartz is the second most abundant mineral in the Earth's crust after feldspar, but this does little to account for the fact that every variety of quartz is so strikingly unique - even the majority of sand grains on the beach are, in reality, miniature quartz crystals. So what makes each quartz deposit so different?

To understand this, we must first distinguish between the theoretical crystal form of quartz and the true crystal habits it exhibits in real life as a result of variations in its formation. The form of quartz is a model of how the 'perfect' crystal should look in its pure form, unaffected by any chemical or physical factors. 

Quartz also goes by the name of silica, or silicon dioxide, with the formula SiO2. Since silicon is tetravalent like carbon, it takes on a similar molecular shape when it is in a lattice. Each oxygen atom in the crystalline lattice is shared between two silicon atoms, so each silicon atom has a coordination number of four - four oxygen atoms surround every silicon - even though the overall empirical formula is SiO2. Each silicon therefore sits in a tetrahedral arrangement with a bond angle of 109.5 degrees between the Si-O covalent bonds. This shape produces a non-planar hexagon, which can be visualised in this simplified representation of a section of the structure that I generated using MolView 3D. 
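The 109.5-degree figure isn't arbitrary, by the way - it falls straight out of the geometry of a regular tetrahedron, where the angle between any two bonds from the central atom satisfies cos θ = -1/3. A quick check:

```python
import math

# The ideal tetrahedral bond angle: two bonds from the centre of a regular
# tetrahedron to its vertices are separated by θ = arccos(-1/3).
angle = math.degrees(math.acos(-1 / 3))
print(f"tetrahedral bond angle ≈ {angle:.2f}°")  # → ≈ 109.47°
```

Rounded to one decimal place, this gives the familiar 109.5 degrees quoted above.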
The agglomeration of the tetrahedral units produces a hexagonal prism shape. This has four distinct axes in its geometry: three of equal length, lying in the same plane at 120 degrees to each other, and a fourth of different length perpendicular to this plane through the point of intersection. As crystallisation occurs (in ideal conditions), molecules are added on the same plane as existing molecules, causing the formation of these neat edges and faces instead of a curved or irregular structure. 

Another element to consider is the actual process of crystallisation, or nucleation. At high temperature and pressure, silica is able to dissolve. Then, as the solution cools, it enters a state known as metastable supersaturation, in which the concentration of the solution is higher than the solubility yet the mineral still remains dissolved. This is considered metastable because the solution sits in a local energy minimum - not the lowest possible energy state (achieved by overcoming the nucleation barrier) but still a stable intermediate. Upon further cooling, the boundary of this metastable state is reached and overcome, and crystallisation occurs. A true theoretical crystal achieves this homogeneously, without a scaffold of foreign particles. In reality, it is much faster and easier for a nucleus to form on a heterogeneous base of an existing impurity - much as a snowflake forms its crystal structure around the nucleus of a dust particle in the atmosphere. Nucleation - in spite of the initial energy barrier that must be overcome - is energetically favourable overall and facilitates further crystal growth. Because supersaturation makes it easier to form nuclei, crystals formed under high supersaturation are smaller than those formed under low supersaturation: nucleation is favoured over the growth of existing crystals. Likewise, at low supersaturation the crystals will be larger, because growth is more favourable than forming an entirely new crystal.

The habit of a crystal is how it actually forms if the conditions and specific differences of formation are taken into account. It is the variation in habit of quartz which results in the formation of several different kinds of appearance and structure that are visible in nature. 

Structural deformations to crystals often include interstitial defects (when extra atoms are present in spaces where they are not typically found) and their inverse, vacancy defects. These often occur at the same time within the structure. For example, the Frenkel defect is marked by the movement of an atom from one place to another, causing an interstitial defect at the destination and a vacancy defect in the position it used to occupy. 

Impurities are the result of unwanted chemicals that are present in the structure during formation. However, in the commercial world these are rarely unwanted changes - in fact, the wide spectrum of quartz colours is desired, and manufacturers often treat certain kinds of quartz with heat to cause them to change colour. An example of this is the production of citrine from amethyst: amethyst is a variety of quartz given its striking purple colour by the presence of iron (IV) ions within its lattice. Transition metals such as iron are able to absorb certain wavelengths of light and transmit or reflect the rest, giving them bright colours. The particular wavelengths absorbed depend on the oxidation state of the ion, and change as the ion is either oxidised or reduced. In the formation of golden citrine from amethyst, high temperatures are used to reduce the iron (IV) ions to iron (III) ions, which causes a shift in the wavelengths of light reflected, thus changing the mineral's colour. This is a widely-used manufacturing process because citrine is so rare in nature, since its formation requires natural amethyst to undergo similarly high levels of naturally-occurring heat. 

Finally, another kind of deformation is twinning. This is when multiple crystals join together as they form. The Brussels Institute has handily colour coded the faces in the diagram below of a quartz that shows how multiple quartz crystals can form together and still retain the consistency of their planes. Even though this diagram shows quartz crystals all facing the same direction attached by the red-coded faces, twinning can also happen as another crystal joins to one of the green or yellow-coded faces - during which the crystal will grow at a sharp angle to create an interesting (and much spikier) structure. 







Above is a quartz crystal in my collection - you can easily see how its faces at the perfectly clear tip match up with the shaded diagram in the exhibition and the hexagonal model presented earlier. This shows just how predictable a natural phenomenon can be with the use of geometric and thermodynamic rules. The other end of the crystal, however, has become crushed and deformed over many years of weathering and wear into a unique and disorderly shape. Mathematics can only do so much to predict how crystals will form: as I observed each specimen in the Institute's collection, I realised that we must instead learn to translate the intricate stories about the past written directly on the facets of each individual crystal. 


References:
 
All accessed 22nd September 2024

https://www.unearthedstore.com/blogs/geology-unearthed/how-do-crystals-form-into-different-shapes

https://www.mt.com/gb/en/home/applications/L1_AutoChem_Applications/L2_Crystallization/Supersaturation_Application.html

https://en.wikipedia.org/wiki/Crystal#Defects,_impurities,_and_twinning

https://en.wikipedia.org/wiki/Crystal_growth










Wednesday, April 10, 2024

How the Solar Eclipse Changed your Perception of Colour

 Monday 8th April marked a rare astronomical event across North America - the first total solar eclipse visible from the United States since 2017. Despite living in England, where the solar eclipse was not visible, I was still able to view photographs taken by a friend who witnessed it, capturing the white halo of light hugging the dark silhouette of the passing moon. Although the beauty of the phenomenon was transmitted through the images, my friend spoke of another - somewhat supernatural - effect sparked by the eclipse that could not be captured by even the highest-resolution lens: a silvery-blue cast had descended over the scene, tainting the surroundings with its dusty tinge. 

This effect is known as the Purkinje shift. To explain it, we must examine both the anatomy of the human eye and the composition of the sunlight that reaches Earth. 

The eye contains two types of specialised cell tasked with translating light input into an electrical output - a language understood by the brain that allows us to perceive the colour, shadow and contour of the world around us. These cells are the rods, which detect light and dark, and the cones, which pick up colour signals. The cones between them detect the longer red and green wavelengths and the shorter blue wavelengths; rods respond over a single, narrower range that peaks in the blue-green. Because all rods share the same pigment, they cannot distinguish between wavelengths - this is the reason that scotopic (night) vision tends to be perceived in minimal colour. 

The structures of rods and cones are relatively similar: a region of disks which detect the incoming light, followed by a mitochondria-rich cytoplasmic zone and a nuclear region linked by a ciliary bridge, ending in a synaptic area that transfers the resulting signal. Light is detected by proteins called opsins, each bound to a chromophore called retinal; when retinal absorbs a photon, it flips from its cis to its trans isomer. This change activates a protein called transducin, the next step in the cascade. Transducin is bound to guanosine diphosphate (which differs from adenosine diphosphate only in its nitrogenous base); this is exchanged for guanosine triphosphate, activating a phosphodiesterase enzyme that hydrolyses cyclic guanosine monophosphate and lowers its concentration in that region of the cell. The falling concentration results in the closure of cation channels, so positive charge stops flowing into the outer segment and the membrane potential changes. This change in charge is the electrical signal, which passes over the terminal synapse and along the optic nerve to the occipital lobe of the brain for further processing. 

Photopic vision lies in the range of the cone wavelengths, and cones are therefore the cells activated more during the day. During an eclipse, however, a third type of vision, distinct from both scotopic vision at night and photopic vision in the day, is employed: mesopic vision. This range lies between scotopic and photopic and therefore uses both rod and cone output simultaneously. As covered earlier, rod sensitivity overlaps the blue, red and green regions but predominantly favours the blue end. Thus during an eclipse, when both rods and cones are firing at the same time, the rod contribution shifts perception towards the blue end of the spectrum: the Purkinje shift, explained biologically.

Additionally, this phenomenon can be explained using the physics of the light that reaches our planet from the sun. Direct sunlight comprises all visible (and some invisible) wavelengths, and together these create the white light of daytime. The explanation for the sky's blue wash is simple: blue wavelengths are shorter and therefore more strongly scattered by air molecules. It is this scattered blue light which reaches our eyes from every direction of the sky, while red and green light is more readily transmitted, making sunlit objects look more yellow - rather like a child's drawing of a bright, dandelion sun in the corner of a blue page. During an eclipse, on the other hand, we receive no direct light: everything that reaches the ground has arrived indirectly, the direct beam having been shielded by the moon. And this indirect light - the same as the blue of the sky - consists of the most easily scattered blue wavelengths. With less red and green light reaching the ground, my friend noticed a slight blue shroud over the landscape: a product of our unique anatomy and of the physics of scattered light. It is an effect best admired in person, since no camera yet replicates the intricacies (and natural 'blind spots') of our rod and cone cells that allow this change to be perceived.
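The strength of this scattering rises steeply as wavelength falls - Rayleigh scattering scales with 1/λ⁴. A quick calculation shows how much more readily blue light is scattered than red (the wavelengths below are representative values chosen for this sketch, not precise measurements):

```python
def relative_scattering(wavelength_nm: float, reference_nm: float = 450.0) -> float:
    """Rayleigh scattering intensity relative to light at reference_nm.
    Intensity scales as 1 / wavelength^4."""
    return (reference_nm / wavelength_nm) ** 4

# Representative wavelengths: blue ~450 nm, green ~550 nm, red ~650 nm
for colour, wl in [("blue", 450.0), ("green", 550.0), ("red", 650.0)]:
    print(f"{colour}: {relative_scattering(wl):.2f}")
```

On these numbers, blue light is scattered roughly four times more strongly than red, which is why the indirect, scattered light of an eclipse carries a blue tinge.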

Thursday, February 15, 2024

When Politics and Particles Collide

Although quantum mechanics is typically mislabelled as a very recent concept, it actually dates back to 1900 - an entire 53 years before the landmark determination of the double-helix DNA structure by Franklin, Crick and Watson. It began with a reputation as a branch of science set to revolutionise and rebel against the classical principles of physics, yet over time quantum mechanics has become deeply integrated into the fundamental laws of life. For example, the device that you are currently reading this on could not exist without the quantum principles underlying semiconductors and modern electronics. 

Having outlined its importance, it is still crucial to understand what 'quantum' actually means in the context of science. Energy can be seen not as something transferred continuously, but as something passed on in indivisible chunks labelled 'quanta'. These quanta take many forms, but the most widely recognised is the photon, the quantum of light. Quantum mechanics deals with the interactions between these quanta and their environment, on the imperceptibly small scale of subatomic particles and fields. 

The electron plays a critical role in both the understanding and the complexity of quantum mechanics, because electrons behave not exclusively like particles but oftentimes like waves. These waves are of interest as they show distinctly quantum properties, including the apparent ability to 'exist' in more than one place at once: the location of an electron in its orbital at any given time can only be described by a probability, rather than a certain coordinate. While an electron is not being observed, it can feasibly be found anywhere within its orbital where that probability is non-zero. This changes the moment the electron is measured, or observed: once its position is recorded, there is a 100% probability of finding it in that place at the exact time of observation, and the spread of possibilities collapses to a single outcome.

This principle applies cleanly to single particles, but in a larger, multi-particle system the randomness of these probabilities tends to cancel out, and the overall averaging minimises the effect of quantum events such as tunnelling, in which particles seem able to 'jump' an energy barrier without climbing over it, instead passing directly through. While this happens readily at the subatomic level, it is effectively impossible for a whole human being to tunnel, as this would require the aligned, coordinated tunnelling of so many particles that the chances are far lower than either of us ever winning the lottery. 
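To put rough numbers on this, the probability of tunnelling through a simple square barrier falls off exponentially with the mass of the particle. The sketch below uses the standard WKB estimate; the barrier height and width are illustrative values chosen for this example, and the particle's energy is assumed negligible next to the barrier:

```python
import math

HBAR = 1.054571817e-34            # reduced Planck constant, J*s
EV = 1.602176634e-19              # one electronvolt in joules
ELECTRON_MASS = 9.1093837015e-31  # kg

def tunnelling_probability(mass_kg: float, barrier_ev: float, width_m: float) -> float:
    """Rough WKB estimate T ~ exp(-2*kappa*L) for a square barrier,
    assuming the particle's energy is well below the barrier height."""
    kappa = math.sqrt(2 * mass_kg * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# An electron facing a 1 eV barrier, 1 nm wide: a small but real chance
print(tunnelling_probability(ELECTRON_MASS, 1.0, 1e-9))
# A particle a million times heavier: the probability underflows to zero
print(tunnelling_probability(ELECTRON_MASS * 1e6, 1.0, 1e-9))
```

Scaling the mass up by a factor of a million drives the probability from roughly one in thirty thousand to a number too small for a computer to represent - and a body made of vastly more particles tunnelling in concert is unimaginably less likely still.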

One of the pioneers of this theory was Pascual Jordan, a German-born theoretical physicist who published a groundbreaking research paper on the matter in 1932, titled 'Quantum Mechanics and the Fundamental Problems of Biology and Psychology'. However, this paper was shocking in a more unexpected way - while the scientific concepts he presented were factual and researched, he presented his findings rather controversially. 

At the time in Germany, something else was brewing: this period from the early 30s to the mid-40s marked the rise and fall of the Nazi regime. Alongside inspirational ideas surrounding quantum biophysics, Jordan fell into the trap of the radical and authoritarian views still condemned globally to this day, gradually succumbing to intensifying political beliefs that infiltrated his papers. In this unsettling line from his paper, the prioritisation and deification of the state is set out clearly: '...absorption of a light quantum in the steering centre of the cell can bring the entire organism to death and dissolution - similarly to the way a successfully executed assault against a leading statesman can set an entire nation into a profound process of dissolution.' (Jordan, 1932)

In comparing the cell to a state whose central 'authority' must be protected, Jordan promoted the argument that submitting to a higher power was biological - a somewhat natural, science-defined order of life. Submitting to a higher power was exactly what Jordan would go on to do: the year directly following the publication of his striking paper, he joined the Nazi party himself. His reasons were largely left up to debate; in his own defence, he frequently claimed that he only joined the party in a bid to prevent Nazi politics from colliding with the world of science (Dahn, 2023) - yet a collision of politics and science is exactly what he ended up producing himself. His papers in the years that followed grew more and more littered with references - implicit and explicit - to the agendas of the party, as he fell into frequent correspondence with individuals closely linked to Hitler himself. 

Until 1933, Jordan wrote under the pseudonym 'Domeier', which he used to conceal his Nazi involvement from the rest of the scientific community, including those he had collaborated with in the years before. Just eight days before he made his move to join the party, he finally published a propaganda-rich paper under his own name, forever linking himself, and the scientific community he stood for, to Nazi ideology and contemporary politics. In this paper, he urged the University of Rostock to take on a 'militant character' (Dahn, 2023), in a way overtly supportive of the party. 

Despite some accuracy in his statements - particularly the notion that living organisms are distinct from other organic matter in their centralisation of key molecules such as proteins and DNA (Al-Khalili and McFadden, 2014) - Jordan's work was ultimately dismissed by his contemporaries, and it is thus rarely referenced today except in the context of his political involvement. Perhaps the most controversial outcome was the criticism received by the men he had collaborated with, Wigner and von Neumann. Many argue that they should have cut ties with him once the scandal surfaced, but fail to realise that credit for their own papers was on the line - alongside their lack of clear knowledge of the situation, a direct result of the pseudonym he wrote under. The public at the time were physically, if not mentally, subject to many sources of influence and propaganda dictating how they should think, interact and live, and showing open disrespect for Nazi ideology could have turned many alliances against them. 

After all, should we really allow politics to infiltrate the world of science? Perhaps it is best to try to separate Jordan's scientific accomplishments from his political shortcomings. However, doing so would disregard the fact that he himself was incapable of removing governmental influences from his writings, and let explicit biases slip through. Pulling the life out of a scientist's life work is, by definition, impossible; to learn from and truly appreciate scientific history, we must understand the context in which it was written.

https://pubs.aip.org/physicstoday/article/76/1/44/2877362/Nazis-emigres-and-abstract-mathematicsToday-Jordan 

Accessed: 15/02/2024 

J. Al-Khalili and J. McFadden, Life on the Edge (Bantam Press, 2014)

Die Naturwissenschaften, vol. 20 (1932), pp. 815-821

Saturday, January 27, 2024

Right Now, your Body Just Fought a Cancer Cell

While cancer is widely attributed to genetic malfunctions, it is also your genes that you have to thank for combatting these faults before damage is caused. The genes that claim this title are the tumour suppressor genes, which do exactly what it says on the tin: suppress tumours and prevent uncontrolled cell division.

To understand and appreciate the importance of this family of genes, a foundation of knowledge of the cell cycle is essential. The cell cycle is what allows the organs and tissues of your body to work in harmony, enabling the growth and repair that maintain balance throughout the fabric of your body. At the simplest level it is divided into three main stages: interphase, a long period of cell activity and preparation for division; mitosis, the division of the nucleus that precedes the split of the mother cell into two identical daughter cells; and cytokinesis, the breathtaking moment when the cytoplasm pinches off into two distinct successors to continue the ever-ebbing flow of cellular life. The entire process takes on average 24 hours - as the sun rises and sets around us, our cells quietly complete their own cycle.

During interphase, many critical tasks are undertaken alongside the cell's particular function, in anticipation of a successful future division. The first step of interphase is known as G1 - the first growth phase - in which the cell increases in size to maximise its capacity for division, while upregulating the synthesis of proteins that aid organelle production. This stage relies on the right balance of free nucleotides, amino acids, temperature and nutrients to provide the conditions for growth. DNA replication only commences during the following S (synthesis) phase, forming two sister chromatids from every chromosome, provided that materials and space are sufficient for this to be carried out effectively. The final stage of interphase is G2 - the second growth phase - in which organelles are replicated, using the materials collected during the first growth, ready to be distributed between the two daughter cells. Energy stores are also increased to power the motor proteins that operate on the spindle fibres during the following mitotic phase, which is an active, energy-requiring process. Finally, DNA is checked for errors before mitotic division is initiated.

Mitosis in eukaryotes - the group of nucleus-containing organisms to which humans belong - involves four main stages. The first is prophase, during which the chromatin condenses within the nucleus into chromosomes; the word literally means 'colour bodies', since these compact forms of DNA are clearly visible using staining techniques such as Giemsa staining, or when viewing material on a spectral karyotype. The centrosomes move to either pole of the cell, where they organise the tubulin spindle that plays a pivotal role in the stages that follow. Finally, the nucleoli break down, ending the production of ribosomes. By metaphase, kinetochores have assembled on the centromeres of the chromatids, the spindle microtubules have attached to these kinetochores, and the chromosomes are aligned along the equator of the cell. In anaphase A, the centromeres split and the spindle microtubules depolymerise and shorten, pulling one chromatid from each pair of sister chromatids to either pole of the cell. This is followed by anaphase B, in which the spindle poles themselves move apart towards the cell membrane, further separating the two sets of chromosomes. The final stage of mitosis is telophase, in which the remaining spindle fully depolymerises and a nuclear envelope is assembled around the genetic material gathered at each side of the cell.

The last part of the cell cycle is cytokinesis, in which the cell membrane pinches the cytoplasm into two separate, yet identical, daughter cells, each containing its own full set of genetic information. In plants, which have a cell wall, this stage instead involves the formation of a cell plate through the middle of the cytoplasm, which eventually partitions the product cells.

With a process so complex and laced with so many fine yet vital details, it is unsurprising that there is potential for errors to occur. Common faults are marked by product cells with the wrong amount of genetic material (aneuploid cells) or with damaged DNA. If sister chromatids fail to separate during anaphase, one daughter cell loses material and has monosomy, while the other gains too much and experiences trisomy. Binucleated cells can also arise if, during cytokinesis, the cell fails to divide the nuclei evenly between the two product cells. However, the error most commonly tied to cancer occurs when the cell loses control of its own division mechanisms, propelling it into a state of frenzied division and propagation, planting faulty gene-containing seeds throughout the tissues. This erroneous growth is known as a tumour; it is often benign, yet a malignant tumour can be secretly perfidious, metastasising and travelling through the bloodstream as a vector of malfunction to germinate elsewhere. There are many theories as to how cancer arises, and no single answer. The sparks that ignite the disease vary not only between cancer types but also from individual to individual, which is what sets it apart from other conditions in its unique difficulty to study. A common hypothesis that marries genetic causes with environmental factors is the 'two-hit' theory, which suggests that a heritable mutation may be passed on in one copy of a tumour suppressor gene. Despite this inheritance, the other copy of the gene is still present to encode the cell-regulating protein, so the single mutation at first has no effect. This is often coupled with genomic instability, meaning that - in a twisted draw of probability - it is highly likely that environmental factors cause a second hit which eliminates the other copy of the gene and prevents the cell-controlling protein from being synthesised at all, resulting in the breakdown of any barriers to incessant replication.
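A back-of-the-envelope comparison shows why an inherited first hit matters so much. The per-copy knockout probability below is a made-up illustrative figure, not a measured rate:

```python
# Toy arithmetic for the two-hit hypothesis.
p = 1e-6  # assumed chance that one copy of the gene is knocked out in a given cell

sporadic = p * p   # no inherited mutation: both hits must occur by chance
inherited = p      # one hit already inherited: only the second is needed

print(f"sporadic:  {sporadic:.1e}")
print(f"inherited: {inherited:.1e}")
print(f"risk ratio per cell: {inherited / sporadic:,.0f}")
```

On this toy figure, a cell carrying one inherited hit is a million times more likely to lose both copies - which is why inherited tumour suppressor mutations so dramatically raise cancer risk.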

Fortunately, your cells are equipped with reinforced mechanisms to prevent the spread of damaged DNA, in the form of many regular checkpoints throughout the normal cell cycle.

The first of these is known as the restriction point and occurs between the G1 and S phases of the cycle. It confirms that conditions are optimal for the cell cycle to proceed, including the right balance of nutrients and extracellular signals. Up to this point, the cell has relied upon external growth factors to move forwards; beyond the restriction point, the cell becomes committed to the cycle, no longer requires growth factors, and is unable to leave the process. If the cell fails at the restriction point, the cycle is terminated and the cell is instead sent into the G0 phase. This is not a mechanism used solely when errors are detected - neurones, for example, are fully differentiated and therefore do not need to remain in the cell cycle, and a lack of nutrients can also trigger entry into G0. The G0 phase may be quiescent, and therefore reversible, or irreversible, as in senescence and terminal differentiation. Senescence allows cells to continue their typical function until they die, without further divisions. If a grave mistake in the DNA or irreparable structural damage is present, the cell may instead undergo the more brutal process of apoptosis, eradicating the cell and physically preventing it from differentiating or carrying out erroneous functions that could lead to cancer. Evidently, mutations which open loopholes in the restriction point may allow damaged cells to slip past these regulations and commence cancerous divisions. 

The second checkpoint of the cell cycle is the G2 checkpoint, which ensures that no further damage to the DNA is present before mitosis is initiated, giving cells the opportunity to repair and re-synthesise any faulty sections of DNA. 

Another checkpoint operates during mitosis itself: the spindle checkpoint, so called because it acts during metaphase, when the microtubule spindle is in use. It makes sure that each chromosome is properly connected to the spindle by its kinetochore, confirming that division during the succeeding anaphase A will segregate the chromatids properly and that the daughter cells will be truly diploid and identical. Defects in this checkpoint, as with all the others, carry the risk of cancer arising from aneuploidy. Nor is cancer the only possible consequence: Down syndrome is caused by trisomy of chromosome 21, usually the result of chromosomes failing to separate evenly during the meiotic divisions that form the gametes. 
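The three checkpoints described above behave like gates in a simple decision sequence. As a purely illustrative sketch (the condition names and outcomes are simplified stand-ins, not real signalling pathways), they can be modelled like this:

```python
def run_cell_cycle(conditions_ok: bool, dna_intact: bool, spindle_attached: bool) -> str:
    """Pass a cell through the three checkpoints and return its fate."""
    # Restriction point (G1/S): favourable conditions needed to commit
    if not conditions_ok:
        return "enters G0"
    # G2 checkpoint: DNA damage must be resolved before mitosis
    if not dna_intact:
        return "repair or apoptosis"
    # Spindle checkpoint (metaphase): every kinetochore must be attached
    if not spindle_attached:
        return "mitotic arrest"
    return "divides into two daughter cells"

print(run_cell_cycle(True, True, True))
print(run_cell_cycle(True, False, True))
```

A cancer-enabling mutation corresponds to deleting one of these `if` blocks: the damaged cell sails straight through to division.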

Every day, your body's complex machinery creates around 330 billion new cells, many of which are secret precursors to cancer. Thanks to the beautiful and carefully refined mechanisms acting within each cell cycle, you are constantly halting the formation of tumours before they could even be noticed under a microscope. While these systems are not without their faults, and environmental and genetic factors occasionally tip them into a precarious state of imbalance, our capacity to regain order is still worth admiring - especially with the help of ever-advancing oncological technology. It is important that we stop perceiving cancer in a single-faceted manner, as a simple fight between humanity and the human body. We are equipped with nature's all-encompassing user manual deep within our cells, and it is up to us to step up to the task of decoding it. 

https://en.wikipedia.org/wiki/Mitosis#Further_reading 
Accessed: 27.01.2024

Thursday, December 28, 2023

Gene Mapping with Yeast

The human genome is often compared to a cell instruction manual – if each page were to represent one of the estimated 25,000 distinct genes, it would claim the title of the longest book ever written. Each gene is encoded by a unique fingerprint sequence of up to several million bases, each base referred to by one of four letters: A, T, C or G. Despite this manuscript of life existing within most somatic cells, it was only in 2003 that the completion of the thirteen-year Human Genome Project cast light upon a genetic map that would later prove essential in navigating scientific research through the labyrinth of the genome. While the success of this project is widely lauded, the pivotal and unexpected role of S. cerevisiae (baker's yeast) often remains in the shadows. 

The Importance of Yeast 

Despite the title of the Human Genome Project implying that research focused on human cells, yeast rapidly rose to the centre of mapping techniques thanks to unique properties that distinguished it from candidates more similar to humans. While human genetic pedigrees – family trees used to display the Mendelian patterns of trait inheritance – can be useful in analysis, the crevasse of time between human generations stunted their applications and called for a faster-reproducing organism. The budding time of yeast averages 90 minutes, meaning that trends in genetic composition and the occurrence of de novo (new) mutations from generation to generation can easily be observed. Furthermore, yeast can exist in both a diploid and a haploid form depending on environmental conditions; this permits researchers to initiate either sexual or asexual reproduction in a given colony and monitor the differences between these modes of replication. 

Tetrad Formation 

Yeast genes are mapped while the organism is in its haploid form – carrying half the genetic material – which requires the yeast to sporulate under nitrogen-deficient conditions and create a tetrad of meiotically-divided haploids. This is performed via the following general method: 

The yeast sample is first streaked onto a petri dish and incubated, allowing for budding. Each colony appears as a distinct patch of yeast growth; a single colony is then isolated and swirled into a minimal medium consisting of salts, minerals and a sugar source, but lacking nitrogen. Deprived of nitrogen, the yeast colony responds to the stressful conditions with the evolved process of sporulation, forming ascospores which would – in the wild – be able to drift to a more nitrogen-rich location. This is achieved as the cells exit the mitotic cell cycle of normal division and initiate meiosis within the nuclear envelope. During meiosis, the genetic material divides twice in succession, resulting in four daughter cells, known collectively as a tetrad. The membrane of the mother cell persists around the tetrad, acting as a protective ascus coating around the four inner spores.  

To reach the haploids for study, enzymes are employed for the dissolution of this ascus. The cells may then be observed using a powerful tetrad-dissecting microscope equipped with a fine glass needle designed to isolate the individual haploids from the tetrad.  

Using Tetrads to Measure Gene Linkage 

During the meiotic process, the genes do not segregate into identical cells as they would during mitosis. Instead, genetic variety in the offspring is caused by recombination: the crossing over of DNA between homologous chromosomes, exchanging genetic material at a certain point. Two genes are usually selected for observation to determine their genetic distance, and thus their positions within the yeast genome. The closer together the genes are, the more likely they are to remain on the same chromosome, and so in the same daughter cell, following recombination.  

Without recombination, all haploids have what is known as the ‘parental ditype’ genotype; this is identical to that of the mother cell. A potential genotype of the mother cell could be AB/ab, in which A and a are two alleles of the same gene (as are B and b) and AB and ab represent the combinations of these alleles present on each chromosome belonging to a pair. If recombination does not occur between the loci of the two genes on the chromosomes, all haploid daughter cells have either an AB or ab genotype, which matches that of the mother chromosomes.  

However, in the event of recombination, one of two different offspring types may arise. The first is the non-parental ditype, in which none of the daughter cells have chromosomes that match the mother cell, as recombination has switched the arrangement of the two genes; this would be represented by a mix of Ab and aB haploids. The second is the tetratype, with four different genotypes – two recombinant and two parental – so the daughter cells exhibit a mix of AB, ab, Ab and aB. Both these types of tetrad show that the chromosomes have crossed over and swapped material at some point between genes A and B.  
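The three outcomes can be told apart purely from the set of genotypes present among the four spores. A small sketch (assuming an AB/ab mother cell, as in the example above):

```python
def classify_tetrad(spores: list[str]) -> str:
    """Classify a tetrad from an AB/ab mother cell as parental ditype (PD),
    non-parental ditype (NPD) or tetratype (T)."""
    kinds = set(spores)
    if kinds == {"AB", "ab"}:
        return "PD"   # all four spores match a parental chromosome
    if kinds == {"Ab", "aB"}:
        return "NPD"  # all four spores are recombinant
    if kinds == {"AB", "ab", "Ab", "aB"}:
        return "T"    # two parental and two recombinant spores
    raise ValueError("not a valid two-gene tetrad for an AB/ab parent")

print(classify_tetrad(["AB", "AB", "ab", "ab"]))  # parental ditype
print(classify_tetrad(["AB", "ab", "Ab", "aB"]))  # tetratype
```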

Observing these haploids is critical in the measurement of genetic distance between yeast genes, since the relative numbers of each type of tetrad (parental ditype, non-parental ditype and tetratype) can be directly input into this formula, from which genetic distance measured in centimorgans (cM) can be derived:  

Genetic Distance = 100 x (T + 6NPD)/(2E) 

where T corresponds to the number of tetratypes, NPD to the number of non-parental ditypes and E to the total number of tetrads scored. As recombination events increase in frequency, the numerator of the fraction rises, since the number of tetratypes and non-parental ditypes increases relative to the total. This causes the overall fraction to increase, displaying a proportional increase in genetic distance.  
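The formula (often called the Perkins formula) is straightforward to apply once tetrads have been counted. The counts below are invented for illustration:

```python
def map_distance_cM(pd: int, npd: int, t: int) -> float:
    """Genetic distance in centimorgans from tetrad counts:
    100 * (T + 6*NPD) / (2 * total tetrads)."""
    total = pd + npd + t
    return 100 * (t + 6 * npd) / (2 * total)

# Hypothetical sample: 70 parental ditypes, 2 non-parental ditypes, 28 tetratypes
print(map_distance_cM(70, 2, 28))   # 20.0 cM
# Completely linked genes (no recombinant tetrads) map to zero distance
print(map_distance_cM(100, 0, 0))   # 0.0 cM
```

As expected, a sample dominated by parental ditypes yields a short map distance, and distance grows as recombinant tetrads become more frequent.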

Applications 

Although yeast seems an unlikely subject for mapping genetic distances and determining the degree to which genes are linked, it is to this unique organism that the Human Genome Project owes much of its success. While humans and yeast are morphologically very different, around 23% of yeast genes have human homologues, and observations of genetic distances in yeast are frequently mirrored in the human genome. These genetic distances can – like distances on a geographical map – be used to place genes relative to one another and so construct a highly accurate sequence of bases.  


The most notable application of yeast techniques resides in the study of genetic markers: these more visible and easily identifiable 'flags' are linked to, and signal the presence of, other more significant alleles and mutations close by on the same chromosome. Marker loci identified in this way even include genes pointing towards antibiotic resistance in bacteria – a corner of research with the future potential to revolutionise healthcare and accelerate pharmaceutical development.  


References 

MITx 7.03.1 Genetics: The Fundamentals  

Accessed: 19th November 2023 

https://www.uvm.edu/~dstratto/bcor101/mapping3.htm  

Accessed: 27th December 2023 

K-State Parasitology Laboratory: Mendelian Genetics Problems 

https://www.k-state.edu/parasitology/biology198/answers2.html 

Accessed: 27th December 2023 

A. Neiman: Ascospore Formation in the Yeast Saccharomyces cerevisiae 

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1306807/#:~:text=The%20presence%20of%20a%20poor,%2C%20and%20sporulate%20(40). 

Accessed: 27th December 2023