The post Problem of the Week (36) appeared first on EtonSTEM.

*Source:* Purple Comet Math Meet, 2012

Happy problem solving!

Feel free to email your solution to STEM@etoncollege.org.uk to check that it’s right, or check out our solutions post next week!


The post Turbomachines appeared first on EtonSTEM.

On the 25th of August 2020, a Tokyo-based engineering company, SkyDrive Inc., carried out the first ever successful test of a piloted eVTOL (electric vertical take-off and landing vehicle). Not only did this flight represent humanity taking one step closer towards a completely novel era of personal transportation, but it also served to remind us about the significance of a not-so-novel and oft taken for granted – but nonetheless brilliant – feat of engineering: the propeller. The very first propellers came into use on ships in the early 19th century, and now, after 200 years of innovation and advanced scientific understanding, the technology behind them can be found everywhere from planes to wind turbines. Given all of this, a deeper look into the physics behind propellers and turbines – more specifically, ‘the transfer between rotational power and linear motion in air’ – is very topical and is hence the subject of this essay.

The key concepts on which both propellers and turbines work are those described by Newton’s Third Law and the Conservation of Momentum. During any interaction between two bodies with mass, the total momentum within a system will be conserved if no external forces act on the system. Additionally, if either of the bodies exerts a force on the other, an equal force in the opposite direction will be exerted upon it. Therefore, described in the very simplest of terms, a propeller is a device which causes the air around it to move backwards, resulting in a forward force on the propeller. A turbine involves the same transfer but occurring in the opposite direction: it gets in the way of moving air, causing it to slow down, and the air’s loss of momentum allows power to be extracted.

The fact that turbines work on the same principle as propellers, and that they are inherently easier to model, means that they provide the perfect basis on which we can begin to mathematically analyse the transfer between the linear motion of air and rotational power. (Turbines are easier to model because they are stationary, so we only need to deal with the change in momentum of the moving body of air. In addition, unlike propellers, the design of turbine blades can more easily be thought of as a simple mechanism for slowing down air without compromising the accuracy of our model.)

As previously stated, a wind turbine generates power by getting in the way of moving air. The kinetic energy of a mass, m, of air moving at velocity, v, is given by

E = \frac{1}{2}mv^2

Power is the rate of change of energy and hence:

P = \frac{dE}{dt} = \frac{1}{2} \frac{dm}{dt}v^2

where \frac{dm}{dt} describes the mass flow rate of the air, in other words, the rate at which a certain mass of air travels past the turbine. The mass flow rate can also be written in terms of the density of air, \rho, the sweep area of the turbine, A, and the velocity of the air, v, as follows:

\frac{dm}{dt} = \frac{d(\rho V)}{dt} = \frac{d(\rho Ax)}{dt} = \rho A \frac{dx}{dt} = \rho Av

Thus, substituting this into the equation for power, we get that the power of wind travelling through a turbine is:

P = \frac{1}{2} \rho Av^3

From this, it is evident that the power increases with the cube of the wind speed, which is why it is so important to position turbines in windy locations. However, the effective useable wind power is less than indicated by the above equation, because the wind speed behind the turbine can never be zero – otherwise no further air could flow through. Thus, there is a limit on the power that can be generated, called the Betz Limit after Albert Betz, the German physicist who first described it.
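As a quick sanity check on this cubic relationship, the power equation can be sketched in a few lines of Python (the air density and rotor radius below are illustrative assumptions, not values from the article):

```python
import math

def wind_power(rho, radius, v):
    """Ideal wind power P = 1/2 * rho * A * v^3 through a turbine's sweep area."""
    area = math.pi * radius ** 2  # sweep area A of the rotor disk
    return 0.5 * rho * area * v ** 3

p_slow = wind_power(rho=1.225, radius=40.0, v=6.0)
p_fast = wind_power(rho=1.225, radius=40.0, v=12.0)
# Doubling the wind speed multiplies the available power by 2^3 = 8
```

Running this confirms that `p_fast` is exactly eight times `p_slow`, illustrating why siting matters so much.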

The wind speed in front of a wind turbine is larger than the wind speed behind it, and as the mass flow must be continuous, the area A_2 must be bigger than the area A_1 (See Figure 1).

The power that can be generated by the turbine, P_{gen}, is equal to the difference between the wind power in front of the turbine and the wind power behind it, with the mass flow rate through the disk taken at the mean of the two speeds, \frac{v_1 + v_2}{2}. Therefore:

P_{gen} = \frac{1}{2} \rho A \frac{(v_1 + v_2)}{2}(v_1^2 - v_2^2) = \frac{1}{4} \rho A (v_1 + v_2)(v_1^2 - v_2^2)

Thus, we can work out the power coefficient, which is defined as the ratio between the generated power and the power of the moving air:

C_p = \frac{P_{gen}}{P_{wind}} = \frac{(v_1 + v_2)(v_1^2 - v_2^2)}{2v_1^3}

We can now find the greatest possible power coefficient (the ideal power coefficient) by letting x be the ratio \frac{v_2}{v_1}, and then finding the maximum value of the function f(x) = \frac{(1+x)(1-x^2)}{2}.

Therefore, according to the Betz Limit, the theoretical maximum power efficiency of any design of wind turbine is approximately 59%, and this occurs when the speed of air behind the turbine is \frac{1}{3} of the speed of air in front (see Figure 2). However, the real-world limit is well below this, with power coefficient values typically in the range of 35% – 45% due to the strength and durability requirements of various designs.
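The maximisation described above can also be checked numerically; the brute-force Python scan below (a sketch, not part of the original derivation) recovers both the 16/27 limit and the x = 1/3 speed ratio:

```python
def power_coefficient(x):
    """f(x) = (1 + x)(1 - x^2) / 2, where x = v2 / v1."""
    return (1 + x) * (1 - x ** 2) / 2

# Scan x over [0, 1] on a fine grid and keep the value giving the largest f(x)
best_x = max((i / 100000 for i in range(100001)), key=power_coefficient)
# best_x comes out at approximately 1/3, with f(best_x) ≈ 16/27 ≈ 0.593
```

This agrees with the analytic result: differentiating f and setting f'(x) = 0 gives 3x² + 2x − 1 = 0, whose positive root is x = 1/3.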

Propellers and turbines are described by the same basic theoretical principles (See Figure 3). However, propellers, unlike turbines, have to be able to ‘suck in’ air and speed it up to produce thrust.

This leads to a number of key differences in design. Firstly, turbine blades are designed to rotate in large volumes of slow-moving air and to create as little turbulence as possible, since turbulence could foul the next blade; aircraft propellers are designed to move in high-velocity air.

Secondly, turbines use blade pitch control to keep the rotor speed within operating limits as the wind speed changes. Pitch control is also an important part of the operation of propellers, but there it is used as a key factor in optimising thrust rather than in keeping the rotation rate low.

Perhaps the most important distinction to make between the design of turbines and propellers has to do with the shape of the blades. In wind turbines, the blade design has the primary purpose of providing the best possible lift-to-drag ratio so that the blade can move as easily as possible through the air, maximising the efficiency of the turbine. Turbine blades are twisted so they can always present an angle that takes advantage of the ideal lift-to-drag force ratio.

A propeller’s blades have an even more distinctive aerofoil profile which is necessary to create a pressure gradient between air in front of the propeller and air behind it. Although flat propellers are still able to produce thrust, they are very inefficient, and so an aerofoil shape is crucial to create high air pressure behind the propeller and therefore generate ‘lift’ in the direction of movement (See Figure 4).

In summary, the most important parameter for wind turbine blades is the lift-to-drag ratio. In addition, an aerofoil profile may be used at a high angle of attack, as is the case with stall-controlled turbines*. In contrast, in aircraft design, the objective of the design process is usually quite different, for example, to decrease the drag for a fixed lift coefficient. It is worth noting that many of the differences described above come about as a result of the Reynolds number** being much lower for wind turbines than it is for aeroplane propellers, which may change the flow behaviour of the air considerably.

We have now established that aerofoils make up a crucial part of propeller design. In order to understand how they have the effects described in 3.1. it is first necessary to understand Bernoulli’s principle (see Figure 5).

Bernoulli’s principle is based on the conservation of energy. As described by the Swiss mathematician and physicist Daniel Bernoulli in 1738, it states that an increase in the speed of a fluid at a point occurs together with a decrease in its pressure at that point. Thus, aerofoils work by causing air that travels over the foil to speed up and thus to have a reduced pressure compared to air under the foil. This results in a force, ‘lift’, which acts perpendicular to the direction of airflow (See Figure 6).

Numerous theories have been developed to allow physicists and engineers to mathematically model the transfer of rotational power into linear thrust. The details of propeller propulsion are very complex because a propeller can be thought of as a rotating wing. The blades are typically long and thin and have an aerofoil profile. In addition, they are usually twisted to increase efficiency. On top of this, the angle of attack (the angle between the oncoming air flow and the orientation of the propeller blades) at the tip is lower than at the hub because the tip is moving at a higher velocity. (This is one of the key concepts in rotational motion. By observing the equation v = \omega r, where \omega is the angular velocity and r is the distance from the centre, it is clear that the further out one goes from the hub, the faster the velocity of the blade.) Altogether, these characteristics make analysing the airflow through the propeller a very complex task.

However, we can attempt to create a very basic model of propeller thrust by using the simplified momentum theory and by assuming, as we did when modelling the turbine, that a spinning propeller acts like a disk through which the surrounding air passes (see Figure 7).

The engine turns the propeller and does work on the airflow. This results in an abrupt change in pressure across the propeller disk. From Bernoulli’s principle, we know that the pressure over the top of an aerofoil wing is lower than the pressure below the wing and therefore, a spinning propeller sets up a pressure lower than free stream in front of the propeller and higher than free stream behind the propeller.

In our model, the thrust generated by a propeller disk, F, can be worked out by multiplying the pressure differential across the disk, \Delta p, by the area of the disk, A:

F_{thrust} = A \Delta p

The total pressure in front of the propeller disk is equal to the static pressure, p, added to the dynamic pressure. In fact, these are respectively the first two terms in Bernoulli’s equation, contained within Figure 5. Looking at this equation, we can see that the dynamic pressure of a moving body of gas is given by \frac{1}{2} \rho v^2, where \rho is the density of air and v is the velocity. Therefore, we can find an expression for the total pressure in front of the propeller disk as well as behind it. (Note that I have used subscript 0 for quantities in front of the propeller, and subscript e for quantities behind it, in order to be in accordance with Figure 7.)

p_{total_0} = p + \frac{1}{2}\rho v_0^2

p_{total_e} = p + \frac{1}{2}\rho v_e^2

Therefore, we can work out the change in pressure across the disk:

\Delta p = p_{total_e} - p_{total_0} = p + \frac{1}{2}\rho v_e^2 - (p + \frac{1}{2}\rho v_0^2) = \frac{1}{2}\rho (v_e^2 - v_0^2)

Finally, we substitute this into our equation for thrust to give:

F_{thrust} = \frac{1}{2} \rho A(v_e^2 - v_0^2)

where v_0 is the velocity of the oncoming airflow relative to the propeller, and v_e is the exit velocity of the air from the propeller.

It is worth noting that this value is an ideal number and that it does not account for the many losses that occur in practical, high speed propellers, such as tip losses and slip. It is, however, what we set out to achieve: a model of how the thrust produced by a propeller is related to the linear motion of air.
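This simplified momentum-theory result translates directly into a short Python sketch; the disk area and velocities below are illustrative assumptions rather than values from the article:

```python
def propeller_thrust(rho, area, v0, ve):
    """Ideal momentum-theory thrust F = 1/2 * rho * A * (ve^2 - v0^2)."""
    return 0.5 * rho * area * (ve ** 2 - v0 ** 2)

# Static air (v0 = 0) accelerated to an exit velocity of 30 m/s
# through a hypothetical 2 m^2 propeller disk
f = propeller_thrust(rho=1.225, area=2.0, v0=0.0, ve=30.0)
# f is roughly 1.1 kN of ideal (loss-free) thrust
```

As the article notes, a real propeller would deliver less than this because of tip losses and slip.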

Below is some more information about a couple of extremely interesting concepts which I mentioned briefly in this article.

* Stall is a sudden reduction in the lift generated by an aerofoil when the critical angle of attack is reached. Around two thirds of the wind turbines currently being installed in the world are stall-controlled machines.

If the reader would like to find out more about stall-controlled machines, I highly recommend this article.

** The Reynolds number of a system is a dimensionless value used to categorise the behaviour of fluids in that system: it is the ratio of inertial forces to viscous forces. It is proportional to the density and flow speed of the fluid, and the linear scale of the system, and inversely proportional to the dynamic viscosity. I highly recommend this link and this link to read more about the effect of the Reynolds number on the thrust produced by a propeller.



The post Problem of the Week (35) appeared first on EtonSTEM.

*Source:* Gardner, Martin. *My Best Mathematical and Logic Puzzles* (Dover Recreational Math). Dover Publications.

Happy problem solving!

Feel free to email your solution to STEM@etoncollege.org.uk to check that it’s right, or check out our solutions post next week!


The post Food Quality Testing using Spectroscopy appeared first on EtonSTEM.

**What is spectroscopy?**

In short, spectroscopy allows us to see the chemical composition of an object by observing how it interacts with electromagnetic radiation. So how does this specifically work?

Broadly, when light interacts with a surface, it can either be absorbed, transmitted (goes straight through), or reflected. It is worth noting that when light is absorbed, it can also be re-emitted. The crucial thing here is that different frequencies of light can have different interactions when hitting the same material (i.e. some light could be absorbed while some could be transmitted). This means that different frequencies can be observed in different locations: light that would usually be transmitted can be seen directly behind the material while light that was reflected can now be observed from a different angle. Therefore, if we can find a way to spread this light out, we can see how different frequencies interact.

This is obviously very useful because when we shine white light on an object, as certain wavelengths are absorbed and re-emitted, we can create an emission spectrum by observing which wavelengths are re-emitted. Conversely, we can create an absorption spectrum by observing the wavelengths that aren’t. As different materials absorb different frequencies, we are able to understand the chemical composition of a material by observing these spectra. Nowadays, we have spectral libraries that detail the spectra of various molecules, which we can compare to the spectra we observe, making it easy to determine which materials are present.

Now that we understand the key principles behind spectroscopy, we can examine how a spectrometer works. A spectrometer typically observes light that is dispersed through diffraction gratings, refraction, or another method of dispersal. If we were to use a diffraction grating, we can use the grating equation: d \sin\theta = m\lambda, where d is the distance between slits, \theta is the angle from the centre, m is the order of interference and \lambda is the wavelength of the light. So when the light is re-emitted from an object, we can observe the emission spectrum using a spectrometer that can then calculate which wavelengths have been re-emitted. Using this information, we can then work out the chemical composition of the object.
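To make the calculation concrete, the grating equation can be rearranged for the wavelength; the grating spacing and measured angle below are made-up illustrative values:

```python
import math

def wavelength_from_angle(d, theta_deg, m):
    """Solve the grating equation d*sin(theta) = m*lambda for lambda."""
    return d * math.sin(math.radians(theta_deg)) / m

# Hypothetical 600 lines-per-millimetre grating,
# with a first-order (m = 1) maximum observed at 20 degrees
d = 1e-3 / 600  # slit spacing in metres
lam = wavelength_from_angle(d, 20.0, 1)
# lam works out to roughly 5.7e-7 m, i.e. visible green-yellow light
```

A spectrometer effectively performs this inversion for every bright line it records.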

**NIRS**

Instead of using visible light, Near-Infrared Spectroscopy (NIRS) uses the near-infrared region on the electromagnetic spectrum (780nm – 2500nm). NIRS allows us to look at the molecular bonds present in an object. We use the same principles as before – we pass infrared light through the material – but this time we focus more on which wavelengths of infrared light are absorbed as opposed to emitted. So how do we know what is being absorbed?

In order to do this, we use overtones and combinations. Molecular bonds have vibrational frequencies (so-called ‘fundamentals’). Overtones operate in a similar way to harmonics, where there is a series of absorptions for each fundamental. Combinations occur when near-infrared energy is shared between fundamental absorptions. We can use these overtones or look at the combinations to plot a graph where we should observe peaks at certain wavelengths. This gives us the spectra detailed earlier, and again we can use these to identify certain molecular groups present in the substance we are examining.

**Applications in the food industry**

What makes spectroscopy so useful is that it doesn’t damage or alter the material examined, making this a non-destructive food analysis method. This means that we can use spectroscopy to see how much fat, water, and protein (among various other substances) are present in food items. This is immensely important as the chemical composition of food has a massive impact on taste and quality, as well as on potential uses for the food (for example, salmon with less fat might be ideal for smoking, while salmon with a high fat content is good for making sushi).

One of the advantages of NIRS, in particular, is that it has a high penetrative power and so we can see inside the foods much more. While this means we lose some fine detail, this is useful for examining larger quantities. From this, we can broadly see the percentages of different substances in large amounts of food, giving us useful information about their quality. In addition, as it uses reflected energy, we don’t need to put much effort into preparing the sample for analysis.

Obviously, foods are not made of just one type of molecule, and so the spectra we get are more complex and may not always fit with existing libraries well enough to identify the chemical composition of a material. Therefore, we need statistical techniques, common sense, and machine learning to identify the substances present in the food sample. Unfortunately, the mathematics of this is not fully understood, as the field is relatively new and untouched (it was largely dismissed for a few decades, and is still somewhat dismissed today). However, using these processes, we’re still able to get very useful results.

By using spectroscopy to create visual diagrams of the composition of foods, manufacturers can not only assort food to be used for different purposes (or be thrown away for being low quality) but also to perform statistical analysis and refine stages of the manufacturing process in the era of big data.



The post Problem of the Week (34) appeared first on EtonSTEM.

*Source:* AMC 12A, 2010

Happy problem solving!

Email your solution to STEM@etoncollege.org.uk


**Introduction**

A hot cup of coffee in snow will always cool down. The First Law of Thermodynamics states that energy cannot be created or destroyed, only converted from one form to another; however, simple observation reveals this law alone does not describe the complete situation, otherwise we would sometimes see the cup of coffee warm up and the snow cool down. The term entropy has been assigned various meanings in a variety of contexts, from steam engines to information theory. The law it underpins, the Second Law of Thermodynamics, seems so intuitive that we take it for granted, despite its profound implications for systems such as the universe.

**Beginnings**

The history of the Second Law of Thermodynamics begins in the early 19^{th} Century, before even the First Law of Thermodynamics had been formulated. As an engineer, Sadi Carnot observed that heat engines of his day were remarkably inefficient: ‘If, some day, the steam-engine shall be so perfected that it can be set up and supplied with fuel at small cost, it will combine all desirable qualities, and will afford to the industrial arts a range the extent of which can scarcely be predicted.’ Consequently, Carnot set about investigating whether these engines’ efficiency could be improved. Steam engines generate mechanical power by using heat to manipulate the temperatures and, therefore, pressures of gases, a similar process to modern-day steam turbines; by transferring heat, work can be done. In his book, *Reflections on the Motive Power of Fire*, Carnot established the Carnot Cycle. It described the physical cycle of compression and expansion of the gases that ultimately drove the engine, and allowed for the calculation of the useful work done. It became clear that one ratio had importance to the process:

*Q_{1}/T_{1} = Q_{2}/T_{2}*

where Q_{1}, Q_{2} are the heats, and T_{1}, T_{2} the temperatures, of the hot and cold reservoirs within the engine respectively.

Given that the heat, Q, describes thermal energy, the work done is equal to the difference in heats of the two reservoirs, due to the conservation of energy, the First Law of Thermodynamics. While using the First Law is certainly a simpler explanation, it is still possible to demonstrate this effect without the First Law, such as with the subtle reasoning employed by Carnot himself.

*W = Q_{1} – Q_{2}*

By substitution:

*W/Q_{1} = (T_{1} – T_{2})/T_{1}*

Recall that:

*Efficiency = Useful Work Done ÷ Total Input Energy*

Notice that *W/Q_{1}* describes exactly this, and therefore:

*Efficiency = (T_{1} – T_{2})/T_{1}*

At this point, it should be said that these temperatures are measured in kelvins, and therefore for there to be 100% efficiency, *T_{2}*, the temperature of the cold reservoir, would have to be 0 K: an impossibility.
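The efficiency formula above can be sketched directly in Python; the reservoir temperatures chosen below are illustrative, not from the text:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of a heat engine between two reservoirs (kelvins)."""
    return (t_hot - t_cold) / t_hot

# A hypothetical boiler at 500 K exhausting to surroundings at 300 K
eta = carnot_efficiency(500.0, 300.0)
# eta = 0.4: at most 40% of the input heat can become useful work
```

Note how efficiency approaches 100% only as `t_cold` approaches 0 K, echoing the impossibility noted above.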

**Birth**

The story of entropy is resumed 40 years after the work of Sadi Carnot by Rudolph Clausius who investigated this inherent incapability of doing work. The Carnot Cycle dealt with reversible processes, stating that the ratio of heat to temperature is the same within the various states of this process. Clausius built upon the work of Carnot, defining entropy to be this ratio of heat to temperature, which conveys a measure of how much energy is unavailable to do work, stating that the total change in entropy for a reversible process in a closed system is 0.

Real processes, however, are never perfectly reversible. As Feynman puts it, if you drop a cup and it breaks, you can wait a long time for the pieces to come back together, but they never will. It is this idea that led Clausius to state that ‘heat will not pass spontaneously from a colder to a hotter body’. If you did work on the system, you could cause heat to flow from cold to hot, such as with a refrigerator, but this would no longer be a closed system. This leads us to the conclusion that the entropy of a system never decreases, and only in a perfectly reversible process will there be conservation of entropy.

**Disorder**

Despite a birthplace in the mechanics of heat, entropy can equally describe the disorder of a system. Entropy relates the lack of ability of a system to do work, which is ultimately linked to a system being ordered or not.

This can be thought of as a simple turbine. If the hot air on the left is allowed to mix with the cold air on the right, the flow of air could turn a turbine in the middle and output useful work. Before the two compartments are allowed to mix, the system is ordered: particles with higher thermal energy on the left, lower on the right. If this is a closed system, then eventually it reaches thermal equilibrium and no more work can be done. Recall that entropy tells us about the ability of a system to do work: since the system at equilibrium can no longer do any work, it has maximal entropy. In fact, an ordered system is potential energy in disguise. For example, a solid held by bonds is very ordered, and when those bonds are broken, it releases energy and becomes disordered. However, when there is only disorder, there is no potential energy and therefore no ability to do work on something.

**Heat Death**

It is important to recall the conclusion that Clausius came to: the entropy of a closed system never decreases. In order to decrease entropy, one would have to do work on that system, meaning it would no longer be closed. A room, for example, will become disordered. The only way to make this room ordered again would be to do work on it, by tidying up. To our knowledge, the universe is a closed system in which time only flows in one direction. This leads us to the conclusion that the entropy of the universe itself will eventually be maximised and the universe will reach thermal equilibrium: nothing interesting will ever happen again. This is the so-called ‘heat death of the universe’. Considering the humble beginnings in heat engines, the conclusion of the universe ‘dying’ is a profound one.

**Bibliography**

Rovelli, C., Segre, E., & Carnell, S. (2019). The Order of Time. UK: Allen Lane.

Feynman, R. P., Leighton, R. B., & Sands, M. L. (2011). Volume 1: The Laws of Thermodynamics. In *The Feynman lectures on physics*. San Francisco, CA: Addison-Wesley.

Feynman, R. (2018, July 11). Richard Feynman’s Lecture: Entropy (Part 01). Retrieved from https://www.youtube.com/watch?v=ROrovyJXSnM

Epicurus Of Albion. (2017, February 24). What is Entropy? Retrieved from https://www.thestandupphilosophers.co.uk/what-is-entropy/

Carnot, S. (n.d.). Reflections on the Motive Power of Fire. Retrieved from https://www.pitt.edu/~jdnorton/teaching/2559_Therm_Stat_Mech/docs/Carnot%20Reflections%201897%20facsimile.pdf

Libretexts. (2020, August 25). 7.2: Heat. Retrieved February 01, 2021, from https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/07%3A_Thermochemistry/7.2%3A_Heat

OpenStax. (n.d.). Entropy and the Second Law of Thermodynamics: Disorder and the Unavailability of Energy. Retrieved February 01, 2021, from https://courses.lumenlearning.com/physics/chapter/15-6-entropy-and-the-second-law-of-thermodynamics-disorder-and-the-unavailability-of-energy/


The post Problem of the Week (33) appeared first on EtonSTEM.

*Source:* HMMT Guts Round, Spring 2021

Happy problem solving!

Email your solution to STEM@etoncollege.org.uk


The post Problem of the Week (32) appeared first on EtonSTEM.

*Source:* Purple Comet Math Meet, 2008, High School Round

Happy problem solving!

Email your solution to STEM@etoncollege.org.uk


Graphene is an allotrope of carbon consisting of a single layer of atoms arranged in a two-dimensional honeycomb lattice. When many such layers are stacked together, they form graphite. Graphene itself is a material with a number of exciting properties: it is the thinnest material known, the strongest ever discovered, the best conductor of heat at room temperature and the best conductor of electricity known. Besides this impressive array of properties, it also absorbs light uniformly across both the visible and near-infrared parts of the spectrum, which contributes to its huge potential for future use.

**How can we make it?**

Originally, it was thought that the only way to make graphene was to grow it in a single layer by chemical vapour deposition: exposing platinum, nickel or titanium carbide to ethylene or benzene at high temperatures, then removing the graphene from the metal substrate – a complicated, expensive and inefficient process. Nevertheless, in 2012 it was found that graphene could be peeled straight off the metal substrate, and that the substrate could thereafter be reused for layer upon layer of graphene, making the process much cheaper and easier. This also made it easier to create high-quality graphene: the older removal step was prone to damaging the graphene when taking it off the substrate, while the newer method left much higher-quality graphene that could actually be used in electronics. These leaps and bounds in development have ultimately made quality a non-factor in graphene production; nowadays it is bounded only by cost.

**How can we use it?**

Graphene also has many exciting possibilities. First, with graphene there is now a chance that we could create supercapacitors, which could be the biggest leap forward in electronic engineering. Compared to existing batteries, even the best lithium-ion ones, laser-inscribed graphene supercapacitors perform just as well in terms of power capacity and efficiency, while also being flexible, light and quick to charge. Furthermore, graphene is able to increase the longevity of batteries: whereas lithium becomes progressively worse at holding charge with every use, graphene can hold charge for far longer periods of time (thought to be up to ten times as long as lithium-ion batteries) and retains roughly the same capacity after every charge, while normal lithium-ion batteries store less and less charge after every use. In particular, this makes innovations such as electric cars much more viable: with graphene supercapacitors we would have batteries that work far better, are smaller for the same amount of charge, and do not lose charge as quickly when left idle. This would also decrease charging times, making electronic devices chargeable in seconds or minutes instead of hours – a further point in favour of electric cars, and generally useful as we rely on ever more electronic devices, from phones to laptops. Second, graphene could help spearhead the push towards foldable gadgets, as foldable batteries make advances in foldable phones and laptops, or even potentially foldable televisions, more likely in the future. Third, graphene is a promising opportunity in the world of telecom photodetectors. This seemingly innocuous technology is actually key to much of the way we live our lives: it underpins our connectivity, and graphene could be key to speeding up data transmission. Graphene is ideal because it absorbs light from a large bandwidth and is an excellent conductor of heat, which reduces heat build-up in graphene-based photodetectors. Consequently, it is incredibly helpful in our optical communications industry.

Graphene is one of the most exciting innovations of the last 50 years, as it opens up a world of possibility. From batteries that could revolutionize the way we travel, to foldable technology that would change the way we live, to huge increases in data-transmission speeds, it is clear that graphene is not only here to stay, but is also a technology that opens up possibilities that would never even have been dreamed of before.

The post Irrational Powers (Part 1) appeared first on EtonSTEM.

First, let’s take a detour through rational powers. As a reminder, rational numbers are those of the form m/n, where m and n are integers and n ≠ 0. So what do we mean when we say:

x^{m/n}

For the numerator, we just mean multiply x by itself m times. Fair enough. What about the denominator? We mean “take the nth root of x^m”, i.e. find the number that, when multiplied by itself n times, equals x^m (of course, you could equally take the nth root of x first and then exponentiate).

As far as calculation goes, it is possible to approximate the nth root just by ‘guessing’ values. However, there is a far more efficient approach: the Fractional Binomial Theorem. I won’t go into much depth here, but in short, the normal binomial theorem tells you how (x+y)^n expands, i.e. what it looks like when you multiply (x+y) by itself n times; note that this holds when n is a positive integer. The Fractional Binomial Theorem provides a way of extending this to negative and/or rational values of n. Negative or rational exponents of binomial expressions shouldn’t be too unfamiliar: \sqrt{1+x}, for example, is something you could expand with the Fractional Binomial Theorem. The ‘expansion’ in this case just means finding an infinite sum of terms that is numerically equal to the expression.
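To make this concrete, here is a short Python sketch (the function name and term count are my own, purely for illustration) that sums the first terms of the fractional binomial series for (1+x)^α, using the standard recurrence for the generalised binomial coefficients. The series converges for |x| < 1:

```python
def binomial_series(alpha, x, terms=50):
    """Approximate (1 + x)**alpha via the fractional binomial theorem.

    Sums the first `terms` terms of sum_k C(alpha, k) * x**k,
    which converges for |x| < 1.
    """
    total = 0.0
    coeff = 1.0  # generalised binomial coefficient C(alpha, 0)
    for k in range(terms):
        total += coeff * x ** k
        # Recurrence: C(alpha, k+1) = C(alpha, k) * (alpha - k) / (k + 1)
        coeff *= (alpha - k) / (k + 1)
    return total

# sqrt(1.5) computed as (1 + 0.5)**(1/2)
print(binomial_series(0.5, 0.5))  # ≈ 1.2247...
print(1.5 ** 0.5)                 # Python's built-in power, for comparison
```

Comparing against Python’s built-in `1.5 ** 0.5` shows the partial sums homing in on the true square root.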

4^2 = 16

4^{0.5} = 2

4^{\sqrt{2}} = \, ?

There are two ways you can think about irrational powers. The first involves something like approximation. The second involves the function e^x and logarithms.

Using the example above as a stimulus, we know that \sqrt{2} is 1.414213562… . Let’s take a step back now. We know what irrational numbers are, and we know that their decimal value is made up of an infinite, non-repeating string of digits. If we ‘cut’ the decimal representation at any point, discarding all digits beyond that point, we are left with a rational number, which we already know how to use as a power. With this in mind, we can construct an infinite sequence of rational numbers:

y_0, y_1, y_2, ...

where y_i includes all digits up to and including the ith digit after the decimal point (so when i=0, we only include the part of the number before the decimal point).

What we now do is set the value of x^y for any irrational y equal to \lim_{i \to \infty} x^{y_i}. This just means ‘whatever x^{y_i} tends to’ i.e. is becoming closer and closer to as the value of i increases. This value might seem difficult to pinpoint with this loose reasoning, especially when the actual value is irrational. Don’t worry: there is a way to formalize this process. If you’re interested in finding out more, check out this playlist on Khan Academy.
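As a rough illustration of this truncation idea (a sketch, with the helper name `truncate` my own), the following Python snippet computes the sequence x^{y_i} for x = 4 and y = \sqrt{2}:

```python
from math import sqrt

def truncate(y, i):
    """Keep i digits of y after the decimal point, discarding the rest."""
    scale = 10 ** i
    return int(y * scale) / scale  # a rational approximation of y

x, y = 4.0, sqrt(2)
for i in range(8):
    yi = truncate(y, i)          # rational truncation y_i
    print(i, yi, x ** yi)        # x ** y_i for increasing i
# The values x ** y_i settle towards 4**sqrt(2) ≈ 7.1029...
```

Each extra digit pins the result down further, which is exactly the limiting process described above.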

Another approach involves defining what exponentiation (raising to a power) means for a single base (the number being raised to a power), when raised to any real power, and then extending this so we can use it for any base.

The base mathematicians like to choose is e = 2.71828..., an irrational number associated with growth processes, among other things. One possible reason for this is that all powers of it are reasonably straightforward to define:

e^a = \lim_{n \to \infty} (1+\frac{a}{n})^n

What this definition means, similar to before, is that e^a takes on the value that (1+\frac{a}{n})^n tends to as you increase n.

Once you define this function, which works for any real number a, you can plug in whatever you want for a. What you would do to obtain x^y is plug in a = y \times ln(x), where ln is the natural logarithm function. This function outputs the number that you would need to raise e to in order to get x.
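A minimal sketch of this second approach in Python, assuming we simply pick a large fixed n rather than taking a true limit (the function names are my own):

```python
from math import log

def exp_limit(a, n=10**7):
    """Approximate e**a as (1 + a/n)**n for a single large n."""
    return (1.0 + a / n) ** n

def power(x, y):
    """Compute x**y for positive x via x**y = e**(y * ln x)."""
    return exp_limit(y * log(x))

print(power(4.0, 0.5))       # ≈ 2.0
print(power(4.0, 2 ** 0.5))  # ≈ 7.1029, matching 4 ** sqrt(2)
```

For n around 10^7 the result already agrees with Python’s built-in `**` operator to several decimal places; taking n larger (in exact arithmetic) would close the remaining gap.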

There are two ways to think about irrational powers. The first is:

x^y = \lim_{i \to \infty} x^{y_i}

where y_i is a sequence of rational numbers approaching the value of y. Intuitively, this is kind of like we are approximating the irrational power with closer and closer rational powers.

The second is:

x^y = \lim_{n \to \infty} \left(1+\frac{y \ln(x)}{n}\right)^n

Hopefully the notion of irrational powers isn’t so outlandish for your intuition anymore.
