Pi (π) has always been a source of interest and frustration to mathematicians. The duality of this constant lies in the simple beauty of its definition paired with the frustration of its proven transcendental nature. For centuries mathematicians have been devising ever more clever and accurate approximations for pi, but did you know that you can approximate its value without the need for complex mathematical equations? By using a random sampling of numbers coupled with elementary statistics you can run a Monte Carlo simulation that approximates pi in a few short lines of programming.
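As a concrete illustration, here is a minimal Python sketch of that Monte Carlo approach: it scatters random points over the unit square and counts how many land inside the quarter circle of radius one, which covers pi/4 of the square. The sample count below is arbitrary and only affects how accurate the estimate tends to be.

```python
import random

def estimate_pi(samples=1_000_000):
    """Estimate pi by random sampling (Monte Carlo)."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()  # a random point in the unit square
        if x * x + y * y <= 1.0:                 # does it fall inside the quarter circle?
            inside += 1
    # The quarter circle covers pi/4 of the square, so scale the ratio back up by 4.
    return 4.0 * inside / samples

print(estimate_pi())  # typically prints something near 3.141
```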
If you’re like me you’ve been keeping abreast of the recent developments regarding the fail0verflow team’s reverse engineering of Sony’s root signing key. This feat allows the generation of signed homebrew code which can run natively on the PS3 without the need for an existing jailbreak to bypass the signature check. But beyond the implications of this feat, how did the fail0verflow team accomplish the seemingly impossible task of reverse engineering a private key from publicly available data? The answer lies in Sony’s botched implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA).
All code which executes on the PS3 requires a valid signature in order for the hardware to allow its execution. In the case of SELF (Signed Executable and Linkable Format) files, Sony requires each file to contain an ECDSA signature of the file’s header, generated using Sony’s root signing key as the private variable. Sony’s crucial mistake comes in their implementation of the ECDSA algorithm, which requires that every signature be calculated with a unique random number k. Instead, Sony used a fixed value of k across all of their application signatures, which in turn has rendered the scheme effectively useless.
In the case of ECDSA, when the random nonce k is reused across more than one signature, k itself can first be recovered from any two signatures as k = (z1 – z2) / (s1 – s2), and the signing equation can then be solved for the private key d in the form d = (s*k – z) / r (all arithmetic modulo the curve order), where s, z, and r are either publicly known values or are calculated as part of the ECDSA algorithm from publicly known values.
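To make the algebra concrete, here is a small Python sketch (Python 3.8+ for the modular inverse via pow). It skips the elliptic curve entirely and just exercises the signing equation s = k^-1 * (z + r*d) mod n with made-up toy values for n, d, k, r, and the message hashes, so every constant in it is hypothetical; the point is only to show that two signatures sharing the same k give up first k and then the private key d.

```python
# Toy demonstration of recovering an ECDSA private key when the nonce k is reused.
# The elliptic-curve step that actually produces r is omitted; n, d, k, r, z1, and z2
# are all made-up stand-in values, chosen only so the modular algebra works out.
n = 2**127 - 1            # stand-in prime for the curve's group order
d = 0x1234567890ABCDEF    # the "secret" signing key we will pretend not to know
k = 0xDEADBEEF            # the nonce that is (incorrectly) reused for every signature
r = 0x0BADC0DE            # in real ECDSA r is derived from k on the curve; faked here
z1, z2 = 0x1111, 0x2222   # hashes of two different signed files

def sign(z):
    # ECDSA signing equation: s = k^-1 * (z + r*d) mod n
    return pow(k, -1, n) * (z + r * d) % n

s1, s2 = sign(z1), sign(z2)

# With k (and therefore r) identical in both signatures:
#   k = (z1 - z2) / (s1 - s2)  mod n
#   d = (s*k - z) / r          mod n
k_recovered = (z1 - z2) * pow((s1 - s2) % n, -1, n) % n
d_recovered = (s1 * k_recovered - z1) * pow(r, -1, n) % n

assert (k_recovered, d_recovered) == (k, d)
print(hex(d_recovered))   # the "private" key, recovered from public values alone
```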
With the private key d now known, SELFs may be generated which pass the security validation on the PS3 hardware and run as native code without restriction. Furthermore, with this mistake duplicated across all levels of the PS3’s security layers, less scrupulous members of the community may use the same method to trivially recover the private signing keys for game encryption, firmware validation, and even the system’s bootloader.
So with PS3 custom firmwares and native homebrew already starting to show up where does Sony go from here? Only time will tell. However, looking back you can say that you fully understand how it all began.
So you encrypt your top secret files with RSA using extreme 4096-bit keys, but do you really understand the core of what makes this and other types of public key cryptography so secure? As it turns out, generating large prime numbers (more than 64 bits long) is computationally easy. Conversely, factoring a large number into its prime factors is computationally very difficult, and it is especially hard when your chosen number has only two prime factors.
Don’t believe the facts? Take the work of mathematical geniuses who showed that the number of prime numbers between 1 and a large number n is approximately equal to n / ln(n). So, choosing a 128 bit key we end up with a number between 1 and 340,282,366,920,938,463,463,374,607,431,768,211,456 (approximately 3.40282367 x 10^38). Therefore factoring this number into its prime factors requires testing roughly 3.40282367 x 10^38 / ln(3.40282367 x 10^38), or about 3.83534128 x 10^36, prime numbers. Assuming you can check one trillion candidate factors per second you’re still looking at over 1.2 x 10^17 years’ worth of computation time: nearly nine million times the estimated age of the universe.
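A quick back-of-the-envelope check of those figures in Python, using the same n / ln(n) approximation (the trillion-checks-per-second rate is, of course, just an assumption):

```python
import math

n = 2 ** 128                              # upper bound for a 128-bit number
primes_below_n = n / math.log(n)          # prime counting approximation: n / ln(n)

checks_per_second = 1e12                  # assume one trillion candidates tested per second
years = primes_below_n / checks_per_second / (60 * 60 * 24 * 365.25)
age_of_universe_years = 13.8e9

print(f"{primes_below_n:.3e} primes below 2^128")                         # ~3.835e+36
print(f"{years:.3e} years of testing")                                    # ~1.2e+17
print(f"{years / age_of_universe_years:.1e} x the age of the universe")   # ~8.8e+06
```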
While the mathematical specifics of different encryption algorithms expand on this concept to create much more cryptographically secure systems, this asymmetry is what provides the strength behind public key encryption. That is, as long as P != NP…
Now you know!
Ever since there has been interest in anime outside the language bounds of Japanese-speaking locales, there has existed a fansubbing scene. Short for “fan subtitling”, fansubbing typically involves translating Japanese animation episodes into target languages and overlaying the resulting subtitles onto a distributable video format.
The practice of fansubbing dates back to the late 1970s and early 1980s, when outside interest in the anime phenomenon drove the importing of programs in various formats for viewing at a growing number of anime clubs. As many did not know the Japanese language, hardcore fans sought to increase the accessibility of the programming by subtitling the episodes for further distribution. The initially laborious and expensive process of subtitling an episode required first acquiring raw episodes in Betamax, VHS, or Laserdisc format, then converting the audio to text via manual or paid translation. The next task involved synchronizing each subtitle’s appearance with the audio through a process dubbed (no pun intended) timing, which may or may not have involved custom software or professional hardware. A master of the episode was then created by feeding the timed script and raw video through a genlock onto a high quality medium. Finally, the episode was copied from the master and distributed to anime groups through the use of distribution runners or postal mail.
Modern technology has vastly increased the quality and decreased the time-to-distribution of fansubs. Raw video acquisition has moved to digital capture direct from television signals and ripping from DVD or Blu-ray, with internet transmission delivering the raws directly to fansubbing groups. Translation may be performed by overseas team members, and timing, styling, and encoding occur through modern software equivalents of the once expensive hardware and genlocks. Finally, distribution has moved to much more accessible and reliable methods such as IRC and BitTorrent.
The recent history of fansubbing has expanded the practice beyond the anime genre and into other foreign language programming, including animation, television episodes, and movies. While the legality of fansubbing remains a hot topic of debate amongst the community, the rapid increase in technical sophistication and friendly global collaboration serves as one of the most successful examples of what can be accomplished by the passion of true fans uniting behind a common goal.
Now you know!
In the world of basic sorting algorithms Quicksort is a hard one to beat. Although daunting to those who may have lived solely in the world of Bubblesort, the Quicksort algorithm is both a simple and novel use of recursion with an average complexity of O(n log n).
The basis of Quicksort lies in the recursive application of a split-and-apply approach. Basically, within the Quicksort routine you pick an element from the list and use the value of that element as a pivot point. Next, split the remaining elements into two lists: one whose values are less than the pivot and one whose values are greater than the pivot. At this point we know that we have a single element that is correctly sorted between two unsorted lists.
Now simply apply Quicksort to the less-than list and the greater-than list, recursively return the results with the pivot in the middle of the two lists, and you have implemented Quicksort. How easy was that?
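In Python that description translates almost line for line into a minimal (not in-place) sketch:

```python
def quicksort(items):
    """Minimal recursive Quicksort: pivot, split, recurse, reassemble."""
    if len(items) <= 1:
        return items                              # 0 or 1 elements: already sorted
    pivot, rest = items[0], items[1:]             # pick an element as the pivot
    less    = [x for x in rest if x < pivot]      # values less than the pivot
    greater = [x for x in rest if x >= pivot]     # values greater than (or equal to) the pivot
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([33, 4, 17, 8, 42, 4, 1]))        # [1, 4, 4, 8, 17, 33, 42]
```

Production implementations sort in place and choose the pivot more carefully to dodge the O(n^2) worst case, but the recursion is exactly the one described above.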
Now you know!
Throughout history few microorganism families have been more beloved for their benefits to humanity than those of the order Saccharomycetales, or as they are more commonly known: yeasts. With their use dating back as far as 6000 B.C., yeasts can be thought of as one of the most ancient classes of tools still in use today.
Yeasts are amazing little organisms, as they can live, reproduce, and die without ever needing light to fuel their life cycle. Instead, most yeasts derive the carbon needed for their life functions through the metabolism of simple sugars, most notably glucose and fructose. Not only do they require so few mechanisms to sustain life, but almost all species of yeast reproduce asexually by simply cloning themselves through a process known as budding. As a result, yeasts are one of the easiest biological tools for both hobbyists and commercial entities to farm and cultivate.
The best part is that as yeasts continuously energize themselves through their programmed cycle of birth and reproduction, they are constantly converting those simple sugar molecules, through fermentation, into carbon dioxide and the more enjoyable ethanol. With these two biological gifts humankind has learned to develop such riches as baking, brewing, and large-scale ethanol production.
So next time you bite into a fluffy loaf of bread, pour yourself a nice glass of Cabernet Sauvignon, or fill up your flex fuel vehicle, know that the basis of these modern luxuries lies in an amazing little biological wonder. With all due respect to the benefits offered by the lever and the screw, neither fills the stomach with as much sustenance and good cheer as those brought to us by yeasts. For that I salute you, o’ magnificent microorganisms.
Now you know!
Modern graphics cards have the ability to process massive amounts of graphical data into a beautifully rendered scene at previously unheard-of frame rates. But what secrets do these graphics cards hide beneath their data bus interface? The math behind rendering these scenes is relatively straightforward, and as such the necessary set of rendering instructions can be reduced to a limited set of mathematical operations. With this fact in mind, dedicated graphics processors were introduced as supplemental coprocessors to the CPU, devoted to performing this set of graphics rendering operations.
However, as monitor resolutions increased, the demands on GPUs increased as well. In order to keep up with the growing graphics processing requirements, companies took a very bold approach and began to implement a SIMD (Single Instruction, Multiple Data) architecture in which multiple copies of simple graphics processing cores were replicated on a single chip. As history progressed GPUs increased in capability and core count, with higher clock speeds, massive amounts of available memory, and multiple threading capabilities. With highly specialized hardware designed for lightning fast floating point computation, GPUs quickly approached a theoretical throughput of one trillion floating point operations per second (1 TFLOPS), outpacing some CPUs performing similar operations by orders of magnitude. Although some research dating back to the 1970s explored the use of graphics processors to aid in calculations, this performance gap between CPUs and GPUs for specific tasks firmly launched the wide scale adoption of GPGPU technology.
To leverage GPGPU technology, traditional parallel processing concepts are applied to the operational logic to create a kernel: a piece of code that operates on a single element of data independently of other kernel instances. Multiple instances of the kernel are spawned on the GPU in order to operate on a set of data simultaneously. For speed, the data is usually cached in whole or in part from the host system into the local device memory on the GPU, where access times are typically orders of magnitude faster for both reading and writing. Once cached on the device, the data is streamed through the kernels, which process it in a massively parallel fashion thanks to the many graphics cores present. Once the stream is exhausted, the result is copied from the graphics device back to the host system for further use.
Mature APIs such as CUDA, Stream, and OpenCL exist to help developers initialize and utilize GPU architectures, and wrapper libraries such as PyCUDA and JOCL make those library calls available from additional programming languages. So next time you’re developing a data-intensive application, take a moment to see if your graphics card supports some flavor of GPGPU, and if so, try your hand at getting the full processing capability out of your system. It’s not that hard, and the potential for massive gains in computational power is well within your grasp.
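As a taste of what that looks like in practice, here is a minimal PyCUDA sketch. It assumes an NVIDIA GPU with the CUDA toolkit, pycuda, and numpy installed, and the kernel and array size are purely illustrative: each GPU thread handles exactly one element, and the data is copied to and from the device around the call.

```python
import numpy as np
import pycuda.autoinit                      # creates a CUDA context on the default device
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

# The kernel: each thread computes its own global index and scales one element.
mod = SourceModule("""
__global__ void scale(float *dest, const float *src, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        dest[i] = src[i] * factor;
}
""")
scale = mod.get_function("scale")

n = 1 << 20                                  # about a million floats (illustrative)
src = np.random.randn(n).astype(np.float32)
dest = np.empty_like(src)

# cuda.In/cuda.Out handle the host-to-device and device-to-host copies for us.
threads = 256
scale(cuda.Out(dest), cuda.In(src), np.float32(2.0), np.int32(n),
      block=(threads, 1, 1), grid=((n + threads - 1) // threads, 1))

assert np.allclose(dest, src * 2.0)
```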
Now you know!
If you’ve ever made a measurement on a system’s output and subsequently made a change to the input to achieve a more desirable output, then you’ve effectively implemented what is known in control theory as a feedback loop. For most applications simply adjusting your input by a fixed proportional amount is adequate. However, when you want to control the overshoot, settling time, and oscillation frequency of the output error, that is where a simple PID controller comes in.
A PID controller is basically a set of discrete components or a software algorithm that constantly performs a Proportional (fixed-magnitude adjustment), Integral (running total of the error), and Derivative (how much the error changed since the last measurement) calculation on the measured output error. These three terms are then each weighted by a tuned proportional, integral, or derivative constant and added together in order to minimize the undesirable overshoot, settling time, and oscillation frequency of the output error.
Implementation can be achieved trivially in a microcontroller by using a constant multiplicative proportional value on the error, an accumulator for the integral error measurement, and the difference between the previously measured error and the current error for the instantaneous derivative error measurement. Alternately, discrete analog components such as op-amps, resistors, and capacitors may provide the tunable mechanisms for the PID loop.
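Here is a minimal software sketch of that microcontroller-style loop in Python; the gains, timestep, and the crude first-order “plant” at the bottom are all made-up values for illustration and would need tuning against a real system.

```python
class PID:
    """Discrete PID loop: proportional, accumulated (integral), and differenced (derivative) error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0        # accumulator for the integral term
        self.prev_error = 0.0      # last error, for the derivative term

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # running total of the error
        derivative = (error - self.prev_error) / self.dt    # change since the last sample
        self.prev_error = error
        # Each term is weighted by its tuned constant and summed into the control output.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order stand-in plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
measurement = 0.0
for step in range(2000):
    control = pid.update(setpoint=1.0, measurement=measurement)
    measurement += (control - measurement) * 0.01           # the "plant" slowly follows the control
    if step % 500 == 0:
        print(step, round(measurement, 3))                   # watch it converge toward the setpoint
```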
Now you know!
If you’ve hovered anywhere around the open source scene for a while you’re probably familiar with, or at least recognize, the GPL, LGPL, BSD License, and the like. These free software licenses form a legal basis for the open source software movement and permit differing levels of distribution and modification of software source code. But did you know that there are licenses for artistic works as well, including but not limited to text, audio, images, and video? If you’re interested in distributing your artistic works under such a license, look no further than the Creative Commons licenses.
Creative Commons has four conditions which may be applied in several combinations to produce the six valid licenses that follow:
Attribution (Abbreviated “cc by”) – The most free CC license available which allows anyone to do almost anything with your work as long as they credit you.
Attribution Share Alike (Abbreviated “cc by-sa”) – As with Attribution this license allows anyone to do almost anything with your work as long as they credit you as the original author and any derivative works carry the same license. The addition of the “Share Alike” clause prevents individuals from freely using your works, then in turn licensing their derivative works in a way that does not allow further derivative works.
Attribution No Derivatives (Abbreviated “cc by-nd”) – This license allows anyone the ability to use or distribute your work as a whole for any use without modification as long as they give you credit.
Attribution Non-Commercial (Abbreviated “cc by-nc”) – This license allows anyone to do anything with your work as long as proper credit is given to you and it is not used for commercial purposes. However, derivative works do not have to be licensed under these same terms.
Attribution Non-Commercial Share Alike (Abbreviated “cc by-nc-sa”) – The “hippie license”. This license allows anyone to do anything with your work with proper credit to you as long as it is not for commercial purposes and any derivative works carry the same license.
Attribution Non-Commercial No Derivatives (Abbreviated “cc by-nc-nd”) – The “free advertising license”. This license, which imposes the most restraint, allows anyone to redistribute your work for non-commercial purposes as long as they do not modify it in any way and credit you as the original author.
So there you go. If you’re developing any sort of artistic work, be it text or otherwise, briefly consider this article and think about releasing your work under a Creative Commons license. The world may just thank you for it later.
Now you know!
You get in your car, start it up, and if you’re anything like me within seconds you’re listening to the latest release by Generic Auto-Tuned Pop Sensation through your radio. Most geeks worth their salt at least know that AM stands for Amplitude Modulation and FM represents Frequency Modulation. However, the mechanics behind these technologies may not be immediately known or understood.
Amplitude Modulation is the simplest way to modulate an analog signal. With AM, a carrier wave of constant, known frequency is transmitted. In the simplest case, the AM signal is produced by proportionally increasing or decreasing the carrier’s amplitude in relation to the amplitude of the information signal. Demodulation of an AM signal can be accomplished by a simple envelope detector made from a single diode coupled with an RC circuit.
Frequency Modulation is a bit more complicated but is less susceptible to naturally occurring terrestrial interference. As with AM, a carrier wave of constant, known center frequency is transmitted, in this case at a fixed amplitude. To encode the information signal onto the carrier, the frequency of the carrier is increased or decreased in relation to the amplitude of the information signal. Demodulation of an FM signal is a bit more involved and requires a discriminator or a phase-locked loop detector in order to recover the carrier’s frequency deviation and, subsequently, the amplitude of the information signal.
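The two schemes are easy to see side by side in a few lines of numpy; the sample rate, carrier, tone, and deviation below are arbitrary illustrative values.

```python
import numpy as np

fs = 48_000                              # sample rate in Hz (illustrative)
t = np.arange(0, 0.02, 1 / fs)           # 20 ms worth of samples
carrier_hz, tone_hz = 10_000, 440

message = np.sin(2 * np.pi * tone_hz * t)             # the information signal

# AM: the carrier's amplitude tracks the message (modulation index 0.5 here).
am = (1 + 0.5 * message) * np.sin(2 * np.pi * carrier_hz * t)

# FM: the carrier's instantaneous frequency is pushed up and down by the message;
# summing (integrating) the instantaneous frequency gives the phase to transmit.
deviation_hz = 1_000
instantaneous_hz = carrier_hz + deviation_hz * message
fm = np.sin(2 * np.pi * np.cumsum(instantaneous_hz) / fs)
```

Recovering the message from am is what the diode-plus-RC envelope detector does in hardware; recovering it from fm requires tracking the frequency deviation, exactly as described above.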
Now you know!