This is called the "binomial confidence interval," and there are a few solutions. Wikipedia discusses this here: Binomial proportion confidence interval
Friday, December 21, 2012
Friday, November 30, 2012
Q: Event A has a probability of 70% of happening within the year and event B, 40%. The events are independent and uniformly distributed through the year. What is the probability that they will occur within 3 months of each other?
There is more than one way to answer this question!
The ambiguity comes down to exactly how we interpret the statement that "Events A and B are uniformly distributed across the year."
First interpretation: Events A and B are produced by a memoryless process with uniform hazard function. Every day, we wake up and Event A has a certain uniform probability of happening that day, the same as every other day. Event B is independent and has its own uniform probability of happening that day. If the event doesn't happen, we go to bed and wake up the next day, and it's the exact same story, with the same probabilities, all over again, just like "Groundhog Day."
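One natural reading, simpler than the memoryless-process one above, is that each event's time, if the event occurs at all, is drawn uniformly over the year. Under that reading (an assumption, and assuming the question asks for the probability that both events occur and fall within a quarter-year of each other), the answer can be checked with a quick Monte Carlo sketch:

```python
import random

random.seed(0)

def simulate(trials=200_000):
    """Estimate P(A and B both occur, and within 0.25 year of each other)."""
    hits = 0
    for _ in range(trials):
        a_occurs = random.random() < 0.70
        b_occurs = random.random() < 0.40
        if a_occurs and b_occurs:
            # Given occurrence, each event's time is uniform over the year.
            t_a, t_b = random.random(), random.random()
            if abs(t_a - t_b) <= 0.25:
                hits += 1
    return hits / trials

# Analytically under this reading: 0.7 * 0.4 * (1 - 0.75**2) = 0.1225
print(simulate())
```

The closed form comes from the unit square: the band |t_a - t_b| <= 0.25 has area 1 - 0.75^2 = 0.4375, times the 0.28 chance that both events occur.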
Monday, October 29, 2012
Q: What is it like to have the process of video encoding or transcoding bring you to tears?
In 2005, we built a system to do the first amateur ATSC HDTV broadcasts. We broadcast MIT sporting events and the January "Integration Bee" (with play-by-play and color commentary, etc.) on the MIT cable TV system. (http://broadcastengineering.com/newsrooms/mit-sportcast-plan-calls-hd-telecasts, http://sportcast.mit.edu)
The plan called for us to build our own live HD switcher with wipes and other transitions, score overlay and ATSC encoder -- a commercial system was several hundred thousand dollars at that point. We had four HDV cameras connected via Firewire to a head-end computer, and used an array of 1U servers with nVidia cards to render an arbitrary scene with OpenGL (using a pipeline through four machines to paint the video from four cameras on the frame), then overlay the score and time, and finally encode with A/52 audio in ATSC-conformant MPEG transport stream and send out in QAM256 over the MIT cable system (http://web.mit.edu/sportcast/checkout/).
Some of the technical challenges proved especially difficult and I was up all night with them before a game. Getting four Firewire HDV cameras to talk to the same computer was a real pain, since these cameras generally have cheap chipsets and all want to talk on the same broadcast channel. (We got this working, but it was quite brittle since unplugging one camera would freeze the whole Firewire bus for about 1 second. In year 2 we switched to connecting the cameras to small Mini-ITX computers and then running them to the switcher over UDP on Ethernet.) Showing up at 10 a.m. at a volleyball game and having to explain to my colleagues that we STILL didn't quite have working video was painful!
The most challenging part was writing the MPEG systems stream encoder that would be ATSC-compliant -- in other words, writing a program to stitch together Dolby AC-3 audio and MPEG-2 video that would actually PLAY in sync on a real store-bought television without glitches. (The MPEG-2 video elementary stream was compressed by libavcodec, and the audio by liba52, but getting a compliant audio-video multiplex is a different story.)
This was difficult because those TVs do not exactly give you a lot of helpful diagnostic information. If you do it wrong, you see glitches, but these can be very rare (like once every 20 minutes!) and it's not like you get a debugging trace.
ATSC has a lot of requirements you have to comply with, like you can't send a frame of video more than 0.5 seconds before it will be displayed, and if you break these you will see undefined behavior from the TV.
Getting all those pieces put together, so we could actually WATCH our sports broadcasts on a real HDTV in 2005 via the cable TV system without hiccups, was very satisfying and a great payoff for all our work. After spending umpteen all-nighters doing it and breaking numerous promises to my friends and colleagues to have it running earlier, I am sure I was brought to tears (of exhaustion and/or joy) when it finally worked.
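The 0.5-second rule mentioned above can be expressed as a conformance check on the multiplex: a video frame's presentation timestamp (PTS) must not run more than 0.5 s ahead of the stream's clock reference (PCR) at the moment the frame is sent. The sketch below is a hypothetical illustration of that check, not the actual code from the project, using the 90 kHz MPEG timestamp base:

```python
MAX_LEAD_90KHZ = 45_000  # 0.5 s at the 90 kHz MPEG system clock base

def frame_sent_too_early(pts: int, pcr: int) -> bool:
    """True if this frame arrives more than 0.5 s before it will be
    displayed, which violates the ATSC buffer model and can provoke
    undefined behavior from the TV."""
    return pts - pcr > MAX_LEAD_90KHZ

# A frame timestamped 0.6 s (54,000 ticks) ahead of the clock fails:
print(frame_sent_too_early(pts=154_000, pcr=100_000))
```

A real mux has to enforce this (and the matching buffer-underflow bound on the other side) for every access unit, which is why glitches that appear only once every 20 minutes are so hard to chase down.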
Sunday, October 21, 2012
Q: What are France's most remarkable contributions to the modern world?
How about:
Aviation (Montgolfier brothers, Robert brothers)
Moving pictures (co-invented by the Lumière brothers)
Electromagnetism (many contributions by Coulomb and Ampère)
Photography (co-invented by Niépce, also Daguerre)
Pedal-driven bicycle (Pierre Michaux, Pierre Lallement, and the Olivier brothers)
Inflatable automobile tires (Michelin)
Stethoscope (Laennec)
Gyroscope (Foucault)
Fresnel lens (Fresnel)
Calculator, probability, and much mathematics (Pascal, Fermat)
Galois theory (Galois)
Chaos theory, much mathematics (Poincaré)
Wavelets (Mallat, Meyer) and fractals (arguably Mandelbrot)
Baudot code and "baud" rate (Baudot)
Aqualung/SCUBA (Gagnan, Cousteau)
Fourier transform (Fourier)
Much music (Ravel, Debussy, Bizet, Saint-Saëns, Berlioz, Stravinsky, arguably Chopin)
Much, much art (Monet, Renoir, Cezanne, Seurat, Degas, Gauguin, Caillebotte, Rodin, Pissarro, Signac, arguably van Gogh)
More here: http://www.quora.com/France/What-are-Frances-most-remarkable-contributions-to-the-modern-world
Friday, October 5, 2012
Q: How many unique Tweets can ever be posted?
At least (a number with 1,264 decimal digits), using the contents of the tweet alone. Plus there is all the metadata, which probably adds at least a thousand bits.
See https://blogs.oracle.com/ ksplice... . It turns out that Twitter allows almost 2^31 choices per "character," at least when a tweet is first posted. (They decay over time...)
Unicode itself is a 20.1-bit system, but Twitter doesn't allow literally all Unicode scalar values. (E.g. it messes with < and >.) On the other hand, Twitter does allow the huge characters above the first 2^20, that is to say not Unicode, but below 2^31. (This is almost 31 bits anyway.)
Disclaimer: I have not checked this myself since writing that blog post in March 2010.
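As a rough check on the size of that number: with roughly 2^30 to 2^31 effective choices per character and 140 characters, Python's big integers give the digit count directly. (The exact alphabet size, and hence the exact digit count, depends on which scalar values Twitter really accepts; the sizes below are illustrative.)

```python
# Digits in alphabet**140 for a few plausible per-character alphabet sizes.
for bits in (20, 30, 31):
    count = len(str((2 ** bits) ** 140))
    print(f"2^{bits} choices per character -> {count} digits")
```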
Thursday, August 30, 2012
Q: Why is the SI second defined as 9,192,631,770 periods of the transition between two states of cesium-133?
In 1895, the astronomer Simon Newcomb published his "Tables of the Sun," based on observations of the sun's position from 1750 to 1892. (http://en.wikipedia.org/wiki/Newcomb%27s_Tables_of_the_Sun) These calculations turned out to be reliable enough that astronomers continued to use them until the 1980s.
Before 1956, the second was defined as the mean solar second, or in other words, 1/86,400 of the time the earth takes to spin around on its own axis and see the sun again each day. But because the moon's gravity and the tides are slowing down the earth's spin, this is not a stable quantity.
From 1950 to 1956, the international authorities agreed to redefine the second to be the "ephemeris second," based on the speed of the earth's orbit around the sun in 1900, as predicted in Newcomb's tables. The earth's orbit around the sun is not slowing down, at least not at anything like the rate of the earth's spin around its own axis. (In practice the ephemeris second is measured by looking at the moon's orbit around the earth and taking pictures of which stars the moon is near.)
Because Newcomb's tables cover observations from 1750 to 1892, the "ephemeris second" corresponds to the mean solar second at the middle of this period, or about 1820. (http://tycho.usno.navy.mil/leapsec.html)
Meanwhile, from 1952 to 1958, astronomers from the U.S. Navy and the British National Physical Laboratory measured the frequency of cesium oscillations in terms of the ephemeris second. (http://www.leapsecond.com/history/1958-PhysRev-v1-n3-Markowitz-Hall-Essen-Parry.pdf) Cesium is even more stable than the orbit of the earth around the sun.
There are a few ways to do the calculations that they show in the paper (having to do with exactly what period they observed over and whether they corrected for some subtleties re: the moon's orbit), giving results between 9,192,631,761 and 9,192,631,780. The average was 9,192,631,770.
In 1967, this became the official definition of the SI second, replacing the ephemeris second. But the reason the number is what it is is because Newcomb analyzed observations from 1750 to 1892, and the middle of that period is 1820, and that's how fast the earth was spinning on its axis in 1820.
Saturday, August 25, 2012
Q: If one day is not exactly 24 hours and is in fact 23 hours, 56 minutes, shouldn't the error add up, and shouldn't we see 12 a.m. becoming noon?
You're right that a "sidereal" day is about 23 hours, 56 minutes, 4 seconds. But this is not a day in the everyday sense.
A sidereal day is how long it takes the earth (on average) to make one rotation relative to the faraway stars and other galaxies in the sky.
If you find a star that is directly above you at midnight one night, the same star will be directly above you again at 11:56:04 p.m. the next evening.
Similarly, if you were sitting on the star Proxima Centauri looking through a powerful telescope at earth, you would see Toledo, Ohio, go by every 23 hours, 56 minutes, and 4 seconds.
However, we don't keep time by the faraway stars -- we measure time by a much closer star, the sun! And we are actually in orbit around the sun, orbiting in the same direction that the earth is spinning on its own axis. From our perspective, the sun goes a little slower in the sky because we are also orbiting around it.
How fast are we orbiting around the sun? We make one full orbit every year, or roughly 366.25 sidereal days.
So after a year, the faraway stars will have done 366.25 rotations around the earth, but the sun will only have done 365.25 rotations. We "lose" a sunset because of the complete orbit. (The extra quarter day is why we need a leap year every four years.)
So there are 365.25 "mean solar days" in 366.25 "sidereal" days. How long is a "mean solar day"? Let's do the math: One sidereal day is 23 hours, 56 minutes, 4 seconds, or 86164 seconds. Multiply this by 366.25 sidereal days in a year, and you get 31557565 seconds. Divide by 365.25 solar days, and we get that a solar day is.... 86,400 seconds. That's 24 hours exactly!
It's this "mean solar day" (24 hours) that is the normal definition of day.
If you want to do the math more exactly, a sidereal day is 86164.09054 seconds, and a tropical year is 366.242198781 sidereal days. That works out very closely.
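The arithmetic in the last two paragraphs can be checked directly with the more exact figures:

```python
SIDEREAL_DAY = 86164.09054          # seconds
SIDEREAL_DAYS_PER_YEAR = 366.242198781
SOLAR_DAYS_PER_YEAR = SIDEREAL_DAYS_PER_YEAR - 1  # one sunset "lost" per orbit

seconds_per_year = SIDEREAL_DAY * SIDEREAL_DAYS_PER_YEAR
mean_solar_day = seconds_per_year / SOLAR_DAYS_PER_YEAR
print(mean_solar_day)  # very close to 86,400 s, i.e. 24 hours
```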
(P.S. Unfortunately, the earth's spin has been slowing down because the moon is sucking away the earth's energy. Every time the high tide of the Atlantic Ocean slams into the east coast of North America, the earth slows its spin a little bit. The definition of the second is based on the speed the earth was spinning back in 1820, and we have slowed down since then. As a result, we occasionally have to add in a "leap" second to the world's clocks. See http://online.wsj.com/article_email/SB112258962467199210-lMyQjAxMTEyMjIyNTUyODU5Wj.html?mod=wsj_valetleft_email)
Tuesday, July 24, 2012
Q: Why don't we see green stars?
Stars are black bodies in thermal equilibrium (http://en.wikipedia.org/wiki/Black-body_radiation). Their spectrum depends only on their temperature, and the shape of the spectrum is described by Planck's law (http://en.wikipedia.org/wiki/Planck%27s_law).
As a result, only some colors are possible: the ones that can be formed by a black-body radiator with this shape of spectrum. The line in the CIE diagram below shows the possible colors of black-body radiation, depending on the temperature:
(from Wikipedia's http://en.wikipedia.org/wiki/File:PlanckianLocus.png)
You will see essentially the same colors from incandescent light bulbs and toaster heating elements as from a star -- a 2700K tungsten filament will radiate light that appears to the human eye with the color corresponding to 2700K on the above diagram.
The "black body" curve does not go through anything you could really call green.
Qualitatively, for something to appear green, it essentially needs to stimulate the medium-wavelength cones more than the long- and short-wavelength cones in the human eye. Black-body radiation is too broadband to do this.
Here, the colored lines represent the sensitivities of the three kinds of cones in the human eye. The dashed line is black-body radiation from a 5400K star, obeying Planck's law. Black-body radiation is way too broad to hit the "green" cones without also hitting the "red" and "blue" ones. That's why this light appears white.
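The "too broadband" point can be illustrated with Planck's law itself: at a sun-like 5400 K, the spectral radiance at representative blue, green, and red wavelengths comes out nearly equal, so the medium-wavelength cones can never be stimulated in isolation. (The three wavelengths below are illustrative choices, not cone peak sensitivities.)

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance of a black body (Planck's law)."""
    x = H * C / (wavelength_m * K * temp_k)
    return (2 * H * C ** 2 / wavelength_m ** 5) / math.expm1(x)

T = 5400.0
blue, green, red = (planck(w * 1e-9, T) for w in (450, 550, 650))
print(blue / green, red / green)  # both close to 1: the spectrum is broad
```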
Friday, May 4, 2012
Q: What is the safest, simplest, and most effective method to anchor an average-size sailboat?
There is technique (and many strong feelings) to anchoring, but I don't think there are any great secrets beyond what is taught in sailing classes. Just practice and patience and a lot of small things.
Usually anchoring can be done simply and without drama -- the need for experience comes when things go wrong. (Like pretty much 95% of things in sailing.) Of course it's a lot easier to anchor a Rhodes 19 in a nice lake with friends you sail with every weekend, versus anchoring an unfamiliar Beneteau 50 you just chartered off an unfamiliar island with inexperienced crew you are sailing with for the first time.
The way to be safe is to practice adequately, build up experience, and to keep learning from other sailors. US Sailing and ASA both teach "Basic Coastal Cruising" classes that include anchoring, and many sailors are flattered to help others learn this kind of thing. NauticEd (http://www.nauticed.org/sailingcourses/view/anchoring-a-sailboat) also has a $17 online course that is probably not bad.
Here are some general tips that they would teach you in a class:
- Pick an appropriate anchorage, based on (a) shelter from wind and waves and lee shores (b) good holding ground [generally mud or sand will be preferred] (c) adequate scope, swing room, and depth under the boat.
- Coordinate and practice with the crew. Arrange hand signals if necessary. Do not raise your voice. Speak in complete sentences. Use headsets if helpful.
- Use an appropriate anchor. There are many strong feelings on this. For muddy or sandy bottoms, a lightweight-type (Danforth) anchor, appropriately sized to the boat, is generally fine. Of course there are many fancy anchors now available (Rocna, etc.) that are fine too.
- Use appropriate rode. In the Caribbean, all-chain rode (with nylon snubber) is typically used because coral reefs can chafe nylon rode, and all-chain rode typically requires less scope. If using an all-nylon rode, allow 7:1 scope for overnight anchoring.
- Consult a coast pilot (or cruising guide or similar publication) and nautical charts for information and warnings about the anchorage. In the BVI, the aerial photographs in "Virgin Anchorages" are very helpful for the first-time visitor to unfamiliar islands.
- Realize that the difficulty varies based on the conditions and the time of day. Anchoring in pleasant weather in a familiar spot with the sun up is one thing. An unfamiliar anchorage at night in a gale with cold rain or spray and a slippery deck is different and calls for much more caution.
- Cruise through the anchorage once before picking a spot to anchor. Don't just anchor in the first place that looks good. If there are other vessels already anchored, they have the right to set the anchoring method in use -- single bow anchor, Bahamian moor, two anchors off bow, etc. They have the right to ask you to move if you anchor too close. Feel free to slow down and ask the other vessels how much scope they have let out, etc.
- When you do pick a spot, allocate appropriate swing room for changes in wind and tide. Confirm appropriate depth with your depth sounder and charts.
- Assuming you are anchoring with a single anchor off the bow (the most common method): As helmsman, point the vessel into the wind and wait until ALL headway has stopped. Instruct the crew to begin LOWERING (not dropping or throwing) the anchor. Hopefully you have a working motorized windlass and have marked every 10 feet of the rode with little indicators -- these are both great conveniences.
- For all-chain rode, I like to first pay out 3:1 scope, then back down on it with the engine at 2,200 RPM. Then I pay out to 5:1 scope. For nylon rode, I generally pay out 5:1 rode, then back down on it, then pay out to 7:1 scope.
- With practice, you can confirm that the anchor has set by looking at how the rode "skips" across the surface of the water when it gets tensed up. In any event, don't leave the boat right away after anchoring. Confirm that you are not dragging. One classical technique is to sight a pair of objects off the beam and confirm that they retain their alignment (i.e. that the wind isn't pushing you back). Of course there are now GPS alarms for this kind of thing.
- If the water is clear and warm enough, DIVE the anchor to confirm it has set. Sailors in the BVI swear by this. The corollary is that you should plan to arrive in the anchorage before the sun gets low in the sky, so you can still see the coral heads and the ground and your anchor.
- If the anchor doesn't set, the first response should be to pay out more rode and see if it eventually sets. If that doesn't work, just pull it up, circle around, and do it again. Speak in complete sentences to the crew and explain that you are going to do it over again. Don't get angry if it doesn't work -- there's no shame in repeating the process 2 or 3 times. You'd much rather get it right than wake up with a "bump in the night" at 2 a.m.! If you still can't get it to set, you may have bad holding ground and have to pick a different spot.
- Sometimes, in a crowded anchorage, when you are trying to do this, the proprietors of nearby vessels will come out on deck and look at you with the death stare. And they will bring their fenders out and tie them on. In a truly obnoxious anchorage, they will even talk loudly about the "amateur" or the "credit card captain" in their midst. These people are dicks and you can't let them get to you, but the way to be a responsible citizen is to (a) know your own capabilities and those of your vessel [i.e. practice maneuvering when you are out in the open!], (b) don't attempt anything unsafe or beyond your ability (c) don't hit anybody (d) keep your calm with the crew. No jumping around, no yelling, no waving your arms angrily. Speak in complete sentences.
- If a nearby, previously-anchored vessel says you are too close and you have to move, you have to move. If they just give you the death stare and the full complement of fenders, consider yourself warmly welcomed. Dinghying over with treats and/or drinks can be a good way to introduce yourself.
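The scope ratios in these tips translate into rode length with one wrinkle that classes emphasize: scope is measured from the bow roller, not the waterline, so the bow's height above the water gets added to the depth. A small sketch (the function name and example numbers are made up for illustration):

```python
def rode_to_pay_out(depth_ft, bow_height_ft, scope):
    """Rode length for a given scope ratio; scope is figured against
    water depth plus the bow roller's height above the water."""
    return scope * (depth_ft + bow_height_ft)

# 12 ft of water, 4 ft of bow height, 7:1 nylon rode for overnight:
print(rode_to_pay_out(12, 4, 7))  # 112 ft
```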
Friday, April 20, 2012
Q: Does all "white noise" sound like the same hiss?
The answer is no: not all white noise sounds alike!
White noise can sound like the "hissing" of a shortwave radio or it can sound like a Geiger counter (click..... clickclick............ click).
More:
A noise process is "white" if every frequency has the same power spectral density.
Any process where any two samples taken at different times will be statistically independent is white in this sense. In other words, if knowing the amplitude of the noise at time x tells us nothing about the amplitude at any other time, then the noise must be "white."
But there are many different-sounding processes that have this characteristic, because just knowing that two samples are independent does not tell us the distribution of the individual samples.
- One classical example is "thermal" noise, in which the samples are distributed according to a normal, or Gaussian, distribution. This is known as "white gaussian noise," and typically in communications will have been added to the signal we are interested in: hence, Additive White Gaussian Noise (AWGN). This sounds like "hissing."
- Another kind of white noise is "shot" noise, which can come from any Poisson process, including the particle decays heard by a Geiger counter. Here the individual samples aren't Gaussian deviates; they are impulses, either zero or big, and most of the time they're zero. But since knowing the time of one "click" tells us nothing about any other (and because each click carries all the frequencies), this is also white noise.
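Both examples can be generated and checked for "whiteness" with a sketch like the one below: Gaussian samples and sparse impulses (a crude Bernoulli stand-in for Poisson clicks) both show near-zero correlation between successive samples, even though they sound nothing alike:

```python
import random

random.seed(1)
N = 100_000

gaussian = [random.gauss(0.0, 1.0) for _ in range(N)]
# Sparse impulses: mostly zero, occasionally a big click.
shot = [1.0 if random.random() < 0.01 else 0.0 for _ in range(N)]

def lag1_correlation(x):
    """Normalized sample autocorrelation at lag 1 (near 0 for white noise)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

print(lag1_correlation(gaussian), lag1_correlation(shot))  # both near 0
```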
Thursday, February 9, 2012
Q: What were some surprising court decisions?
Thursday, February 2, 2012
Q: What are the most impactful inventions created in Boston?
I think the telephone is probably the all-time top Boston invention, but also these:
1802 -- Modern navigation -- Bowditch
1886 -- Management consulting -- Little
1901 -- Disposable safety razor -- Gillette et al.
1914 -- "Tech"nicolor -- Founded in Boston by Kalmus et al.
1919 -- Trans-Atlantic aircraft -- Hunsaker et al.
1929- -- Instant photography (Polaroid) -- Land
1931 -- Stroboscopy -- Edgerton, Germeshausen et al.
1937 -- Use of Boolean logic to design "digital" circuits -- Shannon
1940-45 -- Practical radar -- Anglo-American military collaboration at MIT
1944 -- Mark I/II computers and first computer "bug" -- Aiken, Hopper et al.
1945 -- Hypertext -- Vannevar Bush
1951 -- Huffman code
1951 -- Random access memory ("core") -- Project Whirlwind
1953 -- PET scan -- Brownell
1953- -- Doppler radar -- Gordon
1956- -- Chomsky hierarchy
1957- -- Generative grammar -- Chomsky
1957 -- Confocal microscope -- Minsky
1957-61 -- Time-sharing (and some of what we now call virtualization) -- Project MAC
1958 -- LISP -- McCarthy
1961 -- Chaos theory -- Lorenz (and many others)
1961-2 -- Digital videogame (Spacewar!) -- Graetz, Russell, Wiitanen, Kotok
1963 -- CAD -- Sutherland
1964 -- Minicomputer -- DEC
1964-5 -- Electronic mail -- Van Vleck / Morris on CTSS (also network email, Tomlinson in 1971)
1969 -- Apollo guidance computer that navigated to and landed on moon -- Instrumentation (now Draper) Laboratory
1970-90 -- Object-oriented programming and data hiding -- Liskov (and many others)
1972 -- Packet-switching and ARPANET -- Kahn, BBN, etc.
1973 -- Black-Scholes option pricing model -- Black, Scholes, Merton
1978 -- Practical public-key cryptography (RSA) -- Rivest, Shamir, Adleman
1979 -- Spreadsheet -- Bricklin and Frankston
1981-89 -- Copyleft/sharealike, GNU and free software movement -- Stallman
1995- -- E-ink -- Jacobson et al.
2000 -- Zipcar -- Danielson, Chase
Wednesday, January 11, 2012
Q: If I mix 700nm (red) light and 400nm (violet) light, is the result a color that can't be made by a single wavelength?
The answer is yes -- if you mix light from a laser (monochromatic light) at 700 nm with another laser at 400 nm, the resulting radiation will be different from any monochromatic light.
That's true in two ways:
- The resulting radiation is radiometrically (or physically) distinct from any monochromatic light. Adding two sine waves of different frequencies won't make a sine wave.
- The resulting radiation is photometrically (or perceptually) distinct from any monochromatic light, when observed by a human with normal (trichromatic) vision.
You can see this on the CIE standard observer colorimetry diagram:
(from http://en.wikipedia.org/wiki/CIE_1931_color_space)
This horseshoe-shaped figure represents the human perception of color in the central (foveal) area of vision (where cones predominate), once overall brightness (luminance) is factored out.
The top outside of the horseshoe (with the numbers running from "380" on the lower left to "700" on the lower right) is known as the "spectral locus": it represents the colors you can get with monochromatic light, e.g. by varying a laser in wavelength from 380 nanometers to 700 nanometers.
The bottom line that directly connects "380" and "700" is known as the "line of purples." These colors (all shades of purple) cannot be made by any single laser! And the entire interior of the horseshoe, including the middle where "white" is, also requires more than one laser.
Your color -- a combination of light at 400 nm and 700 nm -- will be found somewhere very close to the line of purples. (The more 400 nm light in the mix, the closer it will be to that side, and vice versa.) You can tell from the diagram that these colors aren't on the spectral locus, and therefore can't be made with a single laser.
======
The standard "R'G'B'" color spaces work by picking three illuminants from the inside of this diagram. (These can be three phosphors on a CRT, three filters on an LCD, three slices on the color wheel of a DLP display, three layers of emulsion on a piece of color film, etc.) Each illuminant's color is a point within the horseshoe, and the three points form a triangle. By varying the amounts of R, G, and B, we can make a color that is perceived the same as any color that lies within the triangle.
Here's one of the most popular triangles, known as the ITU-R Rec. BT.709 or sRGB primaries. The three colors on your computer monitor are probably close to these points, meaning your monitor can make any color within the triangle. But as you can see, it takes three illuminants to cover any nonzero area in this "perceptual" space of colors (again, with luminance already factored out).
No single laser can do it.
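The geometry here can be checked with a small calculation. Because XYZ tristimulus values add linearly, the chromaticity of a two-laser mixture must lie on the straight segment joining the two monochromatic chromaticities (the center-of-gravity rule). Here's a sketch in Python; the CIE 1931 color-matching values at 400 nm and 700 nm are rough, assumed table lookups, not authoritative data:

```python
# Approximate CIE 1931 standard-observer color-matching values (xbar, ybar, zbar);
# these specific numbers are an assumption for illustration.
CMF = {
    400: (0.0143, 0.0004, 0.0679),  # violet end
    700: (0.0114, 0.0041, 0.0000),  # red end
}

def chromaticity(X, Y, Z):
    """Project XYZ tristimulus values onto the (x, y) chromaticity plane."""
    s = X + Y + Z
    return (X / s, Y / s)

x400, y400 = chromaticity(*CMF[400])
x700, y700 = chromaticity(*CMF[700])

# Equal-power mixture: tristimulus values simply add.
Xm, Ym, Zm = (a + b for a, b in zip(CMF[400], CMF[700]))
xm, ym = chromaticity(Xm, Ym, Zm)

# The mixture chromaticity falls exactly on the segment between the two
# monochromatic chromaticities -- i.e. on the line of purples.
t = (xm - x400) / (x700 - x400)
y_on_line = y400 + t * (y700 - y400)
print((round(xm, 3), round(ym, 3)), abs(ym - y_on_line))
```

Varying the power ratio just slides the mixture along this segment; no single wavelength can reach it, because the spectral locus bows far above the line of purples on its way through the greens.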