An enlightening demonstration of the motivations behind Wikileaks, from the transcript of a secret meeting in 2011 between Julian Assange and Eric Schmidt (former Google CEO).
Yesterday I watched a Bloomberg News segment on Bitcoin, which raised a number of questions about the cryptocurrency. All of these have been answered before, but the journalists apparently weren’t aware of that (except perhaps the host, Sara Eisen, who seems to have done her homework).
Following are the answers to these questions, as well as some additional unanswered questions and issues the segment could have raised but didn’t.
Whenever I need to explain what is so special about Bitcoin to someone unfamiliar with it, I explain it in terms of accounting ledgers and the Byzantine Generals Problem. This is that explanation, so I can just link it to folks instead of having to rewrite it every time.
First, don’t think of bitcoin as a currency, but rather as a ledger. It is an electronic ledger, a copy of which is kept on every participant’s computer in the network, and all of those copies are continually updated, reconciled, and synchronized in real time. Every participant can make entries in this ledger, each recording a transfer of a certain amount of currency from one participant to another, and every entry is then propagated to the network in real time, so that every copy on every computer is updated near simultaneously and all copies of the ledger stay synchronized. The official term for this public, distributed ledger is the ‘blockchain’ (which you can see here), and it is kept synchronized across the network by a peer-to-peer protocol similar in spirit to BitTorrent.
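To make the ledger mechanics concrete, here’s a minimal Python sketch of a hash-chained ledger. This is my own toy illustration, not Bitcoin’s actual code: real blocks batch many transactions and carry proof-of-work, timestamps, and Merkle trees. But the core trick is the same: each entry commits to the hash of the entry before it, so history can’t be quietly rewritten.

```python
import hashlib
import json

def entry_hash(entry):
    """Deterministic SHA-256 hash of an entry's contents."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

ledger = []  # each participant keeps a copy of this

def append_entry(sender, recipient, amount):
    """Record a transfer, chained to the previous entry by its hash."""
    prev = entry_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"from": sender, "to": recipient,
                   "amount": amount, "prev_hash": prev})

def verify(chain):
    """True iff every entry still commits to the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == entry_hash(chain[i - 1])
               for i in range(1, len(chain)))

append_entry("alice", "bob", 5)
append_entry("bob", "carol", 2)
print(verify(ledger))      # True
ledger[0]["amount"] = 500  # tamper with history...
print(verify(ledger))      # ...and the chain no longer verifies: False
```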
You can also think of Bitcoin not as a currency but as a general solution to a difficult algorithmic problem in the field of distributed systems: achieving Byzantine Fault Tolerance, colloquially known as the Byzantine Generals Problem (a harder cousin of the simpler Two Generals Problem).
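Here’s the consensus half of that solution, again as a simplified sketch of my own rather than Bitcoin’s real rule. Actual nodes adopt the valid chain with the most accumulated proof-of-work; I’ve reduced that to “the longest valid chain,” but the shape is the same: every node applies one deterministic rule to whatever chains it hears about, so honest nodes converge on a single shared history without trusting one another.

```python
def choose_chain(candidate_chains, is_valid):
    """Adopt the longest valid chain seen so far (a stand-in for
    Bitcoin's real rule: most accumulated proof-of-work)."""
    valid = [chain for chain in candidate_chains if is_valid(chain)]
    return max(valid, key=len, default=[])

# Three competing histories arrive over the peer-to-peer network; every
# node keeps the longest one that passes validation, so they all agree.
chains = [["g", "a"], ["g", "a", "b"], ["x", "y", "z"]]
print(choose_chain(chains, is_valid=lambda c: bool(c) and c[0] == "g"))
# ['g', 'a', 'b']
```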
A brief but clarifying look at US government spending.
“Election administrators go to bed at night and say, ‘dear Lord, let it be a landslide.’” -Trevor Potter, former commissioner and chairman of the FEC
Election 2012 is shaping up to be interesting for a variety of reasons, not least these two involving forecasting:
- Sophisticated probabilistic electoral college forecast models, first seen in the early-to-mid 2000s, have had a few election cycles to mature and evolve.
- This is the first closely contested presidential election since those techniques hit the mainstream, and for the first time we’re seeing disagreement among the different forecasting methodologies.
There are roughly three categories of data-driven, systematic forecasting methodologies:
And some choice follow-up commentary:
Dan M. Grove, OK Nate, I really think you overstate your case. I’ll give an easy counter-example to your statement that narrow theories are better than broad theories: the standard model of physics. From it, all weak interactions and all of quantum electrodynamics can be derived. And classical electromagnetism has been derived from quantum electrodynamics. These theories have been verified millions of times. They are the basis for our understanding of a wide range of technology, from electromagnetism to computers to lasers to quantum optics. You rightly point out that medical papers are often not reproduced. That is because they only need a 95% confidence level (or 2 sigma) to be published. And, since null results are rarely published, it’s easy to have 19 random unpublished results and 1 random published one. When charm was found, it was published with a 5-sigma statistical signal. It was reproduced immediately. These are broad-ranging theories that have been well verified. If you want a political science result to be verified, it should be something that isn’t just restatable N different ways, something that has stable results when you change the question slightly. In particular, it is a big plus for the theory if you offer a skeptical colleague the right to restate the question and then recompute the results. Then the results should have less than a 1 in 100 chance of being found randomly. 1 in 1000 would be much better.
Nate Silver, Brooklyn, NY Dan, You make some excellent points. In particular, one of the things I found very problematic when I began to examine the elections “fundamentals” models is that they were not very robust to small changes in assumptions. Replace an economic variable with one that is normally closely correlated with it, and you will get a substantially different result in certain elections. But I think one needs to be careful about drawing analogies between the physical and the social sciences. One of the things that characterized Tetlock’s hedgehogs was that they saw the political system as more analogous to a noncomplex (perhaps even Newtonian) physical system than the foxes did. This can sometimes cut the other way as well. For instance, there are some criticisms of global warming forecasts that would be reasonably compelling if they were tantamount to social science predictions, but they don’t work as well when the causality of the greenhouse effect, etc., is relatively well understood (although I certainly don’t claim that global warming forecasts are above criticism or without their problematic elements). Then again, it’s interesting that a lot of Bayesian probability theory really originated with Laplace, who thought that even though the mechanisms underlying the universe were extremely regular, our ability to measure and understand them precisely might not be.
Richard, NY Nate, I think you are confusing complexity with uncertainty in formulation. The weather/climate system is immensely complex and involves a massive number of interactions and feedbacks. Most of those interactions, however, are reasonably well understood and can be derived from the laws of physics. Social science models, on the other hand, are complex but also subject to a fundamental lack of understanding of the basic interactions involved. This manifests itself in the parametric sensitivity you mentioned. Weather models are not subject to anywhere near this degree of parametric uncertainty even though they are probably more complex. Indeed, the largest numerical models in the world are weather/climate models.
Dan M. Grove, OK Thanks for your reply Nate. You are absolutely right that facile comparisons between physical sciences and social sciences are extremely dangerous. But, when you included medicine, I wanted to point out that broad statements can have enormous predictive power. Economics and the social sciences are causally dense, so it is hard to make broad quantitative statements. Still, I don’t think that the attempt to make fields like international relations more like science by limiting the scope of one’s study so much that it’s barely useful is the answer either. People like Huntington still provided insight, even though they weren’t quantitative. It’s interesting that you mention Laplace, because physicists talk about the Laplacian illusion, since QM shows the inherent indeterminacy of physics. Indeed, some measurable properties cannot exist apart from measurement (e.g. electron spin at N degrees). Finally, while it is hard to make general, robust, high-probability statements in the field of political science, it is not impossible. It’s just that most folks in the social sciences, and many in medicine, alas, think they’ve done it when they haven’t. Part of it is the way statistics are improperly treated. Being one of the first scientists who learned his craft when Monte Carlos were reasonably priced, I understand something of the pitfalls and the ways around them. So, I agree, most of the supposedly precise general statements in the social sciences aren’t… but a few are.
Two notable posts on the subtle problems of big data, forecasting, statistical significance, and false positives. The first by Nassim Taleb, the second by Nate Silver. These posts and their ensuing comment discussions illuminate an issue that all data science practitioners should be aware of.
Nassim Nicholas Taleb The pathology of Big Data: the more variables, the DISPROPORTIONATELY higher the number of spurious results that appear “statistically significant”. For a real-life application see this busted article in The New England Journal of Medicine.
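Taleb’s point is easy to reproduce yourself. Here’s a quick simulation of my own (not from his post): test ever-larger batches of purely random variables against a purely random outcome, and at the conventional p < 0.05 threshold roughly 5% of them will come up “significant” even though there is nothing to find. The count of spurious hits scales with the number of variables tested, while the number of real effects doesn’t.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_samples, alpha = 100, 0.05

# A purely random outcome: any predictor that tests as "significantly"
# correlated with it is a false positive by construction.
outcome = rng.normal(size=n_samples)

for n_vars in (10, 100, 1000, 10000):
    predictors = rng.normal(size=(n_vars, n_samples))
    hits = sum(pearsonr(p, outcome)[1] < alpha for p in predictors)
    print(f"{n_vars:6d} random variables -> {hits:4d} spurious 'significant' hits")
```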
Additional clarification in the comments:
“Before taking the mandated Intro class last year, when I heard ‘computer science,’ I pictured nerdy boys, who turned into nerdy bearded men, slouched over huge computers and click-clacking out codes that meant nothing to me. There’s nothing wrong with nerdy boys, comp sci just didn’t seem like something I would ever be interested in.
“This image was quickly shattered in that first intro class. Computer science started to resonate with me when I worked on my first project, creating a simple animation of a string quartet using NetLogo. It was while I was working on this that I realized comp sci isn’t about nerdy boys sitting at computers and coding out nonsense that turns into violent video games and complicated math problem solvers. No, comp sci isn’t this at all. Comp sci, as I have found in my classes at Stuy, is a medium for expression, a place for creation and creativity.”
I’ve been meaning to write this post for a while, but Vice-President Biden just gave me the kick in the rear to finally get it done.
My mom and dad are wonderful parents who have done many great things for my sister and me along the way. One of the most important was that when we were wee little tykes they got us started early with the nascent computer revolution by buying us a series of cutting-edge home computers - the Tandy TRS-80, Apple IIc, Apple IIgs, and a series of PCs. I still remember learning BASIC on the TRS-80 and connecting to the Internet’s precursors - CompuServe, Prodigy, AOL, and random BBSs - via a fast 56k modem.
I don’t believe mom and dad were sure exactly what little kids could get out of these new devices, but it was no leap of faith to perceive that computers were the way of the future, and getting a head start on the information age was incredibly educational and quite a gift.
The dream of the 70s and 80s Silicon Valley visionaries like Steve Jobs, Bill Gates, Jack Tramiel (Commodore 64), and others was “A computer on every desk in every home”. They achieved it and more - now it’s a computer on every desk, in every backpack, in every pocket and purse, in every corner of the world, from developed to developing nations (well, almost). And look at how they changed the world.
However, the revolution isn’t over; it’s just getting started, and the next phase is going to be “a factory on every desk in every home” - or, more specifically, “a 3D printer on every desk in every home.” Parents and schools will be buying their kids home 3D printers to go along with their computers, tablets, and smartphones.
One of the many problems caused by the ongoing financial crisis is renewed criticism of the capitalist system in general. Our current system is deeply flawed and certainly deserves some of it; however, there is bathwater and there is the baby, and we conflate the two at the risk of exacerbating the financial crisis rather than mitigating it or preventing a recurrence.
Bathwater = increasing fragility, instability, and systemic risk; regulatory, institutional, and governmental capture; crony capitalism; privatized profits and socialized losses; too big to fail; increasing financialization of the economy -> increasing concentration of paper wealth -> outsourcing of real wealth creation -> increasing concentration of political power -> tyranny of the minority, etc.
Baby = the core mechanic of capitalism - real wealth creation.