Monday, July 27, 2009
Goldman Sachs: scum who own the world
This is a cool analysis of Goldman Sachs from Jon Stewart,
but he is not the only one realizing this:
Max Keiser has a very interesting argument.
And also in this video, The Young Turks have a very interesting point of view: invest 1 million and get 13 billion in return... not bad.
[Embedded video: The Daily Show with Jon Stewart, "Pyramid Economy"]
Labels: the economist
Sunday, July 19, 2009
UPDATE: 10+1 things to do before I die
0. BASE jump from Parekupa Merú
1. Wing suit
2. Wing suit
3. Wing suit
5. Free fly
6. Swimming/diving with Rhincodon typus, Mola mola, and Manta birostris, and under a school of Sphyrnidae
7. Full-moon date at Cataratas del Iguazú
8. See GWAR & Jane's Addiction live concert
9. 18,000 ft freefall
10. Peeing on Everest (don't eat yellow snow!!!)
Labels: evolution
Thursday, July 16, 2009
Podcast: Crypto-Gram 15 February 2009:
from the Feb 15, 2009 Crypto-Gram Newsletter
by Bruce Schneier
* Helping the Terrorists
By its very nature, communications infrastructure is general. It can be used to plan both legal and illegal activities, and it's generally impossible to tell which is which. Any attempt to ban or limit infrastructure affects everybody. Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then.
Society survives all of this because the good uses of infrastructure far outweigh the bad uses. While terrorism turns society's very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response - just as we would if we banned cars because bank robbers used them too.
* Monster.com Data Breach
To assess an organization's network security, you need to actually analyze it. You can't get a lot of information from the list of attacks that were successful enough to steal data but not successful enough to cover their tracks, and which the company's attorneys couldn't figure out a reason not to disclose to the public.
* The Exclusionary Rule and Security
Exclusionary rule: if the police search your home without a warrant and find drugs, that evidence can't be used against you in court. The exclusionary rule serves to deter deliberate, reckless, or grossly negligent conduct, or in some circumstances recurring or systemic negligence.
Government databases are filled with errors. People often can't see data about themselves, and have no way to correct the errors if they do learn of any. And more and more databases are trying to exempt themselves from the Privacy Act of 1974, and specifically the provisions that require data accuracy.
Increasingly, data accuracy is vital to our personal safety and security. And if errors made by police databases aren't held to the same legal standard as errors made by policemen, then more and more innocent Americans will find themselves the victims of incorrect data.
* BitArmor's No-Breach Guarantee
fine print: "If your company has to publicly report a breach while your data is protected by BitArmor, we'll refund the purchase price of your software. It's that simple. No gimmicks, no hassles."
And: "BitArmor cannot be held accountable for data breaches, publicly or otherwise."
length: 14:22m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0902.html
Podcast: Crypto-Gram 15 January 2009:
from the Jan 15, 2009 Crypto-Gram Newsletter
by Bruce Schneier
* Impersonation
Traditional impersonation involves people fooling people.
These tricks work because we all regularly interact with people we don't know. No one could successfully impersonate your brother...
It's human nature to trust these credentials. Impersonation is even easier over limited communications channels.
A lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked.
Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. Decentralized authentication systems work better than centralized ones.
Any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails.
* Forging SSL Certificates
We already knew that MD5 is a broken hash function. Now researchers have successfully forged MD5-signed certificates.
This isn't a big deal.
Turning a cryptanalytic attack into a break of real-world security systems is often much harder than cryptographers think.
But SSL doesn't provide much in the way of security, so breaking it doesn't harm security very much. Pretty much no one ever verifies SSL certificates, so there's not much attack value in being able to forge them. And even more generally, the major risks to data on the Internet are at the endpoints -- Trojans and rootkits on users' computers, attacks against databases and servers, etc -- and not in the network.
This comment by Ted Dziuba is far too true: "If you're like me and every other user on the planet, you don't give a sh*t when an SSL certificate doesn't validate. Unfortunately, commons-httpclient was written by some pedantic f*cknozzles who have never tried to fetch real-world webpages."
I'm not losing a whole lot of sleep because of these attacks. No one should be using MD5 anymore.
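A practical aside of mine, not from the newsletter: the Python "cryptography" package can report which hash a certificate was signed with, so you can spot an MD5-signed cert yourself. A minimal sketch, assuming the package is installed and "server-cert.pem" is a hypothetical local file:

# Sketch: report the signature hash algorithm of an X.509 certificate.
# Requires the third-party "cryptography" package; the file path is made up.
from cryptography import x509

with open("server-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

algo = cert.signature_hash_algorithm  # e.g. an MD5, SHA1, or SHA256 instance
print("signed with:", algo.name)
if algo.name == "md5":
    print("warning: MD5-signed certificates can be forged; don't trust this chain")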
* Biometrics
Biometrics may seem new, but they're the oldest form of identification.
What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There's a lot of technology involved here, in trying to both limit the number of false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized).
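To make that trade-off concrete, here is a tiny sketch of my own (the match scores and thresholds are made up): count how often genuine users fall below the match threshold and how often impostors clear it.

# Toy illustration of the biometric matching trade-off (made-up scores).
# Raising the threshold lowers false accepts but raises false rejects.
genuine_scores  = [0.91, 0.85, 0.62, 0.88, 0.70]   # the same person re-scanned
impostor_scores = [0.30, 0.55, 0.64, 0.42, 0.51]   # different people

def rates(threshold):
    false_rejects = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    false_accepts = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return false_accepts, false_rejects

for t in (0.5, 0.6, 0.7):
    fa, fr = rates(t)
    print(f"threshold={t}: false accepts={fa:.0%}, false rejects={fr:.0%}")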
Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it's important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It's hard to affix a fake fingerprint to your finger or make your retina look like someone else's. Some people can mimic voices, and make-up artists can change people's faces, but these are specialized skills.
On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. And a stolen biometric can fool some systems.
The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there's a guard with a large gun making sure no one is trying to fool the system.
One more problem with biometrics: they don't fail well. Passwords can be changed, but if someone copies your thumbprint, you're out of luck: you can't update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you're stuck.
length: 12:07m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0901.html
Podcast: Crypto-Gram 15 December 2008:
from the Dec 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* Lessons from Mumbai
Without discounting the awfulness of the events, I have some initial observations:
- low-tech is very effective.
- the attacks had a surprisingly low body count.
- terrorism is rare.
- specific countermeasures don't help against these attacks.
The lesson: don't focus too much on the specifics of the attacks.
* Communications During Terrorist Attacks are *Not* Bad
It helps people, calms people, and actually reduces the thing the terrorists are trying to achieve: terror.
* Audit
Most security against crime comes from audit. Of course we use locks and alarms, but we don't wear bulletproof vests. The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that's audit.
Audit helps ensure that people don't abuse positions of trust.
The whole NSA warrantless eavesdropping scandal was about this. Some misleadingly painted it as allowing the government to eavesdrop on foreign terrorists, but the government always had that authority. What they wanted was to not be subject to audit.
* The Future of Ephemeral Conversation
Ephemeral conversation is dying.
Cardinal Richelieu famously said, "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." When all our ephemeral conversations can be saved for later examination, different rules have to apply.
* "Here Comes Everybody" Review
In 1937, Ronald Coase answered one of the most perplexing questions in economics: if markets are so great, why do organizations exist? Why don't people just buy and sell their own services in a market instead? Coase, who won the 1991 Nobel Prize in Economics, answered the question by noting a market's transaction costs: buyers and sellers need to find one another, then reach agreement, and so on. The Coase theorem implies that if these transaction costs are low enough, direct markets of individuals make a whole lot of sense. But if they are too high, it makes more sense to get the job done by an organization that hires people.
length: 25:45m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0812.html
Podcast: Crypto-Gram 15 November 2008:
from the Nov 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* The Skein Hash Function
NIST is holding a competition to replace the SHA family of hash functions
Skein is our submission (Bruce Schneier, Niels Ferguson, Stefan Lucks, Doug Whiting, Mihir Bellare, Tadayoshi Kohno, Jon Callas, and Jesse Walker): a new family of cryptographic hash functions. Its design combines speed, security, simplicity, and a great deal of flexibility in a modular package that is easy to analyze.
* Me and the TSA
The TSA has been checking IDs all this time to no purpose whatsoever.
* Quantum Cryptography
Quantum cryptography: the basic idea is still unbelievably cool, in theory, and nearly useless in real life.
The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg's uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper's presence. No disturbance, no eavesdropper -- period.
The basic science behind quantum crypto was developed, and prototypes built, in the early 1980s by Charles Bennett and Gilles Brassard. This is totally separate from quantum computing, which also has implications for cryptography. A quantum computer is fundamentally different from a classical computer. If one were built - and we're talking science fiction here - then it could factor numbers and solve discrete-logarithm problems very quickly. In other words, it could break all of our commonly used public-key algorithms. For symmetric cryptography it's not that dire: a quantum computer would effectively halve the key length, so that a 256-bit key would be only as secure as a 128-bit key today. Pretty serious stuff, but years away from being practical. I think the best quantum computer today can factor the number 15.
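The "halve the key length" rule of thumb comes from Grover's search algorithm, which searches 2^n keys in roughly 2^(n/2) quantum operations. A tiny sketch of my own illustrating the arithmetic:

# Effective symmetric key strength against a brute-force quantum search (Grover).
# Classical brute force: ~2**bits trials; Grover: ~2**(bits/2) quantum operations.
for bits in (128, 256):
    print(f"{bits}-bit key: ~2^{bits} classical trials, "
          f"~2^{bits // 2} quantum operations (Grover)")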
I don't see any commercial value in quantum cryptography. I don't believe it solves any security problem that needs solving. I don't believe that it's worth paying for, and I can't imagine anyone but a few technophiles buying and deploying it. Systems that use it don't magically become unbreakable, because the quantum part doesn't address the weak points of the system.
Our symmetric and public-key algorithms are pretty good, even though they're not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.
Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols. Maybe quantum cryptography can make that link stronger, but why would anyone bother? There are far more serious security problems to worry about, and it makes much more sense to spend effort securing those.
* The Economics of Spam
Results of research on the Storm worm infiltration:
"After 26 days, and almost 350 million e-mail messages, only 28 sales resulted: under 0.00001%. Of these, all but one were for male-enhancement products and the average purchase price was close to $100. These conversions would have resulted in revenues of $2,731.88- a bit over $100 a day for the measurement period or $140 per day for periods when the campaign was active. However, our study interposed on only a small fraction of the overall Storm network - we estimate roughly 1.5 percent based on the fraction of worker bots we proxy. Thus, the total daily revenue attributable to Storm's pharmacy campaign is likely closer to $7000 (or $9500 during periods of campaign activity). By the same logic, we estimate that Storm self-propagation campaigns can produce between 3500 and 8500 new bots per day.
"Under the assumption that our measurements are representative over time (an admittedly dangerous assumption when dealing with such small samples), we can extrapolate that, were it sent continuously at the same rate, Storm-generated pharmaceutical spam would produce roughly 3.5 million dollars of revenue in a year. This number could be even higher if spam-advertised pharmacies experience repeat business. A bit less than "millions of dollars every day," but certainly a healthy enterprise."
Of course, the authors point out that it's dangerous to make these sorts of generalizations.
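The extrapolation in that quote is straightforward arithmetic. A rough sketch of my own reproducing it, using the numbers from the quoted study (the per-day figures are approximate):

# Rough reproduction of the Storm spam revenue extrapolation from the quoted study.
emails  = 350_000_000    # messages sent over the 26-day measurement
sales   = 28             # resulting purchases
revenue = 2_731.88       # dollars over the measurement period
share   = 0.015          # fraction of worker bots the researchers proxied (~1.5%)

conversion         = sales / emails              # well under 0.00001%
daily_measured     = revenue / 26                # ~$105/day observed
daily_active       = 140                         # ~$140/day while the campaign was active (from the quote)
daily_whole_botnet = daily_measured / share      # ~$7,000/day
daily_whole_active = daily_active / share        # ~$9,300/day, near the quoted $9,500
yearly_estimate    = daily_whole_active * 365    # ~$3.4M, the quoted "roughly 3.5 million dollars"

print(f"conversion rate: {conversion:.8%}")
print(f"whole-botnet daily revenue: ${daily_whole_botnet:,.0f} (active periods: ${daily_whole_active:,.0f})")
print(f"extrapolated yearly revenue: ${yearly_estimate:,.0f}")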
* The Psychology of Con Men
These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he's just taken from the person's house, goes next door and then sells them the same lightbulbs again...
* Giving Out Replacement Hotel Room Keys
Guests lose their hotel room keys, and the hotel staff needs to be accommodating. But at the same time, they can't be giving out hotel room keys to anyone claiming to have lost one. Generally, hotels ask to see some ID before giving out a replacement key and, if the guest doesn't have his wallet with him, have someone walk to the room with the key and check their ID.
* P = NP?
There's a million-dollar prize for resolving the question.
length: 20:44m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0811.html
Podcast: Crypto-Gram 15 October 2008:
from the Oct 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* The Seven Habits of Highly Ineffective Terrorists
Most counterterrorism policies fail, not because of tactical problems, but because of a fundamental misunderstanding of what motivates terrorists in the first place. To defeat terrorism we need to understand that motivation.
Conventional wisdom holds that people become terrorists for political reasons.
If you believe this model, the way to fight terrorism is to change that equation, and that's what most experts advocate. Historically, none of these solutions has worked with any regularity.
Max Abrahms has studied dozens of terrorist groups from all over the world. He argues that the model is wrong. He theorizes that people turn to terrorism for social solidarity: people join terrorist organizations worldwide in order to be part of a community.
The evidence supports this:
1) Individual terrorists often have no prior involvement with a group's political agenda, and often join multiple terrorist groups with incompatible platforms.
2) Individuals who join terrorist groups are frequently not oppressed in any way, and often can't describe the political goals of their organizations.
3) People who join terrorist groups most often have friends or relatives who are members of the group.
4) The great majority of terrorists are socially isolated: unmarried young men or widowed women who weren't working prior to joining.
Solution:
- we can engage in strategies specifically designed to weaken the social bonds within terrorist organizations.
- pay more attention to the socially marginalized than to the politically downtrodden - support vibrant, benign communities and organizations as alternative ways for potential terrorists to get the social cohesion they need.
- minimize collateral damage in our counterterrorism operations
* The Two Classes of Airport Contraband
1) items that will get you in trouble if you try to bring them on an airplane, and
2) items that will cheerily be taken away from you if you try to bring them on an airplane.
This difference is important: Making security screeners confiscate anything from that second class is a waste of time. All it does is harm innocents; it doesn't stop terrorists at all.
If you're caught at airport security with a bomb or a gun, the screeners aren't just going to take it away from you -> you'll be arrested.
The screeners don't have to be perfect; they just have to be good enough. No terrorist is going to base his plot on getting a gun through airport security if there's a decent chance of getting caught, because the consequences of getting caught are too great.
But if you're caught with a bottle of liquid, the screeners will confiscate it without any consequences.
Hence, if it's really true that a terrorist can use a liquid bomb, he or she will try and try again until successful, and will never be caught.
* Nicholas Taleb on the Limitations of Risk Management
A lot of people have done some kind of "make-sense" type measures, and that has made them more vulnerable because they give the illusion of having done your job. This is the problem with risk management. I always come back to a classical question. Don't give a fool the illusion of risk management. Don't ask someone to guess the number of dentists in Manhattan after asking him the last four digits of his Social Security number. The numbers will always be correlated.
* Does Risk Management Make Sense?
"Risk management" is just a fancy term for the cost-benefit tradeoff associated with any security decision. It's what we do when we react to fear, or try to make ourselves feel secure.
Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments' budgets, and to their careers.
You can't completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That's what companies that manage risk for a living -- insurance companies, financial trading firms and arbitrageurs -- try to do. They try to replace intuition with models, and hunches with mathematics.
length: 18:42m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0810.html
Wednesday, July 15, 2009
Podcast: Crypto-Gram 15 September 2008: Security is not an investment that provides a return
from the Sep 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* Identity Farming
It seems to me that our data shadows are becoming increasingly distinct from us, almost with a life of their own. What's important now is our shadows; we're secondary. And as our society relies more and more on these shadows, we might even become unnecessary.
Our data shadows can live a perfectly normal life without us.
* Security ROI
Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.
Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off.
It's a good idea in theory, but it's mostly bunk in practice.
"ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.
While security can't produce ROI, loss prevention most certainly affects a company's bottom line.
The classic methodology is called annualized loss expectancy: ALE.
Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you're wasting money. Spend less than that, and you're also wasting money.
Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent - to 6 percent a year - then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it's worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn't.
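As a sketch of my own reproducing that arithmetic, using the numbers from the example above:

# Annualized Loss Expectancy (ALE) for the store-robbery example above.
incident_cost    = 10_000   # cost of being robbed, in dollars
base_probability = 0.10     # 10% chance of robbery per year

ale = incident_cost * base_probability
print(f"ALE: ${ale:,.0f} per year")   # $1,000/year: spend no more than this overall

def max_worthwhile_spend(risk_reduction):
    """Ceiling on yearly spending for a measure that cuts the robbery risk by this fraction."""
    return incident_cost * base_probability * risk_reduction

for reduction in (0.40, 0.50, 0.80):
    print(f"a measure cutting risk by {reduction:.0%} is worth at most "
          f"${max_worthwhile_spend(reduction):,.0f} per year")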
Cybersecurity is considerably harder, because there just isn't enough good data. There aren't good crime rates for cyberspace.
But there's another problem, and it's that the math quickly falls apart when it comes to rare and expensive events.
* Diebold Finally Admits its Voting Machines Drop Votes
It's unclear if this error is random or systematic. If it's random -- a small percentage of all votes are dropped -- then it is highly unlikely that this affected the outcome of any election. If it's systematic -- a small percentage of votes for a particular candidate are dropped -- then it is much more problematic.
* Full Disclosure and the Boston Fare Card Hack
The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors - who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.
Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes' existence, or calling them just theoretical. It wasn't until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this "responsible disclosure" protocol, it's the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.
length: 30:30m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0809.html
Podcast: Crypto-Gram 15 August 2008: Computers are also the only mass-market consumer item where the vendors accept no liability for faults.
from the Aug 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* Memo to the Next President
With security, the devil is always in the details.
I have three pieces of policy advice for the next president:
1) use your immense buying power to improve the security of commercial products and services.
2) legislate results and not methodologies.
Bad law is worse than no law. A law requiring companies to secure personal data is good; a law specifying what technologies they should use to do so is not. Mandating liability for software failures is good; detailing how is not.
3) broadly invest in research.
* Hacking Mifare Transport Cards
NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing.
The security of Mifare Classic is terrible. This is not an exaggeration; it's kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.
The Dutch court decided in favor of the group that broke Mifare Classic: "Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings."
Publication of this attack might be expensive for NXP and its customers, but it's good for security overall. Companies will only design security as good as their customers know to ask for. NXP's security was so bad because customers didn't know how to evaluate security: either they didn't know what questions to ask, or didn't know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.
* Information Security and Liabilities
A recent study of Internet browsers worldwide discovered that over half -- 52% -- of Internet Explorer users weren't using the current version of the software. For other browsers the numbers were better, but not much: 17% of Firefox users, 35% of Safari users, and 44% of Opera users were using an old version.
It's the system that's broken. There's no other industry where shoddy products are sold to a public that expects regular problems, and where consumers are the ones who have to learn how to fix them.
It is possible to write quality software. It is possible to sell software products that work properly, and don't need to be constantly patched. The problem is that it's expensive and time consuming. Software vendors won't do it, of course, because the marketplace won't reward it.
The key to fixing this is software liabilities. Computers are also the only mass-market consumer item where the vendors accept no liability for faults.
* Software Liabilities and Free Software
The key to understanding this is that this sort of contractual liability is part of a contract, and with free software -- or free anything -- there's no contract.
* TrueCrypt's Deniable File System
Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.
* The DNS Vulnerability
Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
I'm kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.
The real lesson is that the patch treadmill doesn't work, and it hasn't for years.
Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That's exactly the work-around being rolled out now following Kaminsky's discovery. Bernstein didn't discover Kaminsky's attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn't need to be patched; it's already immune to Kaminsky's attack.
It's not just secure against known attacks; it's also secure against unknown attacks.
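For intuition on why source port randomization helps (a back-of-the-envelope sketch of my own, not from the newsletter): an off-path attacker forging replies has to match the resolver's 16-bit query ID, and randomizing the UDP source port adds roughly another 16 bits the attacker must guess blind.

# Why source port randomization raises the bar for Kaminsky-style cache poisoning.
# With a fixed source port, a forged reply only has to match the 16-bit query ID.
query_id_bits = 16
usable_ports  = 64_000          # roughly the ephemeral port range a resolver can draw from

fixed_port_guesses  = 2 ** query_id_bits                 # ~65,536 possibilities
random_port_guesses = 2 ** query_id_bits * usable_ports  # ~4.2 billion possibilities

print(f"fixed source port:      1 in {fixed_port_guesses:,}")
print(f"randomized source port: 1 in {random_port_guesses:,}")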
length: 27:17m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0808.html
Podcast: Crypto-Gram 15 July 2008:
from the Jul 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* CCTV Cameras
Pervasive security cameras don't substantially reduce crime. There are exceptions, of course, and that's what gets the press.
The question really isn't whether cameras reduce crime; the question is whether they're worth it.
* Kill Switches and Remote Control
Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?
How do we prevent this from being abused? Can the police enforce the same rule to avoid another Rodney King incident? Do the police get "superuser" devices that cannot be limited, and do they get "supercontroller" devices that can limit anything? How do we ensure that only they get them, and what do we do when the devices inevitably fall into the wrong hands?
* LifeLock and Identity Theft
Maybe someday Congress will do the right thing and put LifeLock out of business by forcing lenders to verify identity every time they issue credit in someone's name.
* The First Interdisciplinary Workshop on Security and Human Behavior
In order to be effective, security must be usable -- not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.
* The Truth About Chinese Hackers
The popular media conception is that there is a coordinated attempt by the Chinese government to hack into U.S. computers.
These hacker groups seem not to be working for the Chinese government. They don't seem to be coordinated by the Chinese military. They're basically young, male, patriotic Chinese citizens, trying to demonstrate that they're just as good as everyone else.
The hackers are in this for two reasons:
1) fame and glory
2) an attempt to make a living.
Some of the hackers are good:
- become more sophisticated in both tools and techniques.
- stealthy.
- do good network reconnaissance.
- discover their own vulnerabilities.
* Man-in-the-Middle Attacks
Man-in-the-middle is defeated by context.
There are cryptographic solutions to MITM attacks, and there are secure web protocols that implement them. Many of them require shared secrets, though, making them useful only in situations where people already know and trust one another.
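A minimal sketch of my own (not from the newsletter) of what "requires shared secrets" looks like in practice: a challenge-response where each side proves knowledge of a pre-shared key without ever sending it, so a man-in-the-middle who merely relays traffic can't produce a valid answer. The out-of-band key exchange is assumed.

import hmac, hashlib, secrets

SHARED_KEY = b"exchanged-out-of-band"            # hypothetical pre-shared secret

def make_challenge():
    return secrets.token_bytes(16)

def respond(challenge, key=SHARED_KEY):
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge, response, key=SHARED_KEY):
    return hmac.compare_digest(respond(challenge, key), response)

c = make_challenge()
print(verify(c, respond(c)))                     # True: the peer knows the secret
print(verify(c, respond(c, b"wrong-guess")))     # False: a MITM without the secret fails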
length: 27:45m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0807.html
Podcast: Crypto-Gram 15 June 2008: put your sensitive data on a camera memory card.
from the Jun 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* The War on Photography
Given that real terrorists, and even wannabe terrorists, don't seem to photograph anything, why is it such pervasive conventional wisdom that terrorists photograph their targets?
Because it's a movie-plot threat.
* Crossing Borders with Laptops and PDAs
The best defense is to clean up your laptop. A customs agent can't read what you don't have.
Delete everything you don't absolutely need. And use a secure file erasure program to do it. While you're at it, delete your browser's cookies, cache and browsing history.
If you can't, consider putting your sensitive data on a USB drive or even a camera memory card.
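A minimal sketch (mine, not the newsletter's, and assuming the third-party Python "cryptography" package) of encrypting the data before it goes on the USB drive or memory card; in practice the key would be derived from a strong passphrase or carried separately from the card.

from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in reality: derive from a passphrase, keep it away from the card
fernet = Fernet(key)

secret_notes = b"account numbers, client list, ..."   # stand-in for the sensitive file's contents
blob = fernet.encrypt(secret_notes)                   # this opaque blob is what goes on the card

# at the destination, once the key is recovered:
assert fernet.decrypt(blob) == secret_notes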
* E-Mail After the Rapture
But what if the creator of this site isn't as scrupulous as he implies he is? What if he uses all of that account information, passwords, safe combinations, and whatever *before* any rapture? And even if he is an honest true believer, this seems like a mighty juicy target for any would-be identity thief.
* Fax Signatures
Our legal and business systems need to deal with the underlying problem -- false authentication -- rather than focus on the technology of the moment. Systems need to defend themselves against the possibility of fake signatures, regardless of how they arrive.
* More on Airplane Seat Cameras
How in the world are they "testing" this system without any real terrorists?
* How to Sell Security
It's a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses.
How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It's a choice between a small sure loss - the cost of the security product - and a large risky loss...
One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs.
The better solution is not to sell security directly, but to include it as part of a more general product or service.
length: 26:29m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0806.html
Tuesday, July 14, 2009
Podcast: Crypto-Gram 15 May 2008: No one wants to buy security. They want to buy something truly useful.
from the May 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* Dual-Use Technologies and the Equities Issue
The NSA has two roles:
1) eavesdrop on their stuff
2) protect our stuff
When both sides use the same stuff, the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff.
In the 1980s and before, the tendency of the NSA was to keep vulnerabilities to themselves. In the 1990s, the tide shifted, and the NSA was starting to open up and help us all improve our security defense. But after the attacks of 9/11, the NSA shifted back to the attack: vulnerabilities were to be hoarded in secret. Slowly, things in the U.S. are shifting back again.
* Crossing Borders with Laptops and PDAs
If you can't encrypt your HDD, consider putting your sensitive data on a USB drive or even a camera memory card. Encrypt it, slip it in your pocket, and it's likely to remain unnoticed even if the customs agent pokes through your laptop.
If someone does discover it, you can try saying: "I don't know what's on there. My boss told me to give it to the head of the New York office." If you've chosen a strong encryption password, you won't care if he confiscates it.
* The RSA Conference
Over 17,000 people
The problem is that most of the people attending the RSA Conference can't understand what the products do or why they should buy them. So they don't.
Commerce requires a meeting of minds between buyer and seller, and it's just not happening. The sellers can't explain what they're selling to the buyers, and the buyers don't buy because they don't understand what the sellers are selling.
No one wants to buy security. They want to buy something truly useful.
They don't want to have to become IT security experts.
What's happening instead is large IT outsourcing contracts that companies are signing - not security outsourcing contracts, but more general IT contracts that include security.
* Risk Preferences in Chimpanzees and Bonobos
People tend to be risk averse when it comes to gains, and risk seeking when it comes to losses - accept small gains rather than risking them for larger ones, and risk larger losses rather than accepting smaller losses.
length: 36:45m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0805.html
Podcast: Crypto-Gram 15 April 2008: Security mindset.
from the Apr 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* The Security Mindset
Security requires a particular mindset. Security professionals -- at least the good ones -- see the world differently. They can't walk into a store without noticing how they might shoplift. They can't use a computer without wondering about the security vulnerabilities. They can't vote without trying to figure out how to vote twice. They just can't help it.
We can't help it... <~ this is very cool, this is *exactly* how I feel. I cannot go into a bank without thinking about how I could rob it, or how I could smuggle a gun inside... same story at the airport... same story in shops: how I could steal, get into a movie without paying, exploit facebook... etc. Not that I want to do any of it, but I simply can't help thinking of every way to exploit a potential vulnerability..
I've often speculated about how much of this is innate, and how much is teachable. In general, I think it's a particular way of looking at the world, and that it's far easier to teach someone domain expertise -- cryptography or software security or safecracking or document forgery -- than it is to teach someone a security mindset.
I should start blogging about possible ways to exploit things all around me...
* The Feeling and Reality of Security
Security is both a feeling and a reality, and they're different. You can feel secure even though you're not, and you can be secure even though you don't feel it.
There is considerable value in separating out the two concepts: in explaining how the two are different, and understanding when we're referring to one and when the other. There is value as well in recognizing when the two converge, understanding why they diverge, and knowing how they can be made to converge again.
Rabbits that are good at making that trade-off (keep eating, or flee from a possible predator) will tend to reproduce, while the rabbits that are bad at it will tend to get eaten or starve.
People make most trade-offs based on the *feeling* of security and not the reality.
If we make security trade-offs based on the feeling of security rather than the reality, we choose security that makes us *feel* more secure over security that actually makes us more secure.
2 ways to make people feel more secure:
1) to make people actually more secure and hope they notice.
2) to make people feel more secure without making them actually more secure, and hope they don't notice.
The key here is whether we notice.
The feeling and reality of security tend to converge when we take notice, and diverge when we don't.
People notice when:
1) there are enough positive and negative examples to draw a conclusion
2) there isn't too much emotion clouding the issue.
length: 23:22m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0804.html
Podcast: Crypto-Gram 15 March 2008: Sooner or later the need to buy security will disappear.
from the Mar 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* Israel Implementing IFF System for Commercial Aircraft
Israel is implementing an IFF (identification, friend or foe) system for commercial aircraft, designed to differentiate legitimate planes from terrorist-controlled planes.
The critical issue with using this on commercial aircraft is how to deal with user error. The system has to be easy enough to use, and the parts hard enough to lose, that there won't be a lot of false alarms.
* Third Parties Controlling Information
link rot: bits and pieces of the web that disappear.
* The Doghouse: Drecom
They advertise 128-bit AES encryption, but they use XOR.
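For fun, a tiny sketch of my own (the key and messages are made up) of why a repeating-key XOR dressed up as "encryption" is worthless: one known plaintext/ciphertext pair hands over the key, which then unlocks everything else.

def xor_repeat(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"SEKRET"                                        # hypothetical vendor key
ct1 = xor_repeat(b"Meeting at noon, gate 4", key)
ct2 = xor_repeat(b"Wire the funds on Friday", key)

# the attacker knows (or guesses) the first message:
recovered = xor_repeat(ct1, b"Meeting at noon, gate 4")[:len(key)]
print(recovered)                                       # b'SEKRET'
print(xor_repeat(ct2, recovered))                      # and the second message falls immediately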
* Security Products: Suites vs. Best-of-Breed
The real problem is that neither solution really works, and we continually fool ourselves into believing whatever we don't have is better than what we have at the time. And the real solution is to buy results, not products.
No one wants to buy IT security. People want to buy whatever they want -- connectivity, a Web presence, email, networked applications, whatever -- and they want it to be secure. That they're forced to spend money on IT security is an artifact of the youth of the computer industry. And sooner or later the need to buy security will disappear.
length: 16:13m
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0803.html
Podcast: Crypto-Gram 15 February 2008: hackers have in fact successfully penetrated and extorted multiple utility companies that use SCADA systems
from the Feb 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* Security vs. Privacy
How much privacy are you willing to give up for security? Can we even afford privacy in this age of insecurity?
Security and privacy are not opposite ends of a seesaw; you don't have to accept less of one to get more of the other.
Benjamin Franklin: "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety."
It's also true that those who would give up privacy for security are likely to end up with neither.
* Anti-Missile Technology on Commercial Aircraft
Attaching an empty box to the belly of the plane and writing "Laser Anti-Missile System" on it would be just as effective a deterrent at a fraction of the cost.
* Lock-In
Computer companies want more control over the products they sell you, and they're resorting to increasingly draconian security measures to get that control. The reasons are economic.
* Hacking Power Networks
In the past two years, hackers have in fact successfully penetrated and extorted multiple utility companies that use SCADA systems.
Hundreds of millions of dollars have been extorted, and possibly more. It's difficult to know, because they pay to keep it a secret.
This kind of extortion is the biggest untold story of the cybercrime industry.
length: 25:05
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0802.html
Podcast: Crypto-Gram 15 January 2008: Real security happens long before anyone gets to an airport, a shopping mall, or wherever.
from the Jan 15, 2008 Crypto-Gram Newsletter
by Bruce Schneier
* Anonymity and the Netflix Dataset
Little information is required to de-anonymize information in the Netflix dataset.
87% of the population in the United States, 216 million of 248 million, could likely be uniquely identified by their five-digit ZIP code, combined with their gender and date of birth. About half of the U.S. population is likely identifiable by gender, date of birth and the city, town or municipality in which the person resides.
Narayanan and Shmatikov are currently working on developing algorithms and techniques that enable the secure release of anonymous datasets
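The arithmetic behind that claim is easy to check -- a back-of-the-envelope sketch of my own, with assumed round numbers: the count of (ZIP, gender, birth date) combinations dwarfs the population, so most people sit alone in their bucket.

zip_codes   = 42_000            # assumed approximate number of 5-digit ZIP codes in use
genders     = 2
birth_dates = 365 * 90          # roughly 90 years of plausible birth dates
population  = 248_000_000       # figure quoted above

buckets = zip_codes * genders * birth_dates
print(f"possible (ZIP, gender, DOB) buckets: {buckets:,}")           # ~2.8 billion
print(f"people per bucket on average: {population / buckets:.2f}")   # well below 1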
* "Where Should Airport Security Begin?"
Real security happens long before anyone gets to an airport, a shopping mall, or wherever.
* My Open Wireless Network
If someone did commit a crime using my network the police might visit, but what better defense is there than the fact that I have an open wireless network? If I enabled wireless security on my network and someone hacked it, I would have a far harder time proving my innocence.
length: 17:43
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0801.html
Podcast: Crypto-Gram 15 December 2007: Real security isn't something you build, it's something you get when you leave out all the other garbage
from the Dec 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* How to Secure Your Computer, Disks, and Portable Drives
Computer security is hard. Software, computer and network security are all ongoing battles between attacker and defender. The attacker has an inherent advantage: he only has to find one network flaw, while the defender has to find and fix every flaw.
Cryptography is an exception. As long as you don't write your own algorithm, secure encryption is easy. And the defender has an inherent mathematical advantage: Longer keys increase the amount of work the defender has to do linearly, while geometrically increasing the amount of work the attacker has to do.
Unfortunately, cryptography can't solve most computer-security problems.
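The defender's mathematical advantage is easy to put in numbers -- a schematic sketch of my own, not Schneier's: key handling cost grows about linearly with key length, while a brute-force search doubles with every added bit.

for bits in (64, 128, 256):
    defender_work = bits                 # schematic: proportional to key length
    attacker_work = 2.0 ** bits          # brute-force keyspace
    print(f"{bits}-bit key: defender ~{defender_work}, attacker ~{attacker_work:.3e} trials")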
I use PGP Disk's Whole Disk Encryption tool for two reasons. It's easy, and I trust both the company and the developers
PGP's encouragement of passphrases makes this much easier
PGP Disk can also encrypt external disks
PGP Disk's encrypted zip
If you're a Windows Vista user, you might consider BitLocker
Many people like the open-source and free program, TrueCrypt
* Defeating the Shoe Scanning Machine at Heathrow Airport
This works because the two security systems are decoupled. And the shoe screening machine is so crowded and chaotic, and so poorly manned, that no one notices the switch.
* Security in Ten Years
Roy Amara : "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
In 10 years, computers will be 100 times more powerful, but throughout history and into the future, the one constant is human nature. There hasn't been a new crime invented in millennia.
You can pass laws about locking barn doors after horses have left, but it won't put the horses back in the barn.
Computers will be even more important to our lives, economies and infrastructure. If you're right that crime remains a constant, and I'm right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.
I believe it's increasingly likely that we'll suffer catastrophic failures in critical infrastructure systems by 2017.
IT service trend - the ultimate way to lock in customers. The endpoints are not going to get any better. The trend is to continue putting all our eggs in one basket and blithely trusting that basket.
It's the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure, and the threat is in the communications system. But we know the real risks are the endpoints.
It's ironic the counterculture "hackers" have enabled (by providing an excuse) today's run-patch-run-patch-reboot software environment and tomorrow's software Stalinism.
I don't think we're going to start building real security. Because real security is not something you build - it's something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn't factor in patching and patch-related downtime, because if it did, the numbers would stink.
length: 21:26
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0712.html
Again, another 0day IE exploit in the wild... within one week (M$ Office Web Components ActiveX Control)
Interesting: within one week of the previous M$ IE 0day exploit in the wild, we now have a new one:
Microsoft Office Web Components Control Could Allow Remote Code Execution
It is interesting that when you read the M$ advisory, it does not say that you simply need to (unknowingly) visit a webpage that contains the exploit...
it is in the wild...
scary stuff.
Labels: security
Monday, July 13, 2009
Podcast: Crypto-Gram 15 November 2007:
from the Nov 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* The War on the Unexpected
If you act different, you might find yourself investigated, questioned, and even arrested -- even if you did nothing wrong, and had no intention of doing anything wrong. The problem is a combination of citizen informants and a CYA attitude among police that results in a knee-jerk escalation of reported threats.
the whole system is biased towards escalation and CYA instead of a more realistic threat assessment.
Someone sees something, so he says something. The person he says it to - a policeman, a security guard, a flight attendant - now faces a choice: ignore or escalate. Even though he may believe that it's a false alarm, it's not in his best interests to dismiss the threat. If he's wrong, it'll cost him his career. But if he escalates, he'll be praised for "doing his job" and the cost will be borne by others. So he escalates. And the person he escalates to also escalates, in a series of CYA decisions. And before we're done, innocent people have been arrested, airports have been evacuated, and hundreds of police hours have been wasted.
* Chemical Plant Security and Externalities
If the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn't even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident.
But to society, the cost of an actual attack can be much, much greater. A smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation's chemical plants are secured to a much smaller degree than the risk warrants.
In economics, this is called an 'externality': an effect of a decision not borne by the decision maker. The decision maker in this case, the chemical plant owner, makes a rational economic decision based on the risks and costs to him.
* Switzerland Protects its Vote with Quantum Cryptography
Moving data from point A to point B securely is one of the easiest security problems we have. Conventional encryption works great. PGP, SSL, SSH could all be used to solve this problem, as could pretty much any good VPN software package; there's no need to use quantum crypto for this at all. Software security, OS security, network security, and user security are much harder security problems; and quantum crypto doesn't even begin to address them.
* The Strange Story of Dual_EC_DRBG
Random numbers are critical for cryptography: for encryption keys, random authentication challenges, initialization vectors, nonces, key agreement schemes, generating prime numbers, and so on. Break the random number generator, and most of the time you break the entire security system. Which is why you should worry about a new random number standard that includes an algorithm that is slow, badly designed, and just might contain a backdoor for the NSA.
Generating random numbers isn't easy, and researchers have discovered lots of problems and attacks over the years. A recent paper found a flaw in the Windows 2000 random number generator; another paper found flaws in the Linux random number generator. Back in 1996, an early version of SSL was broken because of flaws in its random number generator.
Cryptographers are a conservative bunch; we don't like to use algorithms that have even a whiff of a problem.
The algorithm contains a weakness that can only be described as a backdoor.
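The practical takeaway for anyone writing code, as a short sketch of my own: use the operating system's cryptographic RNG (Python's secrets module here) for keys and nonces, never a general-purpose PRNG.

import secrets
import random

weak_token  = random.getrandbits(128)    # wrong for security: Mersenne Twister output is predictable
session_key = secrets.token_bytes(16)    # right: 128 bits from the OS CSPRNG
nonce       = secrets.token_hex(12)      # right: 96-bit nonce, hex-encoded

print(hex(weak_token), session_key.hex(), nonce)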
length: 27:36
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0711.html
Podcast: Crypto-Gram 15 October 2007: Storm ~ the future of malware.
from the Oct 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* The Storm Worm
The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: "230 dead as storm batters Europe." Those who opened the attachment became infected, their computers joining an ever-growing botnet.
It is really more: a worm, a Trojan horse and a bot all rolled into one. It's also the most successful example we have of a new breed of worm.
It is written by hackers looking for profit, and they're different. These worms spread more subtly, without making noise. Symptoms don't appear immediately, and an infected computer can sit dormant for a long time. If it were a disease, it would be more like syphilis, whose symptoms may be mild or disappear altogether, but which will eventually come back years later and eat your brain.
Storm represents the future of malware. Let's look at its behavior:
1. Storm is patient.
2. Storm is designed like an ant colony, with separation of duties. Only a small fraction of infected hosts spread the worm. A much smaller fraction are C2: command-and-control servers. The rest stand by to receive orders.
3. Storm doesn't cause any damage, or noticeable performance impact, to the hosts. Like a parasite, it needs its host to be intact and healthy for its own survival. This makes it harder to detect.
4. Rather than having all hosts communicate to a central server or set of servers, Storm uses a peer-to-peer network for C2. This makes the Storm botnet much harder to disable. The most common way to disable a botnet is to shut down the centralized control point. Storm doesn't have a centralized control point, and thus can't be shut down that way.
This technique has other advantages, too: traffic to a centralized C2 point shows up as a detectable spike, but distributed C2 doesn't, so the communications are much harder to detect.
One standard method of tracking root C2 servers is to put an infected host through a memory debugger and figure out where its orders are coming from. This won't work with Storm: An infected host may only know about a small fraction of infected hosts -- 25-30 at a time -- and those hosts are an unknown number of hops away from the primary C2 servers.
5. Not only are the C2 servers distributed, but they also hide behind a constantly changing DNS technique called "fast flux."
6. Storm's payload -- the code it uses to spread -- morphs every 30 minutes or so, making typical AV (antivirus) and IDS techniques less effective (see the toy sketch at the end of this section).
7. Storm's delivery mechanism also changes regularly.
8. The Storm e-mail also changes all the time, leveraging social engineering techniques.
9. Last month, Storm began attacking anti-spam sites focused on identifying it. I am reminded of a basic theory of war: Take out your enemy's reconnaissance.
Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it.
Oddly enough, Storm isn't doing much, so far, except gathering strength. Personally, I'm worried about what Storm's creators are planning for Phase II.
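On point 6 above, a toy sketch of my own (far simpler than Storm's real packer) of why a payload that re-encodes itself defeats signature matching: the same underlying code produces a different byte pattern, and therefore a different hash, every time it is repacked.

import hashlib, secrets

core_payload = b"...malicious logic stands in here..."   # placeholder bytes, nothing real

def repack(payload):
    """Re-encode the payload under a fresh one-byte XOR key (a real packer would prepend a decoder stub)."""
    key = secrets.randbelow(255) + 1
    return bytes([key]) + bytes(b ^ key for b in payload)

for _ in range(3):
    print(hashlib.sha256(repack(core_payload)).hexdigest()[:16])   # a different "signature" each run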
* Anonymity and the Tor Network
By joining Tor you join a network of computers around the world that pass Internet traffic randomly amongst each other before sending it out to wherever it is going.
It's called "onion routing," and it was first developed at the Naval Research Laboratory. The communications between Tor nodes are encrypted in a layered protocol -- hence the onion analogy -- but the traffic that leaves the Tor network is in the clear. It has to be.
Tor anonymizes, nothing more.
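A toy sketch of my own (nothing like Tor's real protocol, and it leans on the third-party "cryptography" package) of the onion idea: the sender wraps the message in one layer per relay, and each relay peels exactly one layer, so only the exit ends up with the cleartext.

from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]    # entry, middle, exit

def wrap(message, keys):
    for key in reversed(keys):            # encrypt for the exit first, work outward to the entry
        message = Fernet(key).encrypt(message)
    return message

onion = wrap(b"GET http://example.com/", relay_keys)
for key in relay_keys:                    # each relay strips its own layer, learning only the next hop
    onion = Fernet(key).decrypt(onion)
print(onion)                              # the exit relay ends up with the cleartext request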
length: 19:26
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0710.html
Podcast: Crypto-Gram 15 September 2007: Catastrophic points of failure
from the Sep 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* Basketball Referees and Single Points of Failure
Of all major sports, basketball is the most vulnerable to manipulation. There are only five players on the court per team, fewer than in other professional team sports; thus, a single player can have a much greater effect on a basketball game than he can in the other sports.
It's not just that basketball referees are single points of failure, it's that they're both trusted insiders and single points of catastrophic failure.
The best way to catch corrupt trusted insiders is through audit. The particular components of a system that have the greatest influence on the performance of that system need to be monitored and audited, even if the probability of compromise is low.
Most companies focus the bulk of their IT-security monitoring on external threats, but they should be paying more attention to internal threats.
All systems have trusted insiders. All systems have catastrophic points of failure. The key is recognizing them, and building monitoring and audit systems to secure them.
* Home Users: A Public Health Problem?
The only possible way to solve this problem is to force the ISPs to become IT departments. There's no reason why they can't provide home users with the same level of support my IT department provides me with. There's no reason why they can't provide "clean pipe" service to the home. Yes, it will cost home users more. Yes, it will require changes in the law to make this mandatory. But what's the alternative?
* Stupidest Terrorist Overreaction?
We screwed up, and we want someone to pay for our mistake.
* Getting Free Food at a Fast-Food Drive-In
Fast-food synchronization attack. By exploiting the limited information flow between the two windows, you can insert yourself into the pay-receive queue.
Fast-food restaurant with two drive-through windows: one where you order and pay, and the other where you receive your food. Wait until there is someone behind you and someone in front of you. Don't order anything at the first window. Tell the clerk that you forgot your money and didn't order anything. Then drive to the second window, and take the food that the person behind you ordered.
length: 23:15
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0709.html
Podcast: Crypto-Gram 15 August 2007: Disaster planning - You live in the safest society in the history of mankind.
from the Aug 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* Assurance
eVote testing: It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it's fixed, the system is again secure...
Yet again and again we react with surprise when a system has a vulnerability.
Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn't make us any more secure. If vulnerabilities are so common, finding a few doesn't materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn't more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn't mean that there's one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.
Brian Snow from NSA said they couldn't use modern commercial systems with their backward security thinking. Assurance was his antidote:
"Assurances are confidence-building activities demonstrating that:
"1. The system's security policy is internally consistent and reflects the requirements of the organization,
"2. There are sufficient security functions to support the security policy,
"3. The system functions to meet a desired set of properties and *only* those properties,
"4. The functions are implemented correctly, and
"5. The assurances *hold up* through the manufacturing, delivery and life cycle of the system."
* Avian Flu and Disaster Planning
If an avian flu pandemic broke out tomorrow, would your company be ready for it?
It's not that organizations don't spend enough effort on disaster planning, although that's true; it's that this really isn't the sort of disaster worth planning for.
There is a sweet spot, though, in disaster preparedness. Some disasters are too small or too common to worry about. And others are too large or too rare.
It makes no sense to plan for total annihilation of the continent, whether by nuclear or meteor strike: that's obvious.
You can only reasonably prepare for disasters that leave your world largely intact. If a third of the country's population dies, it's a different world. The economy is different, the laws are different -- the world is different. You simply can't plan for it; there's no way you can know enough about what the new world will look like. Disaster planning only makes sense within the context of existing society.
The proper place for bird flu planning is at the government level.
The key is preparedness. Much more important than planning, preparedness is about setting up social structures so that people fall into doing something sensible when things go wrong. Think of all the wasted effort -- and even more wasted *desire* -- to do something after Katrina because there was no way for most people to help. Preparedness is about getting people to react when there's a crisis. It's something the military trains its soldiers for.
You live in the safest society in the history of mankind.
length: 63:19
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0708.html
Friday, July 10, 2009
shit hit the fan... the "MSVIDCTL.DLL" issue is much more widespread than people originally thought...
I'll spare you what this 0day IE vulnerability could be exploited for (drive-by rootkit installation...), but it looks like the shit hit the fan...
Poking around MSVIDCTL.DLL
the issue is much wider & shittier...
now I know why M$ is taking so long to fix this: it's a 0day IE exploit in the wild, but no patch is coming :(
Labels: security
Thursday, July 9, 2009
Podcast: Crypto-Gram 15 July 2007: Data Reuse
from the Jul 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* Correspondent Inference Theory and Terrorism
People tend to infer the motives -- and also the disposition -- of someone who performs an action based on the effects of his actions, and not on external or situational factors.
Terrorism is more likely to work if:
1) the terrorists attack military targets more often than civilian ones.
2) they have minimalist goals like evicting a foreign power from their country or winning control of a piece of territory, rather than maximalist objectives like establishing a new political system in the country or annihilating another nation. But even so, terrorism is a pretty ineffective means of influencing policy.
* Risks of Data Reuse
Data reuse: Data collected for one purpose and then used for another.
2 bothersome issues about data reuse:
1) we lose control of our data.
2) error rate
time 26:06
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0707.html
Wednesday, July 8, 2009
Podcast: Crypto-Gram 15 June 2007: Leave decoy cash and jewelry in an obvious place so a burglar will think he's found your stash and then leave.
from the Jun 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* Rare Risk and Overreactions
If you want to do something that makes security sense, figure out what's common among a bunch of rare events, and concentrate your countermeasures there. Focus on the general risk of terrorism, and not the specific threat of airplane bombings using liquid explosives. Focus on the general risk of troubled young adults, and not the specific threat of a lone gunman wandering around a college campus. Ignore the movie-plot threats, and concentrate on the real risks.
* Tactics, Targets, and Objectives
If you encounter an aggressive lion, stare him down. But not a leopard; avoid his gaze at all costs. In both cases, back away slowly; don't run. If you stumble on a pack of hyenas, run and climb a tree; hyenas can't climb trees. But don't do that if you're being chased by an elephant; he'll just knock the tree down. Stand still until he forgets about you.
Leave decoy cash and jewelry in an obvious place so a burglar will think he's found your stash and then leave. And save the jewelry in a new secret place.
* Perpetual Doghouse: Meganet
cryptographic snake-oil
time 41:17
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0706.html
Tuesday, July 7, 2009
Podcast: Crypto-Gram 15 May 2007: the threat is no longer Big Brother, but instead thousands of Little Brothers.
from the May 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* A Security Market for Lemons
I use PGPdisk, but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.
Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There's no data self-destruct feature. The password protection can easily be bypassed. The data isn't even encrypted. As a secure storage device, Secustick is pretty useless.
In 1970, American economist George Akerlof wrote a paper called "The Market for 'Lemons,'" which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.
A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can't tell the difference -- at least until he's made his purchase. I'll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality. This means that the best cars don't get sold; their prices are too high. Which means that the owners of these best cars don't put their cars on the market. And then this starts spiraling. The removal of the good cars from the market reduces the average price buyers are willing to pay, and then the very good cars no longer sell, and disappear from the market. And then the good cars, and so on until only the lemons are left.
In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.
Solution: a signal -- a way for buyers to tell the difference.
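To make the spiral concrete, here is a minimal sketch (my own addition, not from the newsletter): the car qualities and the "withdraw if your car is worth more than the average bid" rule are invented simplifications, but they show how the market unravels.

```typescript
// Akerlof's "lemons" spiral in miniature. Each car has a quality its seller knows;
// buyers can only bid the average quality still on the market, so owners of
// better-than-average cars withdraw, the average falls, and the process repeats.
function lemonsSpiral(qualities: number[]): number[] {
  let market = [...qualities];
  for (;;) {
    const avg = market.reduce((sum, q) => sum + q, 0) / market.length;
    const stillForSale = market.filter((q) => q <= avg); // sellers above the bid walk away
    if (stillForSale.length === market.length) return market; // nobody else withdraws
    market = stillForSale;
  }
}

// Ten cars with qualities 1..10: the market collapses until only the worst lemon is left.
console.log(lemonsSpiral([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])); // -> [ 1 ]
```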
* Is Big Brother a Big Deal?
the threat is no longer Big Brother, but instead thousands of Little Brothers.
* More on REAL ID
As currently proposed, Real ID will fail for several reasons. From a technical and implementation perspective, there are serious questions about its operational abilities both to protect citizen information and resist attempts at circumvention by adversaries. Financially, the initial unfunded $11 billion cost, forced onto the states by the federal government, is excessive. And from a sociological perspective, Real ID will increase the potential for expanded personal surveillance and lay the foundation for a new form of class segregation in the name of protecting the homeland.
* Least Risk Bomb Location
Least Risk Bomb Location (LRBL): the place on an aircraft where a bomb would do the least damage if it exploded
All planes have a designated area where potentially dangerous packages should be placed. Usually it's in the back, adjacent to a door. There are a slew of procedures to be followed if an explosive device is found on board: depressurizing the plane, moving the item to the LRBL, and bracing/smothering it with luggage and other dense materials so that the force of the blast is directed outward, through the door.
• Social Engineering Notes
here's someone's story of social engineering a bank branch: "I enter the first branch at approximately 9:00AM. Dressed in Dickies coveralls, a baseball cap, work boots and sunglasses I approach the young lady at the front desk. 'Hello,' I say. 'John Doe with XYZ Pest Control, here to perform your pest inspection.' I flash her the smile followed by the credentials. She looks at me for a moment, goes 'Uhm… okay… let me check with the branch manager…' and picks up the phone. I stand around twiddling my thumbs and wait while the manager is contacted and confirmation is made. If all goes according to plan, the fake emails I sent out last week notifying branch managers of our inspection will allow me access. It does."
• Is Penetration Testing Worth It?
Given enough time and money, a pen test will find vulnerabilities; there's no point in proving it. And if you're not going to fix all the uncovered vulnerabilities, there's no point uncovering them. But there is a way to do penetration testing usefully. For years I've been saying security consists of protection, detection and response--and you need all three to have good security. Before you can do a good job with any of these, you have to assess your security. And done right, penetration testing is a key component of a security assessment.
I like to restrict penetration testing to the most commonly exploited critical vulnerabilities, like those found on the SANS Top 20 list. If you have any of those vulnerabilities, you really need to fix them.
• Do We Really Need a Security Industry?
IT security is getting harder -- increasing complexity is largely to blame -- and the need for aftermarket security products isn't disappearing anytime soon. But there's no earthly reason why users need to know what an intrusion-detection system with stateful protocol analysis is, or why it's helpful in spotting SQL injection attacks. The whole IT security industry is an accident -- an artifact of how the computer industry developed. As IT fades into the background and becomes just another utility, users will simply expect it to work -- and the details of how it works won't matter.
time 41:10
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0705.html
time must be used as it comes
Money can be saved for later. And money that is wasted can be earned back. But time must be used as it comes, for once a moment is past it will never come again.
~ Ralph Marston
Labels: quote of the day
IE 0day exploit in the wild.... exploiting M$ Video streaming ActiveX control MsVidCtl
I find it very interesting that a vulnerability disclosed to M$ last year by Ryan Smith and Alex Wheeler of Hustle Labs of ISS X-Force
has now become an IE 0day exploit running wild...
M$ info here
Labels: security
Podcast: Crypto-Gram 15 April 2007: Limiting the degree to which each individual must be trusted
from the Apr 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* JavaScript Hijacking
JavaScript hijacking is a new type of eavesdropping attack against Ajax-style Web applications. The attack is possible because Web browsers don't protect JavaScript the same way they protect HTML; if a Web application transfers confidential data using messages written in JavaScript, in some cases the messages can be read by an attacker.
Like so many of these sorts of vulnerabilities, preventing the class of attacks is easy. In many cases, it requires just a few additional lines of code. And like so many software security problems, programmers need to understand the security implications of their work so they can mitigate the risks they face. But my guess is that JavaScript hijacking won't be solved so easily, because programmers don't understand the security implications of their work and won't prevent the attacks.
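As a rough illustration of the kind of "few additional lines of code" fix being alluded to, here is a hedged sketch of one commonly cited class of defense: refuse requests that lack a custom header (which a cross-site script include cannot set) and prefix the JSON with an unexecutable guard. The endpoint path, header name, data, and prefix are illustrative assumptions, not anything from the newsletter.

```typescript
// Minimal sketch of a JSON endpoint hardened against JavaScript hijacking,
// using Node's built-in http module (Node 18+; paths and data are made up).
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url === "/api/messages") {
    // A cross-site <script src="..."> include sends no custom headers,
    // so requiring one blocks that hijack vector.
    if (req.headers["x-requested-with"] !== "XMLHttpRequest") {
      res.writeHead(403);
      res.end("Forbidden");
      return;
    }
    const payload = JSON.stringify([{ from: "alice", text: "hello" }]);
    res.writeHead(200, { "Content-Type": "application/json" });
    // ")]}',\n" is a syntax error if the response is ever evaluated as a script,
    // so it cannot run in an attacker's page; the legitimate client strips it
    // before calling JSON.parse().
    res.end(")]}',\n" + payload);
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000);
```

The prefix trick is why some real-world APIs return responses that start with a junk guard: their own clients strip it before parsing, while a hijacking page gets nothing executable.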
* U.S. Government Contractor Injects Malicious Software into Critical Military Computers
One of the ways to deal with the problem of trusted individuals is by making sure they're trustworthy. The clearance process is supposed to handle that. But given the enormous damage that a single person can do here, it makes a lot of sense to add a second security mechanism: limiting the degree to which each individual must be trusted. A decent system of code reviews, or change auditing, would go a long way to reduce the risk of this sort of thing.
time 13:11
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0704.html
Podcast: Crypto-Gram 15 Mar 2007: CYA Security
from the Mar 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* CYA Security
Much of our country's counterterrorism security spending is not designed to protect us from the terrorists, but instead to protect our public officials from criticism when another attack occurs.
This is "Cover Your Ass" security, and unfortunately it's very common.
* Copycats
The lesson for counterterrorism in America: Stay flexible. We're not threatened by a bunch of copycats, so we're best off expending effort on security measures that will work regardless of the tactics or the targets: intelligence, investigation and emergency response. By focusing too much on specifics -- what the terrorists did last time -- we're wasting valuable resources that could be used to keep us safer.
* U.S Terrorism Arrests/Convictions Significantly Overstated
A new report from the U.S. Department of Justice's Inspector General says, basically, that all the U.S. terrorism statistics since 9/11 -- arrests, convictions, and so on -- have been grossly inflated.
* The Doghouse: Onboard Threat Detection System
Cameras fitted to seat-backs will record every twitch, blink, facial expression or suspicious movement before sending the data to onboard software which will check it against individual passenger profiles.
* Private Police Forces
Private security guards outnumber real police more than 5 to 1, and increasingly act like them.
Private police officers are different. They don't work for us; they work for corporations. They're focused on the priorities of their employers or the companies that hire them. They're less concerned with due process, public safety and civil rights.
Also, many of the laws that protect us from police abuse do not apply to the private sector.
If you're detained by a private security guard, you don't have nearly as many rights.
* Drive-By Pharming
Sid Stamm, Zulfikar Ramzan, and Markus Jakobsson have developed a clever, and potentially devastating, attack against home routers, something they call "drive-by pharming."
First, the attacker creates a web page containing a simple piece of malicious JavaScript code. When the page is viewed, the code makes a login attempt into the user's home broadband router, and then attempts to change its DNS server settings to point to an attacker-controlled DNS server. Once the user's machine receives the updated DNS settings from the router (after the machine is rebooted) future DNS requests are made to and resolved by the attacker's DNS server.
And then the attacker basically owns the victim's web connection.
The main condition for the attack to be successful is that the attacker can guess the router password. This is surprisingly easy, since home routers come with a default password that is uniform and often never changed.
They've written proof of concept code that can successfully carry out the steps of the attack on Linksys, D-Link, and NETGEAR home routers. If users change their home broadband router passwords to something difficult to guess, they are safe from this attack.
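Since the whole attack hinges on factory-default router passwords, here is a hedged self-check sketch (my own addition, not from the newsletter): the router address, the Basic-auth assumption, and the credential list are all guesses you would adapt to your own device.

```typescript
// Does my own router still accept factory-default credentials? That is the one
// condition drive-by pharming relies on. Assumes a Basic-auth admin page at
// 192.168.1.1 and Node 18+ for the built-in fetch.
const ROUTER = "http://192.168.1.1/";        // assumed admin interface address
const DEFAULTS: Array<[string, string]> = [  // a few commonly shipped defaults (assumed)
  ["admin", "admin"],
  ["admin", "password"],
  ["admin", ""],
];

async function checkDefaults(): Promise<void> {
  for (const [user, pass] of DEFAULTS) {
    const auth = "Basic " + Buffer.from(`${user}:${pass}`).toString("base64");
    try {
      const res = await fetch(ROUTER, { headers: { Authorization: auth } });
      if (res.ok) {
        console.log(`WARNING: router accepted default credentials ${user}/${pass || "(blank)"}`);
        return;
      }
    } catch {
      console.log("Router admin page not reachable at", ROUTER);
      return;
    }
  }
  console.log("Router rejected the default credentials tested.");
}

checkDefaults();
```

None of this replaces the actual fix mentioned above: change the router's default password to something hard to guess.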
time 24:15
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0703.html
Monday, July 6, 2009
Podcast: Crypto-Gram 28 Feb 2007: Security is both a feeling and a reality
from the Feb 28, 2007 Crypto-Gram Newsletter
by Bruce Schneier
A special edition: THE PSYCHOLOGY OF SECURITY -- DRAFT
Security is both a feeling and a reality. And they're not the same.
The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. Given enough data, we can calculate how secure something is.
But security is also a feeling, based not on probabilities and mathematical calculations, but on your psychological reactions to both risks and countermeasures.
The Feeling of security: where it comes from, how it works, and why it diverges from the reality of security.
Four fields of research -- two very closely related -- can help illuminate this issue.
1. Behavioral economics: looks at human biases - emotional, social, and cognitive - and how they affect economic decisions.
2. The psychology of decision-making ~ rationality: examines how we make decisions - explain the divergence between the feeling and the reality of security and, more importantly, where that divergence comes from.
3. The psychology of risk: trying to figure out when we exaggerate risks and when we downplay them.
4. Neuroscience: psychology of security is intimately tied to how we think: both intellectually and emotionally.
Over the millennia, our brains have developed complex mechanisms to deal with threats. Understanding how our brains work, and how they fail, is critical to understanding the feeling of security.
Security is a trade-off. There's no such thing as absolute security, and any gain in security always involves some sort of trade-off.
Security costs money, but it also costs in time, convenience, capabilities, liberties, and so on.
"Is this effective against the threat?" is the wrong question to ask. You need to ask: "Is it a good trade-off?"
We get it wrong all the time. We exaggerate some risks while minimizing others. We exaggerate some costs while minimizing others. The truth is that we're not hopelessly bad at making security trade-offs.
We are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. It's just that the environment of New York in 2007 is different from Kenya circa 100,000 BC. And so our feeling of security diverges from the reality of security, and we get things wrong.
There are several specific aspects of the security trade-off that can go wrong. For example:
1. The severity of the risk.
2. The probability of the risk.
3. The magnitude of the costs.
4. How effective the countermeasure is at mitigating the risk.
5. How well disparate risks and costs can be compared.
The more your perception diverges from reality in any of these five aspects, the more your perceived trade-off won't match the actual trade-off.
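Those five aspects feed into an implicit expected-value comparison. Here is a back-of-the-envelope sketch with invented numbers (my own addition, not from the essay), just to show how getting any one input wrong flips the answer.

```typescript
// A countermeasure is roughly worth it when the expected loss it prevents exceeds its cost.
interface TradeOff {
  probability: number;   // aspect 2: chance of the bad event per year
  severity: number;      // aspect 1: loss if it happens
  effectiveness: number; // aspect 4: fraction of that loss the countermeasure prevents
  cost: number;          // aspect 3: what the countermeasure costs per year
}

function isGoodTradeOff(t: TradeOff): boolean {
  const expectedLossAvoided = t.probability * t.severity * t.effectiveness;
  return expectedLossAvoided > t.cost; // aspect 5: comparing the two at all
}

// Same countermeasure, two different beliefs about the probability: the answer flips.
console.log(isGoodTradeOff({ probability: 0.01,   severity: 100_000, effectiveness: 0.5, cost: 200 })); // true
console.log(isGoodTradeOff({ probability: 0.0001, severity: 100_000, effectiveness: 0.5, cost: 200 })); // false
```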
But there are divergences between perception and reality that can't be explained that easily.
Why is it that, even if someone knows that automobiles kill 40,000 people each year in the U.S. alone, and airplanes kill only hundreds worldwide, he is more afraid of airplanes than automobiles?
These irrational trade-offs can be explained by psychology.
It's critical to understanding why, as a successful species on the planet, we make so many bad security trade-offs.
Most of the time, when the perception of security doesn't match the reality of security, it's because the perception of the risk doesn't match the reality of the risk.
There are some general pathologies that come up over and over again:
* People exaggerate spectacular but rare risks and downplay common risks.
* People have trouble estimating risks for anything not exactly like their normal situation.
* Personified risks are perceived to be greater than anonymous risks.
* People underestimate risks they willingly take and overestimate risks in situations they can't control.
* Last, people overestimate risks that are being talked about and remain an object of public scrutiny.[1]
David Ropeik and George Gray have a longer list in their book _Risk: A Practical Guide for Deciding What's Really Safe and What's Really Dangerous in the World Around You_:
* Most people are more afraid of risks that are new than those they've lived with for a while.
* Most people are less afraid of risks that are natural than those that are human-made.
* Most people are less afraid of a risk they choose to take than of a risk imposed on them.
* Most people are less afraid of risks if the risk also confers some benefits they want.
* Most people are more afraid of risks that can kill them in particularly awful ways, than they are of the risk of dying in less awful ways.
* Most people are less afraid of a risk they feel they have some control over and more afraid of a risk they don't control.
* Most people are less afraid of risks that come from places, people, corporations, or governments they trust, and more afraid if the risk comes from a source they don't trust.
* We are more afraid of risks that we are more aware of and less afraid of risks that we are less aware of.
* We are much more afraid of risks when uncertainty is high, and less afraid when we know more.
* Adults are much more afraid of risks to their children than risks to themselves.
* You will generally be more afraid of a risk that could directly affect you than a risk that threatens others.
The human brain is a fascinating organ, but an absolute mess. Because it has evolved over millions of years, there are all sorts of processes jumbled together rather than logically organized. Some of the processes are optimized for only certain kinds of situations, while others don't work as well as they could. And there's some duplication of effort, and even some conflicting brain processes.
Assessing and reacting to risk is one of the most important things a living creature has to deal with, and there's a very primitive part of the brain that has that job.
The amygdala is responsible for processing base emotions that come from sensory inputs, like anger, avoidance, defensiveness, and fear. It's an old part of the brain, and seems to have originated in early fishes. It's what causes adrenaline and other hormones to be pumped into your bloodstream, triggering the fight-or-flight response, causing increased heart rate and beat force, increased muscle tension, and sweaty palms.
This kind of thing works great if you're a lizard or a lion. Fast reaction is what you're looking for; the faster you can notice threats and either run away from them or fight back, the more likely you are to live to reproduce.
But the world is actually more complicated than that. Some scary things are not really as risky as they seem, and others are better handled by staying in the scary situation to set up a more advantageous future response. This means that there's an evolutionary advantage to being able to hold off the reflexive fight-or-flight response while you work out a more sophisticated analysis of the situation and your options for dealing with it.
The neocortex is a more advanced part of the brain that developed very recently, evolutionarily speaking, and appears only in mammals. It's intelligent and analytic. It can reason. It can make more nuanced trade-offs. It's also much slower.
So here's the first fundamental problem: we have two systems for reacting to risk -- a primitive intuitive system and a more advanced analytic system -- and they're operating in parallel. And it's hard for the neocortex to contradict the amygdala.
time 87:05
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0702a.html
Podcast: Crypto-Gram 15 Feb 2007: You can be secure even though you don't feel secure, and you can feel secure even though you're not really secure.
from the Feb 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* In Praise of Security Theater
1-in-375,000 chance of baby abduction
VS
1-in-415 chance of infant mortality
Yet to prevent infant abduction, all babies had RFID tags attached to their ankles with a bracelet.
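The arithmetic behind the comparison, for what it's worth:

```typescript
// The two risks quoted above, side by side: infant mortality is roughly 900 times
// more likely than abduction, yet the countermeasure targets abduction.
const pAbduction = 1 / 375_000;
const pMortality = 1 / 415;
console.log((pMortality / pAbduction).toFixed(0)); // ~ "904"
```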
Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures.
But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don't feel secure, and you can feel secure even though you're not really secure.
The RFID bracelets are what I've come to call security theater: security primarily designed to make you *feel* more secure.
Like real security, security theater has a cost. It can cost money, time, concentration, freedoms, and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.
Too much security theater and our feeling of security becomes greater than the reality, which is also bad. But to write off security theater completely is to ignore the feeling of security.
* Real-ID: Costs and Benefits
Real ID is another lousy security trade-off. It'll cost the United States at least $11 billion, and we won't get much security in return.
* Debating Full Disclosure
Full disclosure -- the practice of making the details of security vulnerabilities public -- is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.
Unfortunately, secrecy *sounds* like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers. The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.
But that assumes that hackers can't discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false.
To understand why the second assumption isn't true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you -- the user -- much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.
So a bunch of software companies, and some security researchers, banded together and invented "responsible disclosure." The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.
This was a good idea -- and these days it's normal procedure -- but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.
Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn't improve security; it stifles it.
* Sending Photos to 911 Operators
Since 1968, the 911 system has evolved smartly with the times. Calls are now automatically recorded. Callers are now automatically located by phone number or cell phone location. This plan is the next logical evolution.
* DRM in Windows Vista
Windows Vista includes an array of "features" that you don't want. These features will make your computer less reliable and less secure. They'll make your computer less stable and run slower. They will cause technical support problems. They may even require you to upgrade some of your peripheral hardware and existing software. And these features won't do anything useful. In fact, they're working against you. They're digital rights management (DRM) features built into Vista at the behest of the entertainment industry.
And you don't get to refuse them.
* A New Secure Hash Standard
The U.S. National Institute of Standards and Technology is having a competition for a new cryptographic hash function.
time 37:37
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0702.html
Podcast: Crypto-Gram 15 Jan 2007: The real threat is the alliance between the Gov & private industry.
from the Jan 15, 2007 Crypto-Gram Newsletter
by Bruce Schneier
* Automated Targeting System
The Automated Targeting System assigns a "risk assessment" score to people entering or leaving the country, or engaging in import or export activity. This score, and the information used to derive it, can be shared with federal, state, local and even foreign governments. It can be used if you apply for a government job, grant, license, contract or other benefit. It can be shared with nongovernmental organizations and individuals in the course of an investigation. In some circumstances private contractors can get it, even those outside the country. And it will be saved for 40 years.
* Wal-Mart Stays Open During Bomb Scare
A Wal-Mart store in Mitchell, South Dakota receives a bomb threat. The store managers decide not to evacuate while the police search for the bomb.
I think this is a good sign. It shows that people are thinking rationally about security trade-offs, and not thoughtlessly being terrorized.
* Auditory Eavesdropping
The threats to privacy in the information age are not solely from government; they're from private industry as well. And the real threat is the alliance between the two.
* NSA Helps Microsoft with Windows Vista
NSA has two roles: eavesdrop on their stuff, and protect our stuff.
When both sides use the same stuff -- Windows Vista, for example -- the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff. In its partnership with Microsoft, it could have decided to go either way: to deliberately introduce vulnerabilities that it could exploit, or deliberately harden the OS to protect its own interests.
time 28:51
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0701.html
Saturday, July 4, 2009
I always thought Beethoven sucks... and of course Bach rules
[Embedded clip: The Daily Show with Jon Stewart (Mon - Thurs 11p / 10c), Oliver Sacks]
"what do you like more?"
"i like all together..."
briliant
Labels: evolution
Podcast: Crypto-Gram 15 Dec 2006: random errors and systemic errors
from the Dec 15, 2006 Crypto-Gram Newsletter
by Bruce Schneier
There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random - equally likely to happen to anyone. In a close race, random errors won't change the result because votes intended for candidate A that mistakenly go to candidate B happen at the same rate as votes intended for B that mistakenly go to A.
Historically, recounts in close elections rarely change the result. The recount will find the few percent of the errors in each direction, and they'll cancel each other out.
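A quick Monte Carlo sketch of the argument (my own illustration, with made-up error rates): symmetric random errors leave the leader ahead, while a one-directional systemic error can flip a close result.
import random

def run_election(n_voters=100_000, true_share_a=0.505,
                 random_err=0.02, systemic_a_to_b=0.0):
    """Tally an election in which individual ballots can be misrecorded."""
    a = b = 0
    for _ in range(n_voters):
        vote_a = random.random() < true_share_a           # the voter's intent
        if random.random() < random_err:                  # symmetric random error
            vote_a = not vote_a
        if vote_a and random.random() < systemic_a_to_b:  # error in one direction only
            vote_a = False
        if vote_a:
            a += 1
        else:
            b += 1
    return a, b

random.seed(1)
print("random errors only:  ", run_election())                      # A almost always stays ahead
print("plus systemic errors:", run_election(systemic_a_to_b=0.02))  # A's narrow lead usually flips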
time 29:11
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0612.html
Podcast: Crypto-Gram 15 Nov 2006: perceived vs actual risk
from the Nov 15, 2006 Crypto-Gram Newsletter
by Bruce Schneier
* Voting Technology and Security
Voting accuracy, therefore, is a matter of:
1) minimizing the number of steps
2) increasing the reliability of each step.
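A back-of-the-envelope illustration of why both points matter (the step counts and reliabilities are made-up numbers): each extra step multiplies in another chance of error.
# If each of n independent steps handles a vote correctly with probability p,
# the chance the whole chain gets it right is roughly p ** n.
for steps in (3, 6, 10):
    for p in (0.999, 0.99):
        print(f"{steps} steps at {p} each -> {p ** steps:.4f} end-to-end accuracy")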
Electronic voting is like an iceberg; the real threats are below the waterline where you can't see them. Paperless electronic voting machines bypass that security process, allowing a small group of people -- or even a single hacker -- to affect an election.
The solution is surprisingly easy: The trick is to use electronic voting machines as ballot-generating machines. Vote by whatever automatic touch-screen system you want: a machine that keeps no records or tallies of how people voted, but only generates a paper ballot. The voter can check it for accuracy, then process it with an optical-scan machine.
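A toy sketch of that flow (the function names and candidates are invented for illustration; this is not any real voting system's code): the touch-screen machine only prints a ballot, the voter verifies it, and the optical scan of the paper is the count that matters.
def touch_screen_session(choice):
    """Ballot-generating machine: keeps no tally, only emits a paper ballot."""
    return f"OFFICIAL BALLOT\nPresident: {choice}\n"

def voter_verifies(ballot, intended_choice):
    """The voter reads the printed ballot before casting it."""
    return intended_choice in ballot

def optical_scan_count(ballots):
    """Only the paper ballots, not the touch-screen machines, get counted."""
    tally = {}
    for ballot in ballots:
        choice = ballot.splitlines()[1].split(": ", 1)[1]
        tally[choice] = tally.get(choice, 0) + 1
    return tally

cast = []
for intended in ["Alice", "Bob", "Alice"]:
    ballot = touch_screen_session(intended)
    if voter_verifies(ballot, intended):      # a mis-printed ballot would be caught here
        cast.append(ballot)

print(optical_scan_count(cast))               # {'Alice': 2, 'Bob': 1}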
* The Inherent Inaccuracy of Voting
There are two basic types of voting errors: random errors and systemic errors.
Random errors are just that, random. Votes intended for A that mistakenly go to B are just as likely as votes intended for B that mistakenly go to A. This is why, traditionally, recounts in close elections are unlikely to change things. The recount will find the few percent of the errors in each direction, and they'll cancel each other out. But in a very close election, a careful recount will yield a more accurate - but almost certainly not perfectly accurate - result.
Systemic errors are more important, because they will cause votes intended for A to go to B at a different rate than the reverse.
This is where the problems of electronic voting machines become critical: their errors are more likely to be systemic.
* Perceived Risk vs. Actual Risk
Reasons why some risks are perceived to be more or less serious than they actually are:
1) We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena.
2) We over-react to things that offend our morals.
3) We over-react to immediate threats and under-react to long-term threats.
4) We under-react to changes that occur slowly and over time.
Perceived vs actual risk:
1) People exaggerate spectacular but rare risks and downplay common risks.
2) People have trouble estimating risks for anything not exactly like their normal situation.
3) Personified risks are perceived to be greater than anonymous risks.
4) People underestimate risks they willingly take and overestimate risks in situations they can't control.
5) People overestimate risks that are being talked about and remain an object of public scrutiny.
time 60:38
PS: this is my cheat sheet of Bruce Schneier's Podcast:
http://www.schneier.com/crypto-gram-0611.html