Wednesday 24 October 2012

5 Passwords you should never pick

I wanted to write a post about which passwords are best and how to devise a strategy for picking a good one.
Then I realized it would be pointless: the moment you publish a strategy for forming good passwords, it becomes a manual for crackers and might be built into brute-force tools.

What I will tell you instead is the 5 passwords you should NEVER pick.

1. password, 123456, qwerty and hunter2.

The first two are among the most used passwords of all time. There have been many password leaks, and the Yahoo leak, which exposed unencrypted passwords and usernames (foolish, I know), made some interesting statistics possible: of the 450,000 passwords leaked, an astonishing 0.38% were 123456 and 0.18% were password. No wonder those are the first passwords a cracker would check.

2. Vocabulary words.

Brute-forcers already have methods to quickly spot dictionary words. Even a random, letters-only 3-character string would be safer than a vocabulary word.

3. Passwords without numbers.

Using numbers increases the possible characters used from 26 to 36, which becomes hugely significant if combined with a long password.

4. Passwords without capitals.

Using capitals doubles the possible characters, so from 26 we get 52, which combined with numbers gives 62. Symbols can be used as well to add extra security to shorter passwords, but many websites do not accept symbols in passwords.

5. L33t speak.

Crackers knew leet speak even before normal users did, and it is already used in brute-forcing passwords. If you don't know what it is, it is a technique for swapping letters with the digits that look like them:

O -> 0
I -> 1
Z -> 2
E -> 3
A -> 4
S -> 5
G -> 6
T -> 7
B -> 8

This method bypasses the vocabulary-word check and can make a password look reasonably random, but it has become too popular.
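To see why leet speak adds so little, here is a minimal sketch of how a cracking tool could mechanically expand a dictionary word into every leet variant, using the substitution table above (real crackers use rule engines that work much like this):

```python
from itertools import product

# The substitution table from above, inverted: letter -> digit
LEET = {"o": "0", "i": "1", "z": "2", "e": "3", "a": "4",
        "s": "5", "g": "6", "t": "7", "b": "8"}

def leet_variants(word):
    """Yield every leet-speak variant of a dictionary word."""
    # For each letter, allow either the original or its leet substitute
    choices = [(c, LEET[c]) if c in LEET else (c,) for c in word.lower()]
    for combo in product(*choices):
        yield "".join(combo)

variants = list(leet_variants("pass"))
# "pass" has 3 substitutable letters (a, s, s), so there are
# 2**3 = 8 variants, from "pass" itself up to "p455"
```

A word with n substitutable letters only multiplies the cracker's work by 2^n, a tiny factor compared to the cost of a truly random password.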

This is exactly why it is not good to publish encoding methods for forming passwords: they will end up in future generations of brute-force software. It is much safer to invent your own.


Still, I can tell you one good method that will not give much away to crackers:

use mnemonics!

Transforming a sentence only you know into letters and numbers is as good as a totally random sequence of characters. For example:

I hate to wake up at 8 o'clock every Monday

will become:
Ihtwua8o'ceM

which gives roughly 79 bits of entropy, which is safe enough. It might seem hard to memorize, but it is very easy to reconstruct if you forget it, and as safe as it gets: it is one of 5.4036 x 10^23 possibilities and would take 1.7135 x 10^13 years to find at 1,000 guesses per second.
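You can check these figures yourself. The sketch below assumes an attacker must treat each of the 12 characters as any of the 95 printable ASCII characters, which is what makes the mnemonic as strong as a random string:

```python
import math

password = "Ihtwua8o'ceM"   # the 12-character example above
charset = 95                # printable ASCII characters

combinations = charset ** len(password)
entropy_bits = len(password) * math.log2(charset)

# Worst-case search time at 1,000 guesses per second
years = combinations / 1000 / (3600 * 24 * 365)

print(f"{combinations:.4e} possibilities")    # ~5.4036e23
print(f"{entropy_bits:.1f} bits of entropy")  # ~78.8
print(f"{years:.4e} years to search")         # ~1.7135e13
```

If the attacker guesses that only letters and a few digits are used, the search space shrinks, which is why keeping your sentence private matters.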

This is an excellent method (the only weakness is if someone manages to guess your initial sentence, which destroys its safety completely; but as long as you did not pick something common, such as the first lines of a popular song or poem, it will be safe enough). There are many other ways to create passwords that are easy to remember yet use one or more transformations like the one above. I will let you have fun finding your own method.

But why use encryption?

It is a good way to get passwords that are easy to remember but hard to guess. Of course, the method must be known only to you and be memorable enough.

Another good suggestion is not to reuse the same password across many websites. Some websites may not bother to store passwords safely (even Yahoo, as we saw above), and a leak will give away your ultra-safe, encrypted password, the one you also happen to use for your internet banking. Surveys suggest that around 60% of people use the same password for every service.

There are, of course, also ways to derive a different password for each website from one memorable base, leaving you a set of distinct passwords with only a single method to remember. I will leave you the fun of finding a good one.
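As one illustrative approach (not the only one, and the site names here are made up), you can feed your memorized secret and the site's name through a keyed hash, so each site gets an unrelated password while you remember only the secret:

```python
import base64
import hashlib
import hmac

def site_password(master_secret, site_name, length=12):
    """Derive a distinct, repeatable password per website from one secret."""
    digest = hmac.new(master_secret.encode(), site_name.encode(),
                      hashlib.sha256).digest()
    # Base64 yields letters, digits and a couple of symbols;
    # trim to the desired length
    return base64.b64encode(digest).decode()[:length]

master = "Ihtwua8o'ceM"   # the mnemonic password from earlier
bank = site_password(master, "example-bank.com")
mail = site_password(master, "example-mail.com")
# The two derived passwords share no visible relationship,
# so a leak at one site reveals nothing about the other.
```

The catch is that you need the tool at hand whenever you log in, which is why simpler mental schemes (mixing a few letters of the site name into your base password, say) are also popular, though weaker.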

Now, quickly go to change your password!


Monday 15 October 2012

Fortnightly Science News Digest - 15/10/12


Skydiver breaks sound barrier:

it was quite a spectacular event, the one set up and performed by Felix Baumgartner, the Austrian skydiver who pushed his passion a bit too far: farther, to be precise, than any of his colleagues before him.

On 14 October, he jumped from a helium balloon at a height of 39,045 m (a world record) and reached a free-fall speed of 1,342.8 km/h (another world record).

Wearing nothing but his parachute and suit (which served a purpose similar to an astronaut's), he fell freely for 93% of his 39 km trip. Falling the first 36 km took less time (4'20") than descending the final 3 km by parachute (4'43").

The skydiver says the jump was made to collect scientific data for the development of high-altitude parachutes, but what struck everyone was its spectacular altitude.

It is fair to point out that the view from the stratosphere is not as dramatic as the one from the ISS. It is easy to be fooled by the strong curvature of the Earth in the pictures: those are fish-eye-lens shots, distorted to include angles that would otherwise be left out. Impressive as it was, the height he jumped from was still very close to Earth, only about 0.6% of Earth's radius. At that height, only a glimpse of curvature can be caught even with perfect visibility.

In this picture, the height Felix jumped from is exactly one pixel; I drew it at the top, and it might be visible with some zoom. This should give a good sense of the scales involved.


Physics Nobel goes to Serge Haroche and David Wineland

Planet with four suns discovered

Saturday 6 October 2012

The Past and the Future of Computing

It is curious how deeply computers are entering into our life.


Think back only 60 years: computers were very young and as big as a room. Most people did not own one and did not even know what that beeping and flashing "devil's machinery" was.
The high potential of computers was soon understood, and they grew fast (not in size!) to reach the consumer market in the 80s. They were still very hard to use, and the graphical interface was not so graphical at all, as input and output were still mainly text.
For a decade, computers remained a niche market for electronics lovers and programmers, and only in the 90s did they slowly start to enter households, where a single computer was enough and hardly used at that.
The birth of the internet was another milestone, whose potential only emerged later, when bandwidth grew large enough to make it a better method of communication than the land-line phone.



Today we have smartphones. They fit in your pocket, have quad-core processors inside and are far more powerful than thousands of old personal computers combined.
They are incredibly popular and, most importantly, cool.
We use them for loads of small tasks and we are growing ever more dependent on them. They are our map, our calendar, our camera, our newspaper, our encyclopedia and, above all, our connection to the world. We can get in touch with a friend almost instantly, should we want to.

As our dependence on technology increases, many people wonder whether this is a good thing. People worry that technology is making us dumber, because we let it do tasks we would otherwise have to do ourselves, and because we know we have a source of knowledge ready at hand. This tempts us to give up thinking a problem through, since the answer can easily be looked up on the internet.

Should we give up on technology, then? I do not think that is the right way to face the problem.
I think the problem lies in how companies present technology to us, making it addictive for profit. What are smartphones mostly used for? Sadly, Facebook and Angry Birds. This is a bit of a downer when you consider that mankind went to the Moon with computers thousands of times less powerful than our smartphones.
Unfortunately, companies profit from games on smartphones, and I think it is a big waste, as that time could be well spent improving other areas or making "smarter" programs that stimulate our brain better, in a less flashy and noisy way.

They should also teach us to use it responsibly. Technology has great potential for doing the most amazing things, if pointed in a good direction, and that is reason enough not to stop it from galloping ahead.

This reminds me of the same question we faced with calculators. Using them means not practicing the simple math that stimulates the brain. Are they a good tool, then? Surely they are: thanks to them we can calculate far better and incredibly faster. The problems arise when they are abused. They should not be used for calculations that can easily be done mentally and, most importantly, they should not be given to children, who need to learn fundamental mathematics.

At the same time, calculators and computers opened up a whole new branch that would otherwise not have existed: programming, which stimulates logic.
Despite popular belief, technology is not a brain-killer. It can stimulate logic, design and problem-solving skills in ways never seen before. What kills the brain is the way we use technology, and how producers present it to us.

Computers can now do a number of tasks that we could never have imagined ten years ago. We are much closer to (and in many respects already beyond) the robot age we imagined in the past. Computers can recognize human speech and work out an answer. They can crawl the web for information and memorize things far better than humans. In chat tests, some computers are already indistinguishable from humans.

But what about the future?

What scares me most is the possibility of computers, in a perhaps not-so-distant future, even programming for us. Just as calculators came to do basic mathematical operations for us, the same might happen with programming. It is already becoming easier thanks to graphical editors that make it more intuitive, and this evolution could end up eliminating the user's hand in writing code entirely.
A smart enough computer could listen to our speech, understand the basic functions we want in a program or a script, and build it for us.

This could seem like the end of programming, but if we take the comparison with calculators again, mathematics did not simply die with their advent. In the same way, programming could become accessible to everyone once the syntactic (and logical) part is left to computers themselves. Creativity would surely benefit from such a change, as everyone would be able to "program", and computers would almost be like servants, potentially able to write programs that respond to our needs.


In conclusion, I think we live in unique and exciting times for technological advance, and the future still holds many surprises for us. Stay tuned.