The second discussion of the day was on the topic “Ethics of Security Disclosure”. The focus was on a particular DNS attack known as cache poisoning. DNS, the Domain Name System, is the service that translates human-readable names (like google.com) into IP addresses and vice versa. A Seattle-based researcher named Dan Kaminsky discovered that queries for websites can easily be redirected to malicious sites instead of their true destinations. This means that a user attempting to connect to Bank of America’s website could secretly be redirected to a clone of the site belonging to a malicious hacker. Clearly, the effects of such an attack would be devastating. However, even upon the discovery of such an attack, it is not clear who should be responsible for fixing the issue, or how those parties should be approached; no single entity ‘owns’ DNS. Furthermore, if the issue became publicly known, the news would do very little to protect actual users and, worse, it would tell anyone with the right internet knowledge and malicious intent exactly how to stage such an attack. Dan Kaminsky’s solution was to approach representatives from all of the major players on the internet, including the people responsible for maintaining DNS as well as major corporations such as Microsoft and Cisco, in total secrecy; work out a patch; build that patch so that what was fixed would not be obvious from merely inspecting it; and deploy it to hundreds of thousands of computers worldwide before news of the vulnerability could get out. This scheme was carried out exactly according to plan, and before the public had any knowledge of the attack, all of the major companies (and their customers), as well as many other computers on the web, were safe.
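To make the resolution step concrete, here is a minimal Python sketch of an ordinary DNS lookup, which is the step that cache poisoning subverts (the hostname is my own example, and this illustrates normal resolution, not the attack or the patch):

```python
# Minimal illustration of the DNS lookup that cache poisoning targets.
# The hostname is just an example; any public name would do.
import socket

hostname = "www.example.com"

# The operating system asks its configured DNS resolver, which may answer
# from its cache. Cache poisoning tricks that resolver into caching an
# attacker-chosen address for the name, so a call like this would silently
# hand back the attacker's IP while the application believes it reached
# the real site.
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```

For what it's worth, the coordinated fix is generally described as making forged answers much harder to slip in, chiefly by randomizing the resolver's UDP source port in addition to the 16-bit query ID, which raises the number of values an off-path attacker must guess from tens of thousands to billions.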
In class we discussed the pros and cons of such a solution. On the topic of releasing the vulnerability to the public, the general consensus seemed to be that bluntly releasing it would be a bad course of action because of what can happen if the information falls into the wrong hands. When discussing what one could do in such situations, it was brought up that reporting such a problem quietly could be more difficult than expected. It is not easy to summon representatives from major corporations, and even then, certain corporations may not care to listen. Perhaps releasing information to the public would hold companies more accountable and give them further motivation to address whatever problem is at hand.
Monday, December 8, 2008
Friday, December 5, 2008
Integrating Embedded Systems with the Human Body
This week, the topic of discussion revolved around the integration of technological systems with the human body. The core of this topic relates to the ability of such systems to augment human functioning, providing the user of the embedded system "additional senses" that are not biologically innate.
One possible application of this augmentation is to restore lost senses to patients who have lost some form of sensory ability. As discussed, such an application would clearly have a dramatic effect on the patient's ability to get back to living a normal life.
However, when this process of sensory augmentation (through the integration of embedded systems with the human body) is performed in order to "upgrade" or enhance the sensory input of the body, the results are much more dramatic. For instance, a belt with vibrating elements spaced around the waist was discussed that gave the wearer incredibly acute directional ability.
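As a rough sketch of how such a belt might work (the motor count and the mapping below are my own assumptions, not details from the article), the controller only needs to map the wearer's compass heading to whichever motor currently faces north:

```python
# Toy model of a directional vibrotactile belt: given a compass heading,
# pick which of the evenly spaced motors should vibrate so that the
# buzzing always comes from the wearer's north side.
NUM_MOTORS = 8  # assumed motor count, evenly spaced around the waist

def motor_for_north(heading_degrees: float) -> int:
    """heading_degrees: which way the wearer is facing (0 = north).
    Returns the index (clockwise from the front) of the motor closest
    to true north."""
    # If the wearer faces east (90 deg), north lies 90 deg to their left,
    # i.e. at a relative bearing of 270 deg measured clockwise from front.
    relative_bearing = (-heading_degrees) % 360
    return round(relative_bearing / (360 / NUM_MOTORS)) % NUM_MOTORS

# Facing north -> front motor (0); east -> left-side motor; south -> back motor.
print(motor_for_north(0), motor_for_north(90), motor_for_north(180))
```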
According to the article and discussion, the brain eventually adapted to the sensory input and made the system part of itself. Because of this, removal of the device was shown to cause fairly acute side effects in some users. Some were unable to go back to normal functioning, and one subject even had to obsessively carry around a GPS unit in order to function normally.
Clearly, this would not be optimal if the integration of increasingly advanced embedded systems results in increasingly acute side effects should they be removed. Apparently the brain is able to rapidly adapt to their presence and integrate the new sensory inputs, but the converse is not true when the device is removed. It seems that -- at least in some cases -- the brain is unable to "unadapt" and revert to its original ability to function.
Such research definitely carries important implications for the possibility of integrating futuristic sensory systems with the human body.
Sunday, November 30, 2008
Machine Intelligence and its Implications
Last week, Aida and Hakson talked about Artificial Intelligence, a relatively new field in Computer Science, and its implications. AI as a field has suffered some setbacks, but a lot of developments are taking place that could help us solve some of the toughest computer science problems. The article spoke about the “law of accelerating returns”, more a belief that technological change progresses exponentially, so AI research that is still in its inchoate stage would be accelerated in the near future.
In class, we discussed combining human and machine intelligence so that it would improve the quality of human life and make us more efficient at what we do. Some pros and cons of this were discussed. On one hand, machines would take all the mundane tasks off our hands and let us concentrate more on the problem at hand. However, we then face the problem of becoming so dependent on machines that we forget how, or are no longer able, to do even the simplest tasks on our own. There is also the scenario in which machines take our place and we lose our jobs.
Hence the questions arise: “How can we engineer machine intelligence to be a benefit?”, “Is the current technological revolution similar to the industrial revolution in the 19th century, and if so, how much change will it bring?”, and “To what extent should we be dependent on machines and robots before they take over?” As computer scientists, we have a technological imperative to solve certain problems. Fields like medicine and healthcare, biology, computing, and problem solving have plenty of problems that AI can help solve. We should fully invest in trying to combine human and machine intelligence. Humans have special skills, like being able to recognize people, while machines can perform complex computations quickly. Together, human and machine intelligence can have a significant impact on our lives.
Tuesday, November 25, 2008
Computing for the developing world
Sunil and Benjamin presented on Computing for the Developing World. They described the goals of computing for developing countries, such as increasing access to technology and using technology to improve public systems like healthcare and education. They also shed light on the various constraints involved when creating technology for developing countries, such as: illiteracy, foreignness of technology, lack of infrastructure, and expense.
Some related projects include:
- Digital Green
- Digital Study Hall (using technology to allow students access to more knowledgeable people to combat a lack of teachers)
- MultiPoint
- Registering Births and Deaths
For discussion, we split into two groups.
One group focused on the requirements associated with accomplishing the goals stated above. The group also discussed how to determine what technology would be useful, and how to begin implementing it. Two ways of evaluating what technology would help a particular area are: asking people from the area who are familiar with technology (such as university students), and immersing a technology expert in the culture to allow them time to learn what the community needs. It helps to make sure that those being helped are motivated enough to help bring change, and to demonstrate the usefulness of technology to community leaders to help convince the community as a whole. The group also agreed that each solution needs to be tailored specifically to the area involved. Each village, state, or country will have its own unique constraints. And lastly, it is crucial to stay focused on the present and on what we can do now to provide effective technology for the area.
The second group focused on the maintenance of computing systems deployed in developing countries. It was agreed that because hardware is generally unreliable, there should be some sort of localized system for hardware repairs and replacements, which only pulls in new hardware from "upstream" when necessary. Providing local support is also an interesting issue: adoption of a technology must reach some sort of critical mass before providing self-sustaining local support is feasible. It's easy to forget that there's more to maintenance of a technology than the hardware. Maintaining software becomes difficult in this sort of environment: how does a remote developer determine the needs of their users? One idea might be to collect usage data, and send it back to the developer over the Internet. However, the required connectivity is unavailable in many of the targeted areas of the world, and for those in which it is available, it is often prohibitively expensive. The group also discussed some barriers to expanding a deployment to a wider range of regions, such as interface localization, and adapting to differing sets of locally prevalent technologies, such as SMS or internet connectivity.
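One way the usage-data idea above might cope with scarce, expensive connectivity is a store-and-forward pattern: log events locally and upload them in a batch only when a link is available. The sketch below is purely hypothetical; the file name, event names, and upload channel are invented for illustration.

```python
# Hypothetical store-and-forward usage reporting for intermittently
# connected deployments: log events locally, upload in one batch when
# (and only when) connectivity is available.
import json, os, time

LOG_PATH = "usage_log.jsonl"  # local append-only log, one JSON event per line

def record_event(name, **details):
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"time": time.time(), "event": name, **details}) + "\n")

def flush(upload):
    """upload: a callable that sends a list of events upstream; it stands in
    for whatever channel exists (an SMS gateway, an occasional dial-up link)."""
    if not os.path.exists(LOG_PATH):
        return 0
    with open(LOG_PATH) as f:
        events = [json.loads(line) for line in f]
    upload(events)              # the queue is only cleared if upload() did not fail
    os.remove(LOG_PATH)
    return len(events)

record_event("lesson_opened", lesson="crop_rotation")
print(flush(upload=lambda events: print(f"uploading {len(events)} events")))
```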
Friday, November 14, 2008
Electronic Voting
Our discussion last week was on electronic voting; very fitting, considering the historic election taking place that day. It quickly became clear that electronic voting introduces as many problems as it solves and will require many creative solutions to adequately address its shortcomings.
The advantages of voting electronically are many. Fewer people are needed to count or process ballots, and results are available almost instantaneously. In addition, it can reduce the level of human error and drastically cut the amount of paper used. At this point, it is hard to deny that electronic voting sounds pretty great. Who doesn't want cheaper, faster, and better for the environment?
Unfortunately, what initially look like advantages bring many implications that need to be addressed, the largest and most difficult of which is security. Electronic voting is full of security holes that all need to be filled if people are ever going to trust it. It is also worth considering that such an important event deserves whatever time and resources an effective election requires, whether that cost comes in dollars or in the paper needed to print ballots.
Let's start from the beginning and work our way through the process.
- Software: All electronic voting machines depend on software for their accuracy and reliability. Someone has to write that software and we need to make sure that votes are not, through maliciousness or incompetence, reported incorrectly.
- Physical security: The voting machines have to get from the factory to the polling place, and as the article illustrated, give someone with malicious intent even a couple of minutes alone with one and they can destroy the integrity of every vote entered on that machine.
- Identity verification: How do you verify that people are not voting multiple times? Smart cards can be faked. Having the voter enter personal information could compromise the secrecy of their vote.
- Data collection: Assuming that the software is sound, the machines haven't been tampered with, and each person has only voted once, how do you collect the vote totals? Do you transmit them over a network? Save them on encrypted hard drives?
We discussed several potential solutions to these issues, including open source software, background checks for programmers and technicians, non-partisan inspectors (or inspectors from multiple parties), and printing out paper ballots that can be verified by the voter or even used to cast the actual vote.
In the end, however, it may not be necessary to develop a perfectly secure, fool-proof electronic voting system. Paper ballots have many of the same security risks and can be very prone to error, but at the same time provide a physical record of each vote for future auditing. We closed the discussion wondering whether we could improve upon current paper voting methods, perhaps by using technology to produce easy to use and accurately readable paper ballots.
Tuesday, November 4, 2008
Quantum Internet
Last week, Linden and Sandra did a presentation on the use of quantum mechanics to create secure networks – a quantum internet. This is not the same as quantum computing, which is still only theoretical. Researchers in Vienna have succeeded in creating a secure quantum link over short distances and at relatively low speeds. The main advantage of a quantum internet is that it is inherently tamper-evident and very secure, which makes it ideal for transferring encryption keys. If an observer attempts to read the bits, the laws of physics governing the particles make it impossible not to alter their state. If the bits reach the intended target unchanged, then no one else has tried to read them.
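That detection property can be illustrated with a toy, purely classical simulation in the spirit of BB84 key exchange (the parameters and structure below are my own; real quantum key distribution runs over optics, not Python). Measuring in the wrong basis randomizes a bit, so an eavesdropper who intercepts and resends introduces roughly a 25% error rate in the positions Alice and Bob later compare:

```python
# Toy classical simulation of BB84-style key exchange: an eavesdropper who
# measures and resends the qubits introduces detectable errors.
import random

N = 2000  # number of "qubits" Alice sends

def measure(bit, prep_basis, meas_basis):
    # Same basis: faithful readout. Wrong basis: the outcome is random.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(eavesdrop):
    kept = errors = 0
    for _ in range(N):
        bit = random.randint(0, 1)
        a_basis = random.choice("+x")        # Alice's preparation basis
        b_basis = random.choice("+x")        # Bob's measurement basis
        send_bit, send_basis = bit, a_basis
        if eavesdrop:                        # Eve measures and resends
            e_basis = random.choice("+x")
            send_bit = measure(send_bit, send_basis, e_basis)
            send_basis = e_basis
        bob_bit = measure(send_bit, send_basis, b_basis)
        if a_basis == b_basis:               # keep only matching-basis bits
            kept += 1
            errors += (bob_bit != bit)
    return errors / kept

print("error rate, no eavesdropper:  ", round(error_rate(False), 3))  # ~0.00
print("error rate, with eavesdropper:", round(error_rate(True), 3))   # ~0.25
```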
Another advantage of the quantum internet is that it uses quantum bits, or qubits, which can take on the values 0 and 1 simultaneously. Quantum entanglement also correlates the states of particles in two distant places, which raised the prospect in discussion of effectively instantaneous coordination between two sites (although entanglement on its own cannot carry a usable message).
There are several inherent disadvantages to this quantum network, however. Researchers are having trouble coming up with a solution to relaying quantum information over long distances. Repeaters cannot be used, since reading the information changes it irreversibly. This makes it hard to copy, back up, or broadcast data. More research needs to be done on the subject of quantum RAM for these things to be possible.
Another critique of this research is that it strengthens an already strong part of the security system. Typically, the most vulnerable part of internet communication isn't the encryption, but rather the security of the two endpoints. A quantum internet allows a key to be transferred securely, but doesn't guarantee that the key remains safe on the receiving computer. Keyloggers, hackers, and even onlookers could potentially get the key after it has been received.
With a more secure internet comes the question of who should be allowed to use it. Some countries censor and firewall certain information, and this system would make communication effectively immune to that kind of monitoring. Governments may want to restrict the types of encryption allowed in order to monitor potentially threatening communication. If this level of encryption were allowed, the government would suspect that anyone using it was up to something.
Another interesting topic discussed was how quantum research is actually making it easier to break current encryption methods (a large enough quantum computer could factor the numbers that today's public-key cryptography relies on), while at the same time enabling a more secure network of information. We also talked about whether quantum research is a good use of taxpayer money, since it has not been extremely productive so far. However, it can't be shown to be impossible if it is never researched.
Quantum internet was an interesting topic with many implications to think about. We discussed that it could be focusing on the wrong part of security, and that it raises questions of who should be allowed to use it. On a positive note, notions of instantaneous communication and 100% certainty of no eavesdroppers give potential to ongoing research in the field.
Tuesday, October 28, 2008
Tuesday October 21
Our discussion last Tuesday, led by Mack Talcott and Blake Thomson, revolved around Mozilla Geode joining Google and Yahoo in offering geolocation web services. As an add-on for Firefox, Geode takes advantage of Skyhook Wireless's Loki technology, which maps the Wi-Fi signals in your area to your location. During our discussion of Geode we noted the positives of being able to find nearby places quickly, easily, and automatically, which led to the negatives of privacy and security. We also learned about Yahoo's FireEagle service, which led to a surprise presentation of Whirl.
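A heavily simplified sketch of the kind of Wi-Fi positioning a Loki-style service performs (the access point names, signal strengths, and matching rule below are invented for illustration) is to compare the signal strengths currently visible against a database of surveyed "fingerprints" and report the closest match:

```python
# Toy Wi-Fi fingerprint positioning: compare currently visible access points
# (and their signal strengths, in dBm) against surveyed fingerprints and
# return the location of the closest match. All data below is made up.
surveyed = {
    "cafe on 45th":  {"AP-aa": -40, "AP-bb": -70, "AP-cc": -90},
    "library lobby": {"AP-aa": -80, "AP-bb": -45, "AP-dd": -60},
    "bus stop":      {"AP-cc": -50, "AP-dd": -55},
}

def distance(scan, fingerprint, missing=-100):
    """Sum of squared signal-strength differences; an AP unseen on one side
    is treated as a very weak signal."""
    aps = set(scan) | set(fingerprint)
    return sum((scan.get(ap, missing) - fingerprint.get(ap, missing)) ** 2
               for ap in aps)

def locate(scan):
    return min(surveyed, key=lambda place: distance(scan, surveyed[place]))

print(locate({"AP-aa": -42, "AP-bb": -72}))   # -> "cafe on 45th"
```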
Geode and FireEagle were interesting products that brought up much debate in the class. Exact locations being sent back to Mozilla or Yahoo to be stored in databases with confidence levels made some question the integrity of the companies and the implied trust we place in big companies. Right now there are security checks: the products prompt us for how much personal information, like exact location, we would like to give up. What if it comes to the point where products like Geode can get your information without that notification? Questions like those came up often in our discussion. However, there were also arguments supporting these technologies. Ideas such as location-aware personal banking at select locations were brought up, only to be challenged by the scenario of a stranger with a laptop parked in front of your house hacking into your bank account.
Like the previous week, we as a class discussed the legal ramifications of these technologies. How would they be regulated, and how can we keep them secure? At one point it was brought up that the lawmakers who write these laws sometimes don't even understand the technology itself. How can a product that is legalized by someone who is not familiar with it possibly be foolproof?
As we moved on, we were given a surprise presentation from a startup called Whirl. It is a very interesting online social environment, similar to Geode and FireEagle in that it logs your location and notifies you of friends who are nearby. Again the familiar question of security came up: how does the program share its information, and who is going to see it? We ended the day's discussion with Facebook and MySpace. We came to the conclusion that people should generally be more careful when using such online social services. Things like pictures that other people post of you can later come back and hurt you.