Analysis of the New Requirements for PCI 3.0

The PCI-DSS 3.0 draft is out and the changes are significant.  However, when we parse the new standard, there are really only six new requirements (and one of those is just an augmentation of an existing requirement).  Anitian analyzed these six new requirements along with the supporting material, and the results are promising.

These new items will all be considered “best practices” until June 2015, after which they become official requirements. The Council prohibits QSAs from disclosing the full requirement text until the new standard is published in November 2013, so we are including only the guidance text that the Council has provided on these new requirements.

6.5.6 – Insecure Handling of PAN and SAD in Memory

Council Guidance

Attackers use malware tools to capture sensitive data from memory. Minimizing the exposure of PAN/SAD while in memory will help reduce the likelihood that it can be captured by a malicious user or be unknowingly saved to disk in a memory file and left unprotected.

This requirement is intended to ensure that consideration is given for how PAN and SAD are handled in memory. The specific coding techniques resulting from this activity will depend on the technology in use.

Analysis

This one has people scratching their heads, but if you have some software development experience it is not that unusual.  If you have an application that handles primary account numbers (PAN) or sensitive authentication data (SAD), you need to define how you secure that data while it is held in memory.  This requirement is aimed at malware that can sit resident and scrape data from memory.

Frankly, this is not a new concept.  Secure handling of data in memory has been a topic in development communities for over a decade, and there is ample scholarly research into the matter.  (Here is a paper on the very idea from 2006: https://www.pitt.edu/~juy9/papers/infoshield.pdf)

The key issues that need to be addressed are:

1) methods to prevent dumping PAN and SAD to files during a crash or other process;
2) completely erasing the data when it is no longer needed; and
3) using programmatic features that prevent unauthorized applications from accessing memory.

Most modern development environments have mechanisms to handle these issues; they just need to be used.  There are also third-party security tools that can monitor for applications attempting buffer overflows or other tactics to gain access to memory.  Most importantly, developers need to document the methods used to protect memory, so a QSA can review those methods and ensure they are reasonable.  A minimal example of the second item follows.
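To make the erasure point concrete, here is a minimal sketch in Java.  The authorize() call is a hypothetical placeholder of ours; the point is the pattern of holding the PAN in a mutable char array and wiping it in a finally block, rather than in an immutable String, which cannot be deliberately erased and may linger on the heap until garbage collection.

```java
import java.util.Arrays;

public class PanHandler {

    // Process a primary account number held in a mutable char array.
    // A char[] can be deliberately wiped; an immutable String cannot.
    static void processTransaction(char[] pan) {
        try {
            authorize(pan); // hypothetical downstream call to the payment processor
        } finally {
            // Erase the PAN the moment it is no longer needed, so a memory
            // scraper or an accidental crash dump finds only zeros.
            Arrays.fill(pan, '\0');
        }
    }

    // Placeholder for illustration; a real implementation would transmit
    // the PAN over an authenticated, encrypted channel.
    static void authorize(char[] pan) {
    }
}
```

The first item on the list is usually handled operationally (for example, disabling core dumps for the payment process), and the third with operating-system process isolation and monitoring tools.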

The onus here will be on QSAs to sharpen their application development skills and be able to discuss this concept with developers and fairly analyze their techniques.  Less technical QSAs will be at a significant disadvantage with this new requirement.

6.5.11 – Broken Authentication and Session Management

Council Guidance

Secure authentication and session management prevents unauthorized individuals from compromising legitimate account credentials, keys, or session tokens that would otherwise enable the intruder to assume the identity of an authorized user.

Analysis

This one is similar, in some ways, to 6.5.6.  It is something application developers should have been doing all along, but often ignore.  This requirement focuses on web applications and securing them from client-side attacks.  It makes good sense for the PCI rules to go after this issue, since client-side attacks are one of the most common ways hackers get access to data.  It also covers man-in-the-middle style attacks.
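As one illustration of what securing a session against client-side and man-in-the-middle attacks can look like, here is a small sketch using the standard Java Servlet 3.0 API.  The class name is ours; the SessionCookieConfig calls are standard.  It marks the session cookie HttpOnly (so script injected via XSS cannot read it) and Secure (so it is never sent over plain HTTP, where an intermediary could capture it).

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;
import javax.servlet.annotation.WebListener;

// Hardens the session cookie once, when the web application starts.
@WebListener
public class SessionCookieHardener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        SessionCookieConfig cookie = sce.getServletContext().getSessionCookieConfig();
        cookie.setHttpOnly(true); // deny JavaScript access, blunting XSS session theft
        cookie.setSecure(true);   // send the cookie over TLS only, blunting man-in-the-middle capture
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}
```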

In essence, this requirement says “code your web applications correctly.”  That means addressing a set of application configuration issues that are, in many cases, fairly easy to implement.  For example, session control and timeouts reflect a simple concept: any session that handles sensitive data should automatically expire, and all of its data should be erased, after a reasonable period of inactivity.  This prevents half-open sessions that malware can hijack.  It is simply easier not to code timeouts and session controls, because they add overhead.  They can also create annoying user experiences, particularly for retail web sites whose users walk away from their shopping carts for a bit, only to return to a “session timeout” message.  A brief sketch of these controls follows.
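Here is a minimal sketch of session expiry and teardown with the Java Servlet API.  The servlet name and the 15-minute window are illustrative assumptions, not anything prescribed by the standard.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CheckoutServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        HttpSession session = req.getSession(true);

        // Expire the session after 15 minutes of inactivity; the container
        // then discards its attributes, leaving no half-open session to hijack.
        session.setMaxInactiveInterval(15 * 60);

        // ... handle the payment step ...
    }

    // Called from the application's logout path (illustrative).
    void endSession(HttpServletRequest req) {
        HttpSession session = req.getSession(false);
        if (session != null) {
            session.invalidate(); // erase all server-side session state immediately
        }
    }
}
```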

Web application testing will uncover these flaws, which means requirement 6.6 gains a new dimension.  QSAs should now demand web application tests that look for these weaknesses in any application that handles PAN or SAD.

As with 6.5.6, this is one where developers need to document their efforts to minimize client-side attacks.

8.5.1 – Unique Authentication Credentials for Service Providers

Council Guidance

Additional requirement for service providers only: Examine authentication policies and procedures and interview personnel to verify that different authentication credentials are used for access to each customer environment. Service providers or vendors must not use the same or similar authentication credentials to access multiple customers (for example, for support or service). An example of this is using the same password for each customer, but making it “unique” by starting or ending the password with the customer name. These common authentication credentials have become known over time and have been used by unauthorized individuals to compromise many of a service provider’s customers in a short period of time.

Analysis

This is a “duh” addition to the PCI-DSS; that it was not in 2.0 now seems like a significant oversight.  In simple terms, this requirement says third-party providers need to use unique credentials for each customer.  This is an obvious best practice that service providers should have been following all along.
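To contrast with the shared-base-password anti-pattern the Council describes, here is a minimal sketch in Java of issuing an independent, random credential for each customer environment.  This is only an illustration of the principle, not a credential-management system; real deployments would pair something like this with a vault or privileged access manager.

```java
import java.security.SecureRandom;
import java.util.Base64;

public class CredentialIssuer {

    private static final SecureRandom RNG = new SecureRandom();

    // Issue an independent, random credential per customer environment,
    // rather than deriving it from a shared base password plus the customer name.
    static String newCredential() {
        byte[] bytes = new byte[24]; // 192 bits of entropy
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```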

Consider this requirement the first item in the crackdown on third-party providers. One of the recurring themes in the 3.0 standard is closing the loop on non-compliant service providers.  Keep reading; the new 12.9 requirement is where this gets some real teeth.

9.9 – Protection of Point-of-Sale (POS) Devices from Tampering

Council Guidance

Criminals attempt to steal cardholder data by stealing and/or manipulating POS terminals. For example, they will try to steal POS devices so they can learn how to break into them, and they often try to replace legitimate devices with fraudulent devices that send them payment card information every time a card is entered. Criminals will also try to add “skimming” components to the outside of devices, which are designed to capture payment card details before they even enter the device—for example, by attaching an additional card reader on top of the legitimate card reader so that the payment card details are captured twice: once by the criminal’s component and then by the device’s legitimate component. In this way, transactions may still be completed without interruption while the criminal is “skimming” the payment card information during the process.

This requirement is recommended, but not required, for manual key-entry components such as computer keyboards and POS keypads. Additional best practices on skimming prevention are available on the PCI SSC website.

Analysis

This is another obvious addition, targeted at skimmers and terminal thieves. Most merchants already have practices in place to handle this requirement, so it should not be very burdensome.  It mandates having inventories of devices, periodic inspections, and employee training.  The employee training is probably the most onerous part: organizations are going to need specific training programs to teach clerks, cashiers, and other staff how to identify tampered terminals.

11.3 – Develop and Implement a Methodology for Penetration Testing

Council Guidance

The intent of a penetration test is to simulate a real-world attack situation with a goal of identifying how far an attacker would be able to penetrate into an environment. This allows an entity to gain a better understanding of their potential exposure and develop a strategy to defend against attacks.

A penetration test differs from a vulnerability scan, as a penetration test is an active process that may include exploiting identified vulnerabilities. Conducting a vulnerability scan may be one of the first steps a penetration tester will perform in order to plan the strategy, although it is not the only step. Even if a vulnerability scan does not detect known vulnerabilities, the penetration tester will often gain enough knowledge about the system to identify possible security gaps.

Penetration testing is generally a highly manual process. While some automated tools may be used, the tester uses their knowledge of systems to penetrate into an environment. Often the tester will chain several types of exploits together with a goal of breaking through layers of defenses. For example, if the tester finds a means to gain access to an application server, they will then use the compromised server as a point to stage a new attack based on the resources the server has access to. In this way, a tester is able to simulate the methods performed by an attacker to identify areas of potential weakness in the environment.

Analysis

11.3 is technically not a new requirement; however, it might as well be.  Penetration testing has always been part of the PCI-DSS, but previous versions of the standard made a terrible assumption: that companies would conduct legitimate penetration tests.  In fact, 11.3 is an area of the PCI DSS that has been excessively abused.  Companies have cut corners on this requirement, seeking out cut-rate penetration testers who conduct meaningless scans in place of real testing.  The 3.0 version of the PCI-DSS effectively ends this practice.  Companies can no longer skirt the requirement with weak penetration tests.  They will be required to develop and adopt an official testing methodology, and that methodology cannot be “hire some kid with a Nessus scanner from Craigslist.”

Anitian is delighted to see the PCI Council crack down on this issue.  A weak or bad penetration test is more than just useless; it is dangerous.  Inexperienced testers who do not know how to properly analyze vulnerabilities can miss serious exploits, creating a false sense of security.  Likewise, bad testers will fail to properly validate vulnerabilities, leading their clients on wild goose chases after non-existent problems.

Compliant companies should take the time to carefully examine who is conducting their testing.  Moreover, they need to question vendors to ensure not only that they are performing real exploit testing, but that their staff has the skills to do it.

12.9 – Additional Requirement for Service Providers

Council Guidance

In conjunction with Requirement 12.8.2, this requirement is intended to promote a consistent level of understanding between service providers and their customers about their applicable PCI DSS responsibilities. The acknowledgement of the service providers evidences their commitment to maintaining proper security of cardholder data that it obtains from its clients.

The method by which the service provider provides written acknowledgment should be agreed between the provider and their customers. The exact wording of an acknowledgement will depend on the agreement between the two parties, the details of the service being provided, and the responsibilities assigned to each party. The acknowledgement does not have to include the exact wording provided in this requirement.

This requirement applies only when the entity being assessed is a service provider.

Analysis

The party is over for service providers who ignore PCI.  The new requirement says it very clearly: “Service providers acknowledge in writing to customers that they will maintain all applicable PCI DSS requirements…”  That means in order for merchants to be compliant, their service providers must be compliant as well.  This change is going to hit a lot of smaller ISPs, cloud hosting providers, call centers, managed security providers, and off-site storage companies like a ton of bricks.  Many of these places have skirted PCI because their customers have not been very pushy about it.  Those customers are about to get pushy.

Anitian is also pleased to see this addition, since it makes a clearer, more definitive statement.  It also closes the loop on a serious security weakness for compliant merchants and acquirers.  Attackers are increasingly focusing on third-party providers as an avenue into larger companies; the RSA breach from a while back is a well-known example.  Moreover, as companies move more and more services into the cloud, having a compliant cloud hosting provider is no longer optional.

Service providers need to start planning now for PCI compliance.  Otherwise, they should start planning to lose business from customers who require it.

Conclusion

It is easy to look at the 3.0 standard and see a lot of change.  In reality, most of the changes are merely structural.  Some documentation requirements have been moved around, but by and large, the 3.0 standard is not fundamentally different from the 2.0 standard.  These new requirements demonstrate a slow evolution of the PCI-DSS. What is unfortunate is that there is not more guidance on mobile and virtual environments.

We will continue to parse these new requirements and provide additional guidance as we are able.

Anitian – Intelligent Information Security. For more information please visit www.anitian.com
