Top Security Questions

1319
Joel Coehoorn

What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well?

I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not cost too much, take too long to build, and rank as well with Google as the content supports).

Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web.

Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So, going beyond that: which standards? In what circumstances, and why? Provide a link to the standard's specification.


This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point. To search only within the answers to this question, use the inquestion:this option.

Answered By: victoriah ( 1202)

The idea here is that most of us should already know most of what is on this list. But there just might be one or two items you haven't really looked into before, don't fully understand, or maybe never even heard of.

Interface and User Experience

Security

Performance

  • Implement caching if necessary, understand and use HTTP caching properly as well as HTML5 Manifest.
  • Optimize images - don't use a 20 KB image for a repeating background.
  • Learn how to gzip/deflate content (deflate is better); a short sketch follows this list.
  • Combine/concatenate multiple stylesheets or multiple script files to reduce the number of browser connections and improve gzip's ability to compress duplication between files.
  • Take a look at the Yahoo Exceptional Performance site, lots of great guidelines including improving front-end performance and their YSlow tool. Google page speed is another tool for performance profiling. Both require Firebug to be installed.
  • Use CSS Image Sprites for small related images like toolbars (see the "minimize HTTP requests" point)
  • Busy web sites should consider splitting components across domains. Specifically...
  • Static content (i.e. images, CSS, JavaScript, and generally content that doesn't need access to cookies) should go in a separate domain that does not use cookies, because all cookies for a domain and its subdomains are sent with every request to the domain and its subdomains. One good option here is to use a Content Delivery Network (CDN).
  • Minimize the total number of HTTP requests required for a browser to render the page.
  • Utilize Google Closure Compiler for JavaScript and other minification tools.
  • Make sure there’s a favicon.ico file in the root of the site, i.e. /favicon.ico. Browsers will automatically request it, even if the icon isn’t mentioned in the HTML at all. If you don’t have a /favicon.ico, this will result in a lot of 404s, draining your server’s bandwidth.
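A minimal application-level sketch of the compression point above. In practice this is usually better done in the web server itself (e.g. Apache's mod_deflate or nginx's gzip module); ob_gzhandler is PHP's quick built-in option:

<?php
// ob_gzhandler compresses the output buffer when the client's
// Accept-Encoding header allows it; otherwise it passes output through.
ob_start("ob_gzhandler");
?>
<html>...your page...</html>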

SEO (Search Engine Optimization)

  • Use "search engine friendly" URLs, i.e. use example.com/pages/45-article-title instead of example.com/index.php?page=45
  • When using # for dynamic content, change the # to #!; the server then receives the fragment as $_REQUEST["_escaped_fragment_"], which is what Googlebot requests in place of the #! URL. In other words, ./#!page=1 becomes ./?_escaped_fragment_=page=1. Also, for users on browsers that support it (e.g. Firefox 4 beta or Chromium), history.pushState({"foo":"bar"}, "About", "./?page=1"); is a great call: the address bar changes but the page does not reload. This lets you use ? instead of #! for dynamic content, and when a link is emailed the server already knows which page is wanted, so the AJAX does not need to make an extra request.
  • Don't use links that say "click here". You're wasting an SEO opportunity and it makes things harder for people with screen readers.
  • Have an XML sitemap, preferably in the default location /sitemap.xml.
  • Use <link rel="canonical" ... /> when you have multiple URLs that point to the same content; this issue can also be addressed from Google Webmaster Tools.
  • Use Google Webmaster Tools and Yahoo Site Explorer.
  • Install Google Analytics right at the start (or an open source analysis tool like Piwik).
  • Know how robots.txt and search engine spiders work.
  • Redirect requests (using 301 Moved Permanently) asking for www.example.com to example.com (or the other way round) to prevent splitting the Google ranking between both sites; a sketch follows this list.
  • Know that there can be badly-behaved spiders out there.
  • If you have non-text content look into Google's sitemap extensions for video etc. There is some good information about this in Tim Farley's answer.
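A minimal sketch of the www/non-www canonicalization bullet above, placed at the top of a front controller (the host name is illustrative):

<?php
// Permanently redirect the www host to the bare domain so search
// engines see a single canonical site.
if ($_SERVER['HTTP_HOST'] === 'www.example.com') {
    header('Location: http://example.com' . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}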

Technology

  • Understand HTTP and things like GET, POST, sessions, cookies, and what it means to be "stateless".
  • Write your XHTML/HTML and CSS according to the W3C specifications and make sure they validate. The goal here is to avoid browser quirks modes and as a bonus make it much easier to work with non-standard browsers like screen readers and mobile devices.
  • Understand how JavaScript is processed in the browser.
  • Understand how JavaScript, style sheets, and other resources used by your page are loaded and consider their impact on perceived performance. It may be appropriate in some cases to move scripts to the bottom of your pages.
  • Understand how the JavaScript sandbox works, especially if you intend to use iframes.
  • Be aware that JavaScript can and will be disabled, and that AJAX is therefore an extension, not a baseline. Even if most normal users leave it on now, remember that NoScript is becoming more popular, mobile devices may not work as expected, and Google won't run most of your JavaScript when indexing the site.
  • Learn the difference between 301 and 302 redirects (this is also an SEO issue).
  • Learn as much as you possibly can about your deployment platform.
  • Consider using a Reset Style Sheet.
  • Consider JavaScript frameworks (such as jQuery, MooTools, Prototype, or YUI 3), which will hide a lot of the browser differences when using JavaScript for DOM manipulation.

Bug fixing

  • Understand you'll spend 20% of your time coding and 80% of it maintaining, so code accordingly.
  • Set up a good error reporting solution.
  • Have a system for people to contact you with suggestions and criticisms.
  • Document how the application works for future support staff and people performing maintenance.
  • Make frequent backups! (And make sure those backups are functional) Ed Lucas's answer has some advice. Have a restore strategy, not just a backup strategy.
  • Use a version control system to store your files, such as Subversion or Git.
  • Don't forget to do your Unit Testing. Frameworks like Selenium can help.

Lots of stuff has been omitted, not necessarily because it isn't useful, but because it's either too detailed, out of scope, or goes a bit too far for someone looking to get an overview of the things they should know. If you're one of those people, you can read the rest of the answers to get more detailed information about the things mentioned in this list. If I get the time I'll add links to the various answers that contain the things mentioned in this list, if the answers go into detail about these things. Please feel free to edit this as well; I probably missed some stuff or made some mistakes.

602
Ahamed

In Swing, the password field has a getPassword() (returns char[]) method instead of the usual getText() (returns String) method. Similarly, I have come across a suggestion not to use Strings to handle passwords. Why does String pose a threat to security when it comes to passwords?

It feels inconvenient to use char[].

Answered By: Jon Skeet ( 836)

Strings are immutable. That means once you've created the string, if another process can dump memory, there's no way (aside from reflection) you can get rid of the data before GC kicks in.

With an array, you can explicitly wipe the data after you're done with it: you can overwrite the array with anything you like, and the password won't be present anywhere in the system, even before garbage collection.

So yes, this is a security concern - but even using char[] only reduces the window of opportunity for an attacker, and it's only for this specific type of attack.

EDIT: As noted in comments, it's possible that arrays being moved by the garbage collector will leave stray copies of the data in memory. I believe this is implementation-specific - the GC may clear all memory as it goes, to avoid this sort of thing. Even if it does, there's still the time during which the char[] contains the actual characters as an attack window.

As I continue to build more and more websites and web applications, I am often asked to store users' passwords in a way that they can be retrieved if/when the user has an issue (to email a forgotten-password link, walk them through it over the phone, etc.). When I can, I fight bitterly against this practice, and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing the actual password.

When I can’t fight it (or can’t win) then I always encode the password in some way so that it at least isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked that it won’t take much for the culprit to crack the passwords as well—so that makes me uncomfortable.

In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason.

Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics.

Additional Information for Bounty

I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach; I am looking for the best steps to take if you do take this approach.

In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane, in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them.

In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind.

Thanks to Everyone

This has been a fun question with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found from normal password recovery.

As always there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!

Also, thanks to everyone in the Stack community who voted for this question and/or marked it as a favorite. I take hitting 100 up votes as a compliment and hope that this discussion has helped someone else with the same concern that I had.

Answered By: Michael Burr ( 316)

How about taking another approach or angle at this problem? Ask why the password is required to be in plaintext: if it's so that the user can retrieve the password, then strictly speaking you don't really need to retrieve the password they set (they don't remember what it is anyway); you need to be able to give them a password they can use.

Think about it - if the user needs to retrieve the password, it's because they've forgotten it. In which case a new password is just as good as the old one. But, one of the drawbacks of common password reset mechanisms used today is that the generated passwords produced in a reset operation are generally a bunch of random characters, so they're difficult for the user to simply type in correctly unless they copy-n-paste. That can be a problem for less savvy computer users.

One way around that problem is to provide auto-generated passwords that are more or less natural language text. While natural language strings might not have the entropy that a string of random characters of the same length has, there's nothing that says that your auto-generated password needs to have only 8 (or 10 or 12) characters. Get a high-entropy auto-generated passphrase by stringing together several random words (leave a space between them, so they're still recognizable and typeable by anyone who can read). Six random words of varying length are probably easier to type correctly and with confidence than 10 random characters, and they can have higher entropy as well. For example, a 10-character password drawn randomly from uppercase, lowercase, digits, and 10 punctuation symbols (72 valid symbols in total) has an entropy of 61.7 bits. Using a dictionary of 7776 words (as Diceware uses) that could be randomly selected for a six-word passphrase, the passphrase would have an entropy of 77.4 bits. See the Diceware FAQ for more info.

  • a passphrase with about 77 bits of entropy: "admit prose flare table acute flair"

  • a password with about 74 bits of entropy: "K:&$R^tt~qkD"

I know I'd prefer typing the phrase, and with copy-n-paste the phrase is no less easy to use than the password, so no loss there. Of course if your website (or whatever the protected asset is) doesn't need 77 bits of entropy for an auto-generated passphrase, generate fewer words (which I'm sure your users would appreciate).
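For illustration, a minimal PHP sketch of such a generator; the word list and function name are hypothetical, and the entropy comes entirely from the size of the list and the quality of the random source:

<?php
// Diceware-style passphrase sketch. $words stands in for a real word
// list (Diceware's has 7776 entries). Six picks from 7776 words gives
// about 77 bits of entropy, as computed above.
function generate_passphrase(array $words, $count = 6) {
    $picked = array();
    for ($i = 0; $i < $count; $i++) {
        // mt_rand is NOT cryptographically secure; in production use a
        // CSPRNG (PHP 7's random_int, or openssl_random_pseudo_bytes).
        $picked[] = $words[mt_rand(0, count($words) - 1)];
    }
    return implode(' ', $picked);
}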

I understand the arguments that there are password protected assets that really don't have a high level of value, so the breach of a password might not be the end of the world. For example, I probably wouldn't care if 80% of the passwords I use on various websites were breached: all that could happen is someone spamming or posting under my name for a while. That wouldn't be great, but it's not like they'd be breaking into my bank account. However, given the fact that many people use the same password for their web forum sites as they do for their bank accounts (and probably national security databases), I think it would be best to handle even those 'low-value' passwords as non-recoverable.

414
Si.

What is the worst security hole you've ever seen? It is probably a good idea to keep details limited to protect the guilty.

For what it's worth, here's a question about what to do if you find a security hole, and another with some useful answers if a company doesn't (seem to) respond.

Answered By: John Stauffer ( 647)

From early days of online stores:

Getting a 90% discount by entering .1 in the quantity field of the shopping cart. The software properly calculated the total cost as .1 * cost, and the human packing the order simply glossed over the odd "." in front of the quantity to pack :)
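The underlying lesson is plain server-side validation. A minimal PHP sketch (field and variable names illustrative):

<?php
// Reject any quantity that isn't a positive integer before pricing.
$qty = isset($_POST['quantity']) ? $_POST['quantity'] : '';
if (!ctype_digit($qty) || (int)$qty < 1) {
    die('Invalid quantity');
}
$total = (int)$qty * $unit_price; // $unit_price looked up server-side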

322
Jeff Atwood

It looks like we'll be adding CAPTCHA support to Stack Overflow. This is necessary to prevent bots, spammers, and other malicious scripted activity. We only want human beings to post or edit things here!

We'll be using a JavaScript (jQuery) CAPTCHA as a first line of defense:

http://docs.jquery.com/Tutorials:Safer_Contact_Forms_Without_CAPTCHAs

The advantage of this approach is that, for most people, the CAPTCHA won't ever be visible!

However, for people with JavaScript disabled, we still need a fallback and this is where it gets tricky.

I have written a traditional CAPTCHA control for ASP.NET which we can re-use.

CaptchaImage

However, I'd prefer to go with something textual to avoid the overhead of creating all these images on the server with each request.

I've seen things like..

  • ASCII text captcha: \/\/(_)\/\/
  • math puzzles: what is 7 minus 3 times 2?
  • trivia questions: what tastes better, a toad or a popsicle?

Maybe I'm just tilting at windmills here, but I'd like to have a less resource intensive, non-image based <noscript> compatible CAPTCHA if possible.

Ideas?

Answered By: GateKiller ( 207)

A method that I have developed and which seems to work perfectly (although I probably don't get as much comment spam as you), is to have a hidden field and fill it with a bogus value e.g.:

<input type="hidden" id="antiSpam" name="antispam" value="lalalala" />

I then have a piece of JavaScript which updates the value every second with the number of seconds the page has been loaded for:

var antiSpam = function() {
    var a = document.getElementById("antiSpam");
    if (a) {
        if (isNaN(a.value)) {
            a.value = 0;
        } else {
            a.value = parseInt(a.value, 10) + 1;
        }
    }
    setTimeout(antiSpam, 1000);
};

antiSpam();

Then when the form is submitted, if the antispam value is still "lalalala", I mark it as spam. If the antispam value is an integer, I check to see if it is above something like 10 (seconds). If it's below 10, I mark it as spam; if it's 10 or more, I let it through.

If AntiSpam = A Integer
    If AntiSpam >= 10
        Comment = Approved
    Else
        Comment = Spam
Else
    Comment = Spam
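A minimal PHP rendering of that pseudocode, assuming the form posts the field as antispam:

<?php
// Approve only if the field holds an integer of at least 10 (seconds).
$antispam = isset($_POST['antispam']) ? $_POST['antispam'] : '';
if (ctype_digit($antispam) && (int)$antispam >= 10) {
    $status = 'approved';
} else {
    $status = 'spam'; // still "lalalala", non-numeric, or submitted too fast
}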

The theory being that:

  • A spam bot will not support JavaScript and will submit what it sees
  • If the bot does support JavaScript it will submit the form instantly
  • The commenter has at least read some of the page before posting

The downside to this method is that it requires JavaScript, and if you don't have JavaScript enabled, your comment will be marked as spam. However, I do review comments marked as spam, so this is not a problem.

Response to comments

@MrAnalogy: The server side approach sounds quite a good idea and is exactly the same as doing it in JavaScript. Good Call.

@AviD: I'm aware that this method is prone to direct attacks as I've mentioned on my blog. However, it will defend against your average spam bot which blindly submits rubbish to any form it can find.

312
luiscubal

It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection.

The question “Is ‘double hashing’ a password less secure than just hashing it once?” suggests that hashing multiple times may be a good idea, while “How to implement password protection for individual files?” suggests using salt.

I'm using PHP. I want a safe and fast password encryption system. Hashing a password a million times may be safer, but also slower. How to achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters.

  1. The hashing mechanism must be available in PHP
  2. It must be safe
  3. It can use salt (in this case, are all salts equally good? Is there any way to generate good salts?)

Also, should I store two fields in the database (one using MD5 and another using SHA, for example)? Would that make it more or less safe?

In case I wasn't clear enough, I want to know which hashing function(s) to use and how to pick a good salt in order to have a safe and fast password protection mechanism.

EDIT: The website shouldn't contain anything too sensitive, but still I want it to be secure.

EDIT2: Thank you all for your replies, I'm using hash("sha256",$salt.":".$password.":".$id) (EDIT 3: I haven't used sha256 in some time now, see the accepted answer for a better mechanism)

Questions that didn't help:

  • What's the difference between SHA and MD5 in PHP
  • Simple Password Encryption
  • Secure methods of storing keys, passwords for asp.net
  • How would you implement salted passwords in Tomcat 5.5

Answered By: Robert K ( 301)

TL;DR

Don'ts

  • Don't limit what characters users can enter for passwords. Only idiots do this.
  • Don't limit the length of a password. If your users want a sentence with supercalifragilisticexpialidocious in it, don't prevent them from using it.
  • Never store your user's password in plain-text.
  • Never email a password to your user except when they have lost theirs, and you sent a temporary one.
  • Never, ever log passwords in any manner.
  • Never hash passwords with SHA1 or MD5! Modern crackers can exceed 60 and 180 billion hashes/second (respectively).

Do's

  • Use scrypt when you can; bcrypt if you cannot.
  • Use PBKDF2 if you cannot use either bcrypt or scrypt, with SHA2 hashes.
  • Reset everyone's passwords when the database is compromised.
  • Implement a reasonable 8-10 character minimum length, plus require at least 1 upper case letter, 1 lower case letter, a number, and a symbol. This will improve the entropy of the password, in turn making it harder to crack. (See the "What makes a good password?" section for some debate.)

Why hash passwords anyway?

The objective behind hashing passwords is simple: preventing an attacker who compromises the database from gaining access to user accounts. So the goal of password hashing is to deter a hacker or cracker by costing them too much time or money to calculate the plain-text passwords. And time/cost are the best deterrents in your arsenal.

Another reason that you want a good, robust hash on user accounts is to give you enough time to change all the passwords in the system. If your database is compromised you will need enough time to at least lock the system down, if not change every password in the database.

Jeremiah Grossman, CTO of Whitehat Security, stated on his blog after a recent password recovery that required brute-force breaking of his password protection:

Interestingly, in living out this nightmare, I learned A LOT I didn’t know about password cracking, storage, and complexity. I’ve come to appreciate why password storage is ever so much more important than password complexity. If you don’t know how your password is stored, then all you really can depend upon is complexity. This might be common knowledge to password and crypto pros, but for the average InfoSec or Web Security expert, I highly doubt it.

(Emphasis mine.)

What makes a good password anyway?

Entropy. (Not that I fully subscribe to Randall's viewpoint.)

In short, entropy is how much variation is within the password. When a password is only lowercase roman letters, that's only 26 characters. That isn't much variation. Alpha-numeric passwords are better, with 36 characters. But allowing upper and lower case, with symbols, is roughly 96 characters. That's a lot better than just letters. One problem is, to make our passwords memorable we insert patterns—which reduces entropy. Oops!

Password entropy is approximated easily. Using the full range of ASCII characters (roughly 96 typeable characters) yields an entropy of 6.6 bits per character, which at 8 characters per password is still too low (52.679 bits of entropy) for future security. But the good news is: longer passwords, and passwords with unicode characters, really increase the entropy of a password and make it harder to crack.

There's a longer discussion of password entropy on the Crypto StackExchange site. A good Google search will also turn up a lot of results.

In the comments I talked with @popnoodles, who pointed out that enforcing a password policy of X length with X many letters, numbers, symbols, etc. can actually reduce entropy by making the password scheme more predictable. I do agree. Randomness, as truly random as possible, is always the safest but least memorable solution.

So far as I've been able to tell, making the world's best password is a Catch-22. Either it's not memorable, too predictable, too short, too many unicode characters (hard to type on a Windows/Mobile device), too long, etc. No password is truly good enough for our purposes, so we must protect them as though they were in Fort Knox.

Best practices

Bcrypt and scrypt are the current best practices. Scrypt will be better than bcrypt in time, but it hasn't seen adoption as a standard by Linux/Unix or by webservers, and hasn't had in-depth reviews of its algorithm posted yet. But still, the future of the algorithm does look promising. If you are working with Ruby there is an scrypt gem that will help you out, and Node.js now has its own scrypt package.

I highly suggest reading the documentation for the crypt function if you want to roll your own use of bcrypt, or find yourself a good wrapper, or use something like PHPASS for a more legacy implementation. I recommend a minimum of 12 rounds of bcrypt, if not 15 to 18.
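To make the crypt() route concrete, here is a hedged sketch; it assumes PHP >= 5.3.7 for the $2y$ scheme and the OpenSSL extension for random bytes, and the function names are illustrative:

<?php
// bcrypt through crypt(): "$2y$" selects the Blowfish scheme, the cost
// is the work factor recommended above, and the salt must be exactly
// 22 characters from the alphabet [./0-9A-Za-z].
function bcrypt_hash($password, $cost = 12) {
    $raw  = openssl_random_pseudo_bytes(16); // assumes the OpenSSL extension
    $salt = substr(strtr(base64_encode($raw), '+', '.'), 0, 22);
    return crypt($password, sprintf('$2y$%02d$%s', $cost, $salt));
}

// Verify by re-running crypt() with the stored hash as the salt spec.
function bcrypt_check($password, $hash) {
    return crypt($password, $hash) === $hash;
}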

I changed my mind about using bcrypt when I learned that bcrypt only uses blowfish's key schedule, with a variable cost mechanism. The latter lets you increase the cost to brute-force a password by increasing blowfish's already expensive key schedule.

Average practices

I almost can't imagine this situation anymore. PHPASS supports PHP 3.0.18 through 5.3, so it is usable on almost every installation imaginable—and should be used if you don't know for certain that your environment supports bcrypt.

But suppose that you cannot use bcrypt or PHPASS at all. What then?

Try an implementation of PBKDF2 with the minimum number of rounds that your environment/application/user-perception can tolerate. The lowest number I'd recommend is 2500 rounds. Also, make sure to use hash_hmac() if it is available to make the operation harder to reproduce.
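For reference, a minimal PBKDF2 sketch built on hash_hmac(), following RFC 2898; PHP 5.5 later ships a native hash_pbkdf2(), so treat this as a fallback for older environments:

<?php
function pbkdf2($algo, $password, $salt, $rounds, $length) {
    $hashLen = strlen(hash($algo, '', true)); // digest size in bytes
    $blocks  = ceil($length / $hashLen);
    $output  = '';
    for ($i = 1; $i <= $blocks; $i++) {
        // U1 = HMAC(password, salt || block index)
        $last = $xor = hash_hmac($algo, $salt . pack('N', $i), $password, true);
        for ($j = 1; $j < $rounds; $j++) {
            // U_j = HMAC(password, U_{j-1}); the block is the XOR of all U_j
            $last = hash_hmac($algo, $last, $password, true);
            $xor ^= $last;
        }
        $output .= $xor;
    }
    return substr($output, 0, $length);
}

// e.g. a 32-byte key with SHA-256 and the 2500-round floor suggested above:
// $key = pbkdf2('sha256', $password, $salt, 2500, 32);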

Future Practices

Coming in PHP 5.5 is a full password protection library that abstracts away any pains of working with bcrypt. While most of us are stuck with PHP 5.2 and 5.3 in most common environments, especially shared hosts, @ircmaxell has built a compatibility layer for the coming API that is backward compatible to PHP 5.3.7.
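With that compatibility layer (or PHP 5.5 itself), usage reduces to the new password_* functions:

<?php
// PHP 5.5 password API (or ircmaxell's password_compat on 5.3.7+).
$hash = password_hash($password, PASSWORD_BCRYPT, array('cost' => 12));

if (password_verify($password, $hash)) {
    // Valid login; transparently upgrade the hash if the cost was raised.
    if (password_needs_rehash($hash, PASSWORD_BCRYPT, array('cost' => 12))) {
        $hash = password_hash($password, PASSWORD_BCRYPT, array('cost' => 12));
    }
}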

Cryptography Recap & Disclaimer

The computational power required to reverse a hashed password directly doesn't exist. The only way for computers to "crack" a password is to recreate it and simulate the hashing algorithm used to secure it. The speed of the hash is linearly related to its ability to be brute-forced. Worse still, most hash algorithms can be easily parallelized to perform even faster. This is why costly schemes like bcrypt and scrypt are so important.

You cannot possibly foresee all threats or avenues of attack, and so you must make your best effort to protect your users up front. If you do not, then you might even miss the fact that you were attacked until it's too late... and you're liable. To avoid that situation, act paranoid to begin with. Attack your own software (internally) and attempt to steal user credentials, or modify other users' accounts or access their data. If you don't test the security of your system, then you cannot blame anyone but yourself.

Lastly: I am not a cryptographer. Whatever I've said is my opinion, but I happen to think it's based on good ol' common sense ... and lots of reading. Remember, be as paranoid as possible, make things as hard to intrude as possible, and then, if you are still worried, contact a white-hat hacker or cryptographer to see what they say about your code/system.

Just looking at:

XKCD Strip (Source: http://xkcd.com/327/)

What does this SQL do:

Robert'); DROP TABLE STUDENTS; --

I know both ' and -- are for comments, but doesn't the word DROP get commented as well since it is part of the same line?

Answered By: Will ( 514)

It drops the students table.

The original code in the school's program probably looks something like

qry = "INSERT INTO Students VALUES ('" + tbName.Text + "', " + tbSSN.Text + ")";

This is the naive way to add text input into a query, and is very bad, as you will see.

After the values for tbName.Text (which is Robert'); DROP TABLE STUDENTS; --) and tbSSN.Text (let's call it 555443333) are concatenated with the rest of the query, the result is now actually two queries separated by the statement terminator (semicolon). The second query has been injected into the first. When the code executes this query against the database, it will look like this

INSERT INTO Students VALUES ('Robert'); DROP TABLE Students; --', 555443333)

which, in plain English, roughly translates to the two queries:

Add a new record to the Students table with a Name value of 'Robert'

and

Delete the Students table

Everything past the second query is marked as a comment: --', 555443333)

The ' in the student's name is not a comment, it's the closing string delimiter. Since the student's name is a string, it's needed syntactically to complete the hypothetical query. Injection attacks only work when the SQL query they inject results in good SQL (good being very relative in this case).

Edited as per dan04's astute comment
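The fix is to keep data out of the query string entirely with parameterized statements. For illustration, a PHP/PDO sketch (the original snippet looks like C#, and the column names here are made up, but every platform has an equivalent):

<?php
// Placeholders are sent separately from the SQL, so the name can never
// terminate the statement, no matter what it contains.
$stmt = $pdo->prepare('INSERT INTO Students (name, ssn) VALUES (?, ?)');
$stmt->execute(array($name, $ssn)); // "Robert'); DROP TABLE Students; --" is just a name here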

281
tylerl

I'm trying to build a list of functions that can be used for arbitrary code execution. The purpose isn't to list functions that should be blacklisted or otherwise disallowed. Rather, I'd like to have a grep-able list of red-flag keywords handy when searching a compromised server for back-doors.

The idea is that if you want to build a multi-purpose malicious PHP script -- such as a "web shell" script like c99 or r57 -- you're going to have to use one or more of a relatively small set of functions somewhere in the file in order to allow the user to execute arbitrary code. Searching for those functions helps you more quickly narrow down a haystack of tens of thousands of PHP files to a relatively small set of scripts that require closer examination.

Clearly, for example, any of the following would be considered malicious (or terrible coding):

<? eval($_GET['cmd']); ?>

<? system($_GET['cmd']); ?>

<? preg_replace('/.*/e',$_POST['code']); ?>

and so forth.

Searching through a compromised website the other day, I didn't notice a piece of malicious code because I didn't realize preg_replace could be made dangerous by the use of the /e flag (which, seriously? Why is that even there?). Are there any others that I missed?

Here's my list so far:

Shell Execute

  • system
  • exec
  • popen
  • backtick operator
  • pcntl_exec

PHP Execute

  • eval
  • preg_replace (with /e modifier)
  • create_function
  • include[_once] / require[_once] (see mario's answer for exploit details)

It might also be useful to have a list of functions that are capable of modifying files, but I imagine 99% of the time exploit code will contain at least one of the functions above. But if you have a list of all the functions capable of editing or outputting files, post it and I'll include it here. (And I'm not counting mysql_execute, since that's part of another class of exploit.)

Answered By: Rook ( 207)

To build this list I used two sources: A Study In Scarlet and RATS. I have also added some of my own to the mix, and people on this thread have helped out.

Edit: After posting this list I contacted the founder of RIPS, and as of now this tool searches PHP code for the use of every function in this list.

Most of these function calls are classified as Sinks. When a tainted variable (like $_REQUEST) is passed to a sink function, then you have a vulnerability. Programs like RATS and RIPS use grep-like functionality to identify all sinks in an application. This means that programmers should take extra care when using these functions, but if they were all banned then you wouldn't be able to get much done.

"With great power comes great responsibility."

--Stan Lee

Command Execution

exec           - Returns last line of commands output
passthru       - Passes commands output directly to the browser
system         - Passes commands output directly to the browser and returns last line
shell_exec     - Returns commands output
`` (backticks) - Same as shell_exec()
popen          - Opens read or write pipe to process of a command
proc_open      - Similar to popen() but greater degree of control
pcntl_exec     - Executes a program

PHP Code Execution

Apart from eval there are other ways to execute PHP code: include/require can be used for remote code execution in the form of Local File Include and Remote File Include vulnerabilities.

eval()
assert()  - identical to eval()
preg_replace('/.*/e',...) - /e does an eval() on the match
create_function()
include()
include_once()
require()
require_once()
$_GET['func_name']($_GET['argument']);
$func = new ReflectionFunction($_GET['func_name']); $func->invoke(); or $func->invokeArgs(array());

List of functions which accept callbacks

These functions accept a string parameter which could be used to call a function of the attacker's choice. Depending on the function the attacker may or may not have the ability to pass a parameter. In that case an Information Disclosure function like phpinfo() could be used.

Function                     => Position of callback arguments
'ob_start'                   =>  0,
'array_diff_uassoc'          => -1,
'array_diff_ukey'            => -1,
'array_filter'               =>  1,
'array_intersect_uassoc'     => -1,
'array_intersect_ukey'       => -1,
'array_map'                  =>  0,
'array_reduce'               =>  1,
'array_udiff_assoc'          => -1,
'array_udiff_uassoc'         => array(-1, -2),
'array_udiff'                => -1,
'array_uintersect_assoc'     => -1,
'array_uintersect_uassoc'    => array(-1, -2),
'array_uintersect'           => -1,
'array_walk_recursive'       =>  1,
'array_walk'                 =>  1,
'assert_options'             =>  1,
'uasort'                     =>  1,
'uksort'                     =>  1,
'usort'                      =>  1,
'preg_replace_callback'      =>  1,
'spl_autoload_register'      =>  0,
'iterator_apply'             =>  1,
'call_user_func'             =>  0,
'call_user_func_array'       =>  0,
'register_shutdown_function' =>  0,
'register_tick_function'     =>  0,
'set_error_handler'          =>  0,
'set_exception_handler'      =>  0,
'session_set_save_handler'   => array(0, 1, 2, 3, 4, 5),
'sqlite_create_aggregate'    => array(2, 3),
'sqlite_create_function'     =>  2,

Information Disclosure

Most of these function calls are not sinks; rather, it may be a vulnerability if any of the data returned is viewable to an attacker. If an attacker can see phpinfo() it is definitely a vulnerability.

phpinfo
posix_mkfifo
posix_getlogin
posix_ttyname
getenv
get_current_user
proc_get_status
get_cfg_var
disk_free_space
disk_total_space
diskfreespace
getcwd
getlastmod
getmygid
getmyinode
getmypid
getmyuid

Other

extract - Opens the door for register_globals attacks (see study in scarlet).
parse_str -  works like extract if only one argument is given.  
putenv
ini_set
mail - has CRLF injection in the 3rd parameter, opens the door for spam. 
header - on old systems CRLF injection could be used for xss or other purposes, now it is still a problem if they do a header("location: ..."); and they do not die();. The script keeps executing after a call to header(), and will still print output normally. This is nasty if you are trying to protect an administrative area. 
proc_nice
proc_terminate
proc_close
pfsockopen
fsockopen
apache_child_terminate
posix_kill
posix_mkfifo
posix_setpgid
posix_setsid
posix_setuid

Filesystem Functions

According to RATS, all filesystem functions in PHP are nasty. Some of these don't seem very useful to the attacker. Others are more useful than you might think. For instance, if allow_url_fopen=On then a URL can be used as a file path, so a call to copy($_GET['s'], $_GET['d']); can be used to upload a PHP script anywhere on the system. Also, if a site is vulnerable to a request sent via GET, every one of those filesystem functions can be abused to channel an attack to another host through your server.

// open filesystem handler
fopen
tmpfile
bzopen
gzopen
SplFileObject->__construct
// write to filesystem (partially in combination with reading)
chgrp
chmod
chown
copy
file_put_contents
lchgrp
lchown
link
mkdir
move_uploaded_file
rename
rmdir
symlink
tempnam
touch
unlink
imagepng   - 2nd parameter is a path.
imagewbmp  - 2nd parameter is a path. 
image2wbmp - 2nd parameter is a path. 
imagejpeg  - 2nd parameter is a path.
imagexbm   - 2nd parameter is a path.
imagegif   - 2nd parameter is a path.
imagegd    - 2nd parameter is a path.
imagegd2   - 2nd parameter is a path.
iptcembed
ftp_get
ftp_nb_get
// read from filesystem
file_exists
file_get_contents
file
fileatime
filectime
filegroup
fileinode
filemtime
fileowner
fileperms
filesize
filetype
glob
is_dir
is_executable
is_file
is_link
is_readable
is_uploaded_file
is_writable
is_writeable
linkinfo
lstat
parse_ini_file
pathinfo
readfile
readlink
realpath
stat
gzfile
readgzfile
getimagesize
imagecreatefromgif
imagecreatefromjpeg
imagecreatefrompng
imagecreatefromwbmp
imagecreatefromxbm
imagecreatefromxpm
ftp_put
ftp_nb_put
exif_read_data
read_exif_data
exif_thumbnail
exif_imagetype
hash_file
hash_hmac_file
hash_update_file
md5_file
sha1_file
highlight_file
show_source
php_strip_whitespace
get_meta_tags

272
Nathan

When designing a REST API or service, are there any established best practices for dealing with security (Authentication, Authorization, Identity Management)?

When building a SOAP API you have WS-Security as a guide and much literature exists on the topic. I have found less information about securing REST endpoints.

While I understand REST intentionally does not have specifications analogous to WS-* I am hoping best practices or recommended patterns have emerged.

Any discussion or links to relevant documents would be very much appreciated. If it matters, we would be using WCF with POX/JSON serialized messages for our REST APIs/services built using v3.5 of the .NET Framework.

Answered By: Greg Hewgill ( 105)

As tweakt said, Amazon S3 is a good model to work with. Their request signatures do have some features (such as incorporating a timestamp) that help guard against both accidental and malicious request replaying.

The nice thing about HTTP Basic is that virtually all HTTP libraries support it. You will, of course, need to require SSL in this case because sending plaintext passwords over the net is almost universally a bad thing. Basic is preferable to Digest when using SSL because even if the caller already knows that credentials are required, Digest requires an extra roundtrip to exchange the nonce value. With Basic, the caller simply sends the credentials the first time.

Once the identity of the client is established, authorization is really just an implementation problem. However, you could delegate the authorization to some other component with an existing authorization model. Again the nice thing about Basic here is your server ends up with a plaintext copy of the client's password that you can simply pass on to another component within your infrastructure as needed.
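For illustration, a minimal PHP sketch of the S3-style signature idea described above; the header names, string-to-sign layout, and variables are all made up for the example, not a spec:

<?php
// Client side: sign the request with a shared secret and a timestamp.
// The timestamp lets the server reject stale (replayed) requests.
$timestamp    = time();
$stringToSign = $method . "\n" . $path . "\n" . $timestamp;
$signature    = base64_encode(hash_hmac('sha256', $stringToSign, $secretKey, true));

// Sent as e.g. X-Auth-Key, X-Auth-Timestamp, X-Auth-Signature headers.
// The server looks up the secret for the key, recomputes the signature,
// compares the two, and checks that the timestamp is recent.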

256
UltimateBrent

Is there a catch-all function somewhere that works well for sanitizing user input against SQL injection and XSS attacks, while still allowing certain types of HTML tags?

Answered By: troelskn ( 369)

It's a common misconception that user input can be filtered. PHP even has a (now deprecated) "feature", called magic-quotes, that builds on this idea. It's nonsense. Forget about filtering (Or cleaning, or whatever people call it).

What you should do to avoid problems is quite simple: whenever you embed a string within foreign code, you must escape it according to the rules of that language. For example, if you embed a string in some SQL targeting MySql, you must escape the string with MySql's function for this purpose (mysql_real_escape_string).

Another example is HTML; if you embed strings within HTML markup, you must escape them with htmlspecialchars. This means that every single echo or print statement should use htmlspecialchars.

A third example could be shell commands; if you are going to embed strings (such as arguments) in external commands, and call them with exec, then you must use escapeshellcmd and escapeshellarg.

And so on and so forth ...

The only case where you need to actively filter data is if you're accepting preformatted input, e.g. if you let your users post HTML markup that you plan to display on the site. However, you would be wise to avoid this at all costs, since no matter how well you filter it, it will always be a potential security hole.
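Putting the above together, a minimal sketch of context-specific escaping in PHP; $input is raw user data, and the file and table names are illustrative:

<?php
// Each output context gets its own escaping function; never mix them up.

// SQL (MySQL): requires an open connection. Better yet, use parameterized
// queries as shown in the SQL injection answer above.
$sql = "SELECT * FROM users WHERE name = '" . mysql_real_escape_string($input) . "'";

// HTML: every echo/print of user data goes through htmlspecialchars.
echo htmlspecialchars($input, ENT_QUOTES, 'UTF-8');

// Shell: escape each argument before building the command line.
$out = shell_exec('grep ' . escapeshellarg($input) . ' logfile.txt');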