What is some current research going on in foundations about?
It is quite difficult to answer this question comprehensively. It's a bit like asking "so what's been going on in analysis lately?" It is probably best if logicians who work in various areas each answer what is going on in their area. I will speak about logic in computer science. I am very curious to see what logicians from other areas have to say about their branches of logic.
A hundred years ago logic was driven by philosophical and foundational questions about mathematics. The development of first-order logic and ZFC was one important branch of this work; model theory and computability theory were others. In fact, computability theory led directly to the invention of modern computers. Ever since, logic has had a very fruitful relationship with computer science.
One could argue that applications of logic in computer science are not about foundations of mathematics, but such an opinion can hardly be defended. Of course, many applications of logic in computer science are just that, applications, but not all of them. Some are directly related to foundations of mathematics (see below), while others are about understanding the nature of computation itself (Turing got us on the right track, but there is a lot more to be said about computation than "it's all equivalent to Turing machines"). I hope you can agree that computation is a foundational concept in mathematics, on a par with such concepts as (mathematical) structure and deductive method.
We live in an era of logic engineering: we are learning how to apply the methods of logic in situations that were not envisioned 100 years ago. Consequently, we need to devise new kinds of logic that are well suited to the new problems at hand. To give just one example: real-world computer systems often work concurrently with each other in an environment that behaves stochastically. Computer scientists have learned that Turing machines, first-order logic and set theory are not a very good way of describing or analyzing such systems (the computers are of course no more powerful than Turing machines and can be simulated by them, but that is hardly a helpful way to look at things). Instead they devise new kinds of logic that are well suited to their problems. Often we design logics that have good computational properties, so that machines can use them.
Another aspect of logic in computer science, which is closer to a mathematician's heart, is computer-supported formalization of mathematics. Humans are not able to produce complete formal proofs of mathematical theorems; instead we adhere to what is best described as "informal rigour". Computers, however, are very good at checking all the details of a mathematical proof. Modern tools, such as the various proof assistants (Coq, HOL, Agda, Lean), are improving at a fast rate, and many believe they will soon become useful tools for the working mathematician. Be that as it may, the efforts to actually do mathematics formally are driving research into how this can feasibly be done. While some proof assistants do use first-order logic and set theory (e.g. Mizar), most use type theory. This is probably not just a fashion: it is a response to the fact that first-order logic and set theory suffice for the formalization of mathematics in principle, but not so much in practice. Logicians have long been telling us that in principle all mathematics can be formalized. Now that we are actually doing it, we are learning new things about logic. This, to my mind, is one of the most interesting contemporary developments in the foundations of mathematics.
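For readers who have never seen a proof assistant, here is a minimal sketch of what machine-checked mathematics looks like, written in Lean 4 syntax. The statements are deliberately trivial, and `Nat.add_comm` is a lemma from Lean's standard library; take this as an illustration of the workflow rather than as a tutorial.

```lean
-- A minimal illustration of machine-checked proofs (Lean 4 syntax).
-- The proof-checking kernel verifies every inference step, so nothing
-- is taken on faith.

-- A concrete equality, established by computation:
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A general statement about natural numbers, discharged by citing the
-- standard-library lemma Nat.add_comm; the kernel re-checks the proof:
example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```

The interest, of course, is not in toy statements like these, but in the fact that the same mechanism scales, with considerable human effort, to substantial theorems.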
When I think about foundations, I think of reducing everything to ZFC set theory + first-order logic. Is this still contemporary?
The only sense in which this is contemporary is that nowadays, people are actually constructing formal proofs using computerized proof assistants, as mentioned in Andrej Bauer's answer. As Andrej mentioned, Mizar is based on set theory and first-order logic, but most proof assistants use type theory. One system not mentioned by Andrej is homotopy type theory as pioneered by Voevodsky and others. Some highly nontrivial mathematical theorems have been fully formalized, such as the Feit–Thompson theorem and the Kepler conjecture.
Another active area of research in foundations is reverse mathematics, which is concerned with examining exactly which axioms are needed for various portions of mathematics. It has long been recognized that ZFC is far stronger than necessary for formalizing most of mathematics, and so there is interest in figuring out which axioms are really needed. The standard introduction to reverse mathematics is Simpson's book, Subsystems of Second-Order Arithmetic. However, there is a lot of research into axiomatic systems not treated in Simpson's book. There are weaker systems, such as bounded arithmetic, which have close connections to computational complexity theory. In the other direction, Harvey Friedman has a long-term research program to construct natural-looking elementary combinatorial statements that cannot be proved except by assuming (the consistency of) large cardinal axioms.
Although Gödel's results tell us that we cannot expect a general decision procedure for theoremhood, there continues to be research into the decidability of various finite fragments of mathematics. As mentioned by Harvey Friedman, all three-quantifier sentences of first-order set theory are decided in a weak fragment of ZF. It is also an open problem whether Hilbert's Tenth Problem is decidable over $\mathbb Q$.
By the way, it is perhaps worth mentioning that some people equate "foundations" with "logic and set theory." I personally don't agree with this equation. There is a lot of interesting technical work in logic and set theory that I would not describe as being concerned with the foundations of mathematics as such. If you're interested in current research topics in logic, this MO question provides some information.
The foundations of mathematics has undergone an astounding series of revolutions in the 20th century, which continues into the 21st. Many of the most spectacular discoveries have been in set theory, although revolutionary developments have occurred in other branches as well.
Here is a brief list of some of these developments.
1938. Gödel's definition of the constructible universe L.
1963. Cohen's proof that the continuum hypothesis is independent of ZFC, provided ZFC is consistent.
1963 (and later). The discovery of the metamathematical properties of measurable cardinals.
1966–67. Silver and Solovay's discovery of $0^\#$.
1970 (and later). The connections between determinacy and large cardinal properties. http://plato.stanford.edu/entries/large-cardinals-determinacy, section 3
1972. Jensen's discovery of combinatorial principles which hold in L.
1977. Paris and Harrington produce a combinatorial problem independent of Peano arithmetic.
1998. Friedman produces a combinatorial problem independent of ZFC (in fact having large cardinal strength). http://scirate.com/arxiv/math/9811187
A more extensive compilation would require an entire book. To mention one more area, many logicians agree that ZFC needs to be extended with new axioms; see this or this for example. In 2011 I wrote a paper giving some new axioms that are sufficiently conservative and well-justified that, arguably, they could be adopted at this point.