Java 7 New Features!


Today I will introduce some of the new features of Java 7.

Java 7 had several priorities for the Java platform:

  1. Grow Developer Base
  2. Grow Adoption
  3. Increase Competitiveness
  4. Adapt to Change

Java language principles

  1. Reading is more important than writing
  2. Code should be a joy to read
  3. The language should not hide what is happening
  4. Code should do what it seems to do
  5. Simplicity matters
  6. Every “good” feature adds more “bad” weight
  7. Sometimes it is best to leave things out
  8. One language: with the same meaning everywhere
  9. No dialects
  10. We will evolve the Java language
  11. But cautiously, with a long term view
  12. “first do no harm”
So if we want to change the language, what do we need to do?
  1. Update the java language Spec.
  2. Compiler Implementation.
  3. Essential library Support
  4. Write less
  5. Update the JVM Specs
  6. Future Language evolution.
  7. Update the JVM and the class file tools
  8. Update The JNI
  9. Update the reflective API
  10. Update Serialization
  11. Update Java doc output
  12. Update Kinds of compatibility
So the Java 7 release contains:
  1. Java Language
  2. Project Coin (JSR-334)
  3. Class Libraries
  4. NIO2 (JSR-203)
  5. Fork-Join framework, ParallelArray (JSR-166y)
  6. Java Virtual Machine
  7. The DaVinci Machine project (JSR-292)
  8. InvokeDynamic bytecode
  9. Miscellaneous things
  10. JSR-336: Java SE 7 Release Contents
So what's new in Java 7? Let's find out.
1. Better Integer Literals
Binary literals
int mask = 0b101010101010;
and for clarity we can use underscores:
int mask = 0b1010_1010_1010;
long big = 9_223_783_036_967_937L;
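A quick self-contained check of the new literal forms (the class name here is just for illustration):

```java
public class LiteralDemo {
    public static void main(String[] args) {
        // Binary literal; underscores are ignored by the compiler
        int mask = 0b1010_1010_1010;
        System.out.println(mask);                          // 2730
        System.out.println(Integer.toBinaryString(mask));  // 101010101010

        // Underscores work in decimal literals too
        long big = 9_223_783_036_967_937L;
        System.out.println(big == 9223783036967937L);      // true
    }
}
```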

2. String Switch Statement :

It's like a dream come true. It was a very bad thing that you could only use the switch statement with integers, characters and enums; now we can use Strings too.

int monthNameToDays(String s, int year) {
    switch (s) {
        case "April": case "June":
        case "September": case "November":
            return 30;
        case "January": case "March":
        case "May": case "July":
        case "August": case "December":
            return 31;
        case "February":
            // completing the truncated snippet: handle leap years
            return ((year % 4 == 0 && year % 100 != 0) || year % 400 == 0) ? 29 : 28;
        default:
            throw new IllegalArgumentException("Not a month: " + s);
    }
}

3. Simplifying Generics

When Java 5 introduced generics, it became possible to declare a collection that holds a specific type of object, like this:

ArrayList<String> testList  = new ArrayList<String>();

But what about a generic inside the generic itself, that is, nested generics? Like this:

List<String> strList = new ArrayList<String>();
List<Map<String, List<String>>> strMapList = new ArrayList<Map<String, List<String>>>();

JDK 7 introduces the diamond operator: now you only need to specify the type of elements held by the collection once, on the left-hand side; the rest is taken care of for you, as Java will "infer" the type that goes on the right-hand side.

This is more complex than just having the compiler perform string substitution.  For certain cases the type to be inserted is not represented by the string in the variable declaration (wildcards are a good example).  The compiler must infer the type parameter for the instantiation from the type parameter of the variable declaration.

List<String> strList = new ArrayList<>();
List<Map<String, List<String>>> strMapList = new ArrayList<>();

4. Let's move into a different area: automatic resource management.

First let's study this code. We have two streams, one for reading data (in) and one for writing data (out). The code reads from the in stream and writes what is read into the out stream.

InputStream in = new FileInputStream(src);
OutputStream out = new FileOutputStream(dest);
byte[] buf = new byte[8192];
int n;
while ((n = in.read(buf)) >= 0)
    out.write(buf, 0, n);

The first issue with this code is that when you use resources, you need to make sure you close them after you finish. Even if there is an exception reading from or writing to them, you should still be able to close the streams.

This code here is a bit better, but we still haven't got it perfect. Let's imagine you get an exception reading from the in stream; you will then be sent to the finally clause. Great, that sounds perfect. But imagine that while executing the finally block you get another exception when you try to close the in stream: the code will then never close the out stream. Not good.

InputStream in = new FileInputStream(src);
OutputStream out = new FileOutputStream(dest);
try {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) >= 0)
        out.write(buf, 0, n);
} finally {
    in.close();
    out.close();
}

Then the solution will be to use two different try blocks.  In one finally block we close the out stream, and in the second one we close the in stream.

The code looks good, but it is a bit complex. When you have longer, more complex code, it's very easy to forget to close the resources properly.

InputStream in = new FileInputStream(src);
try {
    OutputStream out = new FileOutputStream(dest);
    try {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) >= 0)
            out.write(buf, 0, n);
    } finally {
        out.close();
    }
} finally {
    in.close();
}

Even if the code looks correct, we can be losing very important information about the exceptions.

Let's imagine you got really unlucky today: when execution started, you got a first exception trying to write to the out stream. Execution jumps to the first finally block, where the out stream is closed. Again, remember you are really unlucky today, so this close method call generates a second exception, and you are thrown into the second finally block. This seems to be OK; the problem we are facing is loss of information. As it stands, we only keep information about the last exception that occurred, meaning at this point we have already forgotten there was an exception trying to write to the stream, and usually the first exception is the most important and meaningful one.

Now in the second finally block we get a third exception; one more time, the new exception erases any previous information, and now we only know there was an exception when trying to close the in stream.

InputStream in = new FileInputStream(src);
try {
    OutputStream out = new FileOutputStream(dest);
    try {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) >= 0)
            out.write(buf, 0, n);   // exception 1 can be thrown here...
    } finally {
        out.close();                // ...exception 2 here...
    }
} finally {
    in.close();                     // ...and exception 3 here
}

Exceptions can be thrown from potentially three places, and the details of the first two could be lost.

JDK 7 introduces a solution to the issues presented above: automatic resource management (try-with-resources). Now you don't have to worry about closing your resources; they are automatically closed when the try block finishes execution. Note the new form of the try block, which declares the resources to be managed.

try (InputStream in = new FileInputStream(src);
     OutputStream out = new FileOutputStream(dest)) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) >= 0)
        out.write(buf, 0, n);
}


  1. The compiler desugars try-with-resources into nested try-finally blocks with variables to track exception state
  2. Suppressed exceptions are recorded for posterity using a new facility of Throwable
  3. API support in JDK 7
  4. New superinterface java.lang.AutoCloseable
  5. All AutoCloseable types, and by extension all Closeable types, are usable with try-with-resources
  6. Anything with a void close() method is a candidate
  7. JDBC 4.1 retrofitted as AutoCloseable too
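A minimal sketch of the suppression mechanism described above, using a hypothetical resource whose close() always fails:

```java
// Hypothetical resource for illustration: its close() always throws,
// so we can see how try-with-resources records the close failure
// as a suppressed exception instead of losing the original one.
class FailingResource implements AutoCloseable {
    @Override public void close() {
        throw new IllegalStateException("failure in close()");
    }
}

public class SuppressedDemo {
    public static void main(String[] args) {
        try {
            try (FailingResource r = new FailingResource()) {
                throw new RuntimeException("failure in body");
            }
        } catch (RuntimeException e) {
            // The body's exception is the primary one...
            System.out.println(e.getMessage());         // failure in body
            // ...and the close() exception is attached, not lost.
            for (Throwable t : e.getSuppressed()) {
                System.out.println(t.getMessage());     // failure in close()
            }
        }
    }
}
```

Under the old nested try-finally pattern, the exception from close() would have replaced the one from the body; here both survive.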
5. Varargs Warnings

Let's study this code closely, as we get an unchecked generic array warning.

It makes heavy use of the static method asList from the Arrays class, defined as:

public static <T> List<T> asList(T... a)    Returns a fixed-size list backed by the specified array.

Let's go back to the code. We are defining monthsInTwoLanguages, and we are expecting it to be a List<List<String>>.

First we call Arrays.asList("January", "February"), which returns a List<String>, in other words a list with the months in English. Then we call Arrays.asList("Gennaio", "Febbraio"), and again we get a List<String>, in this case a list with the months in Italian.

Then we call Arrays.asList(Arrays.asList("January", "February"), Arrays.asList("Gennaio", "Febbraio")); in this case the call is effectively Arrays.asList(List<String>, List<String>), per the previous paragraph. We would then expect to get a List<List<String>> as the result, which is the expected type for our definition, so why are we getting this warning?

When a generic type is instantiated in Java, the compiler translates those types by a technique called type erasure, a process where the compiler removes all information related to type parameters and type arguments within a class or method. This is done to maintain binary compatibility with Java libraries and applications that were created before generics.

Then when we do the first asList call, the only stored information is List; we don't store any information about String, the type of elements held by this list. It's the same scenario for the second asList call. When we finally get to the third asList call, we are returning a new List<List<???>>, but we didn't store any information about String, so the compiler doesn't know what ??? means. In our case we are sure that we have String types, but the compiler doesn't have this information and cannot verify it, and that's the reason why we get the warning. There could be a potential issue.

class Test {
    public static void main(String... args) {
        List<List<String>> monthsInTwoLanguages =
            Arrays.asList(Arrays.asList("January", "February"),
                          Arrays.asList("Gennaio", "Febbraio"));
    }
}

warning: [unchecked] unchecked generic array creation
for varargs parameter of type List<String>[]
1 warning
6. Heap Pollution – JLSv3

The problem explained previously is called heap pollution, and the compiler reports a possible location of a ClassCastException.

As we mentioned previously, our code was correct because the List is holding Strings, but it could hold a different type of object, and the compiler simply doesn't have this information.

  1. A variable of a parameterized type refers to an object that is not of that parameterized type
  2. For example, the variable of type List<String>[] might point to an array of Lists where the Lists did not contain strings
  3. Reports possible locations of ClassCastExceptions at runtime
  4. A consequence of erasure
  5. Possibly properly addressed by reification in the future

The varargs warning was revised in JDK 7, and new annotations were added to suppress unwanted warnings. Just make sure you are aware that suppressing warnings can potentially hide real issues in your code.

  1. New mandatory compiler warning at suspect varargs method declarations
  2. By applying an annotation at the declaration, warnings at the declaration and call sites can be suppressed
  3. @SuppressWarnings(value = "unchecked")
  4. @SafeVarargs
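A sketch of how @SafeVarargs might be applied; the listOf helper below is hypothetical (it is not a JDK method) and simply wraps Arrays.asList:

```java
import java.util.Arrays;
import java.util.List;

public class SafeVarargsDemo {
    // Without the annotation, the compiler warns about possible heap
    // pollution here. @SafeVarargs suppresses the warning both at this
    // declaration and at call sites, asserting that the method never
    // stores anything unsafe into the varargs array.
    @SafeVarargs
    static <T> List<T> listOf(T... elements) {
        return Arrays.asList(elements);
    }

    public static void main(String[] args) {
        List<List<String>> months = listOf(
                Arrays.asList("January", "February"),
                Arrays.asList("Gennaio", "Febbraio"));
        System.out.println(months.size()); // 2
    }
}
```

In Java 7, @SafeVarargs may only be applied to static or final methods, since an override could break the safety promise.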

When you are dealing with code that may throw exceptions, you need to catch all of them in catch blocks. Until JDK 6 there was no way to group exceptions and share common code, and you ended up with a long list of catch blocks, as shown here:

try {
    // ... reflective operations ...
} catch (ClassNotFoundException cnfe) {
    throw cnfe;
} catch (InstantiationException ie) {
    throw ie;
} catch (NoSuchMethodException nsme) {
    throw nsme;
} catch (InvocationTargetException ite) {
    throw ite;
}
8. Multi-catch

Now with JDK 7 you can group several exceptions under the same catch using the "|" operator. Simpler, cleaner, and you can reuse common code.

try {
    // ... reflective operations ...
} catch (ClassCastException e) {
    throw e;
} catch (InstantiationException |
         NoSuchMethodException |
         InvocationTargetException e) {
    throw e;
}

9. Now let's look at the changes to the standard class libraries

The existing Java I/O APIs are fifteen years old. They were not designed to be extensible, so they lack features like a service provider interface and pluggability to support different file system features. File systems have moved on considerably in this time.

One very big problem is the inconsistent use of exceptions.  Many methods in the File class return a boolean to indicate success or failure.  If an operation fails there is no way to determine why.

The rename method works inconsistently and won't work across volumes or file systems. Support for links is very platform specific.

With advances in file system structure, many applications need greater access to file system metadata, and again this is not possible with the existing API.

The NIO2 APIs are intended to address these limitations.

  1. Original Java I/O APIs presented challenges for developers
  2. Not designed to be extensible
  3. Many methods do not throw exceptions as expected
  4. rename() method works inconsistently
  5. Developers want greater access to file metadata
  6. Java NIO2 solves these problems
  7. Java NIO2 Features

From a developer’s perspective the biggest change will be to replace use of the File object with the Path object.  Since File was not designed to be extended it was not possible to add functionality so a new class was required.  In many places this can be used in exactly the same way, but there are a few differences that developers will need to be aware of to use this correctly.

In the existing IO API, reading the contents of a directory blocks until all entries have been retrieved. If you are reading several million file names across a network connection (not unusual today), this can take a very long time. NIO2 now provides support for streaming entries so they can be processed as they arrive. There is also support for filtering as part of the API, so it is easy to list only certain types of file, and so on.

Symbolic link support is optional and is based on the long-standing UNIX semantics. Most of the time the link is treated as a normal file; exceptions to this include the delete operation and Files.walkFileTree (although the FOLLOW_LINKS option can be used with this).

To provide extensibility, the FileSystem class provides an interface to a file system, which can be any form of file storage system; for example, a ZIP file can be accessed as if it were a file system even though it is itself a file on another file system.

The attribute package provides enhanced access to metadata for files and also solves a long-standing performance problem, namely that every request for an attribute results in a separate stat() call.

  1. Path is a replacement for File
  2. Biggest impact on developers
  3. Better directory support
  4. list() method can stream via iterator
  5. Entries can be filtered using regular expressions in API
  6. Symbolic link support
  7. java.nio.file.FileSystem
  8. Interface to a file system (FAT, ZFS, Zip archive, network, etc.)
  9. java.nio.file.attribute package
  10. Access to file metadata
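The bulk attribute read mentioned above can be sketched like this; the temporary file is just a stand-in for a real path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class AttributesDemo {
    public static void main(String[] args) throws IOException {
        // A temp file stands in for a real path in this sketch
        Path p = Files.createTempFile("demo", ".txt");

        // One bulk readAttributes() call instead of a separate
        // stat()-style call per attribute
        BasicFileAttributes attrs =
                Files.readAttributes(p, BasicFileAttributes.class);
        System.out.println("size:      " + attrs.size());
        System.out.println("directory? " + attrs.isDirectory());
        System.out.println("created:   " + attrs.creationTime());

        Files.delete(p);
    }
}
```

Other attribute views (PosixFileAttributes, DosFileAttributes) expose platform-specific metadata through the same pattern.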
Path Class

Although the Path class is a replacement for the File class, there are some differences worth noting. Since Path is an interface, it cannot be instantiated directly (unlike File, which is a class). The Paths class provides a set of factory methods for creating Path objects, but there are also ways to create a Path from another Path. This is logical because a hierarchical file system allows a path to be created between two files in a relative way (frequently using .. to indicate moving to the parent).

Path supports both absolute paths, i.e. how to access the file from the root of the filesystem and relative paths, i.e. how to access the file relative to the current directory.

  1. Equivalent of java.io.File in the new API
  2. Immutable
  3. Has methods to access and manipulate the Path
  4. A few ways to create a Path
  5. From Paths and FileSystem
// Make a reference to the path
Path home = Paths.get("/home/fred");
// Resolve tmp from /home/fred -> /home/fred/tmp
Path tmpPath = home.resolve("tmp");
// Create a relative path from tmp -> ..
Path relativePath = tmpPath.relativize(home);
File file = relativePath.toFile();

File Operation – Copy, Move

Finally, the most obvious file system operations are supported in a clean, consistent way in Java. Copying, moving and renaming files is all performed using a Path for each part of the operation. Copy options can also be specified: to copy the attributes of the file rather than using the defaults, to replace an existing file if one already exists at the destination path, and, for moves, to make the operation atomic from the file system perspective.

The Files class provides a large number of utility methods for typical filesystem operations (copy, move, delete, createDirectory and so on).

  1. File copy is really easy with fine grain control
  2. File move is supported
  3. Optional atomic move supported
Path src = Paths.get("/home/fred/readme.txt");
Path dst = Paths.get("/home/fred/copy_readme.txt");
Files.copy(src, dst, StandardCopyOption.COPY_ATTRIBUTES,
                     StandardCopyOption.REPLACE_EXISTING);

Path src = Paths.get("/home/fred/readme.txt");
Path dst = Paths.get("/home/fred/readme.1st");
Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);


The DirectoryStream class provides an iterator that can be used to read entries from a directory. One point of note is that, as the DirectoryStream provides an Iterator, it is not possible to use the DirectoryStream more than once, since there is no way to reset an Iterator.

There is built-in support for glob patterns, so complex patterns like *.{c,h,cpp,hpp,java} can be used.

  1. DirectoryStream iterate over entries
  2. Scales to large directories
  3. Uses less resources
  4. Smooth out response time for remote file systems
  5. Implements Iterable and Closeable for productivity
  6. Filtering support
  7. Built-in support for glob, regex and custom filters
Path srcPath = Paths.get("/home/fred/src");
try (DirectoryStream<Path> dir =
        Files.newDirectoryStream(srcPath, "*.java")) {
    for (Path file : dir)
        System.out.println(file.getFileName());
}

Concurrency APIs

Java SE 7 includes updates to the concurrency APIs first introduced in Java SE 5. This is an update to an update; the original utilities were defined in JSR 166. This was extended in JSR 166x (Java SE 6) and extended further through JSR 166y (Java SE 7).

It introduces the fork-join framework for fine-grained parallelism.

The Phaser is a reusable synchronization barrier, similar in functionality to CyclicBarrier and CountDownLatch but supporting more flexible usage.
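A sketch of the Phaser acting as a reusable barrier; the worker count and two-phase structure here are invented for illustration:

```java
import java.util.concurrent.Phaser;

public class PhaserDemo {
    public static void main(String[] args) {
        final int workers = 3;
        // Register the main thread as the first party
        final Phaser phaser = new Phaser(1);

        for (int i = 0; i < workers; i++) {
            phaser.register(); // parties can join dynamically, unlike CyclicBarrier
            new Thread(new Runnable() {
                @Override public void run() {
                    // ... do phase-0 work ...
                    phaser.arriveAndAwaitAdvance(); // wait for all parties
                    // ... do phase-1 work ...
                    phaser.arriveAndDeregister();   // leave the barrier
                }
            }).start();
        }

        phaser.arriveAndAwaitAdvance(); // main thread joins the barrier too
        System.out.println("all workers finished phase 0");
        phaser.arriveAndDeregister();
    }
}
```

The dynamic register/deregister calls are what make the Phaser more flexible than a fixed-party CyclicBarrier.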

A TransferQueue is a BlockingQueue in which producers may wait for consumers to receive elements. A TransferQueue may be useful, for example, in message-passing applications in which producers sometimes (using method transfer(E)) await receipt of elements by consumers invoking take or poll, while at other times enqueue elements (via method put) without waiting for receipt. Non-blocking and time-out versions of tryTransfer are also available. A TransferQueue may also be queried, via hasWaitingConsumer(), as to whether there are any threads waiting for items, which is the converse of a peek operation.

Like other blocking queues, a TransferQueue may be capacity bounded. If so, an attempted transfer operation may initially block waiting for available space and/or subsequently block waiting for reception by a consumer. Note that in a queue with zero capacity, such as SynchronousQueue, put and transfer are effectively synonymous.

This is implemented by LinkedTransferQueue, an unbounded TransferQueue based on linked nodes. This queue orders elements FIFO (first-in-first-out) with respect to any given producer. The head of the queue is the element that has been on the queue the longest time for some producer; the tail is the element that has been on the queue the shortest time for some producer.

  1. JSR166y
  2. Update to JSR166x which was an update to JSR166
  3. Adds a lightweight task framework
  4. Also referred to as Fork/Join
  5. Phaser
  6. Barrier similar to CyclicBarrier and CountDownLatch
  7. TransferQueue interface
  8. Extension to BlockingQueue
  9. Implemented by LinkedTransferQueue
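The transfer/take handshake described above can be sketched as follows (the message string is arbitrary):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

public class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        final TransferQueue<String> queue = new LinkedTransferQueue<String>();

        Thread consumer = new Thread(new Runnable() {
            @Override public void run() {
                try {
                    System.out.println("received: " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        // transfer() blocks until a consumer has actually taken the element,
        // unlike put(), which returns as soon as the element is enqueued
        queue.transfer("hello");
        consumer.join();
        System.out.println("consumer has the message");
    }
}
```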
Fork Join Framework

The fork-join framework is designed for fine-grained tasks, where a large task can easily be broken up into a large number of sub-tasks. It is best suited to sub-tasks that do not rely on shared, mutable data. The framework works well with shared, read-only data, but if multiple sub-tasks try to modify shared data, the issues of locking and contention reduce the framework's overall efficiency.

  1. Goal is to take advantage of multiple processors
  2. Designed for tasks that can be broken down into smaller pieces
  3. E.g. Fibonacci numbers: fib(10) = fib(9) + fib(8)
  4. Typical algorithm that uses fork-join:

if I can manage the task
    perform the task
else
    fork the task into x smaller/similar tasks
    join the results
The main classes for the Fork Join framework:
ForkJoinPool – Creates a pool of worker threads to execute the tasks.
ForkJoinTask – a task to be processed
RecursiveAction – A recursively decomposable task that does not return a result
RecursiveTask – A recursively decomposable task that does return a value

This example shows the calculation of a Fibonacci sequence value using the fork-join framework. The class extends RecursiveTask with a generic type parameter of Integer to show that the result generated will be of type Integer. The class overrides the compute method, which is called by the framework to generate the result. In the case where the number is not zero or one, the task is forked into two subtasks whose results are added together to produce the result. The calls to join block until a result is available.

public class Fibonacci extends RecursiveTask<Integer> {
    private final int number;
    public Fibonacci(int n) { number = n; }
    @Override protected Integer compute() {
        switch (number) {
            case 0: return 0;
            case 1: return 1;
            default:
                Fibonacci f1 = new Fibonacci(number - 1);
                Fibonacci f2 = new Fibonacci(number - 2);
                f1.fork(); f2.fork();
                return f1.join() + f2.join();
        }
    }
}

Here we create a new ForkJoinPool to process the Fibonacci calculation. We instantiate the initial task to calculate the 10th Fibonacci value and submit it to the pool of worker threads. We can continue to work in parallel, checking for when the task is completed using the isDone method. The result is provided as a Future, so we use the get method to retrieve it. Alternatively, we could simply call get once we've submitted the task, and this would block until the task was complete.

Notice that none of this code requires any direct manipulation of either threads or locks. This is all handled by the framework.

ForkJoinPool pool = new ForkJoinPool();
Fibonacci r = new Fibonacci(10);
pool.submit(r);
while (!r.isDone()) {
    // Do some work
}
System.out.println("Result of fib(10) = " + r.get());

Java SE 7 also includes some updates to the client libraries.

  1. Nimbus Look and Feel
  2. Platform APIs for shaped and translucent windows
  3. JLayer (formerly from Swing labs)
  4. Optimised 2D rendering

The JLayer component provides an easy way to add an overlay to existing Swing components. Examples are a progress wheel, or highlighting only the valid choices in a multiple-choice component.

The JLayer can intercept all input and focus events, so it appears to the user as that component for interaction.

Here we create a new JLayer for a previously instantiated JPanel component. A custom UI, myLayerUI, is then added to the layer so that it provides the extra functionality.

The layer is then added to a frame, which adds both the underlying JPanel and the custom UI.

// wrap your component with JLayer
JLayer<JPanel> layer = new JLayer<JPanel>(panel);
// custom ui provides all extra functionality
layer.setUI(myLayerUI);
// add the layer as a usual component
frame.add(layer);

Let's talk about the changes being made in the JVM in Java SE 7.

The DaVinci Machine Project (JSR-292)
(A multi-language renaissance for the JVM)

Programming languages have proliferated over the last few years as people have come up with more and more ideas for how best to approach solving different types of problem in the most effective way; this has led to domain-specific languages (DSLs). For a compiler writer there are a variety of tasks that are common to many languages: how to manage memory for storage of intermediate results, how to manage concurrent execution of code, security, and so on. Since many virtual machines already provide these features, targeting a language at a VM makes life a lot easier for compiler writers, and for developers it gives access to a large set of existing libraries for everyday tasks. Since the JVM has been developed over 15 years, many bugs have already been eliminated and performance has been extensively tuned.

  1. Programming languages need runtime support
  2. Memory management / Garbage collection
  3. Concurrency control
  4. Security
  5. Reflection
  6. Debugging integration
  7. Standard libraries
  8. Compiler writers have to build these from scratch
  9. Targeting a VM allows reuse of infrastructure

The Java Virtual Machine specification clearly states that the JVM has no knowledge of Java the language. It understands the Java bytecodes and, provided it is handed a valid set of bytecodes, will execute them, regardless of how they were generated. This makes the job of compiling from languages other than Java much easier. However, even though the JVM does not know about the Java language, it was designed with Java in mind. As such it was built to support a language that uses single inheritance and static typing and does not have support for explicit pointers. This doesn't mean you can't compile a dynamically typed language to bytecodes, but it does make the job harder.

“The Java virtual machine knows nothing about the Java programming language, only of a particular binary format, the class file format.”

1.2 The Java Virtual Machine Spec.

As we can see, the result is that many people have created compilers that bring all sorts of languages to the JVM; you can see functional languages, dynamically typed languages, list-based languages and so on.

The biggest issue for dynamically typed languages running on the JVM is performance, specifically of how method calls get made. Since Java was designed with static typing, it is possible to resolve methods at compile time and include the reference directly in the code (dynamic class loading might seem like an issue, but the new class is still static in its references). The JVM has four ways to invoke a method; most calls go through invokevirtual. For interfaces invokeinterface is used, static methods use invokestatic, and constructors are called via invokespecial. Since these all require a full method signature, compiler writers for dynamically typed languages must resolve the reference every time a method is called, in case a type involved in the call has changed.

To make life easier, JDK 7 includes the first new bytecode in the JVM instruction set since it was launched. This is called invokedynamic, and it will not be used by the Java language itself (at least not until Java SE 8).

The basic idea is that when a method is first called, bootstrap code resolves the reference and stores a method handle in a callsite (effectively a function pointer). Subsequent calls to the method find there is already a method handle and call the method through the callsite. If a type involved in the method signature changes, the compiled code can detect this, resolve the method based on the new signature, and store the changed reference in the callsite. Therefore the method only needs to be resolved when changes are made, not every time the method is called. This is much more efficient.

  1. The JVM currently has four ways to invoke a method
  2. invokevirtual, invokeinterface, invokestatic, invokespecial
  3. All require full method signature data
  4. invokedynamic will use a method handle
  5. Effectively an indirect pointer to the method
  6. When dynamic method is first called bootstrap code determines method and creates handle
  7. Subsequent calls simply reference defined handle
  8. Type changes force a re-compute of the method location and an update to the handle
  9. Method call changes are invisible to calling code

This provides a bit more detail.

The method handle is the way the method or constructor can be executed. For dynamically typed languages, manipulating the method handle and callsite is in many cases more efficient, by a factor of 10.

  1. invokedynamic is linked to a CallSite
  2. The CallSite can be linked or unlinked
  3. The CallSite is the holder of a MethodHandle
  4. MethodHandle is a directly executable reference to an underlying method, constructor, field
  5. Can transform arguments and return type
  6. Transformation – conversion, insertion, deletion, substitution
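As a small illustration of what a method handle is, here is the public java.lang.invoke API (the invokedynamic bytecode itself cannot be emitted from Java source, so this sketch only shows the handle side):

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class HandleDemo {
    public static void main(String[] args) throws Throwable {
        // Look up a directly executable handle to String.length(), ()int
        MethodHandle length = MethodHandles.lookup().findVirtual(
                String.class, "length", MethodType.methodType(int.class));

        // Invoking through the handle is like calling through a
        // function pointer stored in a callsite
        int n = (int) length.invoke("invokedynamic");
        System.out.println(n); // 13
    }
}
```

A language runtime would store such a handle in a CallSite the first time a dynamic call is bootstrapped, then route later calls straight through it.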

Here are a few miscellaneous things that don't fit into the preceding areas.

Some minor changes to cryptography in terms of the underlying algorithms used.

Various updates to Java APIs defined in other JSRs.

Some minor changes to class loading which will only have an impact on a very small number of developers.

Finally! Javadoc gets CSS support, so we can bring Javadoc output into the 21st century.

We will "converge" by moving and re-implementing the missing goodies from JRockit into the HotSpot codebase.

Most of those changes will go into OpenJDK, but some will remain premium features, e.g. JRockit Mission Control.

Once all of the features are moved, we will pronounce the convergence complete. Well, in all honesty, we will probably make a bigger deal of it than that, but it sounds less dramatic when you know how it's done.

Hopefully, the convergence will be completed by JDK 8 GA, but I’m making no promises today.

To conclude, we can see that Java SE 7 is an incremental change over Java SE 6, providing evolutionary, not revolutionary, new features. The changes that are included provide a good, solid set of enhancements to make developers' lives easier whilst not affecting backwards compatibility. Support for things like the fork-join framework will enable applications to benefit from developments in hardware without having to write complex code.

Java SE 8 which we only covered very briefly will introduce more substantial features that will help Java developers to be even more productive.

The important thing to get from this presentation is that Java is not “the new Cobol”.  It is adapting to the needs of developers and new types of applications and platforms.

Thank You !

