[This post is by Chris Pruett, who writes regularly about games here, and is obviously pretty cranked about this conference. — Tim Bray]
Android will descend in force upon the Game Developers Conference in San Francisco this week; we’re offering a full day packed with sessions covering everything you need to know to build games on Android.
From 10 AM to 5 PM on Tuesday the 1st, North Hall Room 121 will be ground zero for Android Developer Day, with five engineering-focused sessions on everything from compatibility to native audio and graphics. Here's a quick overview; there’s more on the Game Developers Conference site:
Building Aggressively Compatible Android Games — Chris Pruett
C++ On Android Just Got Better: The New NDK — Daniel Galpin and Ian Ni-Lewis
OpenGL ES 2.0 on Android: Building Google Body — Nico Weber
Android Native Audio — Glenn Kasten and Jean-Michel Trivi
Evading Pirates and Stopping Vampires Using License Server, In App Billing, and AppEngine — Daniel Galpin and Trevor Johns
Our crack team of engineers and advocates spend their nights devising new ways to bring high-end game content to Android, and a full day of sessions just wasn't enough to appease them. So in addition, you can find incisive Android insight in other tracks:
Finally, you can visit us in the Google booth on the GDC Expo floor; stop by, fondle the latest devices, and check out the awesome games that are already running on them. We're foaming at the mouth with excitement about the Game Developers Conference this week, and you should be too.
[This post is by Chet Haase, an Android engineer who specializes in graphics and animation, and who occasionally posts videos and articles on these topics on his CodeDependent blog at graphics-geek.blogspot.com. — Tim Bray]
One of the new features ushered in with the Honeycomb release is a new animation system, a set of APIs in a whole new package (android.animation) that makes animating objects and properties much easier than it was before.
"But wait!" you blurt out, nearly projecting a mouthful of coffee onto your keyboard while reading this article, "Isn't there already an animation system in Android?"
Animation Prior to Honeycomb
Indeed, Android already has animation capabilities: there are several classes and lots of great functionality in the android.view.animation package. For example, you can move, scale, rotate, and fade Views and combine multiple animations together in an AnimationSet object to coordinate them. You can specify animations in a LayoutAnimationController to get automatically staggered animation start times as a container lays out its child views. And you can use one of the many Interpolator implementations, like AccelerateInterpolator and BounceInterpolator, to get natural, nonlinear timing behavior.
But there are a couple of major pieces of functionality lacking in the previous system.
For one thing, you can animate Views... and that's it. To a great extent, that's okay. The GUI objects in Android are, after all, Views. So as long as you want to move a Button, or a TextView, or a LinearLayout, or any other GUI object, the animations have you covered. But what if you have some custom drawing in your view that you'd like to animate, like the position of a Drawable, or the translucency of its background color? Then you're on your own, because the previous animation system only understands how to manipulate View objects.
The previous animations also have a limited scope: you can move, rotate, scale, and fade a View... and that's it. What about animating the background color of a View? Again, you're on your own, because the previous animations have a hard-coded set of things they are able to do, and you cannot make them do anything else.
Finally, the previous animations changed the visual appearance of the target objects... but they didn't actually change the objects themselves. You may have run into this problem. Let's say you want to move a Button from one side of the screen to the other. You can use a TranslateAnimation to do so, and the button will happily glide along to the other side of the screen. And when the animation is done, it will gladly snap back into its original location. So you find the setFillAfter(true) method on Animation and try it again. This time the button stays in place at the location to which it was animated. And you can verify that by clicking on it - Hey! How come the button isn't clicking? The problem is that the animation changes where the button is drawn, but not where the button physically exists within the container. If you want to click on the button, you'll have to click the location that it used to live in. Or, as a more effective solution (and one just a tad more useful to your users), you'll have to write your code to actually change the location of the button in the layout when the animation finishes.
It is for these reasons, among others, that we decided to offer a new animation system in Honeycomb, one built on the idea of "property animation."
Property Animation in Honeycomb
The new animation system in Honeycomb is not specific to Views, is not limited to specific properties on objects, and is not just a visual animation system. Instead, it is a system that is all about animating values over time, and assigning those values to target objects and properties - any target objects and properties. So you can move a View or fade it in. And you can move a Drawable inside a View. And you can animate the background color of a Drawable. In fact, you can animate the values of any data structure; you just tell the animation system how long to run for, how to evaluate between values of a custom type, and what values to animate between, and the system handles the details of calculating the animated values and setting them on the target object.
Since the system is actually changing properties on target objects, the objects themselves are changed, not simply their appearance. So that button you move is actually moved, not just drawn in a different place. You can even click it in its animated location. Go ahead and click it; I dare you.
I'll walk briefly through some of the main classes at work in the new system, showing some sample code when appropriate. But for a more detailed view of how things work, check out the API Demos in the SDK for the new animations. There are many small applications written for the new Animations category (at the top of the list of demos in the application, right before the word App. I like working on animation because it usually comes first in the alphabet).
In fact, here's a quick video showing some of the animation code at work. The video starts off on the home screen of the device, where you can see some of the animation system at work in the transitions between screens. Then the video shows a sampling of some of the API Demos applications, to show the various kinds of things that the new animation system can do. This video was taken straight from the screen of a Honeycomb device, so this is what you should see on your system, once you install API Demos from the SDK.
Animator
Animator is the superclass of the new animation classes, and has some of the common attributes and functionality of the subclasses. The subclasses are ValueAnimator, which is the core timing engine of the system and which we'll see in the next section, and AnimatorSet, which is used to choreograph multiple animators together into a single animation. You do not use Animator directly, but some of the methods and properties of the subclasses are exposed at this superclass level, like the duration, startDelay and listener functionality.
The listeners tend to be important, because sometimes you want to key some action off of the end of an animation, such as removing a view after an animation fading it out is done. To listen for animator lifecycle events, implement the AnimatorListener interface and add your listener to the Animator in question. For example, to perform an action when the animator ends, you could do this:
anim.addListener(new Animator.AnimatorListener() {
    public void onAnimationStart(Animator animation) {}
    public void onAnimationEnd(Animator animation) {
        // do something when the animation is done
    }
    public void onAnimationCancel(Animator animation) {}
    public void onAnimationRepeat(Animator animation) {}
});
As a convenience, there is an adapter class, AnimatorListenerAdapter, that stubs out these methods so that you only need to override the one(s) that you care about:
anim.addListener(new AnimatorListenerAdapter() {
    public void onAnimationEnd(Animator animation) {
        // do something when the animation is done
    }
});
ValueAnimator
ValueAnimator is the main workhorse of the entire system. It runs the internal timing loop that causes all of a process's animations to calculate and set values and has all of the core functionality that allows it to do this, including the timing details of each animation, information about whether an animation repeats, listeners that receive update events, and the capability of evaluating different types of values (see TypeEvaluator for more on this). There are two pieces to animating properties: calculating the animated values and setting those values on the object and property in question. ValueAnimator takes care of the first part: calculating the values. The ObjectAnimator class, which we'll see next, is responsible for setting those values on target objects.
Most of the time, you will want to use ObjectAnimator, because it makes the whole process of animating values on target objects much easier. But sometimes you may want to use ValueAnimator directly. For example, the object you want to animate may not expose setter functions necessary for the property animation system to work. Or perhaps you want to run a single animation and set several properties from that one animated value. Or maybe you just want a simple timing mechanism. Whatever the case, using ValueAnimator is easy; you just set it up with the animation properties and values that you want and start it. For example, to animate values between 0 and 1 over a half-second, you could do this:
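// A minimal sketch: animate a value from 0 to 1 over 500ms,
// matching the "half-second" described above.
ValueAnimator anim = ValueAnimator.ofFloat(0f, 1f);
anim.setDuration(500);
anim.start();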
But animations are a bit like the tree in the forest philosophy question ("If a tree falls in the forest and nobody is there to hear it, does it make a sound?"). If you don't actually do anything with the values, does the animation run? Unlike the tree question, this one has an answer: of course it runs. But if you're not doing anything with the values, it might as well not be running. If you started it, chances are you want to do something with the values that it calculates along the way. So you add a listener to it, to listen for updates at each frame. And when you get the callback, you call getAnimatedValue(), which returns an Object, to find out what the current value is.
anim.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
    public void onAnimationUpdate(ValueAnimator animation) {
        Float value = (Float) animation.getAnimatedValue();
        // do something with value...
    }
});
Of course, you don't necessarily always want to animate float values. Maybe you need to animate something that's an integer instead:
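// A sketch using the int factory method; the endpoints here are arbitrary.
ValueAnimator anim = ValueAnimator.ofInt(0, 100);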
In fact, maybe you need to animate something entirely different, like a Point, or a Rect, or some custom data structure of your own. The only types that the animation system understands by default are float and int, but that doesn't mean that you're stuck with those two types. You can use the Object version of the factory method, along with a TypeEvaluator (explained later), to tell the system how to calculate animated values for this unknown type:
Point p0 = new Point(0, 0);
Point p1 = new Point(100, 200);
ValueAnimator anim = ValueAnimator.ofObject(pointEvaluator, p0, p1);
There are other animation attributes that you can set on a ValueAnimator besides duration, including:
setStartDelay(long): This property controls how long the animation waits after a call to start() before it starts playing.
setRepeatCount(int) and setRepeatMode(int): These functions control how many times the animation repeats and whether it repeats in a loop or reverses direction each time.
setInterpolator(TimeInterpolator): This object controls the timing behavior of the animation. By default, animations accelerate into and decelerate out of the motion, but you can change that behavior by setting a different interpolator. This function acts just like the one of the same name in the previous Animation class; it's just that the type of the parameter (TimeInterpolator) is different from that of the previous version (Interpolator). But the TimeInterpolator interface is just a super-interface of the existing Interpolator interface in the android.view.animation package, so you can use any of the existing Interpolator implementations, like BounceInterpolator, as arguments to this function on ValueAnimator.
ObjectAnimator
ObjectAnimator is probably the main class that you will use in the new animation system. You use it to construct animations with the timing and values that ValueAnimator takes, and also give it a target object and property name to animate. It then quietly animates the value and sets those animated values on the specified object/property. For example, to fade out some object myObject, we could animate the alpha property like this:
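// A sketch: fade myObject out by animating its alpha property to 0.
ObjectAnimator anim = ObjectAnimator.ofFloat(myObject, "alpha", 0f);
anim.start();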
Note, in this example, a special feature that you can use to make your animations more succinct: you can supply just the value to animate to, and the animation will use the current value of the property as the starting value. In this case, the animation will start from whatever value alpha has now and will end up at 0.
You could create the same thing in an XML resource as follows:
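A sketch of such a resource, assuming a file like res/animator/fade_out.xml; you would load it with AnimatorInflater.loadAnimator(), call setTarget(myObject) on the result, and start it:

<objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
    android:propertyName="alpha"
    android:valueTo="0"
    android:valueType="floatType"/>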
There is a hidden assumption here about properties and getter/setter functions that you have to understand before using ObjectAnimator: you must have a public "set" function on your object that corresponds to the property name and takes the appropriate type. Also, if you use only one value, as in the example above, you are asking the animation system to derive the starting value from the object, so you must also have a public "get" function which returns the appropriate type. For example, the class of myObject in the code above must have these two public functions in order for the animation to succeed:
public void setAlpha(float value);
public float getAlpha();
So by passing in a target object of some type and the name of some property foo supposedly on that object, you are implicitly declaring a contract that that object has at least a setFoo() function and possibly also a getFoo() function, both of which handle the type used in the animation declaration. If all of this is true, then the animation will be able to find those setter/getter functions on the object and set values during the animation. If the functions do not exist, then the animation will fail at runtime, since it will be unable to locate the functions it needs. (Note to users of ProGuard, or other code-stripping utilities: If your setter/getter functions are not used anywhere else in the code, make sure you tell the utility to leave the functions there, because otherwise they may get stripped out. The binding during animation creation is very loose and these utilities have no way of knowing that these functions will be required at runtime.)
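For example, a hypothetical keep rule for the class of myObject above might look like this in a ProGuard configuration (the class name is made up for illustration):

-keepclassmembers class com.example.MyObject {
    public void setAlpha(float);
    public float getAlpha();
}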
View properties
The observant reader, or at least those who have not yet browsed on to some other article, may have pinpointed a flaw in the system thus far. If the new animation framework revolves around animating properties, and if animations will be used to animate, to a large extent, View objects, then how can they be used against the View class, which exposes none of its properties through set/get functions?
Excellent question: you get to advance to the bonus round and keep reading.
The way it works is that we added new properties to the View class in Honeycomb. The old animation system transformed and faded View objects by just changing the way that they were drawn. This was actually functionality handled in the container of each View, because the View itself had no transform properties to manipulate. But now it does: we've added several properties to View to make it possible to animate Views directly, allowing you to not only transform the way a View looks, but to transform its actual location and orientation. Here are the new properties in View that you can set, get and animate directly:
translationX and translationY: These properties control where the View is located as a delta from its left and top coordinates which are set by its layout container. You can run a move animation on a button by animating these, like this: ObjectAnimator.ofFloat(view, "translationX", 0f, 100f);.
rotation, rotationX, and rotationY: These properties control rotation in 2D (rotation) and 3D (rotationX and rotationY) around the pivot point.
scaleX and scaleY: These properties control the 2D scaling of a View around its pivot point.
pivotX and pivotY: These properties control the location of the pivot point, around which the rotation and scaling transforms occur. By default, the pivot point is the center of the object.
x and y: These are simple utility properties to describe the final location of the View in its container, as a sum of the left/top and translationX/translationY values.
alpha: This is my personal favorite property. No longer is it necessary to fade out an object by changing a value on its transform (a process which just didn't seem right). Instead, there is an actual alpha value on the View itself. This value is 1 (opaque) by default, with a value of 0 representing full transparency (i.e., it won't be visible). To fade a View out, you can do this: ObjectAnimator.ofFloat(view, "alpha", 0f);
Note that all of the "properties" described above are actually available in the form of set/get functions (e.g., setRotation() and getRotation() for the rotation property). This makes them both possible to access from the animation system and (probably more importantly) likely to do the right thing when changed. That is, you don't want to scale an object and have it just sit there because the system didn't know that it needed to redraw the object in its new orientation; each of the setter functions takes care to run the appropriate invalidation step to make the rendering work correctly.
AnimatorSet
This class, like the previous AnimationSet, exists to make it easier to choreograph multiple animations. Suppose you want several animations running in tandem: say, you want to fade out several views, then slide in other ones while fading them in. You could do all of this with separate animations, either by manually starting the animations at the right times or by setting startDelays on the various delayed animations. Or you could use AnimatorSet to do all of that for you. AnimatorSet lets you play animations together, with playTogether(Animator...), or one after the other, with playSequentially(Animator...), or you can organically build up a set of animations that play together, sequentially, or with specified delays by calling the functions in the AnimatorSet.Builder class: with(), before(), and after(). For example, to fade out v1 and then slide in v2 while fading it in, you could do something like this:
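A sketch of that choreography, assuming v1 and v2 are Views and using an arbitrary slide distance:

ObjectAnimator fadeOut = ObjectAnimator.ofFloat(v1, "alpha", 0f);
ObjectAnimator mover = ObjectAnimator.ofFloat(v2, "translationX", -500f, 0f);
ObjectAnimator fadeIn = ObjectAnimator.ofFloat(v2, "alpha", 0f, 1f);

AnimatorSet animSet = new AnimatorSet();
// Slide v2 in while fading it in, all after v1 has faded out.
animSet.play(mover).with(fadeIn).after(fadeOut);
animSet.start();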
Like ValueAnimator and ObjectAnimator, you can create AnimatorSet objects in XML resources as well.
TypeEvaluator
I wanted to talk about just one more thing, and then I'll leave you alone to explore the code and play with the API demos. The last class I wanted to mention is TypeEvaluator. You may not use this class directly for most of your animations, but you should know that it's there in case you need it. As I said earlier, the system knows how to animate float and int values, but otherwise it needs some help knowing how to interpolate between the values you give it. For example, if you want to animate between the Point values in one of the examples above, how is the system supposed to know how to interpolate the values between the start and end points? Here's the answer: you tell it how to interpolate, using TypeEvaluator.
TypeEvaluator is a simple interface that you implement that the system calls on each frame to help it calculate an animated value. It takes a floating point value which represents the current elapsed fraction of the animation and the start and end values that you supplied when you created the animation and it returns the interpolated value between those two values at that fraction. For example, here's the built-in FloatEvaluator class used to calculate animated floating point values:
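public class FloatEvaluator implements TypeEvaluator {
    public Object evaluate(float fraction, Object startValue, Object endValue) {
        // Linear interpolation between the start and end values.
        float startFloat = ((Number) startValue).floatValue();
        return startFloat + fraction * (((Number) endValue).floatValue() - startFloat);
    }
}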
But how does it work with a more complex type? For an example of that, here is an implementation of an evaluator for the Point class, from our earlier example:
public class PointEvaluator implements TypeEvaluator {
    public Object evaluate(float fraction, Object startValue, Object endValue) {
        Point startPoint = (Point) startValue;
        Point endPoint = (Point) endValue;
        return new Point(startPoint.x + fraction * (endPoint.x - startPoint.x),
                startPoint.y + fraction * (endPoint.y - startPoint.y));
    }
}
Basically, this evaluator (and probably any evaluator you would write) is just doing a simple linear interpolation between two values. In this case, each 'value' consists of two sub-values, so it is linearly interpolating between each of those.
You tell the animation system to use your evaluator by either calling the setEvaluator() method on ValueAnimator or by supplying it as an argument in the Object version of the factory method. To continue our earlier example animating Point values, you could use our new PointEvaluator class above to complete that code:
Point p0 = new Point(0, 0);
Point p1 = new Point(100, 200);
ValueAnimator anim = ValueAnimator.ofObject(new PointEvaluator(), p0, p1);
One of the ways that you might use this interface is through the ArgbEvaluator implementation, which is included in the Android SDK. If you animate a color property, you will probably either use this evaluator automatically (which is the case if you create an animator in an XML resource and supply colors as values) or set it manually on the animator as described in the previous section.
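For instance, here is a sketch of setting it manually; the property name and colors are chosen for illustration, and since both endpoints are supplied, only View's setBackgroundColor() setter is needed:

ObjectAnimator colorAnim = ObjectAnimator.ofInt(myView, "backgroundColor",
        0xFFFF0000, 0xFF0000FF); // red to blue, as packed ARGB ints
colorAnim.setEvaluator(new ArgbEvaluator());
colorAnim.start();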
But Wait, There's More!
There's so much more to the new animation system that I haven't gotten to. There's the repetition functionality, the listeners for animation lifecycle events, the ability to supply multiple values to the factory methods to get animations between more than just two endpoints, the ability to use the Keyframe class to specify a more complex time/value sequence, the use of PropertyValuesHolder to specify multiple properties to animate in parallel, the LayoutTransition class for automating simple layout animations, and so many other things. But I really have to stop writing soon and get back to working on the code. I'll try to post more articles in the future on some of these items, but also keep an eye on my blog at graphics-geek.blogspot.com for upcoming articles, tutorials, and videos on this and related topics. Until then, check out the API demos, read the overview of Property Animation posted with the 3.0 SDK, dive into the code, and just play with it.
The first tablets running Android 3.0 (“Honeycomb”) will be hitting the streets on Thursday Feb. 24th, and we’ve just posted the full SDK release. We encourage you to test your applications on the new platform, using a tablet-size AVD.
Developers who’ve followed the Android Framework’s guidelines and best practices will find their apps work well on Android 3.0. The purpose of this post is to provide reminders of, and links to, those best practices.
Moving Toward Honeycomb
There’s a comprehensive discussion of how to work with the new release in Optimizing Apps for Android 3.0. The discussion includes the use of the emulator; most developers don’t have an Android tablet yet, so the emulator is the way to test and update their apps for Honeycomb.
While your existing apps should work well, developers also have the option to improve their apps’ look and feel on Android 3.0 by using Honeycomb features; for example, see The Android 3.0 Fragments API. We’ll have more on that in this space, but in the meantime we recommend reading Strategies for Honeycomb and Backwards Compatibility for advice on adding Honeycomb polish to existing apps.
Specifying Features
There have been reports of apps not showing up in Android Market on tablets. Usually, this is because your application manifest has something like this:
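<!-- A representative sketch; a hard requirement on telephony is a common case. -->
<uses-feature android:name="android.hardware.telephony" />

Many tablets aren't phones, so Android Market filters out apps that require features like this. If your app uses such a feature but doesn't strictly need it, declare the feature with android:required="false" and check availability at runtime with PackageManager.hasSystemFeature().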
Rotation
The new environment is different from what we’re used to in two respects. First, you can hold the devices with any of the four sides up, and Honeycomb manages the rotation properly. In previous versions, often only two of the four orientations were supported, and there are apps out there that rely on this in ways that will break on Honeycomb. If you want to stay out of rotation trouble, One Screen Turn Deserves Another covers the issues.
The second big difference doesn’t have anything to do with software; it’s that a lot of people are going to hold these things horizontal (in “landscape mode”) nearly all the time. We’ve seen a few apps that have a buggy assumption that they’re starting out in portrait mode, and others that lock certain screens into portrait or landscape but really shouldn’t.
A Note for Game Developers
A tablet can probably provide a better game experience for your users than any handset can. Bigger is better. It’s going to cost you a little more work than it costs developers of business apps, because quite likely you’ll want to rework your graphical assets for the big screen.
There’s another issue that’s important to game developers: Texture Formats. Read about this in Game Development for Android: A Quick Primer, in the section labeled “Step Three: Carefully Design the Best Game Ever”.
We've also added a convenient way to filter applications in Android Market based on the texture formats they support; see the documentation of <supports-gl-texture> for more details.
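For example, a declaration along these lines in the manifest (ETC1 shown as a representative format):

<supports-gl-texture android:name="GL_OES_compressed_ETC1_RGB8_texture" />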
Happy Coding
Once you’ve held one of the new tablets in your hands, you’ll want to have your app not just running on it (which it probably already does), but expanding minds on the expanded screen. Have fun!
We are pleased to announce that the full SDK for Android 3.0 is now available to developers. The APIs are final, and you can now develop apps targeting this new platform and publish them to Android Market. The new API level is 11.
Together with the new platform, we are releasing updates to our SDK Tools (r10) and ADT Plugin for Eclipse (10.0.0). Key features include:
UI Builder improvements in the ADT Plugin:
New Palette with categories and rendering previews.
More accurate rendering of layouts, including status and title bars, to more faithfully reflect how the layout will look on devices and the screen space actually available to applications.
Selection-sensitive action bars to manipulate View properties.
Zoom improvements (fit to view, persistent scale, keyboard access).
Improved support for <merge> layouts, as well as layouts with gesture overlays.
Traceview integration for easier profiling from ADT.
Tools for using the Renderscript graphics engine: the SDK tools now compile .rs files into Java programming language files and native bytecode.
[This post is by R. Jason Sams, an Android engineer who specializes in graphics, performance tuning, and software architecture. —Tim Bray]
Renderscript is a key new Honeycomb feature which we haven’t yet discussed in much detail. I will address this in two parts. This post will be a quick overview of Renderscript. A more detailed technical post with a simple example will be provided later.
Renderscript is a new API targeted at high-performance 3D rendering and compute operations. The goal of Renderscript is to bring a lower-level, higher-performance API to Android developers. The target audience is the set of developers who are looking to maximize the performance of their applications and are comfortable working closer to the metal to achieve this. It provides developers with three primary tools: a simple 3D rendering API on top of hardware acceleration, a developer-friendly compute API similar to CUDA, and a familiar language in C99.
Renderscript has been used in the creation of the new visually-rich YouTube and Books apps. It is the API used in the live wallpapers shipping with the first Honeycomb tablets.
The performance gain comes from executing native code on the device. However, unlike the existing NDK, this solution is cross-platform. The development language for Renderscript is C99 with extensions, which is compiled to a device-agnostic intermediate format during the development process and placed into the application package. When the app is run, the scripts are compiled to machine code and optimized on the device. This eliminates the problem of needing to target a specific machine architecture during the development process.
Renderscript is not intended to replace the existing high-level rendering APIs or languages on the platform. The target use is for performance-critical code segments where the needs exceed the abilities of the existing APIs.
You may find it interesting that nothing above mentions running code on CPUs vs. GPUs. The reason is that this decision is made on the device at runtime. Simple scripts will be able to run on the GPU as compute workloads when capable hardware is available. More complex scripts will run on the CPU(s). The CPU also serves as a fallback to ensure that scripts are always able to run even if a suitable GPU or other accelerator is not present. This is intended to be transparent to the developer. In general, simpler scripts will be able to run in more places in the future. For now we simply leverage the CPU resources and distribute the work across as many CPUs as are present in the device.
The video above, captured through an Android tablet’s HDMI out, is an example of Renderscript compute at work. (There’s a high-def version on YouTube.) In the video we show a simple brute force physics simulation of around 900 particles. The compute script runs each frame and automatically takes advantage of both cores. Once the physics simulation is done, a second graphics script does the rendering. In the video we push one of the larger balls to show the interaction. Then we tilt the tablet and let gravity do a little work. This shows the power of the dual A9s in the new Honeycomb tablet.
Renderscript Graphics provides a new runtime for continuously rendering scenes. This runtime sits on top of HW acceleration and uses the developers’ scripts to provide custom functionality to the controlling Dalvik code. This controlling code will send commands to it at a coarse level such as “turn the page” or “move the list”. The commands the two sides speak are determined by the scripts the developer provides. In this way it’s fully customizable. Early examples of Renderscript graphics were the live wallpapers and 3D application launcher that shipped with Eclair.
With Honeycomb, we have migrated from GL ES 1.1 to 2.0 as the renderer for Renderscript. With this, we have added programmable shader support, 3D model loading, and much more efficient allocation management. The new compiler, based on LLVM, is several times more efficient than acc was during the Eclair-through-Gingerbread time frame. The most important change is that the Renderscript API and tools are now public.
The screenshot above was taken from one of our internal test apps. The application implements a simple scene-graph which demonstrates recursive script to script calling. The Androids are loaded from an A3D file created in Maya and translated from a Collada file. A3D is an on device file format for storing Renderscript objects.
Later we will follow up with more technical information and sample code.
Several weeks ago we released Android 2.3, which introduced several new forms of communication for developers and users. One of those, Near Field Communication (NFC), let developers get started creating a new class of contactless, proximity-based applications for users.
NFC is an emerging technology that promises exciting new ways to use mobile devices, including ticketing, advertising, ratings, and even data exchange with other devices. We know there’s a strong interest to include these capabilities into many applications, so we’re happy to announce an update to Android 2.3 that adds new NFC capabilities for developers. Some of the features include:
A comprehensive NFC reader/writer API that lets apps read and write to almost any standard NFC tag in use today.
Advanced Intent dispatching that gives apps more control over how and when they are launched when an NFC tag comes into range (a sketch follows this list).
Some limited support for peer-to-peer connection with other NFC devices.
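As a sketch of the new dispatching, an app might declare an intent filter like the following in its manifest to be launched when a plain-text NDEF tag is scanned (the MIME type is just an example):

<intent-filter>
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:mimeType="text/plain" />
</intent-filter>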
We hope you’ll find these new capabilities useful and we’re looking forward to seeing the innovative apps that you will create using them.
Android 2.3.3 is a small feature release that includes a new API level, 10. Going forward, we expect most devices shipping with an Android 2.3 platform to run Android 2.3.3 (or later). For an overview of the API changes, see the Android 2.3.3 Version Notes. The Android 2.3.3 SDK platform for development and testing is available through the Android SDK Manager.
[This post is by Dianne Hackborn, a Software Engineer who sits very near the exact center of everything Android. — Tim Bray]
An important goal for Android 3.0 is to make it easier for developers to write applications that can scale across a variety of screen sizes, beyond the facilities already available in the platform:
Since the beginning, Android’s UI framework has been designed around the use of layout managers, allowing UIs to be described in a way that will adjust to the space available. A common example is a ListView whose height changes depending on the size of the screen, which varies a bit between QVGA, HVGA, and WVGA aspect ratios.
Android 1.6 introduced a new concept of screen densities, making it easy for apps to scale between different screen resolutions when the screen is about the same physical size. Developers immediately started using this facility when higher-resolution screens were introduced, first on Droid and then on other phones.
Android 1.6 also made screen sizes accessible to developers, classifying them into buckets: “small” for QVGA aspect ratios, “normal” for HVGA and WVGA aspect ratios, and “large” for larger screens. Developers can use the resource system to select between different layouts based on the screen size.
The combination of layout managers and resource selection based on screen size goes a long way towards helping developers build scalable UIs for the variety of Android devices we want to enable. As a result, many existing handset applications Just Work under Honeycomb on full-size tablets, without special compatibility modes, with no changes required. However, as we move up into tablet-oriented UIs with 10-inch screens, many applications also benefit from a more radical UI adjustment than resources can easily provide by themselves.
Introducing the Fragment
Android 3.0 further helps applications adjust their interfaces with a new class called Fragment. A Fragment is a self-contained component with its own UI and lifecycle; it can be reused in different parts of an application’s user interface depending on the desired UI flow for a particular device or screen.
In some ways you can think of a Fragment as a mini-Activity, though it can’t run independently but must be hosted within an actual Activity. In fact the introduction of the Fragment API gave us the opportunity to address many of the pain points we have seen developers hit with Activities, so in Android 3.0 the utility of Fragment extends far beyond just adjusting for different screens:
Embedded Activities via ActivityGroup were a nice idea, but have always been difficult to deal with since Activity is designed to be an independent self-contained component instead of closely interacting with other activities. The Fragment API is a much better solution for this, and should be considered as a replacement for embedded activities.
Retaining data across Activity instances could be accomplished through Activity.onRetainNonConfigurationInstance(), but this is fairly klunky and non-obvious. Fragment replaces that mechanism by allowing you to retain an entire Fragment instance just by setting a flag.
A specialization of Fragment called DialogFragment makes it easy to show a Dialog that is managed as part of the Activity lifecycle. This replaces Activity’s “managed dialog” APIs.
Another specialization of Fragment called ListFragment makes it easy to show a list of data. This is similar to the existing ListActivity (with a few more features), but should help answer the common question of how to show a list along with other data.
The information about all fragments currently attached to an activity is saved for you by the framework in the activity’s saved instance state and restored for you when it restarts. This can greatly reduce the amount of state save and restore code you need to write yourself.
The framework has built-in support for managing a back-stack of Fragment objects, making it easy to provide intra-activity Back button behavior that integrates with the existing activity back stack. This state is also saved and restored for you automatically.
Getting started
To whet your appetite, here is a simple but complete example of implementing multiple UI flows using fragments. We first are going to design a landscape layout, containing a list of items on the left and details of the selected item on the right. This is the layout we want to achieve:
The code for this activity is not interesting; it just calls setContentView() with the given layout:
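A sketch of that activity, with class and resource names assumed for illustration (FragmentActivity is the base class from the static library these examples use):

public class FragmentLayout extends FragmentActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.fragment_layout);
    }
}

The landscape layout itself places the titles fragment next to a frame that will hold the details; a sketch, with the fragment's package path assumed:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:orientation="horizontal"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
    <fragment class="com.example.android.apis.app.FragmentLayout$TitlesFragment"
            android:id="@+id/titles" android:layout_weight="1"
            android:layout_width="0px" android:layout_height="match_parent" />
    <FrameLayout android:id="@+id/details" android:layout_weight="1"
            android:layout_width="0px" android:layout_height="match_parent" />
</LinearLayout>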
You can see here our first new feature: the <fragment> tag allows you to automatically instantiate and install a Fragment subclass into your view hierarchy. The fragment being implemented here derives from ListFragment, displaying and managing a list of items the user can select. The implementation below takes care of displaying the details of an item either in-place or as a separate activity, depending on the UI layout. Note how changes to fragment state (the currently shown details fragment) are retained across configuration changes for you by the framework.
public static class TitlesFragment extends ListFragment {
    boolean mDualPane;
    int mCurCheckPosition = 0;

    @Override
    public void onActivityCreated(Bundle savedState) {
        super.onActivityCreated(savedState);

        // Populate list with our static array of titles.
        setListAdapter(new ArrayAdapter<String>(getActivity(),
                R.layout.simple_list_item_checkable_1,
                Shakespeare.TITLES));

        // Check to see if we have a frame in which to embed the details
        // fragment directly in the containing UI.
        View detailsFrame = getActivity().findViewById(R.id.details);
        mDualPane = detailsFrame != null
                && detailsFrame.getVisibility() == View.VISIBLE;

        if (savedState != null) {
            // Restore last state for checked position.
            mCurCheckPosition = savedState.getInt("curChoice", 0);
        }

        if (mDualPane) {
            // In dual-pane mode, list view highlights selected item.
            getListView().setChoiceMode(ListView.CHOICE_MODE_SINGLE);
            // Make sure our UI is in the correct state.
            showDetails(mCurCheckPosition);
        }
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putInt("curChoice", mCurCheckPosition);
    }

    @Override
    public void onListItemClick(ListView l, View v, int pos, long id) {
        showDetails(pos);
    }

    /**
     * Helper function to show the details of a selected item, either by
     * displaying a fragment in-place in the current UI, or starting a
     * whole new activity in which it is displayed.
     */
    void showDetails(int index) {
        mCurCheckPosition = index;

        if (mDualPane) {
            // We can display everything in-place with fragments.
            // Have the list highlight this item and show the data.
            getListView().setItemChecked(index, true);

            // Check what fragment is shown, replace if needed.
            DetailsFragment details = (DetailsFragment)
                    getFragmentManager().findFragmentById(R.id.details);
            if (details == null || details.getShownIndex() != index) {
                // Make new fragment to show this selection.
                details = DetailsFragment.newInstance(index);

                // Execute a transaction, replacing any existing
                // fragment with this one inside the frame.
                FragmentTransaction ft = getFragmentManager().beginTransaction();
                ft.replace(R.id.details, details);
                ft.setTransition(FragmentTransaction.TRANSIT_FRAGMENT_FADE);
                ft.commit();
            }
        } else {
            // Otherwise we need to launch a new activity to display
            // the dialog fragment with selected text.
            Intent intent = new Intent();
            intent.setClass(getActivity(), DetailsActivity.class);
            intent.putExtra("index", index);
            startActivity(intent);
        }
    }
}
For this first screen we need an implementation of DetailsFragment, which simply shows a TextView containing the text of the currently selected item.
public static class DetailsFragment extends Fragment {
    /**
     * Create a new instance of DetailsFragment, initialized to
     * show the text at 'index'.
     */
    public static DetailsFragment newInstance(int index) {
        DetailsFragment f = new DetailsFragment();

        // Supply index input as an argument.
        Bundle args = new Bundle();
        args.putInt("index", index);
        f.setArguments(args);

        return f;
    }

    public int getShownIndex() {
        return getArguments().getInt("index", 0);
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
            Bundle savedInstanceState) {
        if (container == null) {
            // Currently in a layout without a container, so no
            // reason to create our view.
            return null;
        }

        ScrollView scroller = new ScrollView(getActivity());
        TextView text = new TextView(getActivity());
        int padding = (int) TypedValue.applyDimension(
                TypedValue.COMPLEX_UNIT_DIP, 4,
                getActivity().getResources().getDisplayMetrics());
        text.setPadding(padding, padding, padding, padding);
        scroller.addView(text);
        text.setText(Shakespeare.DIALOGUE[getShownIndex()]);
        return scroller;
    }
}
It is now time to add another UI flow to our application. When in portrait orientation, there is not enough room to display the two fragments side-by-side, so instead we want to show only the list like this:
With the code shown so far, all we need to do here is introduce a new layout variation for portrait screens like so:
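A sketch of the portrait variation, containing only the titles fragment (same assumed package path; this file would live in the default res/layout/ directory while the two-pane version lives in res/layout-land/):

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent">
    <fragment class="com.example.android.apis.app.FragmentLayout$TitlesFragment"
            android:id="@+id/titles"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />
</FrameLayout>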
The TitlesFragment will notice that it doesn’t have a container in which to show its details, so it will show only its list. When you tap on an item in the list we now need to go to a separate activity in which the details are shown.
With the DetailsFragment already implemented, the implementation of the new activity is very simple because it can reuse the same DetailsFragment from above:
public static class DetailsActivity extends FragmentActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        if (getResources().getConfiguration().orientation
                == Configuration.ORIENTATION_LANDSCAPE) {
            // If the screen is now in landscape mode, we can show the
            // dialog in-line so we don't need this activity.
            finish();
            return;
        }

        if (savedInstanceState == null) {
            // During initial setup, plug in the details fragment.
            DetailsFragment details = new DetailsFragment();
            details.setArguments(getIntent().getExtras());
            getSupportFragmentManager().beginTransaction().add(
                    android.R.id.content, details).commit();
        }
    }
}
Put that all together, and we have a complete working example of an application that fairly radically changes its UI flow based on the screen it is running on, and can even adjust it on demand as the screen configuration changes.
This illustrates just one way fragments can be used to adjust your UI. Depending on your application design, you may prefer other approaches. For example, you could put your entire application in one activity in which you change the fragment structure as its state changes; the fragment back stack can come in handy in this case.
More information on the Fragment and FragmentManager APIs can be found in the Android 3.0 SDK documentation. Also be sure to look at the ApiDemos app under the Resources tab, which has a variety of Fragment demos covering their use for alternative UI flow, dialogs, lists, populating menus, retaining across activity instances, the back stack, and more.
Fragmentation for all!
For developers starting work on tablet-oriented applications designed for Android 3.0, the new Fragment API is useful for many design situations that arise from the larger screen. Reasonable use of fragments should also make it easier to adjust the resulting application’s UI to new devices in the future as needed -- for phones, TVs, or wherever Android appears.
However, the immediate need for many developers today is probably to design applications that they can provide for existing phones while also presenting an improved user interface on tablets. With Fragment only being available in Android 3.0, its shorter-term utility is greatly diminished.
To address this, we plan to have the same fragment APIs (and the new LoaderManager as well) described here available as a static library for use with older versions of Android; we’re trying to go right back to 1.6. In fact, if you compare the code examples here to those in the Android 3.0 SDK, they are slightly different: this code is from an application using an early version of the static library fragment classes which is running, as you can see on the screenshots, on Android 2.3. Our goal is to make these APIs nearly identical, so you can start using them now and, at whatever point in the future you switch to Android 3.0 as your minimum version, move to the platform’s native implementation with few changes in your app.
We don’t have a firm date for when this library will be available, but it should be relatively soon. In the meantime, you can start developing with fragments on Android 3.0 to see how they work, and most of that effort should be transferable.
[This post is by Eric Chu, Android Developer Ecosystem. —Dirk Dougherty]
Following on last week’s announcement of the Android 3.0 Preview SDK, I’d like to share some more good news with you about three important new features on Android Market.
Android Market on the Web
Starting today, we have extended the Android Market client from mobile devices to every desktop. Anyone can now easily find and share applications from their favorite browser. Once users select an application they want, it will automatically be downloaded to their Android-powered devices over-the-air.
Android Market on the Web dramatically expands the discoverability of applications through a rich browsing experience, suggestion-guided searching, deep linking, social sharing, and other merchandising features.
We are releasing the initial version of Android Market on the Web in English and will be extending it to other languages in the weeks ahead.
If you have applications published on Android Market, we encourage you to visit the site and review how they are presented. If you need additional information about what assets you should provide, please visit Android Market Help Center.
Buyer’s Currency
Android Market lets you sell applications to users in 32 buyer countries around the world. Today we’re introducing Buyer’s Currency to give you more control over how you price your products across those countries. This feature lets you price your applications differently in each market and improves the purchase experience for buyers by showing prices in their home currencies.
We’ll be rolling out Buyer’s Currency in stages, starting with developers in the U.S. and reaching developers in other countries shortly after. We anticipate it will take approximately four months for us to complete this process.
We encourage you to watch for the appearance of new Buyer’s Currency options in the Android Market publishing console and set prices as soon as possible.
In-app Billing
After months of hard work by the Android Market team, I am extremely pleased to announce the arrival of In-app Billing on Android Market. This new service gives developers more ways to monetize their applications through new billing models including try-and-buy, virtual goods, upgrades, and more.
The In-app Billing service manages billing transactions between apps and users, providing a consistent purchasing experience with familiar forms of payment across all apps. At the same time, it gives you full control over how your digital goods are purchased and tracked. You can let Android Market manage and track the purchases for you or you can integrate with your own back-end service to verify and track purchases in the way that's best for your app.
We’ll be launching In-app Billing in stages. Beginning today, we are providing detailed documentation and a sample application to help you get familiar with the service. Over the next few weeks we’ll be rolling out updates to the Android Market client that will enable you to test against the In-app Billing service. Before the end of this quarter, the service will be live for users, to enable you to start monetizing your applications with this new capability. For complete information about the rollout, see the release information in the In-app Billing documentation.
Helping developers merchandise and monetize their products is a top priority for the Android Market team. We will continue to work hard to make it the best marketplace for you to distribute your products. For now, we hope you’ll check out these new features to help you better deliver your products through Android Market.