Ali Khoshgozaran

Entrepreneur & Technologist

Android Power and Battery Management: A Look Underneath The Hood

When it comes to mobility, battery life is a paramount concern and a key selling feature for mobile phone manufacturers. Tech giants such as Google and Apple spend significant effort optimizing the battery life and performance characteristics of their mobile devices. Recent concerns about iOS 11's appetite for power, with reports of a battery decay rate as much as 60% higher, have caused many to hold off on updating to the latest iOS version, which could inflict significant business and brand damage on Apple. While batteries are getting better, demand for power is also increasing with the proliferation and ubiquity of services available on smartphones.

Below we dig into the details of power and battery management in Android's open source code. In mobile computing, battery management is a unique challenge. On one hand, it is a critical component of every service that runs on a device (as opposed to the telephony, GPS, and camera services, which are only used in more limited settings). At the same time, battery management involves direct interaction with a hardware device (the battery), which means worrying about dependencies on various hardware manufacturers. An intricate software architecture is therefore needed to provide the levels of abstraction that satisfy these requirements.

Android's software architecture addresses these design concerns via a fairly complex layered architecture. The high-level Android application framework provides developers a rich set of functionality to interface with the device and its various services and components. These services are wrapped in intuitive APIs, mostly managed by the system_server process. The application framework communicates with native modules via IPC proxies, enabling an elegant abstraction layer over Android's system services such as display, connectivity, camera, battery, and power. These services in turn implement a hardware abstraction layer (HAL) for the device's various hardware components, such as the battery. The chart below depicts a rough high-level class/interface hierarchy between the various C++ and Java classes involved in battery and power management (it is by no means a comprehensive graph!).

Underneath these services, the Linux kernel provides device drivers with the lowest levels of functionality, in addition to Linux's core offerings such as memory management and power management. Because Android needs to operate with a very limited energy footprint, it implements its own power management driver on top of standard Linux power management.

As the chart above suggests, at the top of the hierarchy sits the PowerManagerService class, which imports functionality from other high-level Java classes (such as BatteryStatsService, BatterySaverPolicy, BatteryManager, etc.) and also employs lower-level C++ methods defined in PowerManagerService.cpp.

The PowerManagerService class is responsible for coordinating the power management functions of the Android device. Upon startup, PowerManagerService loads an instance of the Power HAL and calls its init() function, which performs power management setup actions at runtime startup such as setting default cpufreq parameters. There is also a setInteractive() function that performs power management actions for the system's various states (i.e., interactive and non-interactive), interfacing with the kernel's low-level power interfaces.

The PowerManagerService class uses various methods provided in BatteryStatsService.java, which in turn uses JNI to obtain low-level stats implemented natively in C++ (BatteryStatsService.cpp), such as getPlatformLowPowerStats(). It also makes native calls into PowerManagerService.cpp, such as the nativeSetInteractive(), nativeSendPowerHint() and nativeSetFeature() functions. The BatteryStatsService class is still fairly high level and provides a nice layer of abstraction over functionality implemented in other Java classes such as BatteryStatsImpl and BatteryStats.

As discussed above, PowerManagerService, through the BatteryStatsService class, utilizes the BatteryStats class to gain access to battery usage statistics, including information on wakelocks, processes, packages, and services. The BatteryStats class also includes background timers and counters for the sensor, Bluetooth, Wi-Fi, and other hardware components to calculate the power use of a device subsystem or an app.

Before we move forward, we need to discuss wakelocks as they are an important component of Android’s power management architecture.


Battery power is an extremely precious and limited resource, so the Android OS forces the device to fall asleep quickly when left idle. Applications, however, sometimes need to wake up the screen, gain access to the CPU, and keep the device on while they finish a piece of work. A wake lock is the mechanism by which an application indicates that the device needs to stay on. Wakelocks allow Android to implement a more aggressive power management policy (due to battery constraints) than standard Linux. To keep the CPU running until their work completes, apps use wake locks through the PowerManager system service, which is backed by the PowerManagerService class.

The PowerManager constructor sets the context, service, and handler for the power manager and provides a set of getters for things like brightness levels, reasons for reboot, etc. Applications such as Bluetooth, Calendar, Camera, and Alarm Clock use the power manager to push the device into a certain state (for instance when an alarm goes off or a calendar alert is received). Its primary API, however, is newWakeLock(), which creates a PowerManager.WakeLock object (at one of several lock levels). Methods on the wake lock object then control the power state of the device: acquiring a wakelock through the acquire() method forces the device to stay awake until the release() function is called. Wakelocks also provide an interface for notifying the power manager of user activity, such as a touch event or key press. The PowerManager class additionally allows certain application packages to ignore battery optimizations by being whitelisted.
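As a concrete illustration, a client app acquires and releases a wake lock roughly as follows. This is a minimal sketch: MyJobService and doWork() are hypothetical names, while the PowerManager calls are the real framework APIs.

```java
import android.content.Context;
import android.os.PowerManager;

public class MyJobService {
    // Keep the CPU awake while doWork() runs; a PARTIAL_WAKE_LOCK keeps
    // the CPU on but lets the screen and keyboard backlight turn off.
    public void runWithWakeLock(Context context) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        PowerManager.WakeLock wl =
                pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "myapp:work");
        wl.acquire(10 * 60 * 1000L); // safety timeout: auto-release after 10 minutes
        try {
            doWork();
        } finally {
            wl.release(); // always release, or the battery drains
        }
    }

    private void doWork() { /* long-running task */ }
}
```

Note that this requires the android.permission.WAKE_LOCK permission in the app's manifest, and the timeout passed to acquire() is a safety net against leaking the lock.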

Lastly, the PowerManager class interfaces with the kernel through native calls in IPowerManager.cpp for things like rebooting the device. IPowerManager is part of Android's native framework; it implements the acquireWakeLock() and releaseWakeLock() methods for accessing the underlying hardware (the battery, in this case). The standard interface is defined by a Hardware Abstraction Layer (HAL) file to create separation of concerns and decouple the underlying hardware implementations from the higher-level ones.

The BatteryStatsImpl class extends BatteryStats and provides the various low-level stats that other processes need: for instance, using SystemClock it tracks the total time in milliseconds spent executing in kernel or user code, and it keeps track of battery levels at the last plug and unplug events. It also includes methods such as updateMobileRadioState() that distribute cell radio (or Bluetooth) energy information and network traffic to the apps requesting them, and it uses a set of helper functions in BatteryStatsHelper to retrieve power usage information for various applications and services. In addition, it exposes low-level memory stats from the kernel and includes methods such as updateCpuTimeLocked(), which contains the logic for reading CPU usage and distributing it across apps; for instance, it attributes more of the CPU time to apps holding partial wakelocks when the screen is off and the device is on battery. Finally, it uses methods like writeSummaryToParcel()[1] to produce summary statistics about battery and CPU usage to be written to disk.

The BatteryStatsImpl class uses a set of methods and attributes defined in the BatteryManager class to query battery and charging properties (e.g., battery temperature, whether the device is plugged in, the battery's maximum charging voltage). The BatteryManager class also provides the strings and constants used with the ACTION_BATTERY_CHANGED broadcast. Its methods (such as isCharging()) are used by other services to defer work the user is not actively waiting on until the battery is charging or at full charge. It relies on native methods from BatteryService.cpp to accomplish some of these tasks, and it broadcasts an action to notify other services of the battery's charging state (i.e., whether the phone is plugged in or not). It also uses an Android.bp file (Soong's modern, simpler alternative to Android.mk makefiles) to specify certain C++ files (such as BatteryProperties.cpp, BatteryProperty.cpp, etc.) to be included in the build.
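From the application side, the same information surfaces through public APIs. The following sketch queries the charging state and battery level; BatteryInfo is a hypothetical helper class, while the BatteryManager constants and the sticky ACTION_BATTERY_CHANGED broadcast are real framework APIs.

```java
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;

public class BatteryInfo {
    // ACTION_BATTERY_CHANGED is a sticky broadcast, so registering a null
    // receiver simply returns the most recent broadcast without subscribing.
    public static float batteryPercent(Context context) {
        IntentFilter filter = new IntentFilter(Intent.ACTION_BATTERY_CHANGED);
        Intent status = context.registerReceiver(null, filter);
        int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
        int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
        return level * 100f / scale; // current charge as a percentage
    }

    public static boolean isCharging(Context context) {
        BatteryManager bm =
                (BatteryManager) context.getSystemService(Context.BATTERY_SERVICE);
        return bm.isCharging(); // available since API 23
    }
}
```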

BatteryService.cpp creates a static singleton instance of BatteryService to provide low-level battery services. It also supports functions for adding and removing sensors; these are used by SensorService.cpp, which manages the registration of sensors such as the gyroscope.

Other Battery and Power Management Components

While the list below is not comprehensive, it includes some of the other key classes and interfaces used or referenced by the classes above.

BatterySaverPolicy

A class to determine whether battery saver mode needs to be turned on for a specific service. It includes a set of attributes to track whether things like sound triggers, animation, full backup, vibration, etc. are disabled during battery saver mode, and it provides a getter to retrieve state data containing the battery saver policies.


This class provides a getter for retrieving battery properties that may be queried using BatteryManager.getProperty(). It delegates to native implementations in BatteryProperty.cpp.

A set of Parcel reads/writes kept in sync with BatteryProperties.cpp for attributes such as charge, battery status, temperature, voltage, etc. The two methods readFromParcel() and writeToParcel() are used to communicate with BatteryProperties.cpp.
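The pattern looks roughly like the following simplified sketch. The BatteryState class and its fields are illustrative stand-ins rather than the actual AOSP definitions; the Parcelable mechanics (writes and reads must mirror each other exactly, field by field) are the real framework contract.

```java
import android.os.Parcel;
import android.os.Parcelable;

public class BatteryState implements Parcelable {
    public boolean chargerAcOnline;
    public int batteryStatus;
    public int batteryTemperature; // tenths of a degree Celsius
    public int batteryVoltage;     // millivolts

    public BatteryState() {}

    @Override
    public void writeToParcel(Parcel p, int flags) {
        p.writeInt(chargerAcOnline ? 1 : 0);
        p.writeInt(batteryStatus);
        p.writeInt(batteryTemperature);
        p.writeInt(batteryVoltage);
    }

    private BatteryState(Parcel p) {
        // Reads must mirror the write order exactly.
        chargerAcOnline = p.readInt() != 0;
        batteryStatus = p.readInt();
        batteryTemperature = p.readInt();
        batteryVoltage = p.readInt();
    }

    @Override
    public int describeContents() { return 0; }

    public static final Parcelable.Creator<BatteryState> CREATOR =
            new Parcelable.Creator<BatteryState>() {
                public BatteryState createFromParcel(Parcel p) { return new BatteryState(p); }
                public BatteryState[] newArray(int size) { return new BatteryState[size]; }
            };
}
```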

PowerManagerInternal

This class is only used by the system server. It provides attributes like WAKEFULNESS_ASLEEP, WAKEFULNESS_AWAKE, and WAKEFULNESS_DREAMING, which track the device's wakefulness state.


This Android Interface Definition Language (AIDL) file defines the interface that client and service agree upon when communicating via interprocess communication (IPC). These Java-like interfaces include battery-stats-related calls such as noteStartVideo(), noteStopVideo() (and their audio equivalents), noteStartCamera()/noteStopCamera(), etc.
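In AIDL, such an interface looks roughly like this. This is a hedged sketch in the style of IBatteryStats.aidl; the exact method list and signatures in AOSP vary across releases.

```aidl
interface IBatteryStats {
    // Called by media and camera services to attribute usage to a uid.
    void noteStartVideo(int uid);
    void noteStopVideo(int uid);
    void noteStartAudio(int uid);
    void noteStopAudio(int uid);
    void noteStartCamera(int uid);
    void noteStopCamera(int uid);
}
```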

The IBatteryStats.cpp file defines a set of virtual functions, such as the start and stop of video and audio or of other sensors, each using the writeInterfaceToken() method for efficient IPC transport.

IPower.hal (Hardware Abstraction Layer)

Given the hardware dependency of several Android components, a hardware abstraction layer allows porting Android to OEM-specific hardware, where each vendor writes its own drivers. The constructor for this interface performs power management setup actions at runtime startup, such as setting default cpufreq parameters. The power.h header file allows multiple OEM-specific power headers to be defined.


Implements the methods defined in the IPower HAL and is used by PowerManagerService.


Contains enums representing various battery status (e.g., charging, discharging, full) and health (e.g., good, overheat, dead, cold) states.
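These map to the BATTERY_STATUS_* and BATTERY_HEALTH_* constants exposed through BatteryManager. A small helper that turns them into readable labels might look like this; the constants are the real framework constants, while the BatteryLabels class itself is illustrative.

```java
import android.os.BatteryManager;

public final class BatteryLabels {
    public static String statusLabel(int status) {
        switch (status) {
            case BatteryManager.BATTERY_STATUS_CHARGING:     return "charging";
            case BatteryManager.BATTERY_STATUS_DISCHARGING:  return "discharging";
            case BatteryManager.BATTERY_STATUS_NOT_CHARGING: return "not charging";
            case BatteryManager.BATTERY_STATUS_FULL:         return "full";
            default:                                         return "unknown";
        }
    }

    public static String healthLabel(int health) {
        switch (health) {
            case BatteryManager.BATTERY_HEALTH_GOOD:     return "good";
            case BatteryManager.BATTERY_HEALTH_OVERHEAT: return "overheat";
            case BatteryManager.BATTERY_HEALTH_DEAD:     return "dead";
            case BatteryManager.BATTERY_HEALTH_COLD:     return "cold";
            default:                                     return "unknown";
        }
    }
}
```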

BatterySipper

Contains the power usage of an application, system service, or hardware type: for instance Wi-Fi, GPS, other sensors, camera, flashlight, Bluetooth, etc.

The DreamManagerService class provides service APIs for managing dreams: interactive screensavers launched when a charging device is idle or docked in a desk dock. Dreams provide another modality for apps to express themselves, tailored for an exhibition/lean-back experience.

As we discussed at the beginning of this post, Android's limited power resources and the high demand for power across various native and user-defined Android apps require an elegant yet complex software architecture to give the platform both versatility and performance. While this is not meant to be a comprehensive overview of Android's power and battery management components and services, we hope it sheds light on a fairly complex yet critical part of the Android framework.

[1] Parcels are containers for data and object references used for high-performance IPC.

Storytelling in the Age of Artificial Intelligence

Software was eating the world. Then came AI.

Can you remember the last time you sat in on a panel discussion about technology without hearing the words “artificial intelligence” or “machine learning”? Reminiscent of big data in 2007, the ubiquity of AI in 2017 is profoundly impacting the world. Unlike big data, which predominantly focused on the enterprise sector, rapid advances in AI and machine learning are directly affecting end consumers in a much more tangible fashion. In only a few years, we went from smart phones and smart TVs to AI-powered shoes, strollers, luggage and bags, doors, trucks, burger flipping robots, and even more recently, Microsoft’s audacious attempt to push AI into single board computing devices the size of a red-pepper flake.

While the impact of machine learning is fairly clear for certain industries in terms of their strategic long term outlook, others will require more out-of-the-box thinking to reap the benefits. The focus of this post is to highlight the far-reaching implications of AI on one such domain: storytelling. Although historically storytelling has been an exclusively human process, it is no longer the case. Incorporating the power of artificial intelligence into the editorial workflow can massively enhance how stories are discovered, created, conveyed, and consumed. The creativity of AI is also evolving, establishing the technology as one of the most disruptive forces to impact the end-to-end cycle of storytelling. From creation to consumption, understanding machine intelligence’s impact on storytelling and embracing the change are critical to succeeding in the age of human-machine collaboration. Already, several news and media organizations are heavily experimenting with AI in various aspects of their business, leveraging the technology to gain a competitive edge. To better understand how these organizations are capitalizing on the immense power of machine learning, we will take a closer look at the impact of artificial intelligence in various aspects of storytelling.

Detection: Storytelling, news generation, and other forms of content creation are triggered by observations. These observations can emerge from immediate triggers or long-term trends. Utilizing massive computing power, software solutions are now capable of rapidly identifying short-term anomalies, as well as long-term patterns, in large amounts of heterogeneous and multi-modal data that are invisible to the human eye. Just as Twitter is being used to predict riots up to an hour faster than the police, or how we at Tilofy analyze data to forecast future trends, machine intelligence presents myriad opportunities for storytelling. Using machine learning to identify patterns is not just faster, cheaper, and more scalable. It also removes subjectivity. The human brain is programmed to fear the unknown, implanting an inherent bias toward accepting things that "seem" right to the observer. Machine intelligence, on the other hand, models out such human subjectivity in identifying what stories need to be told. Moreover, the sheer volume of data and information available around any topic creates a cognitive overhead for the human brain to identify, understand, and contextualize. Machines, however, are extremely efficient at dealing with analytical complexities that surpass the capacity of the human mind. Case in point: The International Consortium of Investigative Journalists (ICIJ) sifted through 2.6TB of unstructured data embedded in 11.5 million documents to expose a network of tax havens being exploited by rich and powerful public officials and politicians in what is known as the Panama Papers. It would have taken humans several decades to process, organize, and analyze this data without the use of technology.

This example illustrates how creative use of artificial intelligence and machine learning can save journalists massive amounts of time spent on mundane and repetitive tasks. By having machines share the workload, storytellers' time is freed up to do what they do best: produce high-quality and comprehensive editorial content. Another interesting example of leveraging AI to simultaneously improve the quality and quantity of news production was the Associated Press' use of automation in their reporting process to increase the number of publications covering corporate earnings. Not only was the AP able to raise the number of publications by a factor of 12 and reduce human errors such as typos, they also freed up journalists' time by as much as 20%, leaving them more time to focus on their investigative efforts.

Creation: Pattern identification and analysis of large amounts of data are not just useful for detecting interesting topics. In fact, the role of machine automation becomes more evident once the idea for a story is already recognized. Machines are very adept at helping journalists augment their stories with more context, and several news organizations have already begun using automation for this purpose. The AP has utilized machine learning to predict race outcomes for several years, while startups such as Graphiq are using billions of data points to auto-generate interactive visualizations. Content creation is one of the most exciting segments where technology can work hand in hand with human creativity to apply more data-driven, factual, and interactive context to a story. For example, at Tilofy, we automatically generate insights and context behind all our machine-generated trend forecasts.

The use of artificial intelligence in the creation of a story is not limited to machine learning approaches. Natural Language Processing (NLP) as well as Natural Language Generation (NLG) are being widely applied by editorial teams for a variety of use cases. From speech to text conversion and rapid summarization using NLP, to auto-generating a story based on the application of a predefined template to a large body of structured data (reports of earthquake, corporate earnings, sporting events, etc.) using NLG, AI has expanded its breadth of use cases and applications among a wide range of editorial teams. Tilofy’s trend forecasting platform uses similar techniques to auto-generate a trend report for each forecasted trend on the platform, alleviating the need for a large editorial team, as well as completely removing any bias in our trend reports. Other areas of artificial intelligence such as machine vision and image processing can also play an integral role in storytelling. Advances in machine vision and object recognition allow journalists to visually search massive image and video archives, even churning out auto-generated video compilations from textual data.

Distribution: With the emergence of social media and its modern ubiquity, social platforms have become the primary source of content consumption for users. Media organizations are losing their grip on how, when, where, and by whom their content is discovered. People are no longer obtaining their information exclusively from news organizations, and are increasingly turning to technology-backed social media platforms instead. Tech giants such as Facebook, Instagram, and Twitter have their own internal ranking algorithms that determine what content populates a user's feed. Similarly, search engines determine how content is discovered on the user's search results page based on who is searching for what, when, and where. Although the specific mechanics of these ranking algorithms are well-kept secrets within these organizations, becoming aware of how machine learning is applied to content discovery, as well as actively partnering with technology companies, can be an effective way to benefit from artificial intelligence in content discovery rather than fall victim to it.

Moreover, automation can help storytellers identify a potential target audience for a story, or inversely figure out what stories a target audience would potentially be interested in, based on the analysis of their psychographic traits. The transparent nature and large volume of social media interactions have allowed organizations to go beyond traditional demographic characteristics in identifying like-minded clusters of readers for each story. Several success stories have emerged showcasing how internet companies such as Buzzfeed have utilized data-driven approaches to increase virality and audience engagement for their content. Another increasing use of automation among news organizations revolves around A/B testing article headlines to eventually converge on the optimal choice based on certain performance metrics (such as clickthrough rates). The Washington Post is currently using one such tool, which allows editors to create multiple versions of a story (with different headlines, cover photos, snippets, etc.) and show the prevailing version more frequently to readers. Such A/B testing tools provide extremely effective training data for machine learning platforms to suggest edits and revisions to editors on the fly, based on the past performance of already published articles and historical data. In a similar sense, machine learning can also auto-suggest a proper length for an article or its title, image size and placement, and various other factors based on the target device (tablet, phone, laptop, VR headset) where the content is going to be consumed.

Lastly, despite the advances discussed above, when it comes to accessing knowledge and information, issues of digital divide, low literacy, low internet penetration, and poor connectivity still affect hundreds of millions of people living in rural and underdeveloped communities all around the world. This presents another great opportunity for technology to bridge the gap and bring the world closer. Microsoft's use of AI in Skype's real-time translator service has allowed people from the furthest corners of the world to connect, even without understanding each other's native language, using a cellphone or a landline. Similarly, Google's widely popular translate service has opened up a wealth of content originally created in one language to others. Thanks to its constant improvements in quality and in the number of languages covered, Google Translate may soon enhance or replace human-centric efforts like Project Lingua by auto-translating trending news at scale.

Whether we like it or not, artificial intelligence is here to stay. It is no longer possible to imagine a world without machine learning, as industries that refuse to adapt fall further behind. Like electricity or internet connectivity, AI is becoming so widespread that it will soon be embedded in all aspects of daily life. Such a profound paradigm shift only emerges once in a generation, creating many opportunities for industries to evolve. Like all other professions, storytellers can leverage AI by carefully and gracefully embracing the technology to enhance content creation and distribution. For storytellers to truly thrive in the modern world, they must continue to partner with engineers to better understand the mechanics of machine learning, in an effort to form human-machine collaborations backed by powerful data-driven intelligence and guided by human insight. While artificial intelligence is toying with writing science fiction screenplays, the ability of machines to add artful human nuance and the subtleties of culture to storytelling is still far away, making it imperative for storytellers to utilize the power of AI to augment their stories.


State of LA Startups

Starting a high-tech startup in Los Angeles five years ago was nothing like it is in 2013. Los Angeles is finally earning the credit it deserves as one of the world's fastest growing startup communities. Events like the LA Tech Summit, with almost 1000 attendees, are testaments to this movement. I feel lucky to have co-founded Tilofy in LA this year, where we are surrounded by a vibrant community to help us succeed. Team Tilofy was featured along with thought leaders like Michael Abbott from Kleiner Perkins in a short video about the role of startups in creating a vibrant economy in Los Angeles.

Tech Startups & the LA Economy from USC Viterbi on Vimeo.

Starting a New Chapter

Peek Into The Future

Tilofy – Peek Into The Future

Around 10 years ago I came to the States to pursue my dreams. I spent the first two years as a master's student at George Washington University and the next five getting my PhD at the University of Southern California. To me, it wasn't about buying a nice car, living in a nice house, or eating at the most expensive restaurants. In a world that has given us Bill Gates, Warren Buffett, and Steve Jobs, there are many more satisfying dreams to live for and be inspired by. Throughout these years, I had the blessing of working with some of the smartest people I have ever met, at USC, Yahoo!, Microsoft, and Samsung. People who empowered me, inspired me to dream, and supported my professional growth. I am indebted to them all.

Today was my last day as a senior technologist at Samsung Electronics R&D, and hopefully my last day ever as an employee. I co-founded Tilofy for a dream: to give users a faster and easier way of discovering time- and location-sensitive information on their mobile devices. My new journey has begun and I am already humbled by the family, friends, and colleagues supporting this effort. I wake up every day with the dream of becoming the next agent of change through my entrepreneurial journey, and of reaching a point where I can have a positive effect on the lives of millions of people all around the world, through technology, philanthropy, or hopefully both.

State of Social Media in 2012: Welcome to the Jungle!

So Instagram is no longer cool enough and is getting outdated? Its founder's girlfriend gives you Lovestagram.

social media tools

Social media fragmentation

Have a cool picture of last night's lavish meal that you want to share? How about uploading it to Path, tweeting about it, or maybe posting it on Facebook? You can also Cheers to it and have your friends cheer at your cheers. By the way, did I mention Google+, Yelp, Foursquare, Pinterest, and Tumblr? And the list goes on, with one or two new social apps popping up almost every month, each trying to find innovative ways to lure users into dragging them onto their phones' home screens and pushing the other guys to that third page to fight with their peers for retention.

But it shouldn't be like that. And hopefully it won't. To me, the painful process of checking in at the restaurant using app A, taking a picture of your dish and uploading it to social app B, writing a short message about it on app C, tagging your friends in app D… is too much for users to worry about and does not scale. There are currently two alternatives to this problem. One is to totally ignore all of these apps and remain loyal to one (or a select few). The second is to link them all together, post to one, and let the others obliviously replicate your post. However, neither solution is optimal. The former locks you and your data in, is too restrictive, makes it more painful to migrate to new and better apps/experiences, and shrinks your social media influence. The latter, while less problematic, is not useful either, as it totally ignores the "context" of each app, its "language", the features specific to the target platform, and finally its users' expectations (yes, not everyone in the world is on Twitter and familiar with the cryptic-looking 140-character messages that appear on someone's Facebook timeline. Ask those who don't use Twitter). This paradigm will soon have to shift to a less painful alternative for users, or apps will keep eating away at each other at a rate where none can capture even a fraction of Facebook's or Twitter's user base, influence, or attention.


I use the following analogy to describe a departure from the current fragmented state of social media content generation and consumption. For a second, imagine if you had to capture a new picture each time one of your users told you they were using a new image viewer application on their desktop or phone. You would have to take a different version of each picture for every single image viewer out there. Instead, thanks to the power of standards and the operating system, you create a picture [content] once, without having to worry about which or how many applications [views] will "render" your file. While this is not a very fair example due to its simplicity, it highlights the deep gap between a full separation of content and views and where we stand today.

While, as Fred Wilson points out, it is too simplistic to think about social media consolidating around a winner-takes-all platform, it is not that farfetched to imagine a world where each social media outlet acts as a "view" over your singular and central "content". This way, users generate a "multi-modal and multi-dimensional information element" that consists of any number of attributes such as name, description, location, time, image, video, etc. only once, and allow a selected list of social applications to "interpret" and "translate" its content into a "language" or "form" popular on the destination platform. The description of your social experience [how] of eating a fabulous burger [what] at the awesome Father's Office in Culver City [where] with Mary and Jane [who], along with the "video" or "image" of the burger [augmented modalities], forms a single multi-dimensional, multi-modal data element that can be (semi-)automatically transformed into a tweet, Facebook post, YouTube video, Instagram picture, Foursquare check-in, Cheers post, Path update, and so on. With all information silos consolidating, location services becoming ubiquitous, and Facebook becoming everyone's digital web identity, we're not that far. It's a matter of solving a few (but very challenging) privacy, security and legal, UX, and of course business issues. But to entrepreneurs, these are hopefully what "Smells Like Teen Spirit" sounded like to the teenagers of Seattle in 1991.

Images courtesy of The Conversation Prism, Pixeljoint, and hardindd.
