A Facebook engineer revealed at an industry conference how the company is working to make its backend data processing more efficient.

The problems, which Facebook has been forced to grapple with “much sooner than the broader industry,” include finding more efficient ways to process user behavior on the site, better ways to access and consolidate different types of data across Facebook’s multiple data centers, and new open source software systems to process that data, Ravi Murthy, who manages Facebook’s analytics infrastructure, said Tuesday.

One major area of behind-the-scenes work is Facebook’s analytics infrastructure, which is designed to accelerate product development and improve the user experience through deep analysis of all available data, whether that data consists of actions users take on the site, such as posting status updates, or the applications they use within Facebook on different devices.

Facebook currently uses several open source software systems, including Hadoop, Corona and Prism, to process and analyze its data; the company will focus on making these systems faster and more efficient over the next six to twelve months.
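Conceptually, batch systems like Hadoop aggregate huge event logs in a map-and-reduce pattern: map each record to a key, then sum per key. A minimal sketch in plain Python of that pattern (the event names and data here are invented for illustration and are not Facebook's actual schema or code):

```python
from collections import Counter

# Hypothetical event log of (user_id, action) pairs, the kind of user
# behavior data a batch analytics pipeline might aggregate.
events = [
    ("alice", "status_update"),
    ("bob", "app_open"),
    ("alice", "like"),
    ("bob", "status_update"),
    ("carol", "status_update"),
]

# Map phase: emit a (key, 1) pair for every event.
mapped = [(action, 1) for _, action in events]

# Reduce phase: sum the counts for each key.
action_counts = Counter()
for action, count in mapped:
    action_counts[action] += count

print(dict(action_counts))
```

In a real Hadoop job the map and reduce phases run in parallel across many machines, with the framework shuffling each key's pairs to the same reducer; the tiny in-memory version above only shows the shape of the computation.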

Want to know more about this update? Read the full article: Facebook’s big data plans include warehouses, faster analytics.
