Mastering Large Datasets with Python: Parallelize and Distribute Your Python Code
John Wolohan
Jan 2020 · Simon and Schuster
Ebook · 312 pages
About this ebook
Summary Modern data science solutions need to be clean, easy to read, and scalable. In Mastering Large Datasets with Python, author J.T. Wolohan teaches you how to take a small project and scale it up using a functionally influenced approach to Python coding. You’ll explore methods and built-in Python tools that lend themselves to clarity and scalability, such as high-performing parallelism, as well as distributed technologies that allow for high data throughput. The abundant hands-on exercises in this practical tutorial will lock in these essential skills for any large-scale data science project.
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology Programming techniques that work well on laptop-sized data can slow to a crawl—or fail altogether—when applied to massive files or distributed datasets. By mastering the powerful map and reduce paradigm, along with the Python-based tools that support it, you can write data-centric applications that scale efficiently without requiring codebase rewrites as your requirements change.
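The map and reduce paradigm mentioned above can be sketched in a few lines of plain Python. This is a minimal illustration with made-up data, not an excerpt from the book: map applies the same transformation to every record independently (which is what makes it parallelizable), and reduce folds the per-record results into a single answer.

```python
from functools import reduce

# Hypothetical data: a small collection of documents
docs = ["large datasets", "python code", "map and reduce"]

# map: transform each document independently (here, count its words)
word_counts = map(lambda doc: len(doc.split()), docs)

# reduce: accumulate the per-document counts into one total
total = reduce(lambda acc, n: acc + n, word_counts, 0)
print(total)  # 7
```

Because each map call touches only its own element, the same program structure scales from a laptop to a cluster by swapping in a parallel or distributed map.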
About the book Mastering Large Datasets with Python teaches you to write code that can handle datasets of any size. You’ll start with laptop-sized datasets that teach you to parallelize data analysis by breaking large tasks into smaller ones that can run simultaneously. You’ll then scale those same programs to industrial-sized datasets on a cluster of cloud servers. With the map and reduce paradigm firmly in place, you’ll explore tools like Hadoop and PySpark to efficiently process massive distributed datasets, speed up decision-making with machine learning, and simplify your data storage with AWS S3.
What's inside
An introduction to the map and reduce paradigm
Parallelization with the multiprocessing module and pathos framework
Hadoop and Spark for distributed computing
Running AWS jobs to process large datasets
About the reader For Python programmers who need to work faster with more data.
About the author J. T. Wolohan is a lead data scientist at Booz Allen Hamilton, and a PhD researcher at Indiana University, Bloomington.
Table of Contents:
PART 1
1 ¦ Introduction
2 ¦ Accelerating large dataset work: Map and parallel computing
3 ¦ Function pipelines for mapping complex transformations
4 ¦ Processing large datasets with lazy workflows
5 ¦ Accumulation operations with reduce
6 ¦ Speeding up map and reduce with advanced parallelization
PART 2
7 ¦ Processing truly big datasets with Hadoop and Spark
8 ¦ Best practices for large data with Apache Streaming and mrjob
9 ¦ PageRank with map and reduce in PySpark
10 ¦ Faster decision-making with machine learning and PySpark
PART 3
11 ¦ Large datasets in the cloud with Amazon Web Services and S3
12 ¦ MapReduce in the cloud with Amazon’s Elastic MapReduce
About the author
J.T. Wolohan is a lead data scientist at Booz Allen Hamilton and a PhD researcher at Indiana University, Bloomington, affiliated with the Department of Information and Library Science and the School of Informatics and Computing. His professional work focuses on rapid prototyping and scalable AI. His research focuses on computational analysis of social uses of language online.