This course is an introduction to probabilistic modeling, including random processes and the basic elements of statistical inference. The ability to think probabilistically is a fundamental component of scientific literacy. You will learn the relevant models, skills, and tools that are the keys to analyzing data and making scientifically sound predictions under uncertainty. We emphasize the basic concepts and methodologies, and include dozens of examples and applications.

Probabilistic Systems Analysis has been offered and continuously refined at MIT for more than fifty years. The class is offered through the Electrical Engineering Department and has, over time, served multiple constituencies from engineering, operations research, and the sciences. Unlike traditional mathematics classes, it aims to develop the skills and intuition that are most useful to practicing engineers and scientists. On campus, it is taken by a large number of students with diverse backgrounds and a broad range of interests. They span the entire spectrum from freshmen to beginning graduate students, and from the Engineering School to the School of Management.

The prerequisites for this course are *18.01 Single Variable Calculus* and *18.02 Multivariable Calculus*.

Although this is not a mathematics course, it relies on the language and some of the tools of mathematics. It requires a level of comfort with mathematical reasoning; familiarity with sequences, limits, infinite series, and the chain rule; and the ability to work with ordinary or multiple integrals.

Until quite recently, scientific literacy meant knowing calculus, some physics, and some chemistry. Even with the introduction of computers and computation, this was all that you needed to know in order to make sense of the world. But these days, you cannot understand what is going on around you if you do not understand the uncertainty attached to nearly every phenomenon. This is why probability is now a central component of scientific literacy.

What is it that has changed and has caused this shift? There are two main factors:

- Increasing complexity. As science and engineering move forward, we end up dealing with more and more complex systems. In a complex system, we cannot expect to have a perfect model of each component or to know the exact state of every piece of the system. So, uncertainty is now at the foreground, and needs to be modeled.
- Abundance of information. We live in an information society. Data and information play a central role both in our individual lives and in the economy as a whole. They are useful only because they can tell us something we did not know; their reason for existence is to reduce uncertainty. But if your goal is to reduce uncertainty, to fight it, you'd better understand its nature, and you'd better have the tools to describe and analyze it. This is why understanding probability theory and its offshoots, statistics and inference, is a must.

If these arguments sound a bit abstract, just think of any scientific field, and you quickly realize that pretty much everything is subject to uncertainty and calls for probabilistic models. For example:

- Quantum mechanics has taught us that nature is inherently uncertain.
- Biological evolution progresses through the accumulation of many random effects (such as mutations) within an uncertain environment.
- Biological data is rapidly amassing, and that data needs to be sifted, using statistical tools, to extract useful information.
- Communications and signal processing are largely a fight against noise. Removing the noise that nature adds to signals is essential to successful communication.
- Customer demand is random, yet you need to be able to model it and predict it.
- Financial markets are uncertain. Whoever has the best methods to analyze financial data has an advantage.
- Transportation systems are subject to random disruptions due to weather or accidents, and coping with these disruptions is a major concern.
- Social network trends spread like epidemics and in ways that are hard to predict.

The message is clear: most phenomena of interest involve significant randomness, and the only reason we collect and manipulate data is because we want to fight this randomness as much as we can. The first step in fighting an enemy like randomness is to study and understand it.

Conceptual:

- Master the basic concepts associated with probability models.
- Be able to translate models described in words to mathematical ones.
- Understand the main concepts and assumptions underlying Bayesian and classical inference.
- Obtain some familiarity with the range of applications of inference methods.

More technical:

- Become familiar with basic and common probability distributions.
- Learn how to use conditioning to simplify the analysis of complicated models.
- Have facility manipulating probability mass functions, densities, and expectations.
- Understand the power of laws of large numbers and be able to use them when appropriate.
- Develop a solid understanding of the concept of conditional expectation and its role in inference.
- Learn how to formulate simple dynamical models as Markov chains and analyze them.
- Become familiar with the basic inference methodologies (for both estimation and hypothesis testing) and be able to apply them.
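To give a flavor of the kind of reasoning these objectives involve, here is a minimal simulation sketch (not part of the course materials, and any function names are illustrative) of one of them, the law of large numbers: the sample mean of independent Bernoulli trials with success probability p concentrates around p as the number of trials grows.

```python
import random

def sample_mean(p, n, rng):
    """Average of n independent Bernoulli(p) trials."""
    return sum(1 if rng.random() < p else 0 for _ in range(n)) / n

rng = random.Random(0)  # fixed seed so the run is reproducible
p = 0.3
for n in (10, 1000, 100000):
    # The sample mean should get closer to p as n increases.
    print(n, sample_mean(p, n, rng))
```

The fluctuation of the sample mean around p shrinks on the order of 1/sqrt(n), which is why the estimates above tighten as n grows.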

The text for this course is:

Bertsekas, Dimitri, and John Tsitsiklis. *Introduction to Probability*. 2nd ed. Athena Scientific, 2008. ISBN: 9781886529236.

This OCW Scholar course, designed for independent study, is closely modeled on the course taught on the MIT campus. The on-campus course has three types of class sessions: lectures, recitations, and tutorials. The lectures and recitations each meet twice a week, and the optional tutorial meets once a week.

- Lectures introduce new concepts. They have an overview character, but also include some derivations and motivating applications.
- In recitations, the instructor elaborates on concepts presented in lecture, works through new examples with student participation, and answers questions.
- In tutorials, students discuss and solve new examples with some help from classmates and the instructor. Tutorials are active sessions designed to help students develop confidence in thinking about probabilistic situations in real time.

MIT students who take the corresponding residential class typically report an average of 11-12 hours spent each week, including lectures, recitations, readings, homework, and exams.

The OCW Scholar course combines content previously published on the Fall 2010 OCW site *6.041 Probabilistic Systems Analysis and Applied Probability* with 51 new videos recorded in 2013 by MIT Teaching Assistants. The Scholar course has four major learning units. Each unit has been divided into a sequence of lecture sessions that include:

- A lecture video by Professor Tsitsiklis
- The slides shown in that lecture
- Suggested textbook readings

Most sessions also include:

- Recitation Problems and Solutions
- Tutorial Problems and Solutions

To help guide your learning, some of these problems have an accompanying Help Video where an MIT Teaching Assistant solves the same problem.

In addition to the Recitation and Tutorial Problems, the course also has Problem Sets and Exams with Solutions.

It's difficult to estimate how long it will take you to complete the course, but you can probably expect to spend several hours working through each individual lecture session.

This OCW Scholar course was developed by John Tsitsiklis, Professor of Electrical Engineering, with the Department of Electrical Engineering and Computer Science (EECS) at MIT. The Help Session Videos were developed by MIT Teaching Assistants Qing He, Jimmy Li, Jagdish Ramakrishnan, Katie Szeto, and Kuang Xu. To learn more, visit the Meet the Team page.