Neda Jabbari, Ph.D.: Academic To Data Scientist

Neda Jabbari, Ph.D., spent more than a decade in academia building a data science skillset before transitioning into a private-sector biotech role. 

She shares her journey from academic to Data Scientist below. 

A Background in Academia

Neda Jabbari spent the first decade of her career in academia doing what academics do – namely, acquiring degrees and research experience. Accumulating a Bachelor’s, Master’s, and Ph.D. in Molecular Biology, Neda followed all that school up with three years as a postdoctoral fellow at the Institute for Systems Biology. Fifteen years in, however, she decided to transition into the private sector by way of Flatiron School.

“I built a career in data science [in academia],” Neda said. “After completing my postdoctoral research, I [wanted to join] the data science bootcamp to explore broader topics.”

Despite leaving academia behind, Neda brought her acquired scientist’s skillset with her and planned to put it to use. 

“I wanted to apply my skills in experimentation and problem-solving to broader tech.” 

Her Bootcamp Experience

Regarding her decision to join Flatiron School, Neda cites word-of-mouth and the program’s transparency as being primary drivers. 

“I had heard about the campus in New York and its reputation,” she explained. “I liked that it provided statistics on student success outcomes.” 

But, going from learning in years and semesters to days and weeks presented different challenges in approaching material. Where academia allows researchers to go deep into a single problem, the accelerated bootcamp experience demands that students learn a new skill and then move on to the next phase.

“It was a very compact program, as expected,” she recalled. “This made it challenging at times to fully explore the course material.”

Despite the expedited learning schedule, Neda, like many students, found support and camaraderie in her instructors and classmates.

“Interacting with other students in my cohort and instructors with different backgrounds was my favorite part [of the bootcamp].”

Working In Tech

Neda graduated from the Flatiron School Data Science program in October 2019, later accepting a Data Scientist position at Adaptive Biotechnologies Corp. Working in the private sector, she said, has been enjoyable. 

“I perform analytics and data pipeline development to support product and operations,” she explained when we spoke with her in March 2023. “I have always liked working with data and optimizing processes.”

Advice For Current Students

Neda’s advice for current Flatiron School Data Science students is two-pronged. The first is purely practical. 

“Track your code. Practice using GitHub.”

Her second bit of advice is more foundational to the career change journey, delivered in a numbered, methodical way that marks her as still being an academic at heart. 

“1. plan, 2. try, 3. look back at your progress, 4. evaluate and modify if needed, 5. repeat this cycle as many times as you need while enjoying the process and learning from others who made it work. You will get there.”

Ready To Dig Deeper Into Data, Just Like Neda Jabbari, Ph.D.?

Apply Now to join other students like Neda Jabbari, Ph.D. in a program that sets you apart from the competition. 

Not ready to apply? Try out our Free Data Science Prep Work and test-run the material we teach in the course. Or, review the Data Science Course Syllabus that will set you up for success and help launch your new career.

Read more stories about successful career changes on the Flatiron School blog.

Chuck Utterback: Diving Deeper Into Data

Chuck Utterback, a June 2021 Data Science graduate from Flatiron School, came to his program with decades of professional experience. 

He shares why he chose to attend Flatiron School to upskill in data analytics below. 

Background in Business

Chuck Utterback began his 30-year career with an MBA in finance. While he initially followed the traditional path by working as a Financial Analyst, he soon transitioned toward data. 

“I derive intense satisfaction in developing business insights through data engineering and analytics. Today’s technology, such as cloud computing, advanced data visualization tools, and analytics programming languages, enable iterative and rapid time to insight.”

In the subsequent years, he held a string of data-focused leadership roles in prominent companies and founded his own analytic consulting company.

“Within years of starting my career, I migrated into cross-functional business analytics. [But] to answer complex business questions, I found the data sources insufficient,” he said. “To remedy this, I began building data warehouses, leading to 15 years of consulting projects in data warehousing, data visualization, and process improvement.” 

Bootcamp Experience

After 15 years, however, technology had changed. To keep up with evolving capabilities, Chuck determined it was time to go back to school and update his skillset. 

“I decided to pause client work and invest in developing more profound data science and machine learning skills,” he said. “I aimed to bring more robust statistics and machine learning techniques into my existing analytics practice. ” 

Choosing Flatiron School, Chuck said, was the result of an analysis combining several factors and features of its programs.

“Flatiron [School] had the best combination of course materials, commitment length, and remote flexibility. I chose the full-time data science bootcamp and focused all my energy there for six months.” 

Even before the program began, Chuck’s skillset quickly began to expand. 

“The onboarding assignment was to use Python to code gradient descent calculus equations,” he explained. “The project stretched me from the start but was foundational to getting comfortable with Python functions.” 
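An exercise in that spirit might look like the following minimal sketch (a generic illustration, not Flatiron School’s actual assignment): gradient descent uses a function’s derivative to step repeatedly downhill toward a minimum.

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# The derivative f'(x) = 2 * (x - 3) points uphill, so we step the other way.

def gradient(x):
    return 2 * (x - 3)

def gradient_descent(start, learning_rate=0.1, steps=100):
    x = start
    for _ in range(steps):
        x -= learning_rate * gradient(x)  # step opposite the gradient
    return x

minimum = gradient_descent(start=0.0)
print(round(minimum, 4))  # converges toward 3.0
```

The same loop, with the gradient computed over a dataset instead of a single variable, is what trains many machine learning models.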

The coursework, he said, had relevancy to his practice throughout the program.

“I liked learning classification models due to their relevance to solving business problems like customer segmentation/churn and predicting various outcomes across any business process.” 

Working With Data

Unlike most Flatiron School students, after graduating in June 2021, Chuck returned to where he began – using data to reduce his customers’ friction, find revenue opportunities, and expose process inefficiencies. The transition back into using data to meet clients’ needs – as opposed to creating projects for a grade – has been a fluid one.

“I love [what I do]. I target to keep at least 80% of my time heads-down (versus meetings), solving problems using data.” 

As for his new skills, he’s already putting them into practice. 

“I use Python to extract customer data from Salesforce into Google Big Query and automate data quality applications and customer insights. Overcoming poor data quality is a considerable impediment to advanced techniques.”

Reflecting On His Journey

Looking back on his time at Flatiron School, Chuck’s main takeaway is to always bet on oneself, and never stop learning.

“Embracing continuous learning by investing in myself – upskilling within analytics through Flatiron – drives a positive return on investment.” 

His advice for other data science students is to remember that Flatiron School is just one piece of the puzzle; it’s critical to prioritize being a well-rounded applicant.

“Formalize daily habits in data science coding and reading that go beyond the Flatiron curriculum so you can sustain momentum after graduating.”

Connect with Chuck on Medium or LinkedIn

Ready To Dig Deeper Into Data, Just Like Chuck Utterback?

Apply Now to join other students like Chuck Utterback in a program that sets you apart from the competition. 

Not ready to apply? Try out our Free Data Science Prep Work and test-run the material we teach in the course. Or, review the Data Science Course Syllabus that will set you up for success and help launch your new career.

Read more stories about successful career changes on the Flatiron School blog.

Data Science vs. Software Engineering: Industry Trends and Future Predictions

The road to the future isn’t paved with asphalt—it’s a path defined by ones and zeros. For centuries, infrastructure was in the hands of engineers working in the physical world. Today, it’s the masters of data science and software engineering who will move society into the next stage of technological advancement.

In this piece, we’ll examine industry trends in data science vs. software engineering, forecast the direction of both fields, and consider how they’ll impact each other in the future.

Industry Trends in Data Science

Hiring Growth

The industry is actively seeking data scientists. Estimates show data science employment is expected to grow by a staggering 36% from 2021 to 2031. The future is in the hands of skilled data analysts, data engineers, and data architects who can use data analysis to extract valuable, actionable insights.

Gathering Big Data

Projections suggest more than 150 billion devices will be generating 175 zettabytes of data by 2025. Much of this data will be generated and analyzed in real time, providing almost-instant feedback for improved results (think content recommendation systems).

These mountains of captured data will drive company decisions, strategies, and future projections. People who can design complex new analytical models and then train machine learning systems on those models will be invaluable.

Analyzing TinyML and Small Data for Data-Driven Devices

The Internet of Things (IoT) has ramped up the need for data scientists who work with TinyML and small data. IoT devices are being developed for nearly every industry, calling for experts to gather and implement that data.

Smart homes. Smart transit. Entire smart cities. All of these call for small, low-powered devices that run machine learning models on local datasets. These TinyML devices require the expertise of data scientists to collect and analyze billions of data points, which are then stored in the cloud, streaming new command instructions for these smart devices to act upon in real time.

This is where data science meets the true cutting edge of the future of data-driven devices.

Using AutoML

Analyzing monumental amounts of data collected from databases, platforms, and devices calls for integrating metrics using Automated Machine Learning (AutoML).

Data scientists will rely on automated tasks to gather accurate data streams. Likewise, industries will need their expertise to define what tasks are suitable for ML and to train ML on their information models to improve accuracy.

Industry Trends in Software Engineering

High Growth

Like data science, software engineering will likely see extremely fast growth. The industry is expected to grow 25% from 2021 to 2031.

For over a decade, businesses have relied more heavily on digital solutions to traditional scaling challenges. From digital platforms to app development in emerging fields like AI and ML, software engineers are in high demand to build the future.

Utilization of Agile Methodology and DevOps

Software development teams have benefited significantly from advances in Agile methodology and DevOps, sharing centralized datasets while developing quickly and efficiently.

Teams consistently share daily progress in small sections of completed applications, later joined to build a final product. Redundant processes like testing for errors and security issues are automated.

Agile and DevOps will trend upwards together as both continue to benefit from increased usage and advances in AI and ML.

Cloud-Based Platform Development

The reliance on cloud computing will increase. Centralized data repositories shared by development teams, massive amounts of storage, and an added layer of security will help businesses scale cost-effectively.

AI and ML Automation

While the AI and ML fields are currently experiencing an explosion of public interest, they’ve been trending upward for many years. In every sector, reliably automating processes and training ML systems on specific tasks to increase accuracy is critical. The roles of AI and ML in Agile and DevOps are expected to continue driving innovation in development.

Advances in Cybersecurity

Cyber threats are rising exponentially. In 2022, there was an average of 1,168 weekly cyberattacks—a 38% rise compared to 2021. As attacks grow proportionately in complexity and sophistication, companies both large and small are seeking new ways to secure their systems against the onslaught.

There will be continuously high demand for software engineering skills in cybersecurity to keep systems and information secure.

Mobile App Development

The more we rely on smartphones for our daily needs, the greater the need for mobile-friendly app development and support systems. Whether it’s for companies needing to develop AI-based customer support, for apps using gamification to help people in personal development, or even for GPS-based beacon technology, there’s never been a greater need for mobile-first software development.

The Intersection of Data Science vs. Software Engineering: How the Two Fields Will Impact Each Other

It’s easy to think of these two fields as separate entities – as data science vs. software engineering. But, as multiple revolutionary technologies grow exponentially, we’ll see them increasingly intersect, with advances in each field fueling the other.

Increasingly sophisticated software tools will integrate cutting-edge AI and ML, requiring new skill sets from data scientists and software engineers alike. 

Software engineers will draw from massive repositories to streamline and optimize code development. 

Data privacy and the ethics surrounding intellectual property will call on advanced data analytics. At the same time, combining 5G, IoT, and advances in AR and VR will transform every industry.

Data scientists and software engineers will play fundamental roles in shaping the future of banks, hospitals, customer service, and ethical data mining. At the same time, new fields will be emerging, creating demand for roles that don’t exist yet.

Advances in AI and ML

With new applications developed every day that tap into the power of neural networks and large language models, the world will need analysts and other data experts. Data analysts will formulate how to use millions of new data points. Meanwhile, software engineers will experience increased demand to help companies accelerate data use to gain an edge over competitors.

Programs designed to help users automate tasks will require the expertise of data scientists and programmers to integrate AI and ML upgrades into their features. Companies built on a digital platform model will need constant new integrations: richer datasets from users, and features that help users make smarter, faster purchases using advanced predictive models.

Advances in IoT

IoT is going to change the way we look at the world. Every device, appliance, and gadget will operate on analytics designed by today’s data scientists, with code optimized for even the simplest systems. At scale, everything from traffic patterns on our roads to railways and airport runways will require complex models that other machines will read and interpret. 

Cities will rely on well-crafted information systems and dynamic analytics to manage electricity use, waste management, hospital equipment, and other systems. The potential use cases are virtually endless.

Advances in AR and VR

Along with powerful CPUs and other advances, we’ll probably see accelerated movement into practical applications for AR and VR. They’ll factor into production tools, virtual work environments, live events, and virtual game environments — supporting and enhancing experiences in reality. This technology means coding opportunities galore, but it also means opportunities to collect unique new user data.

Advances in 5G

5G does for internet bandwidth what AI is doing for intelligent computing. This technology will open doors for front-end developers to design radically upgraded visuals for desktop and mobile.

New predictive models will transform the market in ways we have yet to imagine. Improvements in data processing mean collecting real-time information on users and their connections to one another.

Join the Technology Revolution in Data Science and Software Engineering

If you’re considering a future in data science or software engineering, get a taste of what Flatiron School offers: download a syllabus and try your hand at our free lessons.

Or, you may be looking to hire the best talent. If so, learn how we help businesses with our hire-to-retire dream team recruitment services and multiple technology training solutions.

Artificial Intelligence vs. Data Science

Navigating the fast-changing currents of the tech sector can challenge even the strongest sailor. The winds change, the ocean swells, and suddenly a ship that seemed completely seaworthy is wrecked on the rocks of irrelevance. New technologies emerge from the storm every day, promising grand sights just over the horizon. Nowhere is this more true than with the two racing yachts of applied data: data science and artificial intelligence (AI).

What is Artificial Intelligence?

Before going any further, I’d like to make clear that for the purposes of this article, “AI” refers to the generative models that have captured the public imagination over the last two years – applications like OpenAI’s ChatGPT and DALL-E. The applications described as “AI” in the news are algorithms trained for specific tasks on specific datasets. If an algorithm has been trained to play chess, it will play chess exactly as well as it has been trained to.  It will not be able to carry on a human-sounding conversation about chess, or even explain why it makes its moves. That said, if you haven’t played around with ChatGPT or DALL-E or any of these generative models, you should. They’re really a blast!

These two apparent competitors have traded off the lead several times, at least in the imaginations of the tech press and blogosphere. Five years ago, “data scientist” was still the sexiest career of the 21st century. Now, the headline writers seem to think that data science will be rendered obsolete by the advent of AI. Someone who hadn’t been paying any attention at all might even think at this point that “data science” and “AI” are interchangeable. The reality, though, is that AI is really just an application of data science – a set of tools and methods for making data useful.

The Modern Flood Of Data

The world is drowning in data. Estimates put daily data generation globally at 328.77 quintillion bytes. (A quintillion is a 1 followed by 18 zeros!) That’s approximately 328 million maxed-out iPhone 13s – enough for just about every human in the United States. It is practically beyond conception. That volume represents everything: every video, picture, email, and spreadsheet. Every like, favorite, and follow is recorded and stored, but so is every action taken by an Internet of Things (IoT) device like your doorbell video camera, or your fitness tracker.
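For the curious, that iPhone comparison holds up as back-of-envelope arithmetic, assuming the 1 TB top configuration (the iPhone 13 Pro):

```python
daily_bytes = 328.77e18   # 328.77 quintillion bytes generated per day
iphone_bytes = 1e12       # a maxed-out 1 TB iPhone 13 Pro (assumed configuration)

phones_per_day = daily_bytes / iphone_bytes
print(f"{phones_per_day / 1e6:.2f} million 1 TB phones")  # 328.77 million 1 TB phones
```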

Data sitting in a data warehouse doesn’t do anyone any good, though. Someone has to make sense of it and make it useful. That is what data science does. Using code, statistics, and machine learning, data science is the field concerned with extracting knowledge and insight from the floods of data running around the world. 

What Does A Data Scientist Do?

We can think of the data scientist as a detective. Clad in a deerstalker cap and Inverness cape or, more likely, a hoodie and noise-canceling headphones, they meticulously sift through the mountains of data looking for gems of insight and understanding. The data scientist uses their analytical skills and abilities, and old-fashioned human curiosity and intuition, augmented by machine learning algorithms and statistical analysis, to pose interesting questions of data. They develop inferential models that can help companies better understand and serve their customers. And they build predictive models that ensure customers find what they are looking for before they know they’re looking for it!

In broad strokes, a data scientist’s job is to make sense of data, to make it useful in some context. And while data science is a tremendous amount of fun to do on its own, data scientists earn our salaries by being useful to businesses or other enterprises. 

Data Science In Action

Let’s take a look at a hypothetical project involving online shopping and customer service. This example will take us through a data science workflow, even up to the construction of a generative model that could help a retailer improve its customer experience. (It should go without saying that not every data scientist can do everything described in this example. This is a rather fanciful scenario to illustrate the vast breadth of work that falls under the heading of data science.)

Imagine a large online retailer called Congo. As a retail platform, Congo wants to make sure that customers make as many purchases as possible on their platform, in part by providing a positive customer experience. 

Asking A Question

A data scientist, charged with improving customer service, begins their work with a question. The question could be simple, like “What do people contact customer support about most often?” It could be more complex, like “How can we improve customer satisfaction while reducing the amount of time a representative spends on a ticket?” The important thing is to have a clear question that, in theory, can be answered with data.

Collecting Data

Along with selling products to customers, Congo collects information about what people are doing on its website; every interaction, from the first log-in to the final purchase, is tracked and recorded. This goes for interactions with customer support as well as regular browsing. The data scientist can have access to all of this information, but will usually perform their first forays into answering their question on a portion of the data: a dataset. This dataset is collected from whatever database, data warehouse, or data lake Congo uses, and refined using SQL or some other query language.
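That refinement step might look something like the following sketch, which uses Python’s built-in sqlite3 as a stand-in for a real warehouse. The table and column names here are invented purely for illustration.

```python
import sqlite3

# Build a toy support-tickets table standing in for Congo's warehouse.
# (Table name, columns, and values are all invented for illustration.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, category TEXT, minutes_open REAL)")
conn.executemany(
    "INSERT INTO tickets VALUES (?, ?, ?)",
    [(1, "shipping", 42.0), (2, "refund", 90.5), (3, "shipping", 15.0)],
)

# A query-language refinement: pull only the slice relevant to the question.
rows = conn.execute(
    "SELECT category, COUNT(*) AS n, AVG(minutes_open) AS avg_open "
    "FROM tickets GROUP BY category ORDER BY n DESC"
).fetchall()
print(rows)  # [('shipping', 2, 28.5), ('refund', 1, 90.5)]
```

In practice the same query would run against the company’s actual database, and the result would land in a dataframe rather than a list of tuples.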

Initial Analysis

Once they have data in hand, the data scientist can begin analyzing it for patterns. The specifics of how they do this depend on their tech stack, but also on the question they are asking. The two main stacks (‘stack’ is tech-speak for a person’s or organization’s preferred collection of programming tools and frameworks) for data science at the moment build on the R and Python programming languages. Other languages, like Rust and Julia, are growing in popularity, but have not yet gained a major foothold in the field.

The initial stages of answering a question tend to look the same. A data scientist needs to get a handle on what is actually in their dataset, and they almost invariably start with descriptive analysis and data visualization. Descriptive analysis is built on the basic calculations of things like mean, median, and mode, as well as variance and standard deviation. Unfortunately, numbers alone don’t tell a complete story, so the data scientist will also likely produce a number of visualizations: scatter plots, histograms, and heatmaps for correlation coefficients. 
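Those opening calculations need nothing more than Python’s standard library. The ticket durations below are made up for illustration; in practice a data scientist would reach for pandas for the summary and matplotlib or seaborn for the visualizations.

```python
import statistics as stats

# Minutes-to-resolution for a handful of (made-up) support tickets.
resolution_minutes = [12, 15, 15, 22, 30, 48, 15, 9, 60, 22]

# The core descriptive measures named above: central tendency and spread.
summary = {
    "mean": stats.mean(resolution_minutes),
    "median": stats.median(resolution_minutes),
    "mode": stats.mode(resolution_minutes),
    "variance": stats.variance(resolution_minutes),
    "stdev": stats.stdev(resolution_minutes),
}
for name, value in summary.items():
    print(f"{name}: {value:.2f}")
```

Note how the mean (24.8) sits well above the median (18.5): a couple of slow tickets drag the average up, which is exactly the kind of skew a histogram would make visible at a glance.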

Natural Language Processing

Since the questions our data scientist is attempting to answer involve things like problems submitted to customer support, they have to use an area of data science called natural language processing (NLP) in order to get the data into a format that can be analyzed. Computers, for now, do not really know what to do with natural language. If you were to give a regular scripting language, or even Excel, a plain sentence like “The quick brown fox jumps over the lazy dog” and asked it to sum the words, you would get an error. NLP lets data scientists get around this problem, among other things, by creating computer-friendly representations of the words, called tokens.
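A toy version of that tokenization step might look like this. Real pipelines would use a library such as NLTK or spaCy and far more sophisticated token schemes, but the core idea is just turning words into numbers.

```python
import re
from collections import Counter

sentence = "The quick brown fox jumps over the lazy dog"

# A toy tokenizer: lowercase, then keep runs of letters.
tokens = re.findall(r"[a-z]+", sentence.lower())
print(tokens)

# Map each distinct token to an integer id -- a computer-friendly
# representation the rest of the pipeline can compute with.
vocab = {word: idx for idx, word in enumerate(sorted(set(tokens)))}
token_ids = [vocab[word] for word in tokens]
print(token_ids)

# Now "summing the words" is at least a well-defined operation:
print(Counter(tokens).most_common(1))  # [('the', 2)]
```

Once text becomes token ids, everything downstream (counting, classification, sentiment analysis) is ordinary number-crunching.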

If the question is a simple one, like “What is the most common complaint on a Tuesday?” or “How long does the average ticket take to get closed?” the data scientist’s work is likely to end with this initial analysis. (In point of fact, a question this simple is more likely to go to a business analyst or data analyst than to a data scientist, but a good data scientist has to be familiar with every stage of the process.) 

Remember that the data scientist’s ultimate goal in this scenario is to improve Congo’s customer service with basic analysis. A more complicated question, like, “How can we improve time-to-close on our customer support tickets?” requires more advanced tools and techniques than simple means and standard deviations. When looking for answers to a question like this, the data scientist is likely to turn to machine learning.

Using Machine Learning To Answer Complex Questions

Machine learning (ML) is a set of algorithms and processes that enable computers to ‘learn’ by recognizing patterns in data and using those patterns to identify new examples of that pattern. Keeping with our customer service example: if we showed a computer thousands of complaints (the data) and told it how to look at those complaints (the algorithm), it would pick out all the things that are similar among those complaints and use those similarities to identify complaints in new customer service requests. This particular example of machine learning is called “classification,” which is sorting things into groups based on common attributes. Depending on how it is implemented, it could also involve an NLP task called “sentiment analysis,” which evaluates the emotional content of text.
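Here is a deliberately tiny sketch of that idea: a bag-of-words “classifier” that scores new messages by word overlap with labeled examples. The training messages are invented, and a real system would use scikit-learn or similar, with proper feature engineering and train/test splits.

```python
from collections import Counter

# Toy labeled training data: support messages tagged complaint / other.
training = [
    ("my order arrived broken and late", "complaint"),
    ("this is the worst service ever", "complaint"),
    ("item never arrived, very disappointed", "complaint"),
    ("how do i change my shipping address", "other"),
    ("thanks, the replacement works great", "other"),
    ("what are your holiday opening hours", "other"),
]

# "Training": count how often each word appears under each label.
word_counts = {"complaint": Counter(), "other": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Score a new message by which label's vocabulary it overlaps more."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("my package arrived broken"))   # complaint
print(classify("how do i update my address"))  # other
```

The pattern is the same one described above, just in miniature: show the algorithm labeled examples, let it extract what the examples have in common, and use that to sort new, unseen messages into groups.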

Depending on how far our data scientist wanted to go, an ML-based filter identifying customer service tickets that are simply complaints could be sufficient for the task. With the filter in place, the customer service team could create a protocol for dealing with messages flagged as ‘complaint’ and improve their clearance time that way. 

But let’s say that our data scientist is not satisfied with good enough. With their new ML-enabled filter, the data scientist could process every customer support ticket Congo has ever received, effectively creating a new dataset of complaints. They could build a new algorithm on the complaints data to identify which complaints were resolved successfully, based on customer feedback for the interaction, and use the results of that analysis to inform a response protocol for customer service representatives to use.

The Role Of AI Assistants

Having done quite a lot of analysis at this point of what is and is not a successful customer complaint interaction, it may even make sense for the data scientist to create an AI assistant just for handling complaints. Something like ChatGPT (GPT stands for ‘Generative Pre-Trained Transformer’) requires a mind-boggling volume of data to train effectively. A recent estimate of its training data put it at around 600GB. For comparison, the whole of all the English language text on Wikipedia is about 25GB. But with sufficient data, and a good model selected, a data scientist could build a proof-of-concept assistant without too much difficulty. Of course, scaling it to support an entire customer service team would require the services of data engineers, software developers, and a host of other technical specialties. It may even turn out to be too expensive in terms of a straight cost-benefit analysis to implement, but it is absolutely within the realm of possibilities.

The Future Of Artificial Intelligence vs. Data Science

Data science and AI exist in an elegant symbiosis. AI requires enormous amounts of data to produce its wondrous creations. It uses algorithms to learn the patterns contained in trillions of words written by actual humans. All of that data requires processing, selection, and analysis by data scientists, who also have to observe the AI’s behavior and monitor it for flaws. Data scientists, furthermore, develop the algorithms and models that help drive improvements to AI. Although it might seem like the advent of generative models has rendered the data scientist obsolete, or at least endangered, the reality is that a good data scientist will learn how to use the strengths of the generative models to help with their own work.

About Charlie Rice

Charlie Rice is a Data Science Instructor with the Flatiron School. A former journalist, he got interested in data science when an editor showed him a news story written by a computer. Since making the switch eight years ago, he’s been involved in blockchain research, FinTech development, and helping to develop the forthcoming CompTIA Data Science certification.

What is Artificial Intelligence?

Artificial Intelligence is seemingly everywhere these days. Recent innovations have peppered the technology throughout our lives, with applications in just about every industry and field. But, if you’ve found yourself wondering what exactly this new technology is, you’ve come to the right place. In this post, we’ll cover what AI is, where it comes from, and how it’s used. 

What is Artificial Intelligence?

Artificial Intelligence (AI) is the field of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. It involves developing algorithms and systems that can analyze vast amounts of data, recognize patterns, learn from experience, and make informed decisions or predictions.

There are several different terms often used to refer to AI, as the technology encompasses a wide range of technologies and techniques. Commonly used terms include machine learning, natural language processing, computer vision, and robotics. 

Machine learning, a subset of AI, enables machines to learn and improve from experience without being explicitly programmed.

Natural language processing allows machines to understand, interpret, and respond to human language. This is the foundation of applications like voice assistants and language translation.

Computer vision empowers machines to analyze and interpret visual data, facilitating tasks like object recognition and image classification.

Robotics combines AI with mechanical engineering, enabling the development of intelligent machines that can interact with the physical world.

Who invented Artificial Intelligence?

Alan Turing

Alan Turing was a pioneering figure in the field of computer science. He made several significant contributions that influenced the development of artificial intelligence (AI). Turing’s ideas and concepts laid the groundwork for AI research and continue to shape the field to this day. As a result, he is often considered its inventor.  

One of Turing’s most influential contributions was his proposal of the “Turing test” in 1950. The Turing test is a measure of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. According to the test, if a machine can engage in natural language conversations and convince a human evaluator that it is human, then it can be considered artificially intelligent. 

The Turing test became a benchmark for AI researchers, encouraging the development of conversational agents and natural language processing capabilities.

Dartmouth College

AI also has its roots in the Dartmouth Conference held in 1956. It was at this conference that researchers first coined the term “artificial intelligence” and laid the foundation for the field.

Since its inception, AI has undergone significant evolutions, driven by advancements in computing power, data availability, and algorithmic improvements.

Late 20th Century AI

In the late 20th century, AI experienced an “AI winter” from the late 1970s to the early 1990s: a period characterized by decreased interest and progress. High expectations, limited computing power, and challenges in solving complex problems contributed to the decline.

The AI winter ended thanks to advances in computing power, practical applications that demonstrated value, the availability of big data, improved algorithms, successful commercial products, and interdisciplinary collaboration. These factors renewed interest, leading to a resurgence of AI research and applications in the late 1990s and early 2000s.

Early 21st Century AI

The early 21st-century AI resurgence was driven by the availability of Big Data, which enabled improved algorithms, pattern recognition, real-world applications, and iterative improvement of AI models.

  • Data Availability: The emergence of Big Data provided AI researchers with access to vast and diverse datasets for training and validation.
  • Enhanced Algorithms: AI algorithms, particularly in machine learning, improved by leveraging large datasets, leading to better performance and accuracy.
  • Pattern Recognition: Big Data allowed for the identification of complex patterns and correlations that were previously difficult to uncover.
  • Real-World Applications: Industries leveraged Big Data and AI to gain insights, make better decisions, and improve operational efficiency.
  • Iterative Improvement: The feedback loop created by Big Data enabled iterative improvement of AI models through continuous learning from real-world data.

What Does AI Do?

The purpose of AI is to automate tasks, enhance decision-making, improve efficiency and productivity, enable personalization, augment human capabilities, and drive innovation and research.

Automate Tasks

AI automates routine and repetitive tasks, freeing up human resources and allowing them to focus on more complex and creative endeavors.

Enhance Decision-Making

AI helps in making informed and data-driven decisions by analyzing large volumes of information, identifying patterns, and providing valuable insights to support decision-making processes.

Improve Efficiency and Productivity

AI technologies optimize processes, streamline operations, and increase efficiency, leading to improved productivity across various industries and sectors.

Enable Personalization

AI enables personalized experiences by analyzing user preferences, behavior, and data, allowing businesses to tailor products, services, and recommendations to individual needs and preferences.

Augment Human Capabilities

AI complements human abilities by enhancing their cognitive and physical capabilities, enabling humans to perform tasks faster, with higher accuracy, and with reduced effort.

Advance Innovation and Research

AI fuels innovation by enabling breakthroughs in various fields, driving advancements in healthcare, science, engineering, and other disciplines, leading to new discoveries and solutions.

Different Types of Artificial Intelligence

AI systems are categorized based on how generalizable or specific they are (narrow versus general) or by the way they make decisions (rule-based versus machine learning).

Narrow AI versus General AI

There are different types of AI, ranging from narrow or weak AI to general or strong AI.

Narrow/Weak AI refers to systems designed to perform specific tasks, such as facial recognition or voice assistants, and operates within predefined boundaries.

General/Strong AI aims to replicate human-level intelligence, possessing the ability to understand, learn, and apply knowledge across various domains.

While narrow AI is prevalent today, achieving general AI remains an ongoing challenge, and its development raises ethical and societal considerations.

Rule-Based versus Machine Learning

Rule-based AI, also known as expert systems, relies on predefined rules created by human experts to make decisions or solve problems. These rules are encoded into the AI system, and the system matches input data against these rules to determine the appropriate output or action.

Benefits:

  • Suited for well-defined domains with known and explicitly defined rules

Limitations:

  • May struggle with ambiguity or with learning from new data
  • Requires human expertise to create and maintain the rules

Machine Learning AI, in contrast, learns from data without explicit rules, using algorithms that analyze patterns and create mathematical models.

Benefits:

  • Adapts internal parameters to optimize performance and makes predictions or decisions based on new, unseen data
  • Excels in complex domains with large amounts of data, discovering intricate patterns and generalizing from examples
  • Adapts and improves performance over time as new data becomes available
  • Relies on training data and algorithms to learn autonomously

Limitations:

  • Need for large amounts of data
  • Overfitting/an inability to generalize
  • Potential to duplicate biases present in data

Both approaches have their strengths and limitations, and the choice between them depends on the specific problem domain and the availability of labeled data and expert knowledge.

Oftentimes, both approaches are used at different stages in the life cycle of an AI project.
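To make the contrast between the two approaches concrete, here is a minimal, self-contained Python sketch. The spam-filter framing, word list, and feature vectors are all invented for illustration, not drawn from any real system:

```python
# Rule-based: a human expert hard-codes the decision logic up front.
def rule_based_spam_check(subject: str) -> bool:
    """Flag a message as spam if it matches hand-written rules."""
    banned_words = {"winner", "free", "urgent"}
    return any(word in subject.lower() for word in banned_words)

# Machine learning (toy 1-nearest-neighbor): the system "learns" its
# decision boundary from labeled examples instead of explicit rules.
def train_and_predict(examples, labels, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(examples)), key=lambda i: distance(examples[i], new_point))
    return labels[best]

# The rule-based check needs no data; the learner needs labeled examples.
print(rule_based_spam_check("You are a WINNER!"))          # True
training = [(0.1, 0.2), (0.9, 0.8)]                        # feature vectors
labels = ["not spam", "spam"]
print(train_and_predict(training, labels, (0.85, 0.9)))    # spam
```

The trade-off described above shows up even in this toy: adding a new rule requires a human edit, while the learner adapts by simply being given more labeled examples.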

AI Uses In Industry

Self-Driving Cars (Tesla)

Tesla’s self-driving cars utilize a combination of AI techniques, including machine learning and expert systems.

Machine learning algorithms analyze vast amounts of data from cameras, radar, and other sensors to recognize and interpret the surrounding environment. Expert systems encode rules and decision-making processes, allowing the car to make real-time decisions based on input from sensors and the learned models.

Large Language Models (ChatGPT)

Large language models, like those behind ChatGPT, rely primarily on unsupervised machine learning techniques. Engineers train these systems on large datasets of text, enabling them to learn patterns, language structures, and context.

By leveraging deep learning algorithms, the models generate coherent and contextually relevant prompts or responses based on the input they receive.

Editing and Proofreading (Grammarly)

Grammarly uses a combination of expert systems and machine learning approaches to provide editing and proofreading suggestions. Expert systems encode grammar rules, style guidelines, and best practices.

Machine learning algorithms analyze text patterns and linguistic features to detect errors, suggest corrections, and provide contextual recommendations.

Learn To Wield The Power Of AI

While Artificial Intelligence is currently exploding in popularity, it is still considered a new field. The rules are still being written, and first movers often gain an advantage over those late to adapt. At Flatiron School, we’re teaching the skills to help you adapt to the AI revolution.

For enterprise clients, we’ve released entirely new AI training programs. If your organization wants to use AI to work smarter, move faster, and be prepared to innovate with the latest technology, Flatiron School’s suite of AI training programs is just what you’re looking for. Explore our AI training programs today. 

For students, each of our programs has been enhanced with AI. We teach our students how to use the power of AI to accelerate their output and results in Software Engineering, Data Science, Cybersecurity, and Product Design, and to be ready to adapt to the next innovation coming down the pike.

About Christine Egan

Christine is a Python Developer and Natural Language Processing Engineer, as well as a Senior Data Science Curriculum Developer at Flatiron School. She holds a Bachelor of Arts in Linguistics and Philosophy from Stony Brook University and is also an alum of the Flatiron School Data Science Bootcamp. Before joining Flatiron School’s curriculum team, Christine worked as a consultant for various federal agencies. When not working on Python code, you might find her writing data science articles for Medium, or playing Stardew Valley. 

Ace Interview Prep With AI

This article on interview prep with AI is part of the Content Collective series, featuring tips and expertise from Flatiron School Coaches. Every Flatiron School graduate receives up to 180 days of 1:1 career coaching with one of our professional coaches. This series is a glimpse of the expertise you can access during career coaching at Flatiron School.

My grandfather was known to say to his kids and grandkids when they received an invitation to interview for a job, “You get the interview. You get the job!” It was his enthusiastic way of communicating his confidence in our abilities. His belief, in part, meant that the hard work put into acquiring the interview (studying, relationship-building, practicing or honing our skills) would also serve us well performing in the interview itself. 

There’s no question that the job market has evolved quite a bit since my grandfather’s day. With the introduction of AI assistant tools like ChatGPT, preparing for interviews has become more efficient than ever. If used creatively alongside thoughtful and responsible editing and practice, the following approach can turbo-charge your confidence.

How To Use AI Assistant Tools For Interview Prep

If you’ve been on an interview for a job or been on a first date, you may recall how much preparation can go into such an event. From selecting the right outfit to finding the words to express why you’re interested without coming across as desperate, it can be a lot to think about. 

One of the most nerve-wracking parts is preparing to answer an interviewer’s questions that you can’t see ahead of time while weaving in your prior experience and how it will bring value to the organization.  

When it comes to interviewing for a job, AI assistant tools like ChatGPT can lighten this prep work with key entries, or “prompts,” a term (one that has even spawned the new job category of “prompt engineer”) popularized by this new technology.

Time to (role) Play!

Before diving into specific interview prompts, it’s helpful to view ChatGPT and other AI assistants like BingAI and Bard as just that – an assistant, or rather, a person. Give the assistant a specific role to play and give it as much information – or context – as it may need to play the role (that of an interviewer in this case) as accurately as possible. 

Context is critical in interviewing, whether you’re using AI to help you prepare or a real person like your Career Coach. A company is hiring you not to do just any job and not to solve just any problem. A good company has a very specific problem, has very specific jobs or responsibilities to tackle the problem, and if hiring, is looking for a specific person who has the right experience, skills, and adaptability to help solve the problem. 

Put It Into Action With AI

Enter the following prompt into the chatbot to give the AI assistant a role to play and as much context as possible to generate interview questions. 

Play the role of a <job title> for <insert company of your choice> (e.g., Spotify) who is hiring for a <insert job title> to join their team. The job description is <paste in the text from the job description on the company’s website>. What questions will you ask the candidate during the 45-minute interview to determine whether or not you will offer them a job to join your team? 

Watch as the AI assistant creates a list of questions that have remarkable relevancy to help you begin preparing for how best to respond.
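If you plan to reuse this template across several roles, you can fill it in programmatically before pasting it into the chatbot. Below is a minimal Python sketch; the function name and the example company, role, and description are hypothetical placeholders, not part of any official tool:

```python
# Build the interview prompt from the template above using f-strings.
def build_interview_prompt(interviewer_title, company, job_title, job_description):
    """Fill in the role-play template with the details of a specific opening."""
    return (
        f"Play the role of a {interviewer_title} for {company} who is hiring "
        f"for a {job_title} to join their team. The job description is: "
        f"{job_description} What questions will you ask the candidate during "
        "the 45-minute interview to determine whether or not you will offer "
        "them a job to join your team?"
    )

# Example usage with placeholder values.
prompt = build_interview_prompt(
    "hiring manager",
    "Spotify",
    "Data Scientist",
    "We are looking for a data scientist to build recommendation models.",
)
print(prompt)
```

Swapping in the real job description text (and, if you have it, details from the hiring manager’s LinkedIn profile) gives the chatbot the context it needs.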

Additional iterations:

  • In the same chat, enter the prompt “Now condense this list of questions to the top 5 questions.” 
  • If you have a LinkedIn profile of the hiring manager, try entering the text from the profile and regenerating the first prompt to give the chatbot even more context about the role it’s playing as a hiring manager. See if the questions change. What do you notice?

Now, what to do with those AI-generated questions?

With these questions in mind, it’s time to start creating what you might say in response. 

You can also use AI assistants like ChatGPT to generate some ideas. And, just like the example above, you’ll want to give ChatGPT as much context as possible to produce the most accurate “like you” types of responses.

Put into Action:

Enter the following prompt into the open chat you started above (i.e., do not start a new chat from scratch) to give the AI assistant a role to play and as much context as possible. 

Now play the role of an interview candidate who has been selected to interview for the same open position of <insert job title>. Your resume is <paste into the chatbot the text of your resume> (tip: use your LinkedIn profile if it contains even more details and experience). How would you answer <insert one question at a time from the list above> in 60 seconds? 

Watch as the AI assistant creates an answer using the inputs you gave it. What do you notice? Does it sound like your voice? Does it pull in the right experience? What’s missing from the answer it gives you? Should it be longer or shorter? 

Additional iterations:

  • Ask the chatbot to regenerate responses using the STAR method. 
  • Begin a new chat and ask the chatbot to create a 60-second introduction about yourself. Give it different scenarios in which you might introduce yourself (e.g., at the beginning of an interview, at an industry conference, or at a cocktail party). 
  • Use elements of the prompts above to create a follow-up thank-you note after the interview takes place, adding any new information you gathered from the interview itself (e.g., the questions actually asked of you and any other important or fun details you want to use to personalize your follow-up).

Rules of Thumb For Interview Prep With AI

When employing an AI assistant, it’s beneficial to follow a couple of key guidelines to achieve the best outcomes. Carefully read the output, maybe 2-3 times, and reflect on several introspective questions. 

Personalize & Contextualize

Does the AI-generated response sound like something I would say? Do the responses integrate the right context, accounting for nuances in my experience, and what I’ve learned about the company’s business, culture, and challenges? Not sure? Record yourself reading the response and listen back. What do you notice? What would you change? 

Be Honest

Do I fully understand the response it generated? Do I understand the terms and concepts enough to answer follow-up questions reasonably well? Not to worry if the answer is no. Being honest about your responses to these questions will provide a helpful list of topics to prepare yourself confidently for the interview. 

Practice and Seek Feedback

Now that you’ve got a solid starting point with your AI-generated questions, it’s time to put those responses into practice. Reading from a script is obvious. Not to mention, it defeats the point. Hiring managers are real people who are still hiring real people, even if AI tools are growing in acceptance in work environments. 

First, schedule an interview prep session with your Career Coach. Send them the job description ahead of time along with the questions ChatGPT gave you. Ask them to alter the wording of each question slightly, putting them into new words or in a different order.

Next, put away the responses to the questions from ChatGPT and practice responding in your own words. How does it feel? 

Keep practicing after your coaching session, recording your responses with tools like Loom or Riveter to listen and give yourself feedback. Send the recordings to your coach or a friend for additional feedback. 

The more practice we have relaying our experience and skills – hard and soft – to a variety of interview questions, the more comfortable we become when under pressure. Once you’re comfortable with 70% of the questions, practice the tougher questions at least a couple more times for good measure, and then take a break. Let your brain do work while you sleep, and rest easy(ier) knowing you’ve put in a solid effort preparing to ace your interview!

No job interview yet? No problem!

Repurpose the above prompts to fit prep work for an upcoming informational interview or networking conversation. Or, ask the chatbot to help you create an outreach message requesting an informational chat. 

In whichever scenario you choose to employ an AI assistant, remember that it’s merely a tool and those who use tools thoughtfully, responsibly, and creatively often create impressive results. 

About Lindsey Williams

Lindsey Williams is the Senior Manager of Coaching at Flatiron School. She has more than 15 years of experience in the tech and edtech spaces and has held a variety of roles from Recruiter and HR to Campus Director and Training Director.

Mike Roth: Fine Arts to Data Science

Mike Roth, an August 2022 Data Science graduate from Flatiron School, began his career learning computer engineering before a love for creating pulled him towards a degree in fine arts. A decade later, however, he’s come full circle.

He shares his journey from the arts to Data Science below.

A Foundation In Fine Arts

Mike Roth has spent his career in the pursuit of creation. Initially beginning his education studying Computer Engineering, he ultimately graduated with a degree in Fine Arts. While many would question the transition between the two fields, Roth says that they overlapped at their core and differed only in the method of creation.

“I didn’t see much difference between the two [majors] since they are both highly creative fields, and I wanted to combine the two interests to take advantage of the power of coding in art.” 

Post-graduation, he used his combined skillset in a variety of positions including graphic design, web development, and marketing. But, a decade into his career, the financial pressures of living in a major city pushed him to consider a new career. 

“I was using my coding skills to create art and design, but I still struggled to make enough money to survive in New York with just a degree in Fine Arts,” he explained. “I’d designed graphics and websites my entire career and was looking for a new challenge.”

Roth didn’t have to look far to settle on his next path. He simply went back to the beginning – back to his enjoyment of coding.

“I love to code and wanted to pursue a career where I could code all day.”

His Bootcamp Experience

While looking into fields where his coding skills would be a valuable asset, Roth discovered Data Science and bootcamps. 

“Initially, Data Science seemed more interesting to me because it was one of the most challenging courses in a bootcamp,” he recalled. “Then I realized that I could do so much more with math and science on top of my software engineering skills.”

A referral from a friend spurred his interest in Flatiron School’s Data Science program.

“I had an artist friend who graduated from Flatiron’s software engineering program a few years before me and has had a lot of success since. His experience made Flatiron one of my top choices for bootcamps. I wanted stability and progress in my career, and I knew from his experience it was achievable.”

Roth applied to Flatiron School’s full-time, 15-week Data Science course during the pandemic, but delayed his start date until in-person classes at the NYC Campus resumed.

“I really wanted to learn data science from people around me, not just online tutorials,” he explained. “Attending the bootcamp on campus was an amazing experience.”

He recalled how challenging the accelerated pace of the program was, but highlighted the support he received and the connections he made with those around him on campus. 

“The coursework is very demanding. Keeping up with every topic and project often required me to work late at night,” Roth said. “But my favorite part [of the bootcamp] was learning from my peers and professors, who would discuss complex math and neural network ideas.”

Job Search Experience

Mike Roth graduated from the Flatiron School Data Science program in August 2022. Unfortunately, his job search initially got off to a rocky start.

“I think because of my untraditional background I had trouble getting interviews. It was very difficult and disheartening at times.” 

But, throughout his job search, his dedicated Flatiron School career coach was there to keep him moving forward.

“My career coach was extremely helpful and supportive, and I owe all my interviewing and applying skills to him,” Roth said. “I called him my job therapist because while most of the job search work is on you, my career coach was there to back me up technically and emotionally.”

Despite the trying start to the search, Roth ultimately accepted a role as a Senior Consultant at GCOM Software. When we spoke with him in early 2023, he had only good things to say about his new career. 

“I love it! I didn’t know how much I would enjoy Data Science before I applied to Flatiron, but I really can’t get enough of it. I’d do personal science projects all day if I could, but I’m so happy to get paid for it and work with an amazing team of engineers and scientists. I can’t wait to see where my career leads.”

Reflecting On His Journey

Looking back at his journey from Flatiron School student to professional Data Scientist, Roth is particularly proud of the projects he completed while in bootcamp. Those projects, fittingly, combined his love of the arts with his new data skills. 

“In one project, I used informational entropy and neural networks to authenticate any artist’s work from fraudulent copies, specifically Bob Ross’ paintings. For my final project, I created a sound wave similarity search engine that uses data from Spotify’s API to find songs that are similar sounding. Try out a working demo here.”

Roth commented that he’d also learned to let go of societal notions around changing careers.

“My biggest takeaway from the bootcamp is that I’m not too old or unworthy to pursue a career change and that I can always expand my knowledge and experience, even if it seems different from my background.”

The fact that he’s come full circle is not lost on Roth either. 

“This was the path I had always been on to begin with; headed toward something challenging and new. I still have a bit of an imposter feeling about my math and science abilities, but I’m really excited to do this kind of work and I’m proud of what I’ve learned.”

His Advice For Other Students

Roth’s advice to others pivoting to a new career by way of Flatiron School is to lean into the uncertainty and inherent struggle in learning something new. 

“Don’t get too worried about whether you understand everything the first time. These concepts can be really difficult to understand or visualize the first time around, and take time to sink in.”

He also emphasizes the fact that, even after graduation, they should expect to continuously be improving and expanding their skillsets.

“I’m still constantly learning and feeling frustrated when I don’t understand something right off the bat, but I know it will come eventually. Work is work, but the work you put in always pays off – you learn more from your mistakes and difficulties than anything else.”

As for his love of creation, that passion is here to stay. 

“I’m working as a data scientist now, but I think I’ll always be an artist, no matter what my job is. Plus, at times Data Science can be more of an art than a science.”

Ready For A Change, Just Like Mike Roth?

Apply Now to join other career changers like Mike in a program that sets you apart from the competition. 

Not ready to apply? Try out our Free Data Science Prep Work and test-run the material we teach in the course. Or, review the Data Science Course Syllabus that will set you up for success and help launch your new career.

Read more stories about successful career changes on the Flatiron School blog.

Teacher Appreciation Week 2023

This Teacher Appreciation Week we are celebrating the people who help deliver our immersive programs: our curriculum designers, faculty managers, and instructors. Their hard work delivers the practical, applicable skills that help our students succeed.

In this post, we’re featuring several impactful members of the Flatiron School instruction and curriculum team – career changers themselves like many of our graduates – and revealing their advice to students.

Greg Damico: Technical Faculty Manager & Data Science Lecturer

Image of Greg Damico

Greg Damico, Technical Faculty Manager and Data Science Lecturer, began his career by spending more than twenty years in academia. He accumulated advanced degrees in Physics, Ancient Greek, Philosophy, and Applied Mathematics before ultimately pivoting into Data Science and teaching at Flatiron School.

Looking back at his career thus far, Greg is most proud of his impact on his students. 

“Maybe this is a little trite, but I’m very proud of helping to jump-start new careers. Watching students go from zero to hero never gets old.”

His advice for those students is to harness the power of collective learning. 

“Do not be shy about asking for help, especially from your peers! Two heads are better than one, and collaboration will be important wherever you go anyway.”

Jesse Pisel: Data Science Curriculum Manager

Image of Jesse Pisel

Jesse Pisel spent a decade in geology-related academic and industry positions before pivoting into tech, citing the desire for a quicker-paced work environment. He is now a Data Science Curriculum Manager at Flatiron School, pulling on his extensive experience to develop coursework for Data Science students. 

Jesse’s advice for students interested in pursuing data science is perhaps a result of his experience moving among different industries throughout his career.

“There are so many unique areas of data science to pursue. Getting a broad understanding of data science and all the different areas (statistics, machine learning, deep learning, visualizations, etc.) will help you identify what you find the most interesting. Once you know what you are interested in, you can then spend time deep diving into the topic to become an expert.”

Bani Phul-Anand: Lead Product Design Instructor

Image of Bani Phul-Anand

Bani Phul-Anand, Lead Instructor of Product Design, has more than 12 years of experience in the field. She began her career in luxury beauty and fashion, but a pivot into tech eventually led her to a career in Product Design and a teaching position at Flatiron School.

She advises students to continuously work at honing their skills throughout their careers, and lean into constructive feedback.

“Practice more than you think you need to – that’s the only thing that will make you better at what you do. But don’t get stuck on tools or software; they change. And don’t be precious with your work – seek criticism, not validation.”

Jeffrey Hinkle: Data Science Curriculum Writer

Image of Jeffrey Hinkle

Jeffrey Hinkle, a Junior Curriculum Writer for Data Science, spent more than two decades in the restaurant industry as a chef before pivoting into tech. The driving force behind his life change was the desire to spend more time with his family, and the work/life balance he now has as a Data scientist allows him to do just that.

His advice for students, and other career changers like him, is to lean into the struggle and embrace the learning journey. 

“Don’t give in to imposter syndrome, if you are uncomfortable with something you are doing or working on, you are expanding your knowledge. Staying in your comfort zone will not allow you to push yourself.”

Learn From The Best At Flatiron School

If you’re interested in seeing some of our instructors and curriculum managers in action, why not check out some of our past events? 

Exploring America’s Pastime with Bayes’ Theorem: Technical Faculty Manager Greg Damico explores the connection between Data Science and Baseball

Why Every Developer Should Learn Python: Senior Curriculum Manager Alvee Akand explains the importance of Python

Don’t Gamble on Your Cybersecurity: Cybersecurity Instructor Eric Keith talks about how cyber risks and modern gambling intersect

Playful Design – Exploring the Fun Side of UX: Director of Product Design Joshua Robinson discusses how we can use elements of fun in UX Design

If you’re ready to experience our curriculum and instructors firsthand, apply today.

Zachary Greenberg: Musician To Data Scientist

Zachary Greenberg, a May 2021 Data Science graduate from Flatiron School, spent a decade as a professional musician until the COVID pandemic made him rethink his career path. 

He shares his journey from professional musician to Data Scientist below.

Bit By The Music Bug

Zachary began his professional career by earning a bachelor’s degree in psychology with a specialization in statistics. It was during college, however, that he was “bit by the music bug.”

“After graduation, I decided to pursue a singing career which led me to become a lead vocalist for theme parks and major cruise lines.” 

But, like many other artists, he was soon out of work when the 2020 pandemic heated up. He took time during the lockdown to evaluate the path he was on, ultimately deciding to make a career change to Data Science. 

“I was drawn to data science for 2 reasons. One, I already had a statistics background and was randomly learning Python in my spare time. Two, when I started getting more serious about it, I was amazed at the effect a data science project could have on people.”

His Bootcamp Experience

After researching bootcamps, Zachary applied to Flatiron School’s full-time, 15-week Data Science program. He cites the school’s reputation as a contributing factor to his decision to apply.

“I was particularly impressed by Flatiron’s word of mouth,” he recalled. “I was hoping that it would give me the tools and confidence I needed to enter the data science workforce.”

Zachary had previous experience coding before enrolling at Flatiron School. His twin brother – a Software Engineer – had taught him the basics as a hobby. But, once he reached the advanced concepts taught at the tail-end of the course, he recalls it being a challenge. 

“Making the switch from coding and statistics into machine learning [was hard]. It’s a very quick turn, but if you stick with it and lean on the support of your cohort you’ll come out successful.”

But once he made it through the advanced modules, he thoroughly enjoyed using everything he’d learned to create a capstone project. 

“It’s a passion project that not only shows you have the skills to see a project through from start to finish, but that also helps both you and your audience learn who you are as a data scientist and as a person.”

Working In Tech

Zachary graduated from Flatiron School in May 2021. He first interned at Sentara Healthcare before landing a full-time position with Guidehouse as a Data Scientist Consultant. Almost two years on from graduation, he is enjoying his new career.

“I am loving working in Data Science. I get to work with and learn from a great team of talented people every day,” he said. “I couldn’t ask for anything more than that. Reality absolutely lives up to the dream.”

Looking back on his journey, Zachary says he is “proud of the journey itself”.

“It’s crazy for me to think about where I am now compared to where I started. I’ve gained many new skills and made many valuable connections on this ongoing journey. It may be a little cliché, but hard work pays off.”

As for his advice to other current or future Data Science students, he recommends looking at the big picture when things get hard.

“If you focus on your work’s impact on others, you’ll know exactly what you need to do to succeed.”

Ready For A Change, Just Like Zachary Greenberg?

Apply Now to join other career changers like Zachary in a program that’ll give you the tech skills you need to land a job in tech.

Not ready to apply? Try out our Free Data Science Prep Work and test-run the material we teach in the course. Read more stories about successful career changes on the Flatiron School blog.

March Madness Results: The Tale of A Busted Bracket

After 3 weeks and 67 games, March Madness ended with disappointment for most fans. While you may still be recovering from the gray-hair-inducing stress-fest that is the annual tournament (we recommend ripping up your paper bracket – it’s very cathartic), it’s a good time to look back at where we started and how things went so very wrong for the official Flatiron School bracket.

Our Machine Learning Bracket Prediction

In March, Data Science Curriculum Developer Brendan Purdy used Machine Learning to develop a March Madness bracket, which you can see below. Visit this blog post to learn how he built it.

Machine Generated March Madness Bracket

Unfortunately, the Machine Learning-generated bracket did not perform well. Purdy’s bracket correctly predicted only 2 of the final 8 teams and none of the final 4.

March Madness Results

This year’s bracket had quite a few surprises, with favored teams like UCLA and Purdue not even making it to the final 8. And with San Diego State beginning the tournament at a 25.7% win probability, it shocked many that the team made it all the way to the National Championship.

NCAA March Madness Results

For a team-by-team breakdown of each defeat and unexpected upset, visit ESPN’s March Madness Results Pain Scale and be comforted by the fact that your agony is shared.

So, What Happened?

First off, let’s put some numbers into perspective around the March Madness Results. There are 9,223,372,036,854,775,808 possible outcomes for a bracket, so you’re more likely to win the lottery (or several lotteries) than guess a perfect bracket. And despite the tens of millions of official brackets submitted each year, the longest verifiable streak of correct picks in an NCAA men’s bracket is only 49 games, set by a person who predicted all of the teams that got into the Sweet 16 in 2022.
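That enormous figure is just 2 raised to the number of picks: a standard bracket involves choosing a winner for each of the 63 games after the First Four, and every pick has exactly two options. A quick sketch of the arithmetic:

```python
# A standard bracket requires picking the winner of each of the 63
# games after the First Four; each pick has two possible outcomes,
# so the number of distinct brackets is 2 ** 63.
games = 63
possible_brackets = 2 ** games
print(possible_brackets)  # 9223372036854775808
```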

So, whether Machine Learning and AI are used to generate a bracket or not, the odds are slim.

Machine Learning Constraints

Data Sets and Inputs

The algorithm generates outputs from the inputs it is given, using certain built-in assumptions, so its predictions follow historical data trends. If team A has consistently beaten team B, then there is a high probability that they’ll do it again, and that is what the model will predict.

Where the training data comes from, and the weights the model attributes to ranking factors such as historical seeding, performance (both season and postseason), box scores, geography, and coaching, can greatly impact the linear regression model’s predictions.
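To illustrate how much those weights matter, here is a minimal sketch of a linear scoring model. The team features and weights below are invented for illustration – they are not from Purdy’s actual model – but they show how two reasonable weightings of the same data can flip a predicted winner:

```python
# Hypothetical sketch: the same two teams, scored under two different
# feature weightings. All numbers are invented for illustration.

def matchup_score(team, weights):
    # Linear combination of team features, as in a linear model.
    return sum(weights[k] * team[k] for k in weights)

team_a = {"seed_strength": 0.9, "season_win_pct": 0.85, "avg_margin": 0.6}
team_b = {"seed_strength": 0.4, "season_win_pct": 0.70, "avg_margin": 0.8}

# Weighting seeding heavily favors team A...
seed_heavy = {"seed_strength": 0.7, "season_win_pct": 0.2, "avg_margin": 0.1}
# ...while weighting scoring margin heavily favors team B.
margin_heavy = {"seed_strength": 0.1, "season_win_pct": 0.2, "avg_margin": 0.7}

for name, w in [("seed-heavy", seed_heavy), ("margin-heavy", margin_heavy)]:
    pick = "A" if matchup_score(team_a, w) > matchup_score(team_b, w) else "B"
    print(f"{name} weighting picks team {pick}")
```

Same teams, same raw stats – only the weights changed, and so did the prediction.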

Preprocessing / Feature Engineering

The preprocessing or feature engineering stage of creating a Machine Learning model is one of the most challenging steps. It requires bringing disparate data sets together, converting variables into the form the algorithm requires, and otherwise cleaning the data so the model focuses on the right variables.

Naturally, this can result in varied inputs and thus varied outputs. In fact, two Data Scientists given the same data set will inevitably preprocess it in slightly different ways, leading to distinct results.
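As a sketch of what that stage involves – with made-up team names and stats, not real tournament data – the merging and rescaling might look like:

```python
# Hypothetical example: two disparate data sources keyed by team name.
seeds = {"Team A": 1, "Team B": 12}                  # e.g., from one file
points_per_game = {"Team A": 78.2, "Team B": 81.5}   # e.g., from another

def min_max_scale(values):
    # Rescale raw numbers into [0, 1] so features share a common range.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

teams = sorted(seeds)
scaled = dict(zip(teams, min_max_scale([points_per_game[t] for t in teams])))

# Merge into one feature row per team, ready to feed to a model.
features = {t: {"seed": seeds[t], "ppg_scaled": scaled[t]} for t in teams}
print(features)
```

Even in this tiny example there are judgment calls – which scaling to use, which stats to keep – which is exactly why two Data Scientists end up with different inputs.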

Dumb Luck

No matter how carefully ranked your stats are, how precisely prepared the data set is, or how many iterations your model runs, there are certain things a Machine Learning model won’t be able to account for. The model predicts outcomes by assuming the conditions that produced past performance will hold.

So if, for example, a star player sits out the game, the whole team gets food poisoning the night before, or a hail mary shot somehow makes it through the net in the last second, the model can neither expect nor account for random good or bad luck.
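Randomness also compounds across rounds, which is part of why brackets bust even without freak events. With an invented (and generous) per-game win probability, even a team favored in every single game is roughly a coin flip to win six in a row:

```python
# Hypothetical: a favorite with a 90% chance of winning any one game
# must survive six rounds to take the title; the odds compound.
per_game_win_prob = 0.9
rounds = 6
title_prob = per_game_win_prob ** rounds
print(round(title_prob, 3))  # 0.531
```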

Conclusion

As fans can attest, there is no greater torment than watching your bracket go bust. And, while Machine Learning may increase your chances of hanging in longer, it’s almost inevitable that your bracket predictions will eventually prove incorrect. But if we’re honest, isn’t that half the fun? From one busted bracket to another – better luck next year.

Wanna try your hand at the Data Science fundamentals needed to make a Machine Learning model like the one discussed in this post? Try out our Free Data Science Prep Work – no strings attached.