Demystifying Machine Learning: What AI Can Do for You

In the realm of modern technology, machine learning stands as a cornerstone, revolutionizing industries, transforming businesses, and shaping our everyday lives. At its core, machine learning is a subset of artificial intelligence (AI) that empowers systems to learn iteratively from data, uncover patterns, and make predictions or decisions with minimal human intervention. Demystifying machine learning matters because doing so is an invitation to explore the transformative potential of AI in your own life.

In a world where technology increasingly shapes our experiences and decisions, understanding machine learning opens doors to unprecedented opportunities. From personalized recommendations that enhance your shopping experience to predictive models that optimize supply chains and improve healthcare outcomes, AI is revolutionizing industries and reshaping how we interact with the world around us.

This article explores the essence of machine learning, its fundamental concepts, and real-world applications across diverse industries, as well as its limitations and ethical considerations. By demystifying machine learning, we empower individuals and businesses to harness the power of data-driven insights, unlocking new possibilities and driving innovation forward. Whether you’re a seasoned data scientist or a curious novice, exploring what AI can do for you is a journey of discovery, empowerment, and endless possibilities.

Understanding Machine Learning

Machine learning empowers computers to learn from experience, enabling them to perform tasks without being explicitly programmed for each step. It operates on the premise of algorithms that iteratively learn from data, identifying patterns and making informed decisions. 

Unlike traditional programming, where explicit instructions are provided, machine learning systems adapt and evolve as they encounter new data. This adaptability lies at the heart of machine learning’s capabilities, enabling it to tackle complex problems and deliver insights that were previously unattainable. 
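To make the contrast concrete, here is a toy pure-Python sketch (the heights, labels, and threshold are all invented): a hand-written rule versus a decision boundary "learned" from labeled examples.

```python
# Traditional programming: the rule is written by hand.
def is_tall_rule(height_cm):
    return height_cm > 180  # threshold chosen by the programmer

# Machine learning (toy version): the threshold is derived from labeled data.
def learn_threshold(examples):
    """examples: list of (height_cm, is_tall) pairs; returns a boundary halfway
    between the tallest 'short' example and the shortest 'tall' one."""
    shorts = [h for h, tall in examples if not tall]
    talls = [h for h, tall in examples if tall]
    return (max(shorts) + min(talls)) / 2

data = [(150, False), (160, False), (170, False), (185, True), (190, True)]
threshold = learn_threshold(data)

def is_tall_learned(height_cm):
    return height_cm > threshold

print(threshold)             # 177.5
print(is_tall_learned(180))  # True
```

Give the toy learner different data and it produces a different rule with no code changes, which is the adaptability the paragraph above describes.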

Before turning to the two main types of machine learning, supervised and unsupervised learning, it is worth noting the primary programming language used in data science.

The programming language Python, which is taught and used extensively in the Flatiron School Data Science Bootcamp program, has emerged as the de facto language for machine learning thanks to its simple syntax, extensive ecosystem of libraries, and excellent community support and documentation. It is also robust and scalable, and it integrates with other data science tools and workflows such as Jupyter notebooks, Anaconda, R, SQL, and Apache Spark.

Supervised learning

Supervised learning involves training a model on labeled data, where inputs and corresponding outputs are provided. The model learns to map input data to the correct output during the training process. Common algorithms in supervised learning include linear regression, decision trees, support vector machines, and neural networks. Applications of supervised learning range from predicting stock prices and customer churn in businesses to medical diagnosis and image recognition in healthcare.
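As a minimal illustration of supervised learning, the sketch below (with invented study-hours data) fits a line to labeled examples using the closed-form least-squares formulas, then predicts an output for an unseen input.

```python
# Fit y = slope * x + intercept to labeled data using least squares
# (pure Python, no libraries).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Labeled training data: inputs (hours studied) and outputs (test score).
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 70, 77]

slope, intercept = fit_line(hours, scores)
predict = lambda x: slope * x + intercept
print(round(predict(6), 1))  # prediction for an unseen input → 83.0
```

The same fit-then-predict pattern underlies far more sophisticated supervised models, from decision trees to neural networks.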

Unsupervised learning

In unsupervised learning, the model is presented with unlabeled data and tasked with finding hidden patterns or structures within it. Unlike supervised learning, there are no predefined outputs, and the algorithm explores the data to identify inherent relationships. Clustering, dimensionality reduction, and association rule learning are common techniques in unsupervised learning. Real-world applications include customer segmentation, anomaly detection, and recommendation systems.
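A minimal unsupervised example, with invented spend amounts: a tiny one-dimensional k-means that discovers two customer segments from data carrying no labels at all.

```python
# 1-D k-means clustering in pure Python. The data has no labels;
# the algorithm discovers the groups on its own.
def kmeans_1d(points, k=2, iters=20):
    centers = [min(points), max(points)]  # simple deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) for c in clusters]
    return centers, clusters

# Customer spend amounts: two natural segments, but no labels provided.
spend = [10, 12, 11, 13, 95, 98, 102, 100]
centers, clusters = kmeans_1d(spend)
print(sorted(round(c) for c in centers))  # two discovered segment centers
```

This is the essence of customer segmentation: the algorithm surfaces a low-spend group and a high-spend group without ever being told they exist.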

Machine learning algorithms

Machine learning algorithms serve as the backbone of data-driven decision-making. These algorithms encompass a diverse range of techniques tailored to specific tasks and data types. Some prominent algorithms include:

  • Linear Regression: A simple yet powerful algorithm used for modeling the relationship between a dependent variable and one or more independent variables.
  • Decision Trees: Hierarchical structures that recursively partition data based on features to make decisions. Decision trees are widely employed for classification and regression tasks.
  • Support Vector Machines (SVM): A versatile algorithm used for both classification and regression tasks. SVM aims to find the optimal hyperplane that best separates data points into distinct classes.
  • Neural Networks: Inspired by the human brain, neural networks consist of interconnected nodes organized in layers. Deep neural networks, in particular, have gained prominence for their ability to handle complex data and tasks such as image recognition, natural language processing, and reinforcement learning.

Notably, all of these algorithms can be implemented in Python using very similar syntax.
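For instance, the scikit-learn library (a common choice, assumed installed here) exposes each of the algorithms above through the same fit/predict interface, so swapping one model for another is often a one-line change.

```python
# scikit-learn gives regression variants of all four algorithms
# the same fit/predict interface.
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

X = [[1], [2], [3], [4], [5]]   # inputs
y = [2, 4, 6, 8, 10]            # labeled outputs

for Model in (LinearRegression, DecisionTreeRegressor, SVR, MLPRegressor):
    model = Model()          # identical pattern for every algorithm
    model.fit(X, y)          # train on labeled data
    print(Model.__name__, model.predict([[6]]))
```

The tiny dataset here is purely illustrative; in practice each model's hyperparameters would also be tuned, but the shared interface stays the same.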

Real-world Applications Across Industries

Machine learning’s transformative potential transcends boundaries, permeating various industries and sectors. Some notable applications include healthcare, financial services, retail and e-commerce, manufacturing, and transportation and logistics.

Healthcare

In healthcare, machine learning aids in medical diagnosis, drug discovery, personalized treatment plans, and predictive analytics for patient outcomes. Image analysis techniques enable early detection of diseases from medical scans, while natural language processing facilitates the extraction of insights from clinical notes and research papers. 

Finance 

In the finance sector, machine learning powers algorithmic trading, fraud detection, credit scoring, and risk management. Predictive models analyze market trends, identify anomalies in transactions, and assess the creditworthiness of borrowers, enabling informed decision-making and mitigating financial risks. 

Retail and e-commerce

For retail and e-commerce, machine learning enhances customer experience through personalized recommendations, demand forecasting, and inventory management. Sentiment analysis extracts insights from customer reviews and social media interactions, guiding marketing strategies and product development efforts.

Manufacturing

In manufacturing, machine learning optimizes production processes, predicts equipment failures, and ensures quality control. Predictive maintenance algorithms analyze sensor data to anticipate machinery breakdowns, minimizing downtime and maximizing productivity. 

Transportation and logistics

Lastly, for transportation and logistics, machine learning optimizes route planning, vehicle routing, and supply chain management. Predictive analytics anticipate demand fluctuations, enabling timely adjustments in inventory levels and distribution strategies.

Limitations and Responsible AI Use

While machine learning offers immense potential, it also presents ethical and societal challenges that demand careful consideration. 

Bias and fairness

Machine learning models may perpetuate or amplify biases present in the training data, leading to unfair or discriminatory outcomes. It is imperative to mitigate bias by ensuring diverse and representative datasets and implementing fairness-aware algorithms. 
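One simple bias check, sketched here with invented loan-approval predictions, is demographic parity: comparing a model's positive-outcome rate across groups.

```python
# Demographic parity check: does the model approve one group far more often
# than another? All predictions below are hypothetical.
def positive_rate(predictions):
    return sum(predictions) / len(predictions)

# Hypothetical loan-approval predictions (1 = approved) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity gap: {gap:.3f}")  # a large gap flags potential bias
```

Demographic parity is only one of several fairness criteria, and the right one depends on context, but even this simple measurement can surface problems before a model ships.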

Privacy concerns 

Machine learning systems often rely on vast amounts of personal data, raising concerns about privacy infringement and data misuse. Robust privacy-preserving techniques such as differential privacy and federated learning are essential to safeguard sensitive information.
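As a sketch of one such technique, the Laplace mechanism from differential privacy adds calibrated noise to a count so that any single individual's contribution is hard to infer (the count and epsilon below are invented for illustration).

```python
import math
import random

# Laplace mechanism: sample noise via the inverse CDF of the Laplace
# distribution and add it to the true count.
def laplace_noise(scale, rng):
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, rng):
    # The sensitivity of a count query is 1, so the noise scale is 1 / epsilon.
    return true_count + laplace_noise(1 / epsilon, rng)

rng = random.Random(42)                 # seeded for reproducibility
noisy = private_count(1000, epsilon=0.5, rng=rng)
print(round(noisy, 1))                  # close to 1000, but not exact
```

Smaller epsilon means more noise and stronger privacy; choosing epsilon is the central trade-off of any real deployment.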

Interpretability and transparency

Complex machine learning models, particularly deep neural networks, are often regarded as black boxes, making it challenging to interpret their decisions. Enhancing model interpretability and transparency fosters trust and accountability, enabling stakeholders to understand and scrutinize algorithmic outputs. 

Security risks

Machine learning models are vulnerable to adversarial attacks, where malicious actors manipulate input data to deceive the model’s predictions. Robust defenses against adversarial attacks, such as adversarial training and input sanitization, are critical to ensuring the security of machine learning systems.
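The idea can be sketched on a toy linear classifier: a small, weight-aligned nudge to the input (the principle behind gradient-based attacks such as FGSM) flips the model's decision. The weights and inputs below are invented.

```python
# A toy linear classifier: class 1 if w·x + b > 0, else class 0.
weights = [2.0, -1.0]
bias = 0.5

def predict(x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

x = [0.3, 1.2]        # honest input: score = 0.6 - 1.2 + 0.5 = -0.1 → class 0
assert predict(x) == 0

# FGSM-style step: nudge each feature in the sign of its weight.
eps = 0.1
x_adv = [xi + eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]
print(predict(x_adv))  # the small perturbation flips the class → 1
```

Deep networks are attacked the same way, with the gradient of the loss standing in for the raw weights; defenses like adversarial training expose the model to such perturbed inputs during training.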

Conclusion

Now that machine learning has been demystified, we can see what AI can do for us. Machine learning epitomizes the convergence of data, algorithms, and computation, ushering in a new era of innovation and transformation across industries. From healthcare and finance to retail and manufacturing, its applications are ubiquitous, reshaping the way we perceive and interact with the world. 

However, this technological prowess must be tempered with a commitment to responsible and ethical use, addressing concerns related to bias, privacy, transparency, and security. By embracing ethical principles and leveraging machine learning for societal good, we can harness its full potential to advance human well-being and prosperity in the digital age. Thus, by demystifying machine learning, we unveil a world of possibilities where AI becomes not just a buzzword, but a tangible tool for enhancing productivity, efficiency, and innovation.

Flatiron School Teaches Machine Learning

Our Data Science Bootcamp offers education in fundamental and advanced machine learning topics. Students gain hands-on AI skills that prepare them for high-paying careers in fast-growing fields like AI engineering and data analysis. Download the bootcamp syllabus to learn more about what you’ll learn. If you would like to learn more about financing, including flexible payment options and scholarships, schedule a 10-minute call with our Admissions team.

Enhancing Your Tech Career with Remote Collaboration Skills

Landing a career in the tech industry requires more than just technical/hard skills; it requires soft skills like effective communication, adaptability, time management, problem-solving abilities, and remote collaboration skills. Remote collaboration is especially key for those who work in tech; according to U.S. News & World Report, the tech industry leads all other industries with the highest percentage of remote workers.

At Flatiron School, we understand the importance of these skills in shaping successful tech professionals. Hackonomics, our AI-focused hackathon event happening between March 8 and March 25, will see participants sharpen remote collaboration skills (and many others) through the remote team-based building of an AI-driven personal finance platform. We’ll reveal more about Hackonomics later in the article; right now, let’s dive deeper into why remote collaboration skills are so important in today’s work world.

Mastering Remote Collaboration Skills

Remote collaboration skills are invaluable in today’s digital workplace, where teams are often distributed across different locations and time zones. Whether you’re working on a project with colleagues halfway across the globe or collaborating with clients remotely, the ability to effectively communicate, problem-solve, and coordinate tasks in a remote work setting is essential for success. Here are some other key reasons why this skill is becoming so important. 

Enhanced Productivity and Efficiency

Remote collaboration tools and technologies empower teams to communicate, coordinate, and collaborate in real-time, leading to increased productivity and efficiency. With the right skills and tools in place, tasks can be completed more quickly, projects can progress smoothly, and goals can be achieved with greater ease.

Flexibility and Work-life Balance

Remote work offers unparalleled flexibility, allowing individuals to balance their professional and personal lives more effectively. However, this flexibility comes with the responsibility of being able to collaborate effectively from anywhere, ensuring that work gets done regardless of physical location.

Professional Development and Learning Opportunities

Embracing remote collaboration opens doors to a wealth of professional development and learning opportunities. From mastering new collaboration tools to honing communication and teamwork skills in virtual settings, individuals can continually grow and adapt to the evolving demands of the digital workplace.

Resilience in the Face of Challenges

Events such as the COVID-19 pandemic, and the massive shift to at-home work it caused, have highlighted the importance of remote collaboration skills. When faced with unforeseen challenges or disruptions, the ability to collaborate remotely ensures business continuity and resilience, enabling teams to adapt and thrive in any environment.

Join Us for the Hackonomics Project Showcase and Awards Ceremony

Come see the final projects born out of our Hackonomics teams’ remote collaboration experiences when our Hackonomics 2024 Showcase and Awards Ceremony happens online on March 28. The event is free to the public and offers those interested in attending a Flatiron School bootcamp a great opportunity to see the types of projects they could work on should they enroll.

The 8 Things People Want Most from an AI Personal Finance Platform

Great product design is one of those things you just know when you see it, and more importantly—use it. It’s not just about being eye-catching; it’s about serving a real purpose and solving a real problem—bonus points if you can solve that problem in a clever way. If there ever was a time to build a fintech app, that time is now. The market is ripe, the problems to solve are plenty, and the tools and resources are readily available. Flatiron School Alumni from our Cybersecurity, Data Science, Product Design, and Software Engineering bootcamps have been tasked to help me craft Money Magnet, an AI personal finance platform that solves common budget-making challenges. They’ll tackle this work during Hackonomics, our two-week-long hackathon that runs from March 8 to March 25.

There is one goal in mind: to help individuals and families improve their financial well-being through an AI financial tool.

A loading screen mockup for AI personal finance platform Money Magnet

My Personal Spreadsheet Struggle

The concept for Money Magnet sprang from personal frustration and mock research around user preferences in AI finance. As a designer, I often joke, “I went to design school to avoid math.” Yet, ironically, I’m actually quite adept with numbers. Give me a spreadsheet and 30 minutes, and I’ll show you some of the coolest formulas, conditional formats, and data visualization charts you’ve ever seen.

Despite this, in my household, the responsibility of budget management falls squarely to my partner. I prefer to stay blissfully unaware of our financial details—knowing too much about our funds admittedly tends to lead to impulsive spending on my part. However, occasionally I need to access the budget, whether it’s to update it for an unexpected expense or to analyze historical data for better spending decisions.

We’re big on goal-setting in our family—once we set a goal, we stick to it. We have several future purchases we’re planning for, like a house down payment, a new car, a vacation, and maybe even planning for children. 

But here’s the catch: None of the top AI financial tools on the market incorporate the personal finance AI features that Money Magnet proposes bringing to market. Families need an AI personal finance platform that analyzes past spending patterns and projects them forward to warn users when the budget will tighten. The product should be easy to use and give all family members access to make changes without fear of wrecking the budget.

For more context, each year, my partner forecasts a detailed budget for us. We know some expenses fluctuate—a grocery trip might cost $100 one time and $150 the next. We use averages from the past year to estimate and project those variable expenses. This way, we manage to live comfortably without having to scale back in tighter months, fitting in bigger purchases when possible, and working towards an annual savings goal.
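The averaging approach described above can be sketched in a few lines of Python (all amounts invented): estimate a variable expense from its past-year average, then scale it into a monthly budget line.

```python
# Project a variable expense from past data by averaging, as described above.
grocery_trips = [100, 150, 120, 130, 110, 140]   # past spend per trip (invented)

avg_trip = sum(grocery_trips) / len(grocery_trips)
trips_per_month = 4                              # assumed shopping frequency
monthly_budget = avg_trip * trips_per_month

print(f"avg trip: ${avg_trip:.2f}, monthly grocery budget: ${monthly_budget:.2f}")
```

An AI tool would refine this with seasonality and trend detection, but the underlying idea of projecting forward from historical averages is the same.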

Top financial apps chart from Sensor Tower

But here’s where the challenge lies: My partner, as incredible as he is, is not a visual thinker. He can navigate a sea of spreadsheet cells effortlessly, which is something I struggle with (especially when it comes to budgeting). I need the big picture, ideally represented in a neat, visual chart or graph that clearly illustrates our financial forecast.

Then there’s the issue of access and updates. Trying to maneuver a spreadsheet on your phone in the middle of a grocery store is far from convenient. And if you make an unplanned purchase, updating the sheet without disrupting the formulas can be a real hassle, especially on a phone. This frustration made me think, “There has to be a better solution!”

Imagining the Ultimate AI Personal Finance Platform

Imagine an AI personal finance platform that “automagically” forecasts the future, securely connects to your bank and credit cards to pull transaction histories, and creates a budget considering dynamic and bucketed savings goals. This dream app would translate data into a clear dashboard, visually reporting on aspects like spending categories, monthly trends in macro and micro levels, amounts paid to interest, debt consolidation plans, and more.

It’s taken eight years of experiencing my partner’s budget management to truly understand a common struggle that many other families in the U.S. face: Advanced spreadsheet functions, essential in accounting and budgeting, are alien to roughly 73% of U.S. workers.

The extent of digital skills in the U.S. workforce according to OECD PIAAC survey data. Image Source: Information Technology and Innovation Foundation

Money Magnet aims to automate 90% of the budgeting process by leveraging AI recommendations about users’ personal finances, addressing eight key findings from a mock research study based on challenges I faced when developing a budget of my own.

Features to Simplify Your Finances

This dream budgeting tool is inspired by my own financial journey and the collective wish list of what an ideal personal finance assistant should be. Here’s a snapshot of the personal finance AI features that aim to position Money Magnet as one of the top AI financial tools on the market:

  1. Effortless Onboarding: Starting a financial journey shouldn’t be daunting. Money Magnet envisions a platform where setting up accounts and syncing banking information is as quick and effortless as logging into the app, connecting your bank accounts, and establishing some savings goals (if applicable).
  2. Unified Account Dashboard: Juggling multiple banking apps and credit card sites can be a circus act; merging those separate ecosystems as a consumer is nearly impossible. Money Magnet proposes a unified dashboard, a one-stop financial overview that could declutter your digital financial life.
  3. Personalized AI Insights: Imagine a platform that knows your spending habits better than you do, offering bespoke guidance to fine-tune your budget. Money Magnet aims to be that savvy financial companion, using AI to tailor its advice just for you.
  4. Vivid Data Visualization: For those of us who see a blur of numbers on statements and spreadsheets, Money Magnet could paint a clearer picture with vibrant graphs and charts, turning the abstract into understandable, engaging, and dynamic visuals that encourage you to monitor trends.
  5. Impenetrable Security: When dealing with personal and financial details, security is non-negotiable. Money Magnet will prioritize protecting your financial data with robust encryption and authentication protocols, so your finances are as secure as Fort Knox.
  6. Intelligent Budget Optimization and Forecasting: No more cookie-cutter budget plans that force your spending to fit conventional categorization molds! Money Magnet will learn your preferences and forecast from your historic spending, suggesting ways to cut back on lattes or add to your savings, all personalized to improve your financial well-being and projected into the future to avoid pinch points.
  7. Smooth Bank Integrations: Another goal of Money Magnet is to eliminate the all-too-common bank connection hiccups where smaller banks and credit unions don’t get as much connectivity as the larger banks, ensuring a seamless link between your financial institutions and the app.
  8. Family Financial Management: Lastly, Money Magnet should be a tool where managing family finances is a breeze. Money Magnet could allow for individual family profiles, making it easier to teach kids about money and collaborate on budgeting without stepping on each other’s digital toes or overwriting a budget. It’s important for those using Money Magnet to know it can’t be messed up, and that any action can always be reverted.

See the Money Magnet Final Projects During Our Closing Ceremony on March 28

Attend the Hackonomics 2024 Showcase and Awards Ceremony on March 28 and see how our participating hackathon teams turned these eight pillars of financial management into a reality through their Money Magnet projects. The event is online, free of charge, and open to the public. Hope to see you there!

Software Engineering in the Age of AI

The landscape is shifting. The reality is that artificial intelligence (AI) is fundamentally altering everything—upending industries, redefining roles, and transforming how we approach everyday tasks like writing emails and touching up selfies. In the last three years, Generative AI models have advanced significantly, making tools like OpenAI’s ChatGPT accessible to just about everyone for optimizing workflows and enhancing productivity. This integration of AI across such a vast array of platforms signifies a new baseline for business operations and innovation. 

It’s hard to miss: almost every headline about tech concerns AI’s potential impact on the future. However, no one has a crystal ball to predict what the new norm will be. Many executives don’t fully understand AI or their teams’ proficiency with AI tools, so they are uncertain about how to implement AI in their organizations. Analysts and futurists are making educated guesses about the effects of AI: some predict the automation of everything, while others predict a new era of human flourishing. It’s confusing, leaving us with significant uncertainty about the potential and limitations of AI technologies and the ways specific industries and jobs may change.

This article discusses the continued importance of software engineering in the AI era and how AI can complement and expand these skills in the coming years.

Is Software Engineering Still a Viable Career Path?

In short, yes. The tech industry is constantly changing and adapting. The creation of personal computers was a massive technological shift that was met with trepidation and concern and resulted in an enormous explosion in products and jobs. Frameworks, testing, and automation techniques have evolved for decades, creating significant productivity gains. The truth is that AI-assisted coding has been available to developers for years, and most of the potential gains of emerging technologies aren’t far out of line with the work that has happened in the past. 

Despite all of this, software engineering skills remain essential. The demand for skilled engineers is expected to grow by 25% in the next 5-6 years. That growth is driven by digital transformation and AI integration across all sectors. Software engineering is evolving to accommodate AI, necessitating a shift in skills while remaining foundational to the development of digital products. Its foundational pillars—programming, problem-solving, creativity, and complex system design—are as relevant as ever.

Programming Proficiency & Application Development

The fundamental role of coding in software engineering isn’t likely to change any time soon. Python and JavaScript are pivotal languages that every programmer will need to know. These languages support AI and ML projects and the frameworks that power modern applications. 

Python libraries, like TensorFlow, NumPy, Keras, and Scikit-learn, are foundational tools for AI and machine learning development. JavaScript has front-end and back-end development applications through frameworks like Node.js, Vue, and React, bringing AI capabilities to web interfaces. As AI integration deepens, the essence of coding as a skill—conceptualizing and creating digital solutions—will be invaluable. The development of future products will require deep programming and product development knowledge.

We teach these languages in most of our programs because of the popularity and versatility of Python and JavaScript, but they aren’t the only viable options. Languages like Java, PHP, and C# are also highly utilized in modern programs. Whatever language you learn, coding skills transcend specific languages; by learning to code, you learn problem-solving, system design, and adaptability. With AI tools automating tasks and generating code, software engineers can focus on higher-level problem-solving and creativity. This partnership with AI enhances efficiency and highlights the importance of programming knowledge. Engineers need to understand code to oversee AI’s contributions effectively, ensuring applications are efficient, scalable, and ethical.

Understanding AI and ML Principles

Engagement with AI projects is growing: a look at GitHub’s annual report shows a massive spike in AI-related projects. Developers are adapting to incorporate these new technologies in their toolkits. Software engineers must understand how to integrate AI into their projects, extending beyond traditional applications to include AI-driven functionalities like image recognition and language translation.

Knowledge of AI principles will be critical for addressing complex challenges. Not every engineer will need to be a data scientist, but familiarity with AI and ML concepts will become more essential with time. This knowledge is vital for software engineers in two ways:

  1. The ability to implement existing AI models. You must know how to use AI tools and incorporate them into products. For example, programming knowledge will help you interact with APIs, but you’ll also need to understand the model parameters and how to tune them to get the output you want. This takes some familiarity with AI concepts and a working knowledge of manipulating models for a desired outcome. Your knowledge of Python and development practices will be helpful here, as many of the most advanced AI and machine learning models are accessible via Python.
  2. Understanding how these technologies can be leveraged to solve real-world problems. This will soon become a real differentiator. Understanding models well enough to leverage them for specific circumstances will be critical in the future. Most of the recent discussion has been around Generative AI language models. Still, dozens of models exist for specialized purposes and work far better than ChatGPT for solving particular problems. For instance, we could implement a chatbot in a web application. What model should we use? Why that model? How can it be customized for the best user experience? These are the questions that developers will be asked in the future.
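To make "tuning model parameters" concrete, here is a pure-Python sketch of how a temperature parameter, a common knob on generative language models, reshapes next-token probabilities (the raw scores are invented).

```python
import math

# Temperature-scaled softmax: low temperature sharpens the distribution
# (near-deterministic output), high temperature flattens it (more variety).
def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]                     # invented raw scores for three tokens

sharp = softmax_with_temperature(scores, temperature=0.2)  # near-deterministic
flat = softmax_with_temperature(scores, temperature=5.0)   # closer to uniform
print([round(p, 3) for p in sharp])
print([round(p, 3) for p in flat])
```

Understanding what such a parameter does, rather than treating it as a magic number, is exactly the working knowledge of models this section describes.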

Creativity, Problem-solving, and Ethics

As AI becomes more embedded in software development and our everyday lives, the emphasis on ethical considerations and responsible use of AI will be magnified, and unique human skills such as creativity, empathy, and ethics will become more critical. AI can automate tasks, enhance workflow efficiencies, and augment the capabilities of software developers through tools like GitHub Copilot for code suggestions or automated testing and debugging tools. However, the essence of product design and development—understanding user needs and ethical implications, as well as ensuring accessibility—remain deeply human tasks that AI cannot replicate.

This evolving landscape necessitates a collaborative approach, requiring software engineers to work closely with data scientists, ethicists, and other stakeholders to ensure AI is used responsibly and beneficially.

Navigating the Future of Software Engineering with AI

Integrating AI into software engineering is a shift towards a more dynamic, efficient, and innovative approach to technology development. However, the human element is still as relevant today as it was 20 years ago. We may not know what the future holds, but we do know a few things:

  1. AI is reshaping all industries, not just tech. This means that technical skills will become increasingly important regardless of profession because you’ll need to work with these technologies whether you are a developer or not. Even if you move into another industry—retail, aerospace, medical, finance, etc.—all these industries will soon require some understanding of AI and the skills to work with it. 
  2. Coding is becoming table stakes for everyone. Many middle and high schools in the US already teach some basic coding to prepare learners for a future where all industries are more dependent on a tech-savvy workforce. Prompt engineering, software development, and communication skills will become more valuable over time, so getting a head-start by learning to code is always a solid career choice.
  3. The world needs intelligent, creative, and informed professionals to create the next generation of technologies. As AI technology becomes more accessible, one’s ability to use AI as a platform for innovation and advancement in all sectors will be the differentiating skill set. The reality is that companies are currently deciding how to optimize their workforces by augmenting current products with AI, but that won’t last long. 

Next-Gen AI Tools

The next generation of AI-powered tools and processes will enable the rapid development of new products and experiences. Efficiency gains may help companies in the short term by reducing costs. But that effect will diminish significantly as product development cycles speed up. To stay competitive, companies must innovate and build products faster and at a higher quality. More products, more experiences, more competition. In the long run, AI will almost certainly create more technical jobs than it will displace. Still, future jobs will require workers to display high efficiency, communication skills, intelligence, and training in multiple technical domains.

Future Roles in Software Engineering with AI Integration

As AI becomes more intertwined with software engineering, new roles may emerge that will displace some traditional programming roles. New roles like Prompt Engineer, AI Quality Assurance Manager, and AI Ethics Officer are emerging and growing in response to the rapid adoption of AI into workflows and product solutions. These roles will also likely adapt with time, so we can’t expect to know the exact titles 5-10 years from now.

However, considering Generative AI’s known capabilities and limitations, we can speculate how it will impact software engineering roles.

  • Full-Stack Developer: Developers manage front-end and back-end systems. They write business logic, implement user experiences, and incorporate AI features to enhance user experiences and backend efficiencies. These developers will use languages like Python and JavaScript to develop full-stack products incorporating adaptive content and intelligent data systems. Understanding AI will enable these developers to create more responsive and smart applications.
  • Front-end Developer: Front-end developers create the interfaces we interact with every day. They create every page you see on the web with JavaScript, HTML, and CSS and build applications using popular frameworks like React, Vue, and Svelte. Front-end developers can leverage user data to create personalized experiences, utilizing AI algorithms to tailor content and interfaces to individual preferences.
  • Back-end Developer: These developers create the server applications that talk to other systems and serve content to front-end applications. They build APIs, interact with databases, and make secure web applications by implementing authentication and validation. These developers will increasingly rely on AI for data processing and analysis, optimizing server-side operations, and enabling more sophisticated data-driven functionalities.

The Future is Bright

As AI continues to evolve, so will the roles and skills required in the field. Learning software development will give you many essential skills for the future. You’ll learn to code, work through complex problems, collaborate and communicate with stakeholders, work with AI tools, and start a lifelong growth journey.

Now is the time to embrace a life of continuous learning and ethical considerations that will be essential for those looking to lead the way in this new era. It’s never too late to start coding. We’ll see you at the keyboards!

How to Achieve Portfolio Optimization With AI

Here’s a fact: Employers are seeking candidates with hands-on experience and expertise in emerging technologies. Portfolio optimization using Artificial Intelligence (AI) has become a key strategy for people looking to break into the tech industry. Let’s look at some of the advantages of having an AI project in a portfolio, and how portfolio optimization with AI can be a game changer when it comes to getting your foot in the door at a company.

The Pros of Having AI Projects in a Portfolio

For people seeking to transition into the tech industry, having AI projects in their portfolios can be a game-changer when it comes to landing coveted roles and advancing their careers. By showcasing hands-on experience with AI technologies and their applications in real-world projects, candidates can demonstrate their readiness to tackle complex challenges and drive innovation in any industry. Employers value candidates who can leverage AI to solve problems, optimize processes, and deliver tangible results, making AI projects a valuable asset for aspiring tech professionals.

Achieving portfolio optimization by integrating AI into project portfolios is quickly becoming a cornerstone of success for tech job seekers. However, portfolio optimization with AI involves more than just adopting the latest technology. It requires a strategic business approach and a deep understanding of Artificial Intelligence. Below are details about Hackonomics, Flatiron School’s AI-powered budgeting hackathon.

The Components of Flatiron’s AI Financial Platform Hackathon

Identifying the Right Business Problem

The Hackonomics project revolves around cross-functional teams of recent Flatiron graduates building an AI-driven financial platform to increase financial literacy and provide individualized financial budgeting recommendations for customers. Identifying the right business problem entails understanding the unique needs and challenges of a target audience, ensuring that a solution addresses critical pain points and that the utilization of AI delivers tangible value to users.      

AI Models

At the core of Hackonomics are machine learning models meticulously designed to analyze vast amounts of financial data. These models will enable the uncovering of valuable insights into user spending patterns, income sources, and financial goals, laying the foundation for personalized recommendations and budgeting strategies.

Software and Product Development

As students develop their Hackonomics projects, continuous product development and fine-tuning are essential for optimizing performance and usability. This involves iterating on platform features (including UI design and software engineering functionality) and refining AI algorithms to ensure that the platform meets the evolving needs of users and delivers a seamless and intuitive experience.

Security and Encryption

Ensuring the security and privacy of users’ financial data is paramount. The Hackonomics project incorporates robust security measures, including encryption techniques, to safeguard the sensitive data pulled in from users’ outside banking accounts. Additionally, multi-factor authentication (MFA) adds an extra layer of protection, mitigating the risk of unauthorized access and enhancing the overall security posture of the platform.

Join Us at the Hackonomics Project Showcase on March 28

From March 8 to March 25, graduates of Flatiron School’s Cybersecurity, Data Science, Product Design, and Software Engineering bootcamps will collaborate to develop fully functioning AI financial platforms that analyze user data, provide personalized recommendations, and empower individuals to take control of their financial futures.

The Hackonomics outcomes are bound to be remarkable. Participants will create a valuable addition to their AI-optimized project portfolios and gain invaluable experience and skills that they can showcase in job interviews and beyond.

The judging of the projects will take place from March 26 to 27, followed by the showcase and awards ceremony on March 28. This event is free of charge and open to prospective Flatiron School students, employers, and the general public. Reserve your spot today at the Hackonomics 2024 Showcase and Awards Ceremony and don’t miss this opportunity to witness firsthand the innovative solutions that emerge from the intersection of AI and finance. 

Unveiling Hackonomics, Flatiron’s AI-Powered Budgeting Hackathon

Are you interested in learning about how software engineering, data science, product design, and cybersecurity can be combined to solve personal finance problems? Look no further, because Flatiron’s AI-powered budgeting hackathon—Hackonomics—is here to ignite your curiosity.

This post will guide you through our Hackonomics event and the problems its final projects aim to solve. Buckle up and get ready to learn how we’ll revolutionize personal finance with the power of AI.

Source: Generated by Canva and Angelica Spratley

Unveiling the Challenge

Picture this: a diverse cohort of recent Flatiron bootcamp graduates coming together on teams to tackle an issue that perplexes and frustrates a huge swath of the population—personal budgeting.

Hackonomics participants will be tasked with building a financial planning application named Money Magnet. What must Money Magnet do? Utilize AI to analyze spending patterns, income sources, and financial goals across family or individual bank accounts.

The goal? To provide personalized recommendations for optimizing budgets, identifying potential savings, and achieving financial goals. The dynamic platform that delivers this must contain a user-friendly design with interactive dashboards, a personalized recommendation system to achieve budget goals, API integration of all financial accounts, data encryption to protect financial data, and more.

The Impact of AI in Personal Finance

Let’s dive a little deeper into what this entails. Integrating AI into personal finance isn’t just about creating fancy algorithms; it’s about transforming lives through the improvement of financial management. Imagine a single parent struggling to make ends meet, unsure of where their hard-earned money is going each month. With AI-powered budgeting, they can gain insights into their spending habits, receive tailored recommendations on how to save more effectively, and ultimately, regain control of their financial future. It’s about democratizing financial literacy and empowering individuals from all walks of life to make informed decisions about their money.

Crafting an Intuitive Technical Solution Through Collaboration

As the teams embark on this journey, the significance of Hackonomics becomes abundantly clear. It’s not just about building an advanced budgeting product. It’s about building a solution that has the power to vastly improve the financial health and wealth of many. By harnessing the collective talents of graduates from Flatiron School’s Cybersecurity, Data Science, Product Design, and Software Engineering bootcamps, Hackonomics has the opportunity to make a tangible impact on people’s lives.

Let’s now discuss the technical aspects of this endeavor. The platforms must be intuitive, user-friendly, and accessible to individuals with varying levels of financial literacy. They also need to be up and running with personalized suggestions in minutes, not hours, ensuring that anyone can easily navigate and understand their financial situation. 

Source: Generated by Canva and Angelica Spratley

Embracing the Challenge of Hackonomics

Let’s not lose sight of the bigger picture. Yes, the teams are participating to build a groundbreaking platform, but they’re also participating to inspire change. Change in the way we think about personal finance, change in the way we leverage technology for social good, and change in the way we empower individuals to take control of their financial destinies.

For those participating in Hackonomics, it’s not just about building a cool project. It’s about honing skills, showcasing talents, and positioning themselves for future opportunities. As participants develop their AI-powered budgeting platforms, they’ll demonstrate technical prowess, creativity, collaborative skills, and problem-solving abilities. In the end, they’ll enhance their portfolios with AI projects, bettering their chances of standing out to potential employers. By seizing this opportunity, they’ll not only revolutionize personal finance but also propel their careers forward.

Attend the Hackonomics Project Showcase and Awards Ceremony Online

Participation in Hackonomics is exclusively for Flatiron graduates. Participants will build their projects from March 8 through March 25. Winners will be announced during our project showcase and awards ceremony closing event on March 28.

If you’re interested in attending the showcase and ceremony on March 28, RSVP for free through our Eventbrite page Hackonomics 2024 Showcase and Awards Ceremony. This is a great opportunity for prospective students to see the types of projects they can work on should they decide to apply to one of Flatiron’s bootcamp programs.

Hyperbolic Tangent Activation Function for Neural Networks

Artificial neural networks are a class of machine learning algorithms. Their creation by Warren McCulloch and Walter Pitts in 1943 was inspired by the human brain and the way that biological neurons signal one another. Neural networks qualify as machine learning because the algorithm analyzes data with known labels so it can be trained to recognize images it has not seen before. For example, in the Data Science Bootcamp at Flatiron School, one learns how to use these networks to determine whether an image shows cancer cells present in a fine needle aspirate (FNA) of a breast mass.

Neural networks are composed of node (artificial neuron) layers, containing the following:

  • an input layer
  • one or more hidden layers
  • an output layer

A visual representation of this is on view in the figure below. (All images in the post are from the Flatiron School curriculum unless otherwise noted.)

Visual representation of neural networks with input, hidden, and output layers.

Each node connected to another has an associated weight and threshold. If the output is above the specified threshold value, then the node activates. This activation results in data (the sum of the weighted inputs) traveling from the node to the nodes in the next layer. However, if the node is not activated, it does not pass data along to the next layer. A popular subset of neural networks is deep learning models, which are neural networks that have a large number of hidden layers.
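As a rough sketch of the idea (the inputs, weights, and threshold below are made up for illustration, not drawn from any particular framework), a node computes a weighted sum of its inputs and passes it along only if it clears the threshold:

```python
def node_output(inputs, weights, threshold):
    """Sum the weighted inputs; 'activate' only if the sum clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else None  # None: nothing passed to the next layer

# A node with two inputs (all values are illustrative)
print(node_output([0.5, 0.9], [0.8, 0.4], threshold=0.7))  # activates: roughly 0.76
print(node_output([0.1, 0.2], [0.8, 0.4], threshold=0.7))  # stays quiet: None
```

In a real network each layer holds many such nodes, and the weights are learned during training rather than hand-picked.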

Neural Network Activation Functions

In this post, I would like to focus on the idea of activation, and in particular the hyperbolic tangent as an activation function. Simply put, the activation function decides whether a node should be activated or not. 

In mathematics, it is common practice to start with the simplest model. In this case, the most basic activation functions are linear functions such as y=3x-7 or y=-9x+2. (Yes, this is the y=mx+b that you still likely recall from algebra 1.) 

However, if activation functions are linear for each layer, then all of the layers would be equivalent to a single layer by what are called linear transformations. It would take us too far afield to discuss linear transformations, but the upshot is that nonlinear activation functions are needed for a neural network to meaningfully have multiple layers. The most basic nonlinear function that we can think of is a parabola (y=x^2), which can be seen in the diagram below modeling some real data.

line graph representing quadratic regression best-fit line with augmented data points.
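To see why stacking linear activations buys nothing, we can compose the two example lines from earlier (y=3x-7 and y=-9x+2); the result is just another line, which a small sketch in plain Python confirms:

```python
def linear_layer(m, b):
    """A one-unit 'layer' whose activation is the linear function x -> m*x + b."""
    return lambda x: m * x + b

f = linear_layer(3, -7)   # y = 3x - 7
g = linear_layer(-9, 2)   # y = -9x + 2

# Stacking the two layers: g(f(x)) = -9(3x - 7) + 2 = -27x + 65, still a line
for x in (0, 1, 5):
    assert g(f(x)) == -27 * x + 65
```

No matter how many linear layers we stack, the composition collapses to a single slope and intercept, which is exactly why nonlinearity is essential.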

While there are a number of popular activation functions (e.g., Sigmoid/Logistic, ReLU, Leaky ReLU) that all Flatiron Data Science students learn, I’m going to discuss the hyperbolic tangent function for a couple of reasons.

First, it is a default activation function in Keras, the industry-standard deep learning API written in Python that runs on top of TensorFlow; Keras is taught in detail within the Flatiron School Data Science Bootcamp.

Second, the hyperbolic tangent is an important function even outside of machine learning and is worth learning more about. Note that the hyperbolic tangent is typically denoted tanh, which to the mathematician looks incomplete since it lacks an argument such as tanh(x). That being said, tanh is the standard way to refer to this activation function, so I’ll refer to it as such.

line graph titled "tanh" with two lines - original (y) and derivative (dy)

Neural Network Hyperbolic Functions

The notation for the hyperbolic tangent points to an analogy with trigonometric functions. We hopefully recall from trigonometry that tan(x) = sin(x)/cos(x). Similarly, tanh(x) = sinh(x)/cosh(x), where sinh(x) = (e^x - e^-x)/2 and cosh(x) = (e^x + e^-x)/2.

So we can see that hyperbolic sine and hyperbolic cosine are defined in terms of exponential functions. These functions have many properties that are analogous to trigonometric functions, which is why they have the notation that they do. For example, the derivative of tangent is secant squared and the derivative of hyperbolic tangent is hyperbolic secant squared.
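These definitions are easy to verify numerically. The sketch below builds tanh from the exponential definitions, checks it against Python’s math.tanh, and confirms the derivative identity with a finite difference:

```python
import math

def sinh(x): return (math.exp(x) - math.exp(-x)) / 2
def cosh(x): return (math.exp(x) + math.exp(-x)) / 2
def tanh(x): return sinh(x) / cosh(x)

# Matches the standard library implementation
for x in (-2.0, 0.0, 0.5, 3.0):
    assert math.isclose(tanh(x), math.tanh(x))

# The derivative of tanh(x) is sech^2(x) = 1 / cosh(x)^2
x, h = 0.5, 1e-6
numeric_derivative = (tanh(x + h) - tanh(x - h)) / (2 * h)
assert math.isclose(numeric_derivative, 1 / cosh(x) ** 2, rel_tol=1e-6)
```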

The most famous example of a hyperbolic function is the Gateway Arch in St. Louis, MO. The arch, technically a catenary, was created with an equation that contains the hyperbolic cosine.

Image of the Gateway Arch in St. Louis, MO

(Note: This image is in the public domain)

High voltage transmission lines are also catenaries. The formula describing ocean waves not only uses a hyperbolic function but, like our activation function, uses tanh.

Hyperbolic Activation Functions

Hyperbolic tangent is a sigmoidal (s-shaped) function like the aforementioned logistic sigmoid function. Where the logistic sigmoid function has outputs between 0 and 1, the hyperbolic tangent has output values between -1 and 1. 

This leads to the following advantages over the logistic sigmoid function. The range of [-1, 1] tends to make:

  • negative inputs map to strongly negative values, zero inputs map to near zero, and positive inputs map to strongly positive values on the tanh graph
  • each layer’s output more or less centered around 0 at the beginning of training, which often helps speed up convergence
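A quick numerical check of both ranges (plain Python; no framework needed):

```python
import math

def logistic_sigmoid(x):
    return 1 / (1 + math.exp(-x))

xs = [-5.0, -0.5, 0.0, 0.5, 5.0]
print([round(math.tanh(x), 3) for x in xs])          # [-1.0, -0.462, 0.0, 0.462, 1.0]
print([round(logistic_sigmoid(x), 3) for x in xs])   # [0.007, 0.378, 0.5, 0.622, 0.993]
```

Notice that tanh maps the symmetric inputs symmetrically around 0, while the logistic sigmoid’s outputs are all positive and centered on 0.5.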

The hyperbolic tangent is a popular activation function with many nice mathematical properties; it is often used for binary classification and in conjunction with other activation functions.

Interested in Learning More About Data Science?

Discover information about possible career paths (plus average salaries), student success stories, and upcoming course start dates by visiting Flatiron’s Data Science Bootcamp page. From this page, you can also download the syllabus and gain access to course prep work to get a better understanding of what you can learn in the program, which offers full-time, part-time, and fully online enrollment opportunities.

Why Do We Need Statistics for Data Science?

I began the preceding post “Learning Mathematics and Statistics for Data Science” with the following definition: Data science is used to extract meaningful insights from large data sets. It is a multidisciplinary approach that combines elements of mathematics, statistics, artificial intelligence, and computer engineering. Previously, I described why we need mathematics for data science and in this article I’ll answer the companion question: 

Why do we need statistics for data science? 

However, before we can turn to that question, we need to talk about statistics in general.

What is Statistics?

Statistics is the study of variation of a population using a sample of the population. 

For example, suppose we want to know the average height of American adult men. It is impractical to survey all of the approximately 80 million adult men in the United States; instead, we’d survey the heights of a (random) sample of them. This leads us to the two types of statistics that we need to know for data science: descriptive statistics and inferential statistics.
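The sketch below simulates the idea with made-up numbers: a million simulated heights stand in for the full population (mean 69 inches), and we estimate that mean from a random sample of just 500:

```python
import random

random.seed(42)  # reproducible illustration

# A stand-in "population": one million simulated heights (mean 69 in, sd 3 in)
population = [random.gauss(69, 3) for _ in range(1_000_000)]

# Surveying everyone is impractical; a random sample of 500 does the job
sample = random.sample(population, 500)
sample_mean = sum(sample) / len(sample)
print(round(sample_mean, 2))  # lands close to the true mean of 69
```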

The Two Types of Statistics for Data Science

Descriptive statistics is the branch of statistics that includes methods for organizing and summarizing data. Inferential statistics is the branch of statistics that involves generalizing from a sample to the population from which the sample was selected and assessing the reliability of such generalizations. Let’s look at some examples of each.

Descriptive Statistics

Data can be represented with either images or values. The following graph is a histogram of the distribution of heights of American adult men in inches.

graph chart showing the bell curve of the distribution of height of american men.

You are likely familiar with common descriptive statistics such as means, medians, and standard deviations. For example, the average height of American men is 69 inches. While there are more sophisticated descriptive statistics than these, they all serve the same purpose: to describe the data in some way.
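With a small made-up sample of heights, Python’s statistics module produces these summaries directly:

```python
import statistics

heights = [66, 68, 69, 69, 70, 71, 72, 67, 70, 68]  # hypothetical sample, in inches

print(statistics.mean(heights))              # 69
print(statistics.median(heights))            # 69.0
print(round(statistics.stdev(heights), 2))   # 1.83
```

Each number describes the sample in a different way: its center (mean, median) and its spread (standard deviation).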

Inferential Statistics

Inferential statistics uses descriptive statistics and probability to draw inferences regarding the data. The most common types of inferential statistics are confidence intervals and hypothesis testing. 

Confidence intervals allow us to estimate an unknown population value (e.g., the height of American men in inches). 

A hypothesis test is a method that helps decide whether the data lends sufficient evidence to support a particular hypothesis; for example, is the average height of American men greater than 69 inches? 
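As a sketch with hypothetical numbers (using a normal z approximation for brevity; a small sample like this would properly use a t interval):

```python
import math
import statistics

heights = [70, 68, 71, 69, 72, 70, 67, 71, 69, 73]  # hypothetical sample, in inches
n = len(heights)
mean = statistics.mean(heights)
std_err = statistics.stdev(heights) / math.sqrt(n)

# Approximate 95% confidence interval for the population mean
low, high = mean - 1.96 * std_err, mean + 1.96 * std_err
print(round(low, 2), round(high, 2))  # 68.87 71.13

# Test statistic for H0: "the average height is 69 inches"
z = (mean - 69) / std_err
print(round(z, 2))  # 1.73 -- not far enough from 0 to reject at the 5% level
```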

Returning to our definition of statistics, we can see that the fundamental issue statistics deals with, whether descriptive or inferential, is variation in the sample data, and using that sample data to draw conclusions about the population or populations of interest.

While statistics is used in many disciplines and applications, the way that data science uses statistics has some unique attributes. That said, what we have described forms the basis of the statistics used in data science, so let’s turn to that. Note, of course, that we’re speaking in generalities (the details are discussed in programs like the Flatiron School Data Science program).

There are three primary goals when using statistics in data science.

  1. Regression
    • Predicting an attribute associated with an object
  2. Classification
    • Identifying which category an object belongs to
  3. Clustering
    • Automatic grouping of similar objects

Statistical Learning in Data Science

Regression, or prediction, models are the oldest of the machine learning models; in fact, they predate computers. The basic version (simple linear regression) of these models is taught in introductory statistics classes. In the case of linear regression, the idea is to get a line that best fits two-variable data.

Line graph example with sample data points
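A minimal sketch of that best-fit line, computed from the usual least-squares formulas (the data points are made up):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + b on two-variable data."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Noiseless points on y = 2x + 1 recover the line exactly
m, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(m, b)  # 2.0 1.0
```

Real data is noisy, so the fitted line won’t pass through every point; least squares simply minimizes the total squared vertical distance.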

For other more sophisticated models, the idea is the same. For example, it could be the case that the data can be better modeled by a function other than a line such as a quadratic curve, as seen in the below image.

Line graph showing quadratic regression best-fit line with actual data points

Classification models are used to determine which class a particular datum belongs to. The canonical example comes from the iris dataset, where the data contains the sepal length (cm), sepal width (cm), and the three classes of irises: Setosa, Versicolor, and Virginica. The popular K-nearest neighbors model classifies each iris based on the sepal measurements, as can be seen in the below image.

chart showing 3-class classifications
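A tiny K-nearest neighbors sketch in plain Python (the sepal measurements below are made up to resemble the iris data, not taken from it):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of ((features...), label) pairs; classify query by majority
    vote among its k nearest neighbors (Euclidean distance)."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (sepal length, sepal width) measurements in cm
train = [((5.0, 3.5), "setosa"), ((5.2, 3.6), "setosa"), ((4.9, 3.1), "setosa"),
         ((6.7, 3.0), "virginica"), ((6.9, 3.2), "virginica"), ((6.4, 2.8), "virginica")]
print(knn_predict(train, (5.1, 3.4)))  # setosa
```

The "training" here is just memorizing labeled examples; the model's knowledge of the known outcomes is what makes this supervised learning.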

Clustering models seem similar to classification models since the algorithms also group data together; however, there is a fundamental difference. Classification models are an example of supervised learning. In other words, the machine learning algorithm is able to train on data with known outcomes (labels) so it can classify new data. Clustering models are an example of unsupervised learning, where the algorithm determines how to group the data. Clustering models like K-means separate the data into groups with similar statistical properties.

K-means clustering on the digits dataset (PCA-reduced data); centroids are marked with a white cross.

A common place where K-means is used is for customer or market segmentation of data, which is the process of clustering a customer base into distinct clusters of individuals that have similar characteristics.
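A toy version of that idea, one-dimensional K-means over made-up customer spend values, shows the mechanics:

```python
import statistics

def kmeans_1d(points, centers, iters=10):
    """Tiny 1-D K-means sketch: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        centers = [statistics.mean(members) for members in clusters.values() if members]
    return sorted(centers)

# Two obvious segments of (hypothetical) monthly customer spend
spend = [10, 12, 11, 90, 95, 93]
print(kmeans_1d(spend, centers=[0, 100]))  # centers settle near 11 and 92.7
```

No labels were given; the algorithm discovered the low-spend and high-spend segments on its own, which is the unsupervised part.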

The reason we need statistics for data science is that descriptive and inferential statistics let us understand the data. Further, in order to use the power of artificial intelligence, we need to be able to use statistical learning techniques.

Ready To Get Started In Data Science?

As a next step, we’d encourage you to try out our Free Data Science Prep Work to see if data science is right for you.

If you realize you like it, apply today to start learning the skills you need to become a professional Data Scientist.

Not sure if you can do it? Read stories about students just like you who successfully changed careers on the Flatiron School blog.

Demystifying AI and ML: An Intro for Bootcamp Grads

ChatGPT catapulted AI tools to the forefront of the public consciousness when it reached 100+ million weekly users last year. Several more apps like ChatGPT have since been launched, such as Bing Copilot and Bard. Although these services can be helpful for software developers, most of the time developers will use other tools or work with technologies in a completely different category.

Over 70% of developers are already using or plan to use AI tools in their development process, according to the 2023 Stack Overflow survey. The same survey shows that the overwhelming majority of developers (75%+) have a favorable or highly favorable view of using AI tools as part of their development workflow.

In this blog post, we’ll go over what developers are using these tools for, which tools they are using, and how machine learning fits into the development process at companies.

Artificial Intelligence and Machine Learning

Before we dive into what kind of technologies and processes developers use in their day-to-day work, it’s important to understand the difference between artificial intelligence (AI) and machine learning (ML).

An image of three overlapping circles labeled deep learning, machine learning, and artificial intelligence

Artificial Intelligence

Artificial intelligence describes any computer program that performs tasks that require human intelligence, such as speech recognition, reasoning, perception, and problem solving. John McCarthy, a Turing Award-winning computer scientist and one of the founders of the AI field, said, “AI is the science and engineering of making intelligent machines, especially intelligent computer programs.”

Machine Learning

Machine learning is a more specific term that describes a field of study concerned with the development of algorithms and statistical models that enable computer programs to perform human-like tasks. The core idea behind ML is to use large datasets to identify patterns and learn from them to perform specific tasks. IBM has a good video on the differences between AI and ML that will provide further clarification.

You may also come across the term “Deep Learning” which is a subset of machine learning. We won’t go into it here but if you’re curious, 3Blue1Brown has an excellent video on how neural networks work.

AI in Software Engineering

Software engineers or developers work with AI based technologies in two key areas: development workflow and product feature development.

Development Workflow

Development workflow refers to the processes used to plan, develop, test, deploy, and maintain software products and services. Tools like GitHub Copilot can significantly speed up these processes since they can be trained on specific codebases on top of having insights from billions of lines of publicly available repositories. Having a tool that can provide suggestions and answer questions based on the current project and context can drastically improve the developer experience. In fact, over 70% of developers say AI coding tools will allow them to write better quality code, produce more code, resolve incidents faster, and collaborate more effectively, according to a 2023 GitHub survey.

The AI tools used for improving developer experience are usually available off-the-shelf. These tools still need to be trained on the existing codebase for the best results but they don’t require a significant amount of developer work hours. Developers can quickly get up to speed on the basics of these tools and start incorporating them into their workflow to boost productivity since they don’t require any specialized skills.

Product Feature Development

Companies are constantly looking to improve their products by offering novel features. Product features that incorporate advanced ML algorithms can give companies a competitive edge by providing a better customer experience. 

For example, the following diagram gives an overview of how Netflix incorporated an ML model to improve search results for users:

a diagram showing a flowchart overview of how Netflix incorporated an ML model to improve search results for users

You can read about how this model works in the “Augmenting Netflix Search with In-Session Adapted Recommendations” research paper.

A team needs to have people with various skills in order to develop and maintain a system like this since the feature may require custom algorithms, infrastructure, and code. Working on these systems usually requires knowledge of machine learning and MLOps. 

Where to Go From Here?

If you want to build AI systems and not just use AI tools, you’ll need a solid theoretical foundation and practice building applications or creating infrastructure. Here are a few free courses to get you started on your journey:

Courses for Beginners:

Courses on AI Ethics:

Practical Courses:

These should be enough to keep you busy for a while and give you a solid AI and ML foundation for building your own AI/ML products or services. 

Ready to Learn Software Engineering Foundations?

Any ML role requires a foundational knowledge of software engineering. If you are not a bootcamp grad but are ready to start your journey as a software engineer, apply now to Flatiron’s program.

Not ready to apply? Try out our Free Software Engineering Prep. Or, review the Software Engineering Course Syllabus that will set you up for success and can help launch you into a new and fulfilling career.

You can also read more stories about successful career changes on the Flatiron School blog.

AI and Cybersecurity

This piece on the future of AI and Cybersecurity was created by Matthew Redabaugh, Cybersecurity Instructor at Flatiron School.

There’s a fascinating conversation happening today about AI and the impact it may have as it gets adopted. There’s a wide variety of opinions on the 5 Ws.

  1. Who will be impacted? 
  2. Who might lose their job or have their jobs adapted? 
  3. Will particular industries need more personnel, such that the impact of AI will create more jobs? 
  4. What will change in everyday life as the technologies we have been accustomed to change due to AI?
  5. Will that change be subtle or drastic?

These are the kinds of questions that people are asking, especially in the field of cybersecurity. The main question I want to answer today is, “What is the relationship between AI and cybersecurity and how might the industry change with AI advancements?”

In this blog post, we’ll delve into the intricate relationship between AI and cybersecurity, debunk common misconceptions, and explore how AI is reshaping the landscape of digital defense.

What is Artificial Intelligence?

Let’s begin by addressing some common misconceptions about what AI is. 

The primary goal of AI is to give computers the ability to work the way a human brain does. This definition is broad, and so is AI’s scope. For a computer program to be considered AI, it must encompass the ability to reason, learn, perceive, and plan. This is often accomplished through the development and implementation of algorithms that rely on statistics and probability to achieve a desired outcome.

Applications for Artificial Intelligence

Some use cases where AI is being actively applied are speech recognition, language understanding, and travel assistance (updating maps, scanning roads to create efficient routes). AI also empowers cybersecurity professionals to enhance their security posture by automating responses to attacks, identifying phishing schemes, detecting anomalous activity on networks (previously done manually), analyzing weak passwords and requiring users to update them, and more.

Is AI Conscious?

A common misconception about AI is that it is currently conscious or will become so in the near future.

One of the most interesting use cases for AI is Sophia, a humanoid robot introduced in 2016. She is the first robot to have been granted personhood and citizenship status, in Saudi Arabia. Sophia can hold simple conversations and make facial expressions. Her code is 70% open source, and critics who have reviewed it say she is essentially a chatbot with a face, since her conversation consists primarily of pre-written responses to prompted questions. Her existence has sparked an interesting debate over the possibility of artificial general intelligence (AGI) in the future.

While Sophia’s sophistication in robotics is undeniable, the notion of her “consciousness” remains contested.

AI vs. ML vs. DL

There are two other terms that are often misconstrued or used interchangeably with AI: Machine Learning (ML) and Deep Learning (DL). Their precise definitions depend on the context and on who is using them. I consider them subsets: ML is a subset of AI, and DL is a subset of ML.

What is Machine Learning?

Machine Learning is set apart by its ability to learn, and to respond differently and uniquely, by ingesting large amounts of data through human-built algorithms. This happens through either supervised learning, where the developer gives the computer labeled examples against which to compare new data inputs, or unsupervised learning, where the computer is fed data and the algorithms find relationships on their own.
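
The contrast can be sketched in a few lines of plain Python. The data here is made up for illustration, and the algorithms (nearest-neighbor classification and a bare-bones 2-means clustering) are deliberately minimal stand-ins for what real libraries such as scikit-learn provide.

```python
import math

# Four data points: (height_cm, weight_kg).
points = [(150, 50), (160, 60), (180, 90), (190, 100)]

# Supervised learning: the developer supplies a label for every point,
# and new inputs are classified by comparison (here, nearest neighbor).
labels = {(150, 50): "small", (160, 60): "small",
          (180, 90): "large", (190, 100): "large"}

def classify(x):
    nearest = min(labels, key=lambda p: math.dist(p, x))
    return labels[nearest]

print(classify((185, 95)))  # -> large

# Unsupervised learning: no labels at all; a simple 2-means loop lets
# the data fall into groups on its own.
centers = [points[0], points[-1]]
for _ in range(10):
    groups = [[], []]
    for p in points:
        closer = 0 if math.dist(p, centers[0]) <= math.dist(p, centers[1]) else 1
        groups[closer].append(p)
    centers = [tuple(sum(c) / len(g) for c in zip(*g)) for g in groups]

print(groups)  # the two natural clusters, discovered without any labels
```

The key difference is visible in the inputs: the supervised half needs the `labels` mapping from the developer, while the unsupervised half only ever sees the raw points.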

Applications for Machine Learning

In our daily lives, Machine Learning shapes experiences on music platforms like Spotify and SoundCloud, which use algorithms to predict the best song choice for a user based on their preferences. YouTube employs a similar recommendation algorithm to select the next video after one finishes.
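
A toy version of that idea: score each candidate song on a couple of traits, then recommend whichever is closest (by cosine similarity) to the user's listening history. The trait vectors and song names here are entirely hypothetical; real platforms learn far richer representations.

```python
import math

def cosine(a, b):
    """Cosine similarity between two 2-D trait vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Hypothetical traits: (energy, acousticness).
user_taste = (0.9, 0.2)  # averaged from songs the user already liked
candidates = {"ballad": (0.1, 0.95),
              "rock_track": (0.85, 0.15),
              "pop_track": (0.6, 0.5)}

best = max(candidates, key=lambda song: cosine(candidates[song], user_taste))
print(best)  # -> rock_track
```

The recommendation falls out of simple geometry: the user's taste vector points toward high-energy songs, so the high-energy candidate wins.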

Machine Learning in Cybersecurity

Machine Learning is used heavily in the cybersecurity world. ML tools can ingest large amounts of data from networks and highlight security risks based on that data, such as malicious access to sensitive information by hackers. This makes threat hunters’ jobs much more manageable: instead of setting security alerts and then responding to each one, we can use machine learning tools to monitor our environment and, based on prior attacks and knowledge of the organization’s systems and networks, recognize that an attack may be taking place in real time. As you can imagine, these tools are far from perfect, but they’re definitely a step in the right direction.
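
At its simplest, this kind of monitoring means learning a statistical baseline from past activity and flagging events that deviate sharply from it. The sketch below uses made-up session sizes and an arbitrary 3-standard-deviation threshold; production tools model far more signals.

```python
import statistics

# Baseline: megabytes downloaded per session by a user on normal days.
baseline = [120, 135, 110, 150, 140, 125, 130, 145]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(mb_downloaded, threshold=3.0):
    """Flag sessions more than `threshold` standard deviations from normal."""
    return abs(mb_downloaded - mean) / stdev > threshold

print(is_anomalous(128))   # typical session -> False
print(is_anomalous(5000))  # possible data exfiltration -> True
```

The payoff is that nobody had to hand-write a rule saying "alert above 5000 MB": the threshold adapts to whatever this user's normal behavior turns out to be.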

What Is Deep Learning?

Deep Learning is an even more specialized subset of Machine Learning. It functions in much the same way as ML but can adjust itself, whereas traditional ML requires human intervention to make adjustments.

Applications For Deep Learning

Examples in use today include computers that perform image and pattern recognition. We’ve also seen computers ingest hours of audio from an individual and then mimic their speech patterns. Self-driving cars fall into this category as well: they continuously ingest data about road conditions, other vehicles, and hazards to correct the car’s driving.

The common large language models like ChatGPT and Google’s Bard are considered deep learning as well.

Deep Learning In Cybersecurity

The ability of DL tools to mimic speech poses a genuine concern for cybersecurity professionals, as it allows attackers to craft far more convincing spear-phishing attacks.

Using AI For Good In Cybersecurity

Elevating Cybersecurity Blue Teams

One of the most important tools in the field of cybersecurity is something we call a SIEM: Security Information and Event Management. Traditionally, security operations center analysts use a SIEM to get a clear picture of what is happening on an organization’s computer networks and applications, detect malicious activity, and receive alerts so they can respond accordingly.

With Machine Learning, these tools have been upgraded so that when a security event occurs, the response is automated instead of the security team having to handle it manually.

These newer tools are called SOARs: Security Orchestration, Automation, and Response. As an example, suppose a user in your organization was hacked and their account is being used by someone else. A SIEM, working as intended, may alert the security team that the account is being used maliciously; an analyst would then inform the necessary parties and take the account offline, or take down the network segment where the compromised account is being used.

With a SOAR, whatever response the security analyst would take to remediate the issue is now automated. SOARs use a concept known as playbooks: prebuilt, automated remediation steps that initiate when certain conditions are met. This transition not only expedites incident response but also minimizes potential human error, significantly enhancing an organization’s cybersecurity posture. Human oversight is still required, though, because the technology remains far from perfect.
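
The playbook concept can be sketched as a mapping from alert types to remediation steps. Every alert field, playbook name, and action below is hypothetical; real SOAR platforms express playbooks in much richer workflow languages.

```python
# Hypothetical playbooks: alert type -> ordered remediation steps.
PLAYBOOKS = {
    "compromised_account": ["disable_account", "revoke_sessions", "notify_team"],
    "malware_detected":    ["isolate_host", "snapshot_disk", "notify_team"],
}

def respond(alert):
    """Return the automated remediation steps for an alert, if a playbook matches."""
    steps = PLAYBOOKS.get(alert["type"])
    if steps is None:
        # No matching playbook: fall back to a human analyst.
        return ["escalate_to_analyst"]
    return [f"{step}:{alert['target']}" for step in steps]

print(respond({"type": "compromised_account", "target": "jdoe"}))
# -> ['disable_account:jdoe', 'revoke_sessions:jdoe', 'notify_team:jdoe']
```

Note the fallback branch: anything the playbooks don't cover still goes to a person, which mirrors the point above that human intervention remains necessary.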

Combat Phishing Attacks & Spam

AI is being used in the cybersecurity field to help security personnel identify and classify phishing attacks and spam. It also assists with malware analysis: we can run the code of a discovered exploit through an AI tool, and it may tell us what effect that malware would have on our environment.
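
The classification idea reduces to learning word statistics from labeled examples and scoring new messages against them. The training messages below are invented and the scoring is deliberately crude; real filters use probabilistic models (e.g. naive Bayes) over far larger corpora.

```python
from collections import Counter

# Tiny hypothetical training sets of labeled messages.
spam = ["urgent verify your password", "you are a winner click here"]
ham  = ["meeting notes attached", "lunch at noon tomorrow"]

# "Training": count how often each word appears in each class.
spam_counts = Counter(w for m in spam for w in m.split())
ham_counts  = Counter(w for m in ham for w in m.split())

def classify(message):
    """Label a message by which class its words resemble more."""
    s = sum(spam_counts[w] for w in message.split())
    h = sum(ham_counts[w] for w in message.split())
    return "phishing" if s > h else "legitimate"

print(classify("urgent click to verify"))  # -> phishing
print(classify("notes from the meeting"))  # -> legitimate
```

The crucial property, as with all ML, is that nobody hard-coded a keyword blacklist: the suspicious vocabulary was learned from the labeled examples.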

Expedite Incident Response

We can use AI to help with incident response, as mentioned earlier, through the automated remediation that SOAR tools provide. AI can also gather data to predict fraudulent activity on our networks, helping the security team address a potential liability before data is stolen or malware is installed on a system.

Prevent Zero-Day Attacks

With Machine Learning, cybersecurity professionals have a much better chance of protecting themselves against zero-day attacks: exploits of a system or application vulnerability previously unknown to the application’s developer. Machine learning may identify such a vulnerability before an exploit occurs, and it can also flag an intrusion before data is stolen or an exploit is carried out.

AI Uses for Bad Actors

Even with all the positive possibilities of AI in cybersecurity, there is a dangerous side. The same technologies used to protect our networks can be, and are being, used to make hacking easier.

Trick Network Security

If machine learning tools are implemented on a network, proficient hackers may be able to identify this and then act to deceive the machine learning tool into treating the hacker as a regular user.

Elaborate Phishing Campaigns

A very scary use case for AI in the hands of hackers is creating far more convincing phishing campaigns. The human element remains the major cause of breaches, and phishing is still one of the most common ways hackers cause data breaches.

At the moment, phishing attacks are generally pretty easy to identify: international hackers may use bad grammar, send from an obviously fake email address, or hide links to websites that are easily determined to be fraudulent. With the introduction of AI, all of these mistakes can be eliminated.

ChatGPT can easily pass as human, conversing seamlessly with users without spelling, grammar, or verb-tense mistakes. That’s precisely what makes it an excellent tool for phishing scams.

Convincing Impersonations Of Public Figures

Another worry for cybersecurity professionals is AI being used to mimic speech patterns, which would make spear-phishing campaigns much more difficult to detect. I can easily imagine a world in which Twitter employees are bombarded with fake emails from Elon Musk, or fake phone calls, because his voice is so easily recreated by AI. And this could happen with just about any CEO, or any employee of any organization.

The Road Ahead

AI is going to make us more efficient and more productive, as almost all technologies have done throughout history. But, as we navigate the evolving landscape of AI in cybersecurity, it is paramount to remain vigilant against its misuse.

I’ll leave you with this quote from Sal Khan, the CEO and founder of Khan Academy:

“If we act with fear, and say, ‘hey we just need to stop doing this stuff’ what’s really going to happen is the rule followers might pause, might slow down, but the rule breakers, the totalitarian governments, the criminal organizations, they’re only going to accelerate. And that leads to what I am pretty convinced is THE dystopian state, which is the good actors have worse AIs than the bad actors. We must fight for the positive use cases. Perhaps the most powerful use case, and perhaps the most poetic use case, is if AI (artificial intelligence) can be used to enhance HI (human intelligence), human potential, and human purpose.”