The course is designed to impart the specific machine learning security skills needed to fortify ML applications. The growing interest in undermining machine learning solutions underscores the importance of safeguarding models from potential threats and vulnerabilities. To that end, the course covers state-of-the-art attack methodologies and protection techniques within the machine learning domain. You will learn about cyber security basics, security features, common errors, security testing, time and state issues, and how to manage vulnerable components in the organization.
Since ML models are, at their core, a form of software, the course also covers fundamental secure coding skills and delves into the security pitfalls of the Python programming language. You will learn about security testing methodologies and approaches and how to use cryptographic APIs correctly in Python. Moreover, you will become familiar with common security testing techniques and tools to manage vulnerabilities in third-party components. The overall goal of the course is to make you aware of the challenges posed by the dark side and to prepare you with practical ways to deal with threats in the realm of machine learning.
Throughout this hands-on learning experience facilitated by our experts, you will achieve the following course objectives:
• Manage vulnerabilities in the organization when working with third-party components
• Gain extensive knowledge of attacks and defense techniques in adversarial ML
• Explore threats and vulnerabilities that open ML systems to attack
• Familiarize yourself with the essential cyber security concepts
• Learn to apply secure coding best practices in Python
• Explore ML security aspects
• Evaluate common security testing techniques and tools
• Understand security testing methodology and approaches
• Apply input validation approaches and principles
• Learn to use cryptographic APIs correctly in Python
• Understand availability threats and ML software security
• Gain insights on ML anomaly detection and network security
• Deal with AI/ML threats and understand how attackers exploit vulnerabilities
This course is tailored for those working on implementing or maintaining machine learning applications. Key roles that stand to gain significant benefits from the course include:
• Python Developers, whether you are a beginner looking to establish a solid foundation in secure machine learning or a professional aiming to secure your code against potential threats.
• What is security?
• The dark side
• Categorization of bugs
• The Seven Pernicious Kingdoms
• Common Weakness Enumeration (CWE)
• Threat and risk
• Cybersecurity threat types
• Consequences of Insecure Software
• Constraints and the market
• CWE Top 25 Most Dangerous Software Errors
• Cyber security in machine learning
• Security requirements
• Attack surface
• Integrity threats (model)
• ML-specific cyber security considerations
• Inadvertent AI failures
• ML threat model
• Creating a threat model for machine learning
• What makes machine learning a valuable target?
• Possible consequences
• Machine learning assets
• Integrity threats (data, software)
• Availability threats
• Limitations of ML in security
• Lab – Compromising ML via model editing
• Using ML in cybersecurity
• Static code analysis and ML
• Social engineering attacks and media manipulation
• Vulnerability exploitation
• Endpoint security evasion
• Threats against machine learning
• Common white-box evasion attack algorithms
• Lab – ML evasion attack
• Case study – Classification evasion via 3D printing
• Transferability of poisoning and evasion attacks
• Lab – Transferability of adversarial examples
• Attacks against machine learning integrity
• Poisoning attacks
• Poisoning attacks against unsupervised and reinforcement learning
• Lab – ML poisoning attack
• Case study – ML poisoning against Warfarin dosage calculations
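The poisoning topics above can be illustrated with a toy sketch. The nearest-centroid classifier, the data, and all numbers below are invented for this example and are not taken from the course labs:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy training set: class 0 clustered near (-2, -2), class 1 near (2, 2)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def centroid_predict(X, y, point):
    """Nearest-centroid classifier: pick the class with the closer mean."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return 0 if np.linalg.norm(point - c0) < np.linalg.norm(point - c1) else 1

target = np.array([0.5, 0.5])             # lies on class 1's side
before = centroid_predict(X, y, target)   # 1 on the clean data

# Poisoning: inject mislabeled points at the target location, dragging
# the class-0 centroid toward it until the prediction flips
X_p = np.vstack([X, np.tile(target, (100, 1))])
y_p = np.concatenate([y, np.zeros(100, dtype=int)])
after = centroid_predict(X_p, y_p, target)   # 0 after poisoning
```

Real poisoning attacks are subtler, of course, but the mechanism is the same: attacker-controlled training data shifts the learned decision boundary.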
• Some defense techniques against adversarial samples
• Adversarial training
• Gradient masking
• Using reformers on adversarial data
• Lab – Adversarial training
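As a minimal sketch of the white-box evasion idea that adversarial training defends against, here is the Fast Gradient Sign Method (one of the common white-box algorithms listed above) applied to a hand-built logistic model. The weights, bias, and input are made up for illustration:

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, already-trained linear model; values hand-picked for this sketch
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.5, -0.2, 0.2])

p_clean = sigmoid(w @ x + b)          # ~0.79: confidently class 1

# FGSM: perturb each feature by eps in the direction of the sign of the
# gradient of the loss w.r.t. the input; for logistic loss with true
# label y = 1, d(loss)/dx = (p - y) * w
eps = 0.4
grad = (p_clean - 1.0) * w
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv + b)        # ~0.44: the predicted label flips
```

Adversarial training then augments the training set with such perturbed samples so the model learns to classify them correctly.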
• Model extraction attacks
• Defending against model extraction attacks
• Lab – Model Extraction
• Model inversion attacks
• Simple practical defenses
• Defending against model inversion attacks
• Lab – Model inversion
• Denial of Service
• Resource exhaustion
• Cash overflow
• Accuracy reduction attacks
• Denial-of-information attacks
• Catastrophic forgetting in neural networks
• Resource exhaustion attacks against ML
• Best practices for protecting availability in ML systems
• Input validation principles
• Output sanitization
• Encoding challenges
• Lab – Encoding challenges
• Validation with regex
• Blacklists and whitelists
• Data validation techniques
• Lab – Input validation
• What to validate – the attack surface
• Regular expression denial of service (ReDoS)
• Lab – Regular expression denial of service (ReDoS)
• Injection principles
• Additional considerations
• Lab – SQL injection best practices
• Case study – Hacking Fortnite accounts
• SQL injection and ORM
• Injection attacks
• SQL injection
• SQL injection basics
• Lab – SQL injection
• Attack techniques
• SQL injection best practices
• Input validation
• Parameterized queries
• Code injection
• Code injection via input()
• OS command injection
• Lab – Command injection in Python
• OS command injection best practices
• Avoiding command injection with the right APIs in Python
• Lab – Command injection best practices in Python
• Case study – Shellshock
• Lab – Shellshock
• Case study – Command injection via ping
• Python module hijacking
• Lab – Module hijacking
• General protection best practices
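To illustrate why the outline pairs SQL injection with parameterized queries, here is a minimal `sqlite3` sketch; the table and attacker input are invented for this example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "x' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query
rows_vuln = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()

# Safe: a parameterized query treats the input strictly as data
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()

len(rows_vuln)  # 1 - the injected OR clause matched every row
len(rows_safe)  # 0 - no user is literally named "x' OR '1'='1"
```

The same placeholder principle applies to ORMs and to other database drivers, though the placeholder syntax varies.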
• Representing signed numbers
• Integer visualization
• Integer overflow with ctypes and numpy
• Lab – Integer problems in Python
• Other numeric problems
• Division by zero
• Other numeric problems in Python
• Working with floating-point numbers
• Files and streams
• Path traversal
• Path traversal-related examples
• Lab – Path traversal
• Additional challenges in Windows
• Virtual resources
• Path traversal best practices
• Format string issues
• Native code dependence
• Lab – Unsafe native code
• Best practices for dealing with native code
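The "integer overflow with ctypes and numpy" topic above can be shown in a few lines. Python's own `int` never overflows, but fixed-width types do, silently:

```python
import ctypes
import numpy as np

# Python's built-in int is arbitrary precision: it never overflows
big = 2**63
assert big + 1 > big

# Fixed-width types from ctypes wrap around silently
# (ctypes performs no overflow checking)
c_val = ctypes.c_int8(127)
c_val.value += 1                     # 127 + 1 wraps to -128

# numpy's fixed-width dtypes behave the same way
arr = np.array([127], dtype=np.int8)
arr += 1                             # also wraps to -128, no exception
```

Such silent wraparound matters wherever a fixed-width value feeds a size check, an index, or native code.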
• Misleading the machine learning mechanism
• Sanitizing data against poisoning and RONI
• Typical ML input formats and their security
• Authentication
• Authentication weaknesses – spoofing
• Salting
• Adaptive hash functions for password storage
• Password policy
• NIST authenticator requirements for memorized secrets
• Password length
• Password hardening
• Case study – PayPal 2FA bypass
• Password management
• Inbound password management
• Storing account passwords
• Password in transit
• Lab – Is just hashing passwords enough?
• Dictionary attacks and brute forcing
• Using passphrases
• Password change
• Forgotten passwords
• Lab – Password reset weakness
• Outbound password management
• Hard coded passwords
• Best practices
• Lab – Hardcoded password
• Case study – The Ashley Madison data breach
• The dictionary attack
• The ultimate crack
• Exploitation and the lessons learned
• Password database migration
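The salting and adaptive-hashing topics above can be sketched with the standard library. The course may well use other adaptive functions (bcrypt, scrypt, Argon2); this PBKDF2 sketch, with function names of my own, only illustrates the principle:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000   # adaptive work factor; raise it as hardware gets faster

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)   # unique random salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
verify_password("correct horse battery staple", salt, digest)   # True
verify_password("password123", salt, digest)                    # False
```

Storing only the salt and digest, never the password itself, limits the damage of a database breach to offline guessing against a deliberately slow function.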
• Protecting sensitive information in memory
• Challenges in protecting memory
• Exposure through extracted data and aggregation
• Case study – Strava data exposure
• Privacy challenges in classification algorithms
• Machine unlearning and its challenges
• System information leakage
• Privacy violation
• Privacy essentials
• Related standards, regulations and laws in brief
• Privacy violation and best practices
• Privacy in machine learning
• Leaking system information
• Information exposure best practices
• File race condition
• Time of check to time of usage – TOCTTOU
• Insecure temporary file
• Avoiding race conditions in Python
• Thread safety and the Global Interpreter Lock (GIL)
• Case study: TOCTTOU in Calamares
• Mutual exclusion and locking
• Deadlocks
• Synchronization and thread safety
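A minimal sketch of mutual exclusion with `threading.Lock`; the counter scenario is invented for illustration:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:               # mutual exclusion around the read-modify-write
            counter += 1

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 200000; without the lock, updates could be lost,
# because the GIL does not make compound operations like += atomic
```

Acquiring locks in a consistent order across threads is the standard way to avoid the deadlocks listed above.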
• Error and exception handling principles
• Error handling
• Returning a misleading status code
• Exception handling
• Empty catch block
• Lab – Exception handling mess
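A small sketch contrasting the "empty catch block" anti-pattern with handling only what the code can handle; `read_config` and its format are hypothetical:

```python
import logging

logger = logging.getLogger(__name__)

def read_config(path: str) -> dict:
    """Load key=value pairs; a hypothetical helper for illustration."""
    try:
        with open(path) as f:
            return dict(line.strip().split("=", 1) for line in f if "=" in line)
    except FileNotFoundError:
        # Handle only what we can meaningfully handle - and log it,
        # instead of an empty catch block that swallows every error
        logger.warning("config %s missing, falling back to defaults", path)
        return {}
    # Anything else (PermissionError, UnicodeDecodeError, ...) propagates,
    # rather than being masked by a misleading "success" status

read_config("/nonexistent/app.cfg")   # {} plus a logged warning
```

A bare `except Exception: pass` would instead hide permission problems and encoding errors behind an empty result, the "misleading status code" problem in miniature.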
• Assessing the environment
• Hardening
• Malicious packages in Python
• Vulnerability management
• Patch management
• Bug bounty programs
• Vulnerability databases
• ML supply chain risks
• Dependency checking in Python
• Lab – Detecting vulnerable components
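Dependency checking starts from an inventory of what is actually installed. A stdlib sketch of that first step (the tools named in the comment are examples, not course requirements):

```python
from importlib import metadata

# Build an inventory of installed packages and versions - the raw input
# that dependency checkers (e.g. pip-audit or safety) match against
# vulnerability databases
installed = {}
for dist in metadata.distributions():
    name = dist.metadata.get("Name")
    if name:
        installed[name] = dist.version

# e.g. print a few packages to review against advisories
for name in sorted(installed)[:5]:
    print(name, installed[name])
```

Pinning versions and re-running such checks in CI keeps newly disclosed vulnerabilities from lingering unnoticed.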
• The attack surface
• Case study – BadNets
• Protecting data in transit – transport layer security
• Protecting data in use – multi-party computation
• ML frameworks and security
• General security concerns about ML platforms
• TensorFlow security issues and vulnerabilities
• Case study – TensorFlow vulnerability in parsing BMP files (CVE-2018-21233)
• Cryptography basics
• Cryptography in Python
• Elementary algorithms
• Random number generation
• Pseudo random number generators (PRNGs)
• Cryptographically strong PRNGs
• Seeding
• Using virtual random streams
• Weak and strong PRNGs in Python
• True random number generators (TRNG)
• Assessing PRNG strength
• Case study – Equifax credit account freeze
• Using random numbers in Python
• Lab – Using random numbers in Python
• Hashing
• Hashing basics
• Common hashing mistakes
• Hashing in Python
• Lab – Hashing in Python
• Confidentiality protection
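The weak-versus-strong PRNG distinction in this module comes down to one rule in Python; the token names below are illustrative:

```python
import random
import secrets

# random uses the Mersenne Twister: fast and reproducible, but its
# output can be predicted from observed values - never use it for secrets
random.seed(42)
predictable = random.getrandbits(128)    # identical on every run with this seed

# secrets draws from the OS CSPRNG and is the right tool for
# tokens, session IDs, password salts and keys
token = secrets.token_hex(16)            # 32 hex characters
reset_code = secrets.token_urlsafe(32)
```

Predictable tokens are exactly what enabled attacks like guessable password-reset links; anything security-relevant must come from a cryptographically strong source.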
• Symmetric encryption
• Block ciphers
• Modes of operation
• Modes of operation and IV – best practices
• Symmetric encryption in Python
• Lab – Symmetric encryption in Python
• Asymmetric encryption
• The RSA algorithm
• Types of homomorphic encryption
• FHE in machine learning
• Integrity protection
• Message Authentication Code (MAC)
• MAC in Python
• Lab – Calculating MAC in Python
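A minimal sketch of message authentication with the stdlib `hmac` module; the key and message are placeholders (a real key would come from a KDF or key management system):

```python
import hashlib
import hmac

key = b"shared-secret-key"   # placeholder: use a securely generated key
message = b'{"action": "transfer", "amount": 100}'

# Sender computes a MAC over the message with the shared key
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC and compares in constant time
def verify(message: bytes, tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

verify(message, tag)                                     # True
verify(b'{"action": "transfer", "amount": 9999}', tag)   # False: tampered
```

Unlike a plain hash, the MAC cannot be recomputed by an attacker who lacks the key, which is what makes it an integrity protection rather than just a checksum.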
• Digital signature
• Digital signature with RSA
• Using RSA – best practices
• RSA in Python
• Elliptic Curve Cryptography
• The ECC algorithm
• Using ECC – best practices
• ECC in Python
• Combining symmetric and asymmetric algorithms
• Digital signature in Python
• Public Key Infrastructure (PKI)
• Certificates
• Chain of trust
• Certificate management – best practices
• Security testing methodology
• Overview of security testing processes
• Threat modeling
• SDL threat modeling
• Mapping STRIDE to DFD
• DFD example
• Attack trees
• Attack tree example
• Misuse cases
• Misuse case examples
• Risk analysis
• Security testing techniques and tools
• Code analysis
• Security aspects of code review
• Static Application Security Testing (SAST)
• Lab – Using static analysis tools
• Dynamic analysis
• Security testing at runtime
• Penetration testing
• Stress testing
• Dynamic analysis tools
• Dynamic Application Security Testing (DAST)
• Fuzzing
• Fuzzing techniques
• Fuzzing – Observing the process
• ML fuzzing
• Lab – Finding vulnerabilities via ML
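To make the fuzzing topics concrete, here is a sketch of the deterministic single-byte mutation stage used by AFL-style fuzzers, run against a deliberately buggy toy parser. Both `parse_record` and its bug are invented for this example:

```python
def parse_record(data: bytes) -> tuple[bytes, int]:
    """Toy length-prefixed parser with an out-of-bounds read bug."""
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated record")
    checksum = data[1 + length]           # bug: can read one byte past the end
    return payload, checksum

# Deterministic mutation stage: try every byte value at every position
# of a known-valid seed input, and record unexpected exception types
seed = bytes([5]) + b"hello" + bytes([0])
findings = []
for pos in range(len(seed)):
    for value in range(256):
        sample = bytearray(seed)
        sample[pos] = value
        try:
            parse_record(bytes(sample))
        except ValueError:
            pass                          # graceful, expected rejection
        except IndexError:
            findings.append((pos, value)) # unexpected crash: a finding

# findings -> [(0, 6)]: a length byte of 6 makes the checksum read
# run off the end of the buffer
```

Real fuzzers add coverage feedback and smarter mutations, but the core loop, mutate, execute, watch for abnormal behavior, is the same.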
• Software security sources and further reading
• Python resources
• Secure design principles of Saltzer and Schroeder
• Security resources
What is the course duration?
MLSEC, Machine Learning Security, is a 4-day hands-on course that prepares you to secure your machine learning applications against threats from the dark side.
What is the course code to access the Machine Learning Security course?
The course can be accessed using the course code MLSEC.
Why should I enroll in this course by Vinsys?
Machine Learning Security is a hands-on course crafted by Vinsys experts. Guided lab sessions help you acquire skills in essential cyber security concepts, secure Python coding, cryptography, and security testing methodologies and approaches. Our experts have more than ten years of experience in the field, ensuring that the lectures are practical and meaningful. After enrolling in a course by Vinsys, you will be able to apply ML security techniques effectively.
Can a beginner enroll in this course?
The course is suitable for anyone seeking to secure their ML applications against potential threats. It is particularly relevant for programming professionals, especially Python Developers.
How will the course help me in my professional development?
Designed by experts, the course unlocks skills such as creating threat models, ML-specific cyber security considerations, attack surface analysis, static code analysis, and more. It directly enhances your professional machine learning capabilities.
How is the course program carried out at Vinsys?
Our courses are delivered through instructor-led training (ILT), private group training, and virtual instructor-led training (VILT). We boost your odds of success by helping you prepare for the required exams and earn the certification. Course material remains accessible throughout the program, making it easier to keep learning beyond the class. You can choose your learning path and customize the training to your needs, upskilling with Vinsys' subject matter experts.
How will this course help the learners?
The course helps learners cover a range of topics, including identifying and evaluating ML applications, designing effective test plans, navigating the ML testing lifecycle, evaluating ML model performance, and understanding security and ethical considerations in ML. This prepares you to apply for a diverse range of opportunities in the corporate world.
Can learners interact with the instructors?
Yes, learners can interact with the instructors until their questions and queries are resolved. You can also enjoy 24/7 support from Vinsys even after course completion.
What are the job opportunities after MLSEC?
There are various options to choose from, including Cybersecurity Analyst, AI Ethics and Compliance Specialist, Penetration Tester with ML Expertise, Security Consultant – Machine Learning, and more, all of which involve identifying, analyzing, and resolving threats and vulnerabilities faster.