Decision Tree Hyperparameters in Machine Learning. Decision trees are used for both classification and regression, and their behaviour is governed by a handful of hyperparameters that must be set before training.
Machine learning is broadly split into supervised and unsupervised techniques. Clustering is the unsupervised technique that groups data based on the similarity between data points, while supervised learning trains an algorithm on labeled data so that it can make predictions or decisions from new inputs; the algorithm learns a mapping between the inputs and a label or target variable. Data is therefore typically divided into two types: labeled data, which includes the target the model is trying to predict, and unlabeled data, which does not. Machine learning models in general require large amounts of data to train effectively, and insufficient or biased data can lead to inaccurate predictions and poor decision-making.

Decision trees are supervised models used extensively for both classification and regression. Their performance, however, relies heavily on their hyperparameters: selecting the optimal hyperparameters can significantly affect the quality of the resulting model, and hyperparameter tuning can be a time-consuming and challenging task. Because decision trees are non-linear models, they handle both numerical and categorical features, and the way categorical data is treated has a significant effect on how the tree structure is formed and how well the model generalizes to new data. CatBoost (Categorical Boosting) is a gradient-boosting library built on decision trees that was designed with a particular emphasis on handling categorical features efficiently; it stands out for its speed and supports classification, regression, and ranking tasks. Other supervised algorithms that appear alongside decision trees include the Gaussian Naive Bayes classifier, a probabilistic algorithm that internally uses Bayes' theorem to classify data points; support vector machines (SVMs) and support vector regression (SVR), which fit a separating hyperplane or a regression function and can use linear or non-linear kernels; and Linear Discriminant Analysis (LDA), which uses both axes (X and Y) to create a new axis and projects the data onto it so as to maximize the separation of the two categories, reducing a 2D problem to a 1D one. Multi-task learning (MTL) is a related idea from deep learning in which a neural network is trained on several tasks at once by sharing some of its layers and parameters across tasks.

The hyperparameter max_depth controls the overall complexity of a decision tree and gives a trade-off between an under-fitted and an over-fitted tree. A tree that is too shallow can end up with leaves that all predict the same class (their entropy values show that further splitting no longer separates anything), while complex data needs a deeper tree to discriminate between all of its classes.
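This trade-off is easiest to see by fitting trees of different depths. Below is a minimal sketch on an assumed synthetic dataset; the depths compared are illustrative, not recommendations.

```python
# Compare a shallow, a medium, and an unrestricted decision tree to see the
# under-/over-fitting trade-off controlled by max_depth.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=42)

for depth in (2, 5, None):  # None lets the tree grow until its leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=42)
    scores = cross_val_score(tree, X, y, cv=5)
    print(f"max_depth={depth}: mean CV accuracy = {scores.mean():.3f}")
```

A very small depth typically underfits, while an unrestricted tree often scores worse in cross-validation than a moderate depth because it memorizes the training folds.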
Several related techniques come up repeatedly around decision trees. Ensemble learning is a machine learning technique that combines multiple individual models to improve predictive performance. In a decision tree, a flow-chart-like structure is built in which each internal node denotes a feature, the branches denote the decision rules, and the leaves denote the final result of the algorithm. XGBoost, an optimized distributed gradient-boosting library designed for efficient and scalable training, is often regarded as the gold standard among gradient-boosting ensembles, while CatBoost has the additional benefit of handling categorical features automatically. AutoML (automated machine learning) automates parts of the model-development process so that machine learning becomes accessible to individuals and organizations with limited data-science expertise. Outside the tree family, an Extreme Learning Machine (ELM) is a feedforward network in which the hidden layer's weights and biases are randomly initialized rather than trained; Monte Carlo Tree Search (MCTS) is a probabilistic, heuristic-driven search technique used in artificial intelligence; and Teachable Machine is a web-based tool from Google that uses a web camera to gather images and trains a model from them without any coding, after which the model can classify new images or videos. Before any of this, it helps to learn the basics of linear algebra: linear equations, matrices, the common matrix operations, and their applications.

Hyperparameters are the parameters that determine the behaviour and performance of a machine learning model, and they are fixed before training rather than learned from the data. For a decision tree, the clearest example is max_depth, the maximum depth of the tree; building a shallow tree and then a deeper tree, for both classification and regression, is a good way to understand its impact. A common exercise is to implement a decision tree in Python (a supervised learning algorithm) on the Balance Scale Weight & Distance dataset.

Random forests build on single trees as follows. Step 1: select random subsets of data points from the training set. Step 2: build the decision tree associated with each subset. Step 3: choose the number N of decision trees you want to build and aggregate their predictions. Model development is a multi-step process, and a check should be kept on how well the model generalizes; the simplest check is holdout validation, in which the dataset is split into training and testing sets, the model is trained on the training set, and its performance is evaluated on the testing set, as sketched below.
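A minimal sketch of that holdout procedure with a random forest, assuming the iris dataset purely for illustration:

```python
# Holdout validation with a random forest: split the data, train on one part,
# evaluate on the held-out part.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)  # N trees
forest.fit(X_train, y_train)        # train on the training split
y_pred = forest.predict(X_test)     # predict on the held-out split
print("Holdout accuracy:", accuracy_score(y_test, y_pred))
```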
In the context of machine learning, Bayes' theorem is often used in Bayesian inference and probabilistic models; when modelling hypotheses, it allows us to update our belief in a hypothesis after seeing evidence. The theorem can be mathematically expressed as

P(A | B) = P(B | A) · P(A) / P(B)

where P(A | B) is the posterior probability of A given B, P(B | A) is the likelihood of B given A, P(A) is the prior probability of A, and P(B) is the probability of the evidence B.

Classification is the process of categorizing data or objects into predefined classes or categories based on their features or attributes, and Python's machine-learning libraries make it easy to implement and optimize such models. For decision trees, the hyperparameter max_depth again controls the overall complexity of the model, and hyperparameter tuning is crucial for selecting the right model and improving its performance, although the right amount of model complexity ultimately depends on the data. Identifying overfitting, including in models built with Scikit-Learn, is essential to ensure the model generalizes to unseen data; practitioners improve generalisation with cross-validation, regularisation, data augmentation, and feature selection, and the feature importances produced by an Extra Trees classifier are one practical way to carry out that feature selection. For large projects, AutoML platforms such as Azure, Google Cloud, H2O.ai, and RapidMiner provide both ample computational resources and tools for managing and analyzing extensive datasets.

A few neighbouring algorithms are worth naming. Logistic Regression and K-Nearest Neighbors (KNN) are two popular classification algorithms; KNN was developed by Evelyn Fix and Joseph Hodges in 1951 and subsequently expanded by Thomas Cover. Decision tree regression predicts continuous values with a tree model, and hierarchical clustering builds a hierarchy of clusters in a tree-type structure. In reinforcement learning, a policy is a mapping from states S to actions a (the classic illustration is an agent in a small grid world, such as a 3x4 grid with a START state), and in Monte Carlo Tree Search there is always the possibility that the currently best-looking branch is not actually the optimal one. A confusion matrix summarizes the performance of a model on test data by counting the correct and incorrect predictions for each class, and it is a standard way to measure classification models. The Gaussian Naive Bayes classifier mentioned earlier applies Bayes' theorem directly, as sketched below.
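Below is a minimal sketch, assuming the iris dataset purely for illustration, of fitting a Gaussian Naive Bayes model and summarizing its test-set performance with a confusion matrix.

```python
# Gaussian Naive Bayes evaluated with a confusion matrix on a held-out split.
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

nb = GaussianNB().fit(X_train, y_train)               # applies Bayes' theorem per class
print(confusion_matrix(y_test, nb.predict(X_test)))   # rows: true, columns: predicted
```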
Machine Learning is a subset of artificial intelligence (AI) that focuses on learning from data to develop an algorithm that can be used to make predictions. In traditional programming, rule-based code is written by the developers for each problem statement; in machine learning, the rules are learned from labeled examples. The decision tree is a very popular supervised algorithm used for regression as well as classification. It works by recursively partitioning the feature space: starting from the root node, it finds the best attribute to split on using an attribute selection measure (ASM) and continues splitting until it reaches the leaf nodes, which hold the final predictions. Decision trees are considerably simpler than neural networks, and because they handle both classification and regression they are versatile across applications; decision tree regression in particular is widely used for predictive modelling, although like any algorithm it has its strengths and weaknesses.

A typical workflow imports the required libraries, transforms the data (dealing with noise, missing values, outliers, and non-normality), trains the model, and then checks it for overfitting; a fitted tree can also report its own structure, such as the depth of the tree. Other model families have their own hyperparameters: if logistic regression is regularized, the regularization weight is a hyperparameter; SVMs find a hyperplane that creates a boundary between the types of data and are often preferred for text classification, while SVR can use both linear and non-linear kernels; gradient boosting machines (GBMs) and XGBoost are ensemble methods designed for efficiency, scalability, and accuracy.

Scikit-learn's Voting ensemble combines multiple machine-learning models and uses a majority vote or a weighted vote to make predictions, which is a simple way to ensemble different models for potentially better performance.
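A minimal sketch of such a voting ensemble, assuming the wine dataset and three illustrative base models (a decision tree, Gaussian Naive Bayes, and K-Nearest Neighbors):

```python
# Hard-voting ensemble: each base model casts one vote per sample and the
# majority class wins.
from sklearn.datasets import load_wine
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

voter = VotingClassifier(estimators=[
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
], voting="hard")   # voting="soft" would average predicted probabilities instead

print("Mean CV accuracy:", round(cross_val_score(voter, X, y, cv=5).mean(), 3))
```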
An everyday example of a decision tree is a flowchart that helps a person decide what to wear based on the weather conditions; the nodes represent the different decisions, and the same tree-like structure is what the algorithm builds from input data, which is why it suits both classification and regression. Machine learning algorithms in general use data to learn patterns and relationships between input variables and target outputs, and they are broadly classified into three types: supervised, unsupervised, and reinforcement learning. In reinforcement learning, the policy is used to decide what action to take at time t+1 based on the data available up to time t. Machine learning also has disadvantages, chief among them its dependence on large quantities of good data.

Hyperparameters are a different class of parameters from model parameters: they cannot be learned directly through routine training and are usually fixed before training begins, whereas model parameters are learned from the data. Settings such as the learning rate, the number of hidden layers, and the regularization strength all influence the complexity of a model. For trees and forests, max_features helps to find the number of features to take into account when making the best split, and regularization techniques such as Lasso (L1) and Ridge (L2) play the analogous role for linear models. Gradient boosting develops a series of weak learners one after the other, each tree focusing on the errors left by the previous ones, and the final prediction is made by weighted voting. XGBoost stands for "Extreme Gradient Boosting", and LightGBM is Microsoft's framework based on decision trees, designed to improve efficiency and reduce memory usage, incorporating novel techniques such as Gradient-based One-Side Sampling. Stochastic Gradient Descent (SGD) efficiently updates model parameters using random subsets of the data, which makes it instrumental for large datasets and online learning. For classification problems specifically, the C5.0 method is a well-known decision tree algorithm, and the fitted tree remains a common tool for visually representing the decisions made by the model. For choosing hyperparameter values, GridSearchCV and RandomizedSearchCV are the systematic ways to search for an optimal combination, as illustrated below.
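A minimal sketch of such a grid search over common decision-tree hyperparameters, assuming the wine dataset; the grid values are illustrative, not recommendations.

```python
# Exhaustive grid search over a few decision-tree hyperparameters with 5-fold CV.
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

param_grid = {
    "max_depth": [2, 4, 6, None],
    "min_samples_split": [2, 5, 10],
    "max_features": ["sqrt", "log2", None],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```

RandomizedSearchCV works the same way but samples a fixed number of combinations from the grid, which scales better when the grid is large.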
Underfitting is the opposite failure mode to overfitting: a statistical model or machine learning algorithm underfits when it is too simple to capture the complexities of the data. It represents the inability of the model to learn the training data effectively, resulting in poor performance on both the training and the testing data; in simple terms, an underfit model's predictions are inaccurate. Overfitting, in turn, must be avoided if machine-learning models are to be robust and reliable, and regularization is a standard remedy: the commonly used techniques are Lasso regularization (L1) and Ridge regularization (L2).

Supervised learning, which learns from labelled data in order to predict unseen data, is widely used in fields such as finance, healthcare, and marketing; the classification algorithms most often compared are Decision Tree, Random Forest, Naive Bayes, Support Vector Machines, K-Nearest Neighbors, and Gradient Boosting. Reinforcement learning, also called online learning, is a separate branch of machine learning used in applications such as game playing and walking robots, and Monte Carlo Tree Search combines classic tree-search implementations with reinforcement-learning principles in a probabilistic, heuristic-driven search. For kernel methods, a linear kernel is a simple dot product between two input vectors, while a non-linear kernel allows curved decision boundaries. CatBoost, created by Yandex, builds symmetric decision trees and can be applied to a range of machine-learning problems, including classification, regression, and ranking. In a random forest, max_features means that each split considers only a random subset of the features when searching for the best split. Workflows are executed in a pipe-like manner, the output of the first step becoming the input of the second, and hyperparameter optimization is an important step in that workflow because it can significantly improve a model's performance on new data. Philosophically, frequentist methods are often seen as more objective, focusing on the properties of estimators under repeated sampling, whereas Bayesian methods allow prior knowledge to be incorporated. Gradient descent, finally, is one of the most used methods for changing a model's parameters in order to reduce a cost function. For linear models, the balance between fitting the data and keeping the coefficients small is controlled explicitly by the regularization strength, as sketched below.
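A minimal sketch of the L1 and L2 penalties mentioned above, assuming a synthetic regression dataset; the alpha value is illustrative and controls the strength of the penalty in both models.

```python
# Ridge (L2) shrinks coefficients toward zero; Lasso (L1) can set some of them
# exactly to zero, performing a kind of feature selection.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

print("Ridge coefficients:", ridge.coef_.round(2))
print("Lasso coefficients:", lasso.coef_.round(2))
```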
Classification and Regression Trees (CART) is the decision tree algorithm used for both classification and regression tasks. It creates a model in the shape of a tree structure: each internal node stands for a decision or test on a feature, each branch for the outcome of that decision, and each leaf node for a regression value or a class label. The complete training process can be summarised as follows. Step 1: begin the tree with the root node S, which contains the complete dataset. Step 2: find the best attribute to split on using an attribute selection measure. Step 3: split the dataset on that attribute and recurse on the subsets until leaf nodes are reached. The C5 algorithm, created by J. Ross Quinlan as a development of his ID3 decision tree method, constructs its trees by recursively splitting on the attribute that yields the greatest information gain, that is, the largest reduction in entropy. The resulting flow-chart-like structure is flexible and comprehensible, which is why decision trees are regarded as especially easy to interpret and explain.

Several neighbouring topics round out the picture. Gradient descent is an iterative optimization process that searches for the optimum (minimum or maximum) value of an objective function. Ensemble learning combines the predictions of multiple weak models to produce a stronger prediction, and gradient boosting does so by iteratively adding decision trees that correct the mistakes of their predecessors; CatBoost has its own parameters and hyperparameters, takes a distinctive approach to categorical datasets, and encodes categorical features based on the output columns, which is why it performs particularly well on categorical data. Neural networks rest on two fundamental components, weights and biases, while the Extreme Learning Machine stands apart from traditional feedforward networks because of its unique training approach. SVM is a relatively simple supervised algorithm used for classification and regression, and K-Nearest Neighbors handles both problem types as well. A popular example of reinforcement learning is a chess engine, where the agent decides upon a series of moves. Model evaluation uses metrics to analyse how well a trained model performs, and once a model has been trained it becomes a valuable asset, so saving models (for example in R) is an important step for reusability. Throughout, the quality, quantity, and diversity of the data significantly impact performance, and finding the optimal combination of hyperparameters remains a central task.

Decision trees, being non-linear models, can handle both numerical and categorical features, and although CART supports regression it is more often preferred for classification. The main risk is overfitting: decision trees are prone to it, especially when they grow too deep or when the dataset is noisy, because a deep tree can memorize the training data and then generalize poorly on unseen data. Techniques like pruning or limiting the tree depth can mitigate this issue.
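One concrete form of pruning available in scikit-learn is cost-complexity pruning through the ccp_alpha parameter; this specific mechanism is an assumption of the sketch rather than something prescribed above, and the alpha values are illustrative.

```python
# Larger ccp_alpha prunes more aggressively; cross-validation shows where the
# pruned tree generalizes best.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

for alpha in (0.0, 0.01, 0.03):
    pruned = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    score = cross_val_score(pruned, X, y, cv=5).mean()
    print(f"ccp_alpha={alpha}: mean CV accuracy = {score:.3f}")
```

Limiting max_depth, min_samples_split, or min_samples_leaf achieves a similar effect by stopping growth early rather than pruning after the fact.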
A Policy is a solution to a Markov Decision Process: it indicates the action a to be taken while in state S. In the ensemble world, a random forest combines the predictions of multiple decision trees to reduce overfitting and improve accuracy, and tree-based algorithms in general are a fundamental component of machine learning because their decision-making process is intuitive and akin to human reasoning; decision trees allow you to classify data with high degrees of accuracy and are used for both classification and regression. A typical step-by-step implementation ends by visualizing and comparing the results, and a confusion matrix, which displays the number of accurate and inaccurate instances based on the model's predictions, is the standard way to measure the performance of classification models.

On the question of which quantities count as hyperparameters: hyperparameters are parameters that control the behaviour of the model but are not learned during training, and they describe crucial model characteristics such as complexity and learning rate; usually they are fixed before the programme starts. Plain (unregularized) logistic regression has none; some might say the decision threshold is one, but that is really a hyperparameter of your decision rule, not of the regression itself. Regularization, which reduces errors by fitting the function appropriately on the training set, remains the main guard against overfitting, while an underfit model is simply too simple to capture the data's complexity. The frequentist and Bayesian approaches are compared mainly on philosophy, on how they handle uncertainty, and on computational complexity. The Stochastic Gradient Descent (SGD) classifier is a versatile optimization-based learner that underpins many applications, the Extreme Learning Machine is a feedforward network used for classification and regression, and the patterns a model learns from historical data are then used to predict solutions for unseen problems.

For the trees themselves, the depth of a tree is the maximum distance between the root and any leaf, and max_features, which controls how many features are examined at each split, can take the values "sqrt", "log2", or None (older scikit-learn versions also accepted "auto").
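A minimal sketch comparing those max_features settings in a random forest, assuming the breast-cancer dataset; "auto" is omitted because recent scikit-learn releases no longer accept it.

```python
# max_features limits how many candidate features each split may consider;
# smaller subsets decorrelate the trees in the forest.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

for mf in ("sqrt", "log2", None):   # None uses every feature at every split
    forest = RandomForestClassifier(n_estimators=100, max_features=mf,
                                    random_state=0)
    score = cross_val_score(forest, X, y, cv=5).mean()
    print(f"max_features={mf}: mean CV accuracy = {score:.3f}")
```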
The machine learning lifecycle is a process that guides the development and deployment of models in a structured way, and each of its steps, from loading and cleaning the data through training, evaluation, and deployment, plays a crucial role in the success of the solution. Machine learning algorithms play this central role precisely because they are data-dependent: classification methods have transformed difficult data analysis, but what they learn is bounded by the data they are given, while their hyperparameters are fixed before the programme even starts. Regression trees are used when the dependent variable is continuous, and Random Forest Regression is a versatile machine-learning technique for predicting numerical values; the cons of decision tree regression, chiefly its tendency to overfit, are exactly what the ensemble is designed to offset. Bagging (bootstrap aggregating) trains multiple SVMs or decision trees on different subsets of the training data and combines their predictions, which can reduce overfitting and improve generalization, and a feature-importance step with an Extra Trees forest is another common stage of the workflow, computing the individual feature importances before the final model is fit.

The main objective of the SVM algorithm is to find the optimal hyperplane in an N-dimensional space that can separate the classes, and in one specific comparison on the 20 Newsgroups dataset an SVM model outperformed a decision tree across all metrics, including accuracy, precision, recall, and F1-score. LightGBM, the open-source, distributed, high-performance gradient-boosting framework developed by Microsoft, and CatBoost are the other heavyweight boosting libraries, while Logistic Regression and KNN round out the usual shortlist of classifiers. Reinforcement learning keeps its grid-world illustration: an agent lives in a grid, for example a 3x4 grid with a START state at square (1,1), and must learn a policy for moving through it.

For stitching these stages together, scikit-learn provides a feature for handling pipe-like workflows in the sklearn.pipeline module, called Pipeline, in which each step's output becomes the next step's input.
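A minimal sketch of such a Pipeline, assuming a standard scaler followed by an SVM classifier on the breast-cancer dataset; the particular steps are illustrative.

```python
# A two-step Pipeline: the scaler's output feeds the SVM, and the whole chain
# can be cross-validated (or grid-searched) as a single estimator.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),   # step 1: standardize the features
    ("svm", SVC(kernel="rbf")),    # step 2: classify the transformed data
])

print("Mean CV accuracy:", round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```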
Two criteria are used by LDA to create its new axis: maximizing the distance between the means of the classes and minimizing the variation within each class. Decision tree regression, finally, remains one of the most widely used algorithms for predictive modelling. A fitted tree can be visualized with Matplotlib's plot_tree method: create a figure with a specified size (for example figsize=(12, 8)), pass the individual decision tree, the feature names, and the target names as parameters, and set filled=True to fill the nodes with colors representing the majority class.
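A minimal sketch of that visualization, assuming the iris dataset and a depth-limited tree so the plot stays readable:

```python
# Plot a small decision tree with nodes colored by majority class.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

plt.figure(figsize=(12, 8))
plot_tree(tree, feature_names=iris.feature_names,
          class_names=list(iris.target_names), filled=True)
plt.show()
```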