Oops Concepts In Java: What Is, Basics With Examples
Object-Oriented Programming System (OOPs) is a programming concept that works on the principles of abstraction, encapsulation, inheritance, and polymorphism. It allows users to create objects they want and create methods to handle those objects. The basic concept of OOPs is to create objects, re-use them throughout the program, and manipulate these objects to get results.
OOP, short for “Object-Oriented Programming,” is a widely used concept in modern programming languages such as Java.
OOPs Concepts in Java with Examples
The following are the general OOPs concepts in Java:
1) Class
The class is one of the basic concepts of OOPs: a group of similar entities. It is only a logical component, not a physical entity. Let's understand this OOPs concept with an example: if you had a class called “Expensive Cars,” it could have objects like Mercedes, BMW, Toyota, etc. Its properties (data) can be the price or speed of these cars, while the methods performed with these cars are driving, reversing, braking, etc.
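To make this concrete, here is a minimal, illustrative sketch in Java (the class and field names are assumptions for illustration, not from the original article):

class Car {
    String name;    // property (data)
    double price;   // property (data)

    Car(String name, double price) {
        this.name = name;
        this.price = price;
    }

    void drive() {   // method (behavior)
        System.out.println(name + " is driving");
    }

    void brake() {   // method (behavior)
        System.out.println(name + " is braking");
    }
}

public class ClassDemo {
    public static void main(String[] args) {
        Car mercedes = new Car("Mercedes", 90000);   // objects created from the class
        Car bmw = new Car("BMW", 80000);
        mercedes.drive();
        bmw.brake();
    }
}

Here the class Car is the logical blueprint, while mercedes and bmw are the concrete objects created from it.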
2) Object
An object is an instance of a class. An object has a state (stored in its fields/data) and behavior (exposed through its methods). For example, Mercedes is an object of the class “Expensive Cars” with its own price and speed.
3) Inheritance
Inheritance is one of the basic concepts of OOPs in which one object acquires the properties and behaviors of a parent object. It creates a parent-child relationship between two classes. It offers a robust and natural mechanism for organizing and structuring any software.
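A short, hedged sketch of how inheritance is declared in Java (the Vehicle/Truck names are illustrative assumptions):

class Vehicle {                          // parent class
    void start() { System.out.println("Vehicle started"); }
}

class Truck extends Vehicle {            // child class acquires start() from the parent
    void loadCargo() { System.out.println("Cargo loaded"); }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Truck truck = new Truck();
        truck.start();      // inherited behavior
        truck.loadCargo();  // behavior defined in the child
    }
}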
4) Polymorphism
Polymorphism refers to one of the OOPs concepts in Java: the ability of a variable, object, or function to take on multiple forms. For example, in English, the verb “run” has a different meaning when you use it with a laptop, a foot race, or a business. Here, we understand the meaning of “run” based on the other words used with it. The same applies to polymorphism.
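As a minimal sketch (the class names are assumptions), runtime polymorphism through method overriding lets one method name take on multiple forms:

class Shape {
    double area() { return 0; }
}

class Circle extends Shape {
    double radius = 2.0;
    @Override
    double area() { return Math.PI * radius * radius; }   // one form of area()
}

class Square extends Shape {
    double side = 3.0;
    @Override
    double area() { return side * side; }                 // another form of area()
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Square() };
        for (Shape s : shapes) {
            System.out.println(s.area());  // the call resolves to the actual object's version
        }
    }
}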
5) Abstraction
Abstraction is one of the OOPs concepts in Java: the act of representing essential features without including background details. It is a technique of creating a new data type suited for a specific application. Let's understand this OOPs concept with an example: while driving a car, you do not have to be concerned with its internal workings. You only need to be concerned with parts like the steering wheel, gears, accelerator, etc.
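A hedged sketch of abstraction using an abstract class (the names are assumed for illustration): the caller sees only the essential operation, not the internal details.

abstract class AbstractCar {
    abstract void applyBrake();              // essential feature; internals not exposed

    void drive() {                           // the driver only deals with high-level controls
        System.out.println("Driving...");
        applyBrake();
    }
}

class SportsCar extends AbstractCar {
    @Override
    void applyBrake() {
        System.out.println("Brakes applied");   // hidden internal working
    }
}

public class AbstractionDemo {
    public static void main(String[] args) {
        AbstractCar car = new SportsCar();
        car.drive();
    }
}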
6) Encapsulation
Encapsulation is one of the OOPs concepts in Java in which data (fields) and the methods that operate on that data are wrapped together in a single unit, and the data is hidden from direct outside access (a short sketch appears below, after the description of Association).
7) Association
Association is a relationship between two objects. It is one of the OOPs concepts in Java which defines the multiplicity between objects. In this OOPs concept, all objects have their own separate lifecycle, and there is no owner. For example, many students can be associated with one teacher, while one student can also be associated with multiple teachers.
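Returning to encapsulation from point 6 above, here is a minimal, assumed sketch: the data is private and is only reachable through public methods.

class Account {
    private double balance;                  // hidden state: not accessible directly

    public double getBalance() {             // controlled read access
        return balance;
    }

    public void deposit(double amount) {     // controlled write access with validation
        if (amount > 0) {
            balance += amount;
        }
    }
}

public class EncapsulationDemo {
    public static void main(String[] args) {
        Account acc = new Account();
        acc.deposit(100.0);
        System.out.println(acc.getBalance());   // prints 100.0
        // acc.balance = -50;  // would not compile: the field is private
    }
}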
8) Aggregation
In this technique, all objects have their own separate lifecycle, but there is ownership, such that a child object cannot belong to another parent object. For example, consider the class/objects Department and Teacher. Here, a single teacher cannot belong to multiple departments, but even if we delete the department, the teacher object will not be destroyed.
9) Composition
Composition is a specialized form of aggregation. It is also called a “death” relationship. Child objects do not have their own lifecycle, so when the parent object is deleted, all child objects are deleted automatically. For example, take a house and its rooms. Any house can have several rooms, and one room cannot become part of two different houses. So, if you delete the house, the rooms will also be deleted.
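A minimal sketch of composition (the House/Room classes are illustrative assumptions): the rooms are created and owned by the house, so they cannot outlive it.

import java.util.ArrayList;
import java.util.List;

class House {
    // Rooms are created inside the House and never handed to another owner,
    // so their lifecycle ends together with the House.
    private final List<Room> rooms = new ArrayList<>();

    House() {
        rooms.add(new Room("Kitchen"));
        rooms.add(new Room("Bedroom"));
    }

    void listRooms() {
        for (Room r : rooms) {
            System.out.println(r.name);
        }
    }

    private static class Room {            // exists only as part of a House
        final String name;
        Room(String name) { this.name = name; }
    }
}

public class CompositionDemo {
    public static void main(String[] args) {
        new House().listRooms();
    }
}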
Supervised Machine Learning: What Is, Algorithms With Examples
What is Supervised Machine Learning?
Supervised Machine Learning is an algorithm that learns from labeled training data to help you predict outcomes for unforeseen data. In Supervised learning, you train the machine using data that is well “labeled.” It means some data is already tagged with correct answers. It can be compared to learning in the presence of a supervisor or a teacher.
Successfully building, scaling, and deploying accurate supervised machine learning models takes time and technical expertise from a team of highly skilled data scientists. Moreover, data scientists must rebuild models to make sure the insights they give remain true as the underlying data changes.
How Supervised Learning Works
Supervised machine learning uses training data sets to achieve desired results. These data sets contain inputs and the correct output that helps the model to learn faster. For example, you want to train a machine to help you predict how long it will take you to drive home from your workplace.
Here, you start by creating a set of labeled data. This data includes:
Weather conditions
Time of the day
Holidays
All these details are your inputs in this Supervised learning example. The output is the amount of time it took to drive back home on that specific day.
You instinctively know that if it’s raining outside, then it will take you longer to drive home. But the machine needs data and statistics.
Let's see how you can develop a supervised learning model for this example to help the user determine the commute time. The first thing you need to create is a training set. This training set will contain the total commute time and corresponding factors like weather, time, etc. Based on this training set, your machine might see that there is a direct relationship between the amount of rain and the time it takes you to get home.
So, it ascertains that the more it rains, the longer you will be driving to get back to your home. It might also see the connection between the time you leave work and the time you’ll be on the road.
The closer you’re to 6 p.m. the longer it takes for you to get home. Your machine may find some of the relationships with your labeled data.
Working of Supervised Machine Learning
This is the start of your data model. It begins to see how rain impacts the way people drive. It also starts to see that more people travel during a particular time of day.
Types of Supervised Machine Learning Algorithms
Following are the types of Supervised Machine Learning algorithms:
Regression: The regression technique predicts a single output value using training data.
Example: You can use regression to predict the house price from training data. The input variables will be locality, size of a house, etc.
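To make the idea concrete, here is a minimal sketch of fitting a one-variable regression by ordinary least squares in Java. The data points and variable names are made-up illustrations, not from the article:

public class SimpleLinearRegression {
    public static void main(String[] args) {
        // Illustrative training data: house size (sq. ft.) -> price (in thousands)
        double[] size  = { 800, 1000, 1200, 1500, 1800 };
        double[] price = { 120,  150,  175,  210,  250 };

        int n = size.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += size[i]; meanY += price[i]; }
        meanX /= n;
        meanY /= n;

        // Ordinary least squares: slope = cov(x, y) / var(x)
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
            num += (size[i] - meanX) * (price[i] - meanY);
            den += (size[i] - meanX) * (size[i] - meanX);
        }
        double slope = num / den;
        double intercept = meanY - slope * meanX;

        // Predict the price of an unseen 1300 sq. ft. house
        double predicted = intercept + slope * 1300;
        System.out.printf("price = %.1f + %.3f * size; prediction for 1300 sq. ft. = %.1f%n",
                intercept, slope, predicted);
    }
}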
Logistic Regression: The logistic regression method is used to estimate discrete values based on a given set of independent variables. It helps you predict the probability of occurrence of an event by fitting data to a logit function (hence the name). As it predicts a probability, its output value lies between 0 and 1.
Strengths: Outputs always have a probabilistic interpretation, and the algorithm can be regularized to avoid overfitting.
Weaknesses: Logistic regression may underperform when there are multiple or non-linear decision boundaries. This method is not flexible, so it does not capture more complex relationships.
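As a small illustration of why the output lies between 0 and 1, the logit (sigmoid) function maps any real-valued score to a probability. The coefficients below are assumed, not fitted:

public class LogisticExample {
    // The sigmoid squashes any real number into the (0, 1) range
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        double b0 = -4.0, b1 = 0.05;          // assumed model coefficients: z = b0 + b1 * x
        double x = 100;                       // some input feature value
        double probability = sigmoid(b0 + b1 * x);
        System.out.println("P(event) = " + probability);   // always between 0 and 1
    }
}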
Here are a few types of Regression Algorithms
Classification: Classification means to group the output inside a class. If the algorithm tries to label input into two distinct classes, it is called binary classification. Selecting between more than two classes is referred to as multiclass classification.
Example: Determining whether or not someone will be a defaulter of the loan.
Strengths: Classification trees perform very well in practice.
Weaknesses: Unconstrained, individual trees are prone to overfitting.
Here are a few types of Classification Algorithms
Naive Bayes Classifiers
The Naive Bayesian model (NBN) is easy to build and very useful for large datasets. This method is composed of directed acyclic graphs with one parent and several children. It assumes independence among the child nodes separated from their parent.
Decision Trees
Decision trees classify instances by sorting them based on feature values. In this method, each node is a feature of the instance to be classified, and every branch represents a value which the node can assume. It is a widely used technique for classification, in which the classifier takes the form of a tree known as a decision tree.
Decision trees can also be used for regression, helping you estimate real values (cost of purchasing a car, number of calls, total monthly sales, etc.).
Support Vector Machine
A support vector machine (SVM) is a type of learning algorithm developed in the 1990s. This method is based on results from statistical learning theory introduced by Vapnik.
SVMs are also closely connected to kernel functions, which are a central concept for most learning tasks. The kernel framework and SVMs are used in a variety of fields, including multimedia information retrieval, bioinformatics, and pattern recognition.
Supervised vs. Unsupervised Machine Learning Techniques
Input Data: Supervised learning algorithms are trained using labeled data; unsupervised learning algorithms are used against data that is not labeled.
Computational Complexity: Supervised learning is a simpler method; unsupervised learning is computationally more complex.
Accuracy: Supervised learning is a highly accurate and trustworthy method; unsupervised learning is less accurate and trustworthy.
Challenges in Supervised Machine Learning
Here are the challenges faced in supervised machine learning:
Irrelevant input features present in the training data could give inaccurate results
Data preparation and pre-processing is always a challenge.
Accuracy suffers when impossible, unlikely, and incomplete values have been inputted as training data
If a domain expert is not available, the other approach is “brute force”: you feed in all the features (input variables) you can think of and hope the right ones are among them, which could give inaccurate results.
Advantages of Supervised Learning
Supervised learning in Machine Learning allows you to collect data or produce a data output from the previous experience
Helps you to optimize performance criteria using experience
Supervised machine learning helps you to solve various types of real-world computation problems.
Disadvantages of Supervised Learning
The decision boundary might be overtrained if your training set doesn't have examples that you want to have in a class
You need to select lots of good examples from each class while you are training the classifier.
Classifying big data can be a real challenge.
Training for supervised learning needs a lot of computation time.
Best practices for Supervised Learning
Before doing anything else, you need to decide what kind of data is to be used as a training set
You need to decide the structure of the learned function and learning algorithm.
Gather corresponding outputs either from human experts or from measurements
Summary
In Supervised learning algorithms, you train the machine using data which is well “labelled.”
Training a machine to help you predict how long it will take you to drive home from your workplace is an example of supervised learning.
Regression and classification are the two types of supervised machine learning algorithms.
Supervised learning is a simpler method while Unsupervised learning is a complex method.
The biggest challenge in supervised learning is that irrelevant input features present in the training data could give inaccurate results.
The drawback of this model is that the decision boundary might be overtrained if your training set doesn't have examples that you want to have in a class.
As a best practice of supervised learning, you first need to decide what kind of data should be used as a training set.
What Is Data Mining? Basics And Its Techniques.
The foundation of the fourth industrial revolution will largely depend upon Data and Connectivity. Analysis Services capable of developing or creating data mining solutions will play a key role in this regard. It could assist in analyzing and predicting outcomes of customer purchasing behavior for targeting potential buyers. Data will become a new natural resource and the process of extracting relevant information from this unsorted data will assume immense importance. As such, a proper understanding of the term – Data Mining, its processes, and application could help us in developing a holistic approach to this buzzword.
Data Mining Basics and its Techniques
Data mining, also known as Knowledge Discovery in Data (KDD), is about searching large stores of data to uncover patterns and trends that go beyond simple analysis. This, however, is not a single-step solution but a multi-step process completed in various stages. These include:
1] Data Gathering and Preparation
It starts with data collection and its proper organization. This significantly improves the chances of finding the information that can be discovered through data mining.
2] Model Building and Evaluation
The second step in the data mining process is the application of various modeling techniques. These are used to calibrate the parameters to optimal values. The techniques employed largely depend on the analytic capabilities required to address a gamut of organizational needs and to arrive at a decision.
Let us examine some data mining techniques in brief. It is found that most organizations combine two or more data mining techniques together to form an appropriate process that meets their business requirements.
Read: What is Big Data?
Data Mining Techniques
Association – Association is one of the widely-known data mining techniques. Under this, a pattern is deciphered based on a relationship between items in the same transaction. Hence, it is also known as the relation technique. Big brand retailers rely on this technique to research customer’s buying habits/preferences. For example, when tracking people’s buying habits, retailers might identify that a customer always buys cream when they buy chocolates, and therefore suggest that the next time that they buy chocolates they might also want to buy cream.
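Association rules are usually scored by support (how often the items occur together) and confidence (how often the second item appears given the first). A minimal sketch with made-up basket data:

import java.util.Arrays;
import java.util.List;

public class AssociationRuleExample {
    public static void main(String[] args) {
        // Illustrative shopping baskets (not real data)
        List<List<String>> baskets = Arrays.asList(
            Arrays.asList("chocolates", "cream", "bread"),
            Arrays.asList("chocolates", "cream"),
            Arrays.asList("chocolates", "milk"),
            Arrays.asList("bread", "milk")
        );

        long both = baskets.stream()
                .filter(b -> b.contains("chocolates") && b.contains("cream")).count();
        long chocolatesOnly = baskets.stream()
                .filter(b -> b.contains("chocolates")).count();

        double support = (double) both / baskets.size();       // pair frequency over all baskets
        double confidence = (double) both / chocolatesOnly;     // cream given chocolates

        System.out.printf("chocolates -> cream: support=%.2f, confidence=%.2f%n",
                support, confidence);
    }
}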
Classification – This data mining technique differs from the above in that it is based on machine learning and uses mathematical techniques such as linear programming, decision trees, and neural networks. In classification, companies try to build software that can learn how to classify data items into groups. For instance, a company can define a classification in the application as: “given all records of employees who offered to resign from the company, predict the number of individuals who are likely to resign from the company in the future.” Under such a scenario, the company can classify the records of employees into two groups, namely “leave” and “stay.” It can then use its data mining software to classify the employees into the separate groups created earlier.
Clustering – Different objects exhibiting similar characteristics are grouped together in a single cluster via automation. Many such clusters are created as classes and objects (with similar characteristics) are placed in it accordingly. To understand this better, let us consider an example of book management in the library. In a library, the vast collection of books is fully cataloged. Items of the same type are listed together. This makes it easier for us to find a book of our interest. Similarly, by using the clustering technique, we can keep books that have some kinds of similarities in one cluster and assign it a suitable name. So, if a reader is looking to grab a book relevant to his interest, he only has to go to that shelf instead of searching the entire library. Thus, the clustering technique defines the classes and puts objects in each class, while in the classification techniques, objects are assigned into predefined classes.
Prediction – Prediction is a data mining technique that is often used in combination with the other data mining techniques. It involves analyzing trends, classification, pattern matching, and relations. By analyzing past events or instances in a proper sequence, one can safely predict a future event. For instance, the prediction analysis technique can be used in sales to predict future profit if sales are chosen as the independent variable and profit as the variable dependent on sales. Then, based on the historical sales and profit data, one can draw a fitted regression curve that is used for profit prediction.
Data mining is at the heart of analytics efforts across a variety of industries and disciplines like Communications, Insurance, Education, Manufacturing, Banking, Retail, and more. Therefore, having correct information about it is essential before applying the different techniques.
Also read: What is Social Media Mining?
Hadoop & Mapreduce Examples: Create First Program In Java
In this tutorial, you will learn to use Hadoop with MapReduce examples. The input data used is SalesJan2009.csv. It contains sales-related information like product name, price, payment mode, city, country of client, etc. The goal is to find out the number of products sold in each country.
First Hadoop MapReduce Program
Now in this MapReduce tutorial, we will create our first Java MapReduce program:
Data of SalesJan2009
Ensure you have Hadoop installed. Before you start with the actual process, change user to ‘hduser’ (id used while Hadoop configuration, you can switch to the userid used during your Hadoop programming config ).
su - hduser_
Step 1)
Create a new directory with the name MapReduceTutorial, as shown in the below MapReduce example
sudo mkdir MapReduceTutorial
Give permissions
sudo chmod -R 777 MapReduceTutorial
SalesMapper.java
package SalesCountry;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class SalesMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        String valueString = value.toString();
        // Split the record into fields; the country is at index 7
        String[] SingleCountryData = valueString.split(",");
        output.collect(new Text(SingleCountryData[7]), one);
    }
}
SalesCountryReducer.java
package SalesCountry;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class SalesCountryReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text t_key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        Text key = t_key;
        int frequencyForCountry = 0;
        while (values.hasNext()) {
            // Sum up all occurrence counts for this country
            IntWritable value = (IntWritable) values.next();
            frequencyForCountry += value.get();
        }
        output.collect(key, new IntWritable(frequencyForCountry));
    }
}
SalesCountryDriver.java
package SalesCountry;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;

public class SalesCountryDriver {
    public static void main(String[] args) {
        JobClient my_client = new JobClient();
        JobConf job_conf = new JobConf(SalesCountryDriver.class);

        job_conf.setJobName("SalePerCountry");

        job_conf.setOutputKeyClass(Text.class);
        job_conf.setOutputValueClass(IntWritable.class);

        job_conf.setMapperClass(SalesCountry.SalesMapper.class);
        job_conf.setReducerClass(SalesCountry.SalesCountryReducer.class);

        job_conf.setInputFormat(TextInputFormat.class);
        job_conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job_conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(job_conf, new Path(args[1]));

        my_client.setConf(job_conf);
        try {
            JobClient.runJob(job_conf);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Download Files Here
Check the file permissions of all these files
and if ‘read’ permissions are missing then grant the same-
Step 2)
Export the classpath as shown in the below Hadoop example
export CLASSPATH="$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:$HADOOP_HOME/share/hadoop/common/hadoop-common-2.2.0.jar:~/MapReduceTutorial/SalesCountry/*:$HADOOP_HOME/lib/*"
Step 3)
Compile the Java files (these files are present in the directory Final-MapReduceHandsOn). The class files will be put in the package directory.
javac -d . SalesMapper.java SalesCountryReducer.java SalesCountryDriver.java
This warning can be safely ignored.
This compilation will create a directory in a current directory named with package name specified in the java source file (i.e. SalesCountry in our case) and put all compiled class files in it.
Step 4)
Create a new file Manifest.txt
sudo gedit Manifest.txt
Add the following line to it:
Main-Class: SalesCountry.SalesCountryDriver
SalesCountry.SalesCountryDriver is the name of the main class. Please note that you have to hit the enter key at the end of this line.
Step 5)
Create a jar file
jar cfm ProductSalePerCountry.jar Manifest.txt SalesCountry/*.class
Check that the jar file is created
Step 6)
Start Hadoop
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
Step 7)
Copy the file SalesJan2009.csv into ~/inputMapReduce
Now Use below command to copy ~/inputMapReduce to HDFS.
$HADOOP_HOME/bin/hdfs dfs -copyFromLocal ~/inputMapReduce /
We can safely ignore this warning.
Verify whether a file is actually copied or not.
$HADOOP_HOME/bin/hdfs dfs -ls /inputMapReduce
Step 8)
Run the MapReduce job
$HADOOP_HOME/bin/hadoop jar ProductSalePerCountry.jar /inputMapReduce /mapreduce_output_sales
This will create an output directory named mapreduce_output_sales on HDFS. The contents of this directory will be a file containing product sales per country.
Step 9)
The result can be seen through the command interface as:
$HADOOP_HOME/bin/hdfs dfs -cat /mapreduce_output_sales/part-00000
Results can also be seen via a web interface as follows:
Open the HDFS NameNode web interface (typically http://localhost:50070) in a web browser.
Now select ‘Browse the filesystem’ and navigate to /mapreduce_output_sales
Open part-00000
Explanation of SalesMapper Class
In this section, we will understand the implementation of the SalesMapper class.
1. We begin by specifying a name of package for our class. SalesCountry is a name of our package. Please note that output of compilation, SalesMapper.class will go into a directory named by this package name: SalesCountry.
Followed by this, we import library packages.
Below snapshot shows an implementation of SalesMapper class-
Sample Code Explanation:
1. SalesMapper Class Definition-
Every mapper class must be extended from MapReduceBase class and it must implement Mapper interface.
2. Defining ‘map’ function-
public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException
The main part of the Mapper class is the ‘map()’ method, which accepts four arguments.
At every call to ‘map()’ method, a key-value pair (‘key’ and ‘value’ in this code) is passed.
The ‘map()’ method begins by splitting the input text which is received as an argument. It splits the line into fields using the comma delimiter.
String valueString = value.toString();
String[] SingleCountryData = valueString.split(",");
Here, ‘,’ is used as a delimiter.
After this, a pair is formed using a record at 7th index of array ‘SingleCountryData’ and a value ‘1’.
output.collect(new Text(SingleCountryData[7]), one);
We are choosing record at 7th index because we need Country data and it is located at 7th index in array ‘SingleCountryData’.
Please note that our input data is in the below format (where Country is at 7th index, with 0 as a starting index)-
Transaction_date,Product,Price,Payment_Type,Name,City,State,Country,Account_Created,Last_Login,Latitude,Longitude
An output of mapper is again a key-value pair which is outputted using ‘collect()’ method of ‘OutputCollector’.
Explanation of SalesCountryReducer Class
In this section, we will understand the implementation of the SalesCountryReducer class.
1. We begin by specifying the name of the package for our class. SalesCountry is the name of our package. Please note that the output of compilation, SalesCountryReducer.class, will go into a directory named by this package name: SalesCountry.
Followed by this, we import library packages.
Below snapshot shows an implementation of SalesCountryReducer class-
Code Explanation:
1. SalesCountryReducer Class Definition-
Here, the first two data types, ‘Text’ and ‘IntWritable’ are data type of input key-value to the reducer.
The last two data types, ‘Text’ and ‘IntWritable’ are data type of output generated by reducer in the form of key-value pair.
Every reducer class must be extended from MapReduceBase class and it must implement Reducer interface.
2. Defining ‘reduce’ function-
public void reduce(Text t_key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
An input to the reduce() method is a key with a list of multiple values.
For example, in our case, it will be a country name as the key with a list of 1s as its values, such as <Country, {1, 1, 1, 1}>.
The reduce() method begins by copying the key value and initializing the frequency count to 0.
Text key = t_key;
int frequencyForCountry = 0;
Then, using ‘while’ loop, we iterate through the list of values associated with the key and calculate the final frequency by summing up all the values.
while (values.hasNext()) {
    IntWritable value = (IntWritable) values.next();
    frequencyForCountry += value.get();
}
Now, we push the result to the output collector in the form of the key and the obtained frequency count.
Below code does this-
output.collect(key, new IntWritable(frequencyForCountry));
Explanation of SalesCountryDriver Class
In this section, we will understand the implementation of the SalesCountryDriver class.
1. We begin by specifying the name of the package for our class. SalesCountry is the name of our package. Please note that the output of compilation, SalesCountryDriver.class, will go into a directory named by this package name: SalesCountry.
Here is a line specifying package name followed by code to import library packages.
The driver class is responsible for setting our MapReduce job to run in Hadoop. In this class, we specify job name, data type of input/output and names of mapper and reducer classes.
3. In below code snippet, we set input and output directories which are used to consume input dataset and produce output, respectively.
arg[0] and arg[1] are the command-line arguments passed with a command given in MapReduce hands-on, i.e.,
$HADOOP_HOME/bin/hadoop jar ProductSalePerCountry.jar /inputMapReduce /mapreduce_output_sales
4. Trigger our job
Below code start execution of MapReduce job-
try {
    JobClient.runJob(job_conf);
} catch (Exception e) {
    e.printStackTrace();
}
What Are Wildcard Arguments In Generics In Java?
class Student<T> {
    T age;
    Student(T age){
        this.age = age;
    }
    public void display() {
        System.out.println("Value: "+this.age);
    }
}
public class GenericsExample {
    public static void main(String args[]) {
        Student<Float> std1 = new Student<Float>(25.5f);
        Student<Integer> std2 = new Student<Integer>(25);
        Student<String> std3 = new Student<String>("25");
        std1.display();
        std2.display();
        std3.display();
    }
}
Output
Value: 25.5
Value: 25
Value: 25
Wildcards
Instead of the typed parameter in generics (T), you can also use “?”, representing an unknown type. You can use a wildcard as a −
Type of parameter.
Field
Local variable.
The only restriction on wildcards is that you cannot use one as a type argument of a generic method while invoking it.
Java provides 3 types of wildcards, namely upper-bounded, lower-bounded, and unbounded.
Upper-bounded wildcards
An upper bound in wildcards is similar to the bounded type in generics. Using this, you can enable the usage of all the subtypes of a particular class as a typed parameter.
For example, if you want to accept a Collection object as a parameter of a method with the typed parameter being a subclass of the Number class, you just need to declare a wildcard with the Number class as the upper bound.
To create/declare an upper-bounded wildcard, you just need to specify the extends keyword after the “?” followed by the class name.
Example
The following Java example demonstrates the creation of an upper-bounded wildcard.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.HashSet;

public class UpperBoundExample {
    public static void sampleMethod(Collection<? extends Number> col) {
        for (Number num: col) {
            System.out.print(num+" ");
        }
        System.out.println("");
    }
    public static void main(String args[]) {
        Collection<Integer> col1 = new ArrayList<Integer>();
        col1.add(24); col1.add(56); col1.add(89); col1.add(75); col1.add(36);
        sampleMethod(col1);
        List<Double> col2 = Arrays.asList(22.1, 3.32, 51.4, 82.7, 95.4, 625.0);
        sampleMethod(col2);
        Collection<Double> col3 = new HashSet<Double>();
        col3.add(25.225d); col3.add(554.32d); col3.add(2254.22d); col3.add(445.21d);
        sampleMethod(col3);
    }
}
Output
24 56 89 75 36
22.1 3.32 51.4 82.7 95.4 625.0
25.225 554.32 2254.22 445.21
If you pass a collection object whose type is not a subclass of Number as a parameter to the sampleMethod() of the above program, a compile-time error will be generated.
Example
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.HashSet;

public class UpperBoundExample {
    public static void sampleMethod(Collection<? extends Number> col) {
        for (Number num: col) {
            System.out.print(num+" ");
        }
        System.out.println("");
    }
    public static void main(String args[]) {
        Collection<Integer> col1 = new ArrayList<Integer>();
        col1.add(24); col1.add(56); col1.add(89); col1.add(75); col1.add(36);
        sampleMethod(col1);
        List<Double> col2 = Arrays.asList(22.1, 3.32, 51.4, 82.7, 95.4, 625.0);
        sampleMethod(col2);
        Collection<String> col3 = new HashSet<String>();
        col3.add("Raju"); col3.add("Ramu"); col3.add("Raghu"); col3.add("Radha");
        sampleMethod(col3);
    }
}
Compile time error
sampleMethod(col3);
^
Note: Some messages have been simplified; recompile with -Xdiags:verbose to get full output
1 error
Lower-Bounded wildcards
An upper-bounded wildcard enables the usage of all the subtypes of a particular class as a typed parameter.
Similarly, if you use a lower-bounded wildcard, you can restrict the type of the “?” to a particular type or a supertype of it.
For example, if you want to accept a Collection object as a parameter of a method with the typed parameter being a superclass of the Integer class, you just need to declare a wildcard with the Integer class as the lower bound.
To create/declare a lower-bounded wildcard, you just need to specify the super keyword after the “?” followed by the class name.
The following Java example demonstrates the creation of a lower-bounded wildcard.
Example
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Iterator;

public class LowerBoundExample {
    public static void sampleMethod(Collection<? super Integer> col) {
        Iterator it = col.iterator();
        while (it.hasNext()) {
            System.out.print(it.next()+" ");
        }
        System.out.println("");
    }
    public static void main(String args[]) {
        Collection<Integer> col1 = new ArrayList<Integer>();
        col1.add(24); col1.add(56); col1.add(89); col1.add(75); col1.add(36);
        sampleMethod(col1);
        List<Number> col2 = Arrays.<Number>asList(22.1, 3.32, 51.4, 82.7, 95.4, 625.0);
        sampleMethod(col2);
    }
}
Output
24 56 89 75 36
22.1 3.32 51.4 82.7 95.4 625.0
If you pass a collection object of a type other than Integer and its supertypes as a parameter to the sampleMethod() of the above program, a compile-time error will be generated.
Example
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Iterator;
import java.util.HashSet;

public class LowerBoundExample {
    public static void sampleMethod(Collection<? super Integer> col) {
        Iterator it = col.iterator();
        while (it.hasNext()) {
            System.out.print(it.next()+" ");
        }
        System.out.println("");
    }
    public static void main(String args[]) {
        Collection<Integer> col1 = new ArrayList<Integer>();
        col1.add(24); col1.add(56); col1.add(89); col1.add(75); col1.add(36);
        sampleMethod(col1);
        List<Number> col2 = Arrays.<Number>asList(22.1, 3.32, 51.4, 82.7, 95.4, 625.0);
        sampleMethod(col2);
        Collection<Double> col3 = new HashSet<Double>();
        col3.add(25.225d); col3.add(554.32d); col3.add(2254.22d); col3.add(445.21d);
        sampleMethod(col3);
    }
}
Compile time error
sampleMethod(col3);
^
Note: Some messages have been simplified; recompile with -Xdiags:verbose to get full output
1 error
Unbounded wildcards
An unbounded wildcard is one which enables the usage of all the subtypes of an unknown type, i.e. any type (Object) is accepted as the typed parameter.
For example, if you want to accept an ArrayList of any object type as a parameter, you just need to declare an unbounded wildcard.
To create/declare an unbounded wildcard, you just need to specify the wildcard character “?” as the typed parameter within angle brackets.
Example
The following Java example demonstrates the creation of an unbounded wildcard.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UnboundedExample {
    public static void sampleMethod(List<?> col) {
        for (Object ele : col) {
            System.out.print(ele+" ");
        }
        System.out.println("");
    }
    public static void main(String args[]) {
        List<Integer> col1 = new ArrayList<Integer>();
        col1.add(24); col1.add(56); col1.add(89); col1.add(75); col1.add(36);
        sampleMethod(col1);
        List<Double> col2 = new ArrayList<Double>();
        col2.add(24.12d); col2.add(56.25d); col2.add(89.36d); col2.add(75.98d); col2.add(36.47d);
        sampleMethod(col2);
    }
}
Output
24 56 89 75 36
24.12 56.25 89.36 75.98 36.47
If you pass a List object created from an array of elements of a primitive type, a compile-time error will be generated.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UnboundedExample {
    public static void sampleMethod(List<?> col) {
        for (Object ele : col) {
            System.out.print(ele+" ");
        }
        System.out.println("");
    }
    public static void main(String args[]) {
        List<Integer> col1 = new ArrayList<Integer>();
        col1.add(24); col1.add(56); col1.add(89); col1.add(75); col1.add(36);
        sampleMethod(col1);
        List<Double> col2 = new ArrayList<Double>();
        col2.add(24.12d); col2.add(56.25d); col2.add(89.36d); col2.add(75.98d); col2.add(36.47d);
        sampleMethod(col2);
        int[] marks = {24, 56, 89, 75, 36};
        List col2 = Arrays.asList(marks);   // re-declares col2 with a list built from a primitive array
        sampleMethod(col2);
    }
}
Compile time error
UnboundedExample.java:27: error: variable col2 is already defined in method main(String[])
^
1 error
Load Testing Tutorial: What Is? How To? (Examples)
Load Testing
Load Testing is a non-functional software testing process in which the performance of software application is tested under a specific expected load. It determines how the software application behaves while being accessed by multiple users simultaneously. The goal of Load Testing is to improve performance bottlenecks and to ensure stability and smooth functioning of software application before deployment.
This testing usually identifies –
The maximum operating capacity of an application
Determine whether the current infrastructure is sufficient to run the application
Sustainability of application with respect to peak user load
Number of concurrent users that an application can support, and scalability to allow more users to access it.
It is a type of non-functional testing. In Software Engineering, Load testing is commonly used for the Client/Server, Web-based applications – both Intranet and Internet.
Need of Load Testing:
Consider the following examples
An Airline website was not able to handle 10000+ users during a festival offer.
Encyclopedia Britannica declared free access to their online database as a promotional offer. They were not able to keep up with the onslaught of traffic for weeks.
Many sites suffer delayed load times when they encounter heavy traffic. Few Facts –
$ 4.4 Billion Lost annually due to poor performance
Why Load Testing?
Load testing gives confidence in the system & its reliability and performance.
Load Testing helps identify the bottlenecks in the system under heavy user stress scenarios before they happen in a production environment.
Load testing gives excellent protection against poor performance and accommodates complementary strategies for performance management and monitoring of a production environment.
Goals of Load Testing:
Load testing identifies the following problems before moving the application to market or production:
Response time for each transaction
Network delay between the client and the server
Software design issues
Server configuration issues like a Web server, application server, database server etc.
Hardware limitation issues like CPU maximization, memory limitations, network bottleneck, etc.
Load testing will determine whether the system needs to be fine-tuned or modification of hardware and software is required to improve performance. To effectively conduct load testing, you can utilize various performance testing tools that are available to help you identify areas for improvement.
Prerequisites of load testing:
The chief metric for load testing is response time. Before you begin load testing, you must determine –
Whether the response time is already measured and compared – Quantitative
Whether the response time is applicable to the business process – Relevant
Whether the response time is justifiable – Realistic
Whether the response time is achievable – Achievable
Whether the response time is measurable using a tool or stopwatch – Measurable
An environment needs to be set up before starting the load testing:
Hardware Platform: Server machines, Processors, Memory, Disk storage, Load machine configuration, Network configuration
Software Configuration: Operating system, Server software
Strategies of Load Testing:
There are many ways to perform load testing. Following are a few load testing strategies:
Manual Load Testing: This is one of the strategies to execute load testing, but it does not produce repeatable results, cannot provide measurable levels of stress on an application and is an impossible process to coordinate.
In house developed load testing tools: An organization, which realizes the importance of load testing, may build their own tools to execute load tests.
Open source load testing tools: There are several load testing tools available as open source that are free of charge. They may not be as sophisticated as their paid counterparts, but if you are on a budget, they are the best choice.
Enterprise-class load testing tools: They usually come with capture/playback facility. They support a large number of protocols. They can simulate an exceptionally large number of users.
How to do Load Testing
The load testing process can be briefly described as below (a small illustrative load-generator sketch in Java follows this list) –
Create a dedicated Test Environment for load testing
Determine the following
Load Test Scenarios
Determine load testing transactions for an application
Prepare Data for each transaction
Number of Users accessing the system need to be predicted
Determine connection speeds. Some users may be connected via leased lines while others may use dial-up
Determine different browsers and operating systems used by the users
A configuration of all the servers like web, application and DB Servers
Test Scenario execution and monitoring. Collecting various metrics
Analyze the results. Make recommendations
Fine-tune the System
Re-test
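For illustration only, here is a minimal sketch of the core idea behind a load generator; the URL, virtual-user count, and use of plain HttpURLConnection threads are assumptions, and real load tests are normally run with the dedicated tools described below:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        int virtualUsers = 50;                       // assumed number of concurrent users
        String target = "http://localhost:8080/";    // assumed application under test
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);

        for (int i = 0; i < virtualUsers; i++) {
            pool.submit(() -> {
                try {
                    long start = System.currentTimeMillis();
                    HttpURLConnection con = (HttpURLConnection) new URL(target).openConnection();
                    int code = con.getResponseCode();                 // one request per virtual user
                    long elapsed = System.currentTimeMillis() - start;
                    System.out.println("status=" + code + " responseTimeMs=" + elapsed);
                } catch (Exception e) {
                    System.out.println("request failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);   // wait for all virtual users to finish
    }
}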
Guidelines for load testing
Load testing should be planned once the application becomes functionally stable.
A large number of unique data should be ready in the data pool
Number of users should be decided for each scenario or scripts
Avoid creation of detailed logs to conserve the disk IO space
Try to avoid downloading of images in the site
In the process of executing load testing test cases, the consistency of response time over the elapsed period should be logged and the same should be compared with various test runs.
Difference between Load and Stress Testing:
Load Testing vs. Stress Testing
Load testing is done to recognize the upper limit of the system, set the SLA of the app, and check how the system handles a heavy load. Stress testing determines the breaking point of the system to reveal the maximum point after which it breaks.
Generating increased load on a web application is the main aim of load testing. Stress testing aims to ensure that under a sudden high load for a considerable duration, the servers don't crash.
The attributes which are checked in a load test are peak performance, server quantity, and response time. Stress testing checks stability, response time, etc.
In load testing, the load limit is at the threshold of a break. In stress testing, the load limit is above the threshold of a break.
Difference between Functional and Load Testing:
Functional Testing vs. Load Testing
Results of functional tests are easily predictable, as we have proper steps and preconditions defined. Results of load tests are unpredictable.
Results of functional tests vary slightly. Load test results vary drastically.
The frequency of executing functional testing will be high. The frequency of executing load testing will be low.
Results of functional tests are dependent on the test data. Load testing depends on the number of users.
Load Testing Tools:
LoadNinja: LoadNinja is revolutionizing the way we load test. This cloud-based load testing tool empowers teams to record and instantly play back comprehensive load tests, without complex dynamic correlation, and run these load tests in real browsers at scale. Teams are able to increase test coverage and cut load testing time by over 60%.
LoadRunner: LoadRunner is an HP tool used to test applications under normal and peak load conditions. LoadRunner generates load by creating virtual users that emulate network traffic. It simulates real-time usage like a production environment and gives graphical results.
Read more about Loadrunner here.
Advantages of Load Testing:
Performance bottleneck identification before production
Improves the scalability of the system
Minimize risk related to system downtime
Reduced costs of failure
Increase customer satisfaction
Disadvantages of Load Testing:
One needs programming knowledge to use tools for conducting a load test in the context of software testing.
Tools can be expensive as pricing depends on the number of virtual users supported.
Summary:
Load testing is defined as a type of software testing that determines a system’s performance under real-life load conditions.
Load testing typically improves performance bottlenecks, scalability and stability of the application before it is available for production.
This testing helps to identify the maximum operating capacity of applications as well as system bottlenecks.
Load testing in software testing is important because if ignored, it can cause financial losses for an organization.