WT Slips TYBCS 11 to 20

 @Slip-11


Q. 1) Write a JavaScript program to accept the name of a student; change the font color to red and the font size to 18 if the student name is present, otherwise on clicking on the empty text box display an image which changes its size.

(Use onblur, onload, onmousehover, onmouseclick, onmouseup)

Ans:

<!DOCTYPE html>
<html>
<head>
<title>JavaScript Example</title>
<style>
#name {
  font-size: 14px;
  color: black;
}
#img {
  display: none; /* image stays hidden until the empty text box is left */
}
</style>
</head>
<body>
<input type="text" id="name" onblur="changeStyle()" onmouseover="changeSize()" onmouseout="resetSize()" onmousedown="changeColor()" onmouseup="resetColor()">
<img id="img" src="https://via.placeholder.com/150" onload="changeImageSize()">
<script>
function changeStyle() {
  let name = document.getElementById("name").value;
  if (name) {
    document.getElementById("name").style.fontSize = "18px";
    document.getElementById("name").style.color = "red";
  } else {
    document.getElementById("img").style.display = "block";
  }
}

function changeSize() {
  document.getElementById("name").style.fontSize = "16px";
}

function resetSize() {
  document.getElementById("name").style.fontSize = "14px";
}

function changeColor() {
  document.getElementById("name").style.color = "blue";
}

function resetColor() {
  document.getElementById("name").style.color = "red";
}

function changeImageSize() {
  document.getElementById("img").style.width = "200px";
  document.getElementById("img").style.height = "200px";
}
</script>
</body>
</html>



Q. 2) Create the following dataset in Python and convert the categorical values into numeric format. Apply the Apriori algorithm on the dataset to generate the frequent itemsets and association rules. Repeat the process with different min_sup values.

TID = {1: ["butter", "bread", "milk"], 2: ["butter", "flour", "milk", "sugar"], 3: ["butter", "eggs", "milk", "salt"], 4: ["eggs"], 5: ["butter", "flour", "milk", "salt"]}



Ans:


import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Creating the dataset
dataset = [['butter', 'bread', 'milk'],
           ['butter', 'flour', 'milk', 'sugar'],
           ['butter', 'eggs', 'milk', 'salt'],
           ['eggs'],
           ['butter', 'flour', 'milk', 'salt']]

# Converting the categorical values into numeric (one-hot/boolean) format
te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)

# Generating frequent itemsets using the Apriori algorithm with different min_sup values
min_sup_values = [0.4, 0.3, 0.2]
for min_sup in min_sup_values:
    frequent_itemsets = apriori(df, min_support=min_sup, use_colnames=True)
    print("Frequent Itemsets with minimum support of", min_sup)
    print(frequent_itemsets)

    # Generating association rules
    rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
    print("Association Rules with minimum support of", min_sup)
    print(rules)




@Slip-12




Q. 1) Write an AJAX program to read a contact.dat file and print the contents of the file in a tabular format when the user clicks on the print button. The contact.dat file should contain srno, name, residence number, mobile number, address. [Enter at least 3 records in the contact.dat file]


Ans:


HTML file

<!DOCTYPE html>
<html>
<head>
<title>Contact List</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="script.js"></script>
</head>
<body>
<button id="printBtn">Print Contacts</button>
<br><br>
<table id="contactTable">
<thead>
<tr>
<th>Sr. No.</th>
<th>Name</th>
<th>Residence Number</th>
<th>Mobile Number</th>
<th>Address</th>
</tr>
</thead>
<tbody>
<!-- Contact list will be displayed here -->
</tbody>
</table>
</body>
</html>



AJAX file (script.js)

$(document).ready(function() {
  // Event listener for the print button
  $("#printBtn").click(function() {
    // AJAX request to read the contact.dat file
    $.ajax({
      url: "contact.dat",
      dataType: "text",
      success: function(data) {
        // Split the file contents into lines
        var lines = data.split("\n");

        // Iterate over each line and create a table row
        var tableRows = "";
        for (var i = 0; i < lines.length; i++) {
          var columns = lines[i].split(",");
          if (columns.length == 5) { // Only process valid rows
            tableRows += "<tr>";
            for (var j = 0; j < columns.length; j++) {
              tableRows += "<td>" + columns[j] + "</td>";
            }
            tableRows += "</tr>";
          }
        }

        // Add the table rows to the table body
        $("#contactTable tbody").html(tableRows);
      },
      error: function(jqXHR, textStatus, errorThrown) {
        alert("Error: " + errorThrown);
      }
    });
  });
});
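
Sample contact.dat file (the three records below are only illustrative; any comma-separated lines with the five fields srno, name, residence number, mobile number and address will work with the script above):

1,Amit Sharma,020-2435678,9876543210,Pune
2,Priya Patil,020-2567891,9123456780,Mumbai
3,Rahul Deshmukh,020-2678912,9988776655,Nashik

Note that the page has to be served over HTTP (for example from a local web server), so that the AJAX call is allowed to fetch contact.dat.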



Q. 2) Create a 'heights-and-weights' dataset. Build a linear regression model by identifying the independent and target variables. Split the variables into training and testing sets and print them. Build a simple linear regression model for predicting purchases.


Ans:


import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Create a random dataset with 10 samples
heights = np.random.normal(170, 10, 10)
weights = np.random.normal(70, 5, 10)

# Combine the two arrays into a single dataset
dataset = pd.DataFrame({'Height': heights, 'Weight': weights})

# Split the dataset into training and testing sets and print them
X_train, X_test, y_train, y_test = train_test_split(dataset['Height'], dataset['Weight'], test_size=0.2, random_state=42)
print(X_train, X_test, y_train, y_test)

# Create a Linear Regression model and fit it to the training data
lr_model = LinearRegression()
lr_model.fit(X_train.values.reshape(-1, 1), y_train)

# Print the model coefficients
print('Model Coefficients:', lr_model.coef_)

# Predict the weights for the test data and print the predictions
y_pred = lr_model.predict(X_test.values.reshape(-1, 1))
print('Predictions:', y_pred)




@Slip-13


Q. 1) Write an AJAX program where the user is requested to write his or her name in a text box, and the server keeps sending back responses while the user is typing. If the user name is not entered, the message displayed will be "Stranger, please tell me your name!". If the name is Rohit, Virat, Dhoni, Ashwin or Harbhajan, the server responds with "Hello, master!". If the name is anything else, the message will be "<name>, I don't know you!".


Ans:


HTML file

<!DOCTYPE html>
<html>
<head>
<title>AJAX Program</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
</head>
<body>
<label for="name">Enter your name:</label>
<input type="text" id="name" name="name">
<div id="response"></div>

<script src="ajax.js"></script>
</body>
</html>



AJAX file (ajax.js)

$(document).ready(function() {
  // Attach an event listener to the name input field
  $('#name').on('input', function() {
    // Get the name entered by the user
    var name = $(this).val();

    // Send an AJAX request to the server
    $.ajax({
      url: 'server.php',
      type: 'POST',
      data: { name: name },
      success: function(response) {
        // Update the response div with the server's response
        $('#response').html(response);
      }
    });
  });
});



File name: server.php

<?php

// Get the name entered by the user
$name = $_POST['name'];

// Check if the name is empty
if (empty($name)) {
    echo 'Stranger, please tell me your name!';
}
// Check if the name is one of the master names
else if ($name == 'Rohit' || $name == 'Virat' || $name == 'Dhoni' || $name == 'Ashwin' || $name == 'Harbhajan') {
    echo 'Hello, master!';
}
// Otherwise, the server doesn't know the user
else {
    echo $name . ', I don\'t know you!';
}



Q. 2) Download the nursery dataset from UCI. Build a linear regression model by identifying the independent and target variables. Split the variables into training and testing sets and print them. Build a simple linear regression model for predicting purchases.

Ans:


import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Load the dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/nursery/nursery.data"
names = ['parents', 'has_nurs', 'form', 'children', 'housing', 'finance', 'social', 'health', 'class']
dataset = pd.read_csv(url, names=names)

# Identify independent and target variables
X = dataset.drop('class', axis=1)
# The target is categorical, so encode it numerically for the regression model
y = dataset['class'].astype('category').cat.codes

# Convert categorical variables into numerical variables using one-hot encoding
X = pd.get_dummies(X)

# Split into training and testing sets and print them
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train, X_test, y_train, y_test)

# Build a linear regression model
model = LinearRegression()
model.fit(X_train, y_train)

# Print the coefficients of the model
print("Intercept: ", model.intercept_)
print("Coefficients: ", model.coef_)

# Predict the target variable for the testing set
y_pred = model.predict(X_test)

# Evaluate the model using Mean Squared Error (MSE)
mse = np.mean((y_test - y_pred) ** 2)
print("MSE: ", mse)



@Slip-14


Q. 1) Create a TEACHER table as follows: TEACHER(tno, tname, qualification, salary). Write an AJAX program to select a teacher's name and print the selected teacher's details.

Ans:


HTML file

<!DOCTYPE html>
<html>
<head>
<title>Teacher Details</title>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
</head>
<body>
<select id="teacher-list">
<option value="">--Select Teacher--</option>
<option value="1">John Doe</option>
<option value="2">Jane Smith</option>
<option value="3">Bob Johnson</option>
</select>
<button id="submit-btn">Get Details</button>
<div id="details"></div>

<script>
$(document).ready(function() {
  $('#submit-btn').click(function() {
    var tno = $('#teacher-list').val();
    if (tno == '') {
      alert('Please select a teacher.');
      return;
    }
    $.ajax({
      url: 'teacherdetails.php',
      method: 'POST',
      data: {tno: tno},
      success: function(response) {
        $('#details').html(response);
      },
      error: function(xhr, status, error) {
        console.log(xhr.responseText);
      }
    });
  });
});
</script>
</body>
</html>


PHP file: teacherdetails.php

<?php
// Connect to the database
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "database_name";
$conn = mysqli_connect($servername, $username, $password, $dbname);

// Check connection
if (!$conn) {
    die("Connection failed: " . mysqli_connect_error());
}

// Retrieve selected teacher details
if (isset($_POST['tno'])) {
    $tno = $_POST['tno'];
    $sql = "SELECT * FROM TEACHER WHERE tno = '$tno'";
    $result = mysqli_query($conn, $sql);

    if (mysqli_num_rows($result) > 0) {
        $row = mysqli_fetch_assoc($result);
        echo "Teacher Name: " . $row['tname'] . "<br>";
        echo "Qualification: " . $row['qualification'] . "<br>";
        echo "Salary: " . $row['salary'] . "<br>";
    } else {
        echo "No data found.";
    }
}

// Close the database connection
mysqli_close($conn);
?>
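
The question also asks to create the TEACHER table itself; a minimal MySQL sketch is shown below. The qualification and salary values are only illustrative, and the tno/tname values match the options hard-coded in the dropdown above.

SQL (MySQL):

-- Create the TEACHER table used by teacherdetails.php
CREATE TABLE TEACHER (
    tno INT PRIMARY KEY,
    tname VARCHAR(50),
    qualification VARCHAR(50),
    salary DECIMAL(10,2)
);

-- Sample rows; the tno values correspond to the dropdown options
INSERT INTO TEACHER (tno, tname, qualification, salary) VALUES
(1, 'John Doe', 'M.Sc.', 45000.00),
(2, 'Jane Smith', 'Ph.D.', 60000.00),
(3, 'Bob Johnson', 'M.C.A.', 50000.00);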


Q. 2) Create the following dataset in Python and convert the categorical values into numeric format. Apply the Apriori algorithm on the dataset to generate the frequent itemsets and association rules. Repeat the process with different min_sup values.

TID = {1: ["apple", "mango", "banana"], 2: ["mango", "banana", "cabbage", "carrots"], 3: ["mango", "banana", "carrots"], 4: ["mango", "carrots"]}

Ans:


import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Create the dataset
TID = {1: ["apple", "mango", "banana"],
       2: ["mango", "banana", "cabbage", "carrots"],
       3: ["mango", "banana", "carrots"],
       4: ["mango", "carrots"]}

# Convert the categorical values into numeric (one-hot/boolean) format
transactions = [TID[i] for i in TID]
te = TransactionEncoder()
te_ary = te.fit(transactions).transform(transactions)
df = pd.DataFrame(te_ary, columns=te.columns_)

# Apply the Apriori algorithm with different min_sup values
min_sup_values = [0.25, 0.5, 0.75]
for min_sup in min_sup_values:
    frequent_itemsets = apriori(df, min_support=min_sup, use_colnames=True)
    print("Frequent itemsets with min_sup =", min_sup)
    print(frequent_itemsets)

    # Generate association rules from the frequent itemsets
    rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
    print(rules)
    print("\n")


@Slip-15


Q. 1) Write an AJAX program to fetch suggestions while the user is typing in a textbox (e.g. like Google suggestions. Hint: create an array of suggestions and display the matching strings).


Ans:

<!DOCTYPE html>
<html>
<head>
<title>AJAX Auto Suggestions Example</title>
<script>
function fetchSuggestions(str) {
  if (str.length == 0) {
    document.getElementById("suggestions").innerHTML = "";
    return;
  }
  var suggestions = ["apple", "banana", "cherry", "dates", "elderberry", "fig", "grape", "honeydew", "kiwi", "lemon"];
  var matches = [];
  for (var i = 0; i < suggestions.length; i++) {
    if (suggestions[i].toLowerCase().startsWith(str.toLowerCase())) {
      matches.push(suggestions[i]);
    }
  }
  if (matches.length > 0) {
    document.getElementById("suggestions").innerHTML = matches.join("<br>");
  } else {
    document.getElementById("suggestions").innerHTML = "No suggestions found";
  }
}
</script>
</head>
<body>
<input type="text" onkeyup="fetchSuggestions(this.value)">
<div id="suggestions"></div>
</body>
</html>



Q. 2) Create the following dataset in Python and convert the categorical values into numeric format. Apply the Apriori algorithm on the dataset to generate the frequent itemsets and association rules. Repeat the process with different min_sup values.

No | Company | Model  | Year
1  | Tata    | Nexon  | 2017
2  | MG      | Astor  | 2021
3  | Kia     | Seltos | 2019
4  | Hyundai | Creta  | 2015

Ans:

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Create the dataset
data = {'No': [1, 2, 3, 4],
        'Company': ['Tata', 'MG', 'Kia', 'Hyundai'],
        'Model': ['Nexon', 'Astor', 'Seltos', 'Creta'],
        'Year': [2017, 2021, 2019, 2015]}

df = pd.DataFrame(data)

# Convert categorical values into numeric format
df['Company'] = pd.Categorical(df['Company'])
df['Model'] = pd.Categorical(df['Model'])

df['Company'] = df['Company'].cat.codes
df['Model'] = df['Model'].cat.codes

print(df)

# Apriori expects a boolean (one-hot) matrix, so encode each row's values as items
basket = pd.get_dummies(df[['Company', 'Model', 'Year']].astype(str))

# Generate frequent itemsets and association rules, repeating with different min_sup values
for min_sup in [0.25, 0.5, 0.75]:
    frequent_itemsets = apriori(basket, min_support=min_sup, use_colnames=True)
    print("Frequent itemsets with min_sup =", min_sup)
    print(frequent_itemsets)

    # Generate association rules (confidence >= 0.7) when any itemsets were found
    if not frequent_itemsets.empty:
        rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.7)
        print(rules)


@Slip-16


Q. 1) Write an AJAX program to get book details from an XML file when the user selects a book name. Create an XML file for storing details of books (title, author, year, price).


Ans:


XML file: book_details.xml


<books>

<book>

<title>The Great Gatsby</title>

<author>F. Scott Fitzgerald</author>

<year>1925</year>

<price>10.99</price>

</book>

<book>

<title>To Kill a Mockingbird</title>

<author>Harper Lee</author>

<year>1960</year>

<price>8.99</price>

</book>

<book>

<title>1984</title>

<author>George Orwell</author>

<year>1949</year>

<price>6.99</price>

</book>

<book>

<title>Pride and Prejudice</title>

<author>Jane Austen</author>

<year>1813</year>

<price>7.99</price>

</book>

</books>



HTML file with the AJAX script

<!DOCTYPE html>
<html>
<head>
<title>Book Details</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
  $("select").change(function(){
    var book = $(this).val();
    $.ajax({
      url: "book_details.xml",
      dataType: "xml",
      success: function(xml){
        $(xml).find('book').each(function(){
          var title = $(this).find('title').text();
          if (title == book) {
            var author = $(this).find('author').text();
            var year = $(this).find('year').text();
            var price = $(this).find('price').text();
            $("#details").html("Author: " + author + "<br>Year: " + year + "<br>Price: " + price);
          }
        });
      }
    });
  });
});
</script>
</head>
<body>
<select>
<option>Select a book</option>
<option>The Great Gatsby</option>
<option>To Kill a Mockingbird</option>
<option>1984</option>
<option>Pride and Prejudice</option>
</select>
<div id="details"></div>
</body>
</html>



Q. 2) Consider any text paragraph. Preprocess the text to remove any special characters and digits. Generate the summary using the extractive summarization process.

Ans:


import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize
from heapq import nlargest

nltk.download('punkt')
nltk.download('stopwords')

# Sample text paragraph (you can use any text)
text = "Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human languages, in particular how to program computers to process and analyze large amounts of natural language data. Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation. The history of natural language processing generally started in the 1950s, although work can be found from earlier periods."

# Remove special characters and digits (keep full stops so sentences can still be detected)
text = re.sub('[^a-zA-Z.]', ' ', text)

# Tokenize the text into sentences
sentences = sent_tokenize(text)

# Tokenize each sentence into words and remove stop words
stop_words = set(stopwords.words('english'))
words = []
for sentence in sentences:
    words.extend(word_tokenize(sentence))
words = [word.lower() for word in words if word.lower() not in stop_words]

# Calculate word frequency
word_freq = nltk.FreqDist(words)

# Calculate sentence scores based on word frequency
sentence_scores = {}
for sentence in sentences:
    for word in word_tokenize(sentence.lower()):
        if word in word_freq:
            if len(sentence.split(' ')) < 30:
                if sentence not in sentence_scores:
                    sentence_scores[sentence] = word_freq[word]
                else:
                    sentence_scores[sentence] += word_freq[word]

# Generate the summary by selecting the top 3 sentences with the highest scores
summary_sentences = nlargest(3, sentence_scores, key=sentence_scores.get)
summary = ' '.join(summary_sentences)

print(summary)


@Slip-17


Q. 1) Write a JavaScript program to show a "Hello Good Morning" message in an alert box on the onload event, and display the student registration form.


Ans:


<!DOCTYPE html>
<html>
<head>
<title>Student Registration Form</title>
<script>
window.onload = function() {
  alert("Hello Good Morning!");
};
</script>
</head>
<body>
<h1>Student Registration Form</h1>
<form>
<label for="name">Name:</label>
<input type="text" id="name" name="name" required><br><br>
<label for="email">Email:</label>
<input type="email" id="email" name="email" required><br><br>
<label for="phone">Phone:</label>
<input type="tel" id="phone" name="phone" required><br><br>
<label for="address">Address:</label>
<textarea id="address" name="address" required></textarea><br><br>
<input type="submit" value="Submit">
</form>
</body>
</html>



Q. 2) Consider the text paragraph: "So, keep working. Keep striving. Never give up. Fall down seven times, get up eight. Ease is a greater threat to progress than hardship. Ease is a greater threat to progress than hardship. So, keep moving, keep growing, keep learning. See you at work." Preprocess the text to remove any special characters and digits. Generate the summary using the extractive summarization process.

Ans:


import re
import nltk
from nltk.tokenize import sent_tokenize

nltk.download('punkt')

# Text paragraph
text = "So, keep working. Keep striving. Never give up. Fall down seven times, get up eight. Ease is a greater threat to progress than hardship. Ease is a greater threat to progress than hardship. So, keep moving, keep growing, keep learning. See you at work."

# Remove special characters and digits (keep full stops so the sentences can still be split)
text = re.sub('[^A-Za-z. ]+', ' ', text)

# Tokenize the sentences
sentences = sent_tokenize(text)

# Score each sentence by its number of words
# (sentences with more words get a higher score)
scores = {}
for sentence in sentences:
    words = sentence.split()
    scores[sentence] = len(words)

# Sort the sentences by their scores
sorted_sentences = sorted(scores.items(), key=lambda x: x[1], reverse=True)

# Extract the top 2 sentences with the highest scores as the summary
summary_sentences = [sentence[0] for sentence in sorted_sentences[:2]]
summary = " ".join(summary_sentences)

# Print the summary
print(summary)



@Slip-18


Q. 1) Write a JavaScript program to print Fibonacci numbers on an onclick event.

Ans:


<!DOCTYPE html>
<html>
<head>
<title>Fibonacci Numbers</title>
<script>
function generateFibonacci() {
  // Get the input value from the user
  var input = document.getElementById("inputNumber").value;
  var output = document.getElementById("output");

  // Convert the input to a number
  var n = parseInt(input);

  // Create an array to store the Fibonacci sequence
  var fib = [];

  // Calculate the Fibonacci sequence up to n
  fib[0] = 0;
  fib[1] = 1;
  for (var i = 2; i <= n; i++) {
    fib[i] = fib[i-1] + fib[i-2];
  }

  // Display the Fibonacci sequence
  output.innerHTML = "Fibonacci sequence up to " + n + ": " + fib.join(", ");
}
</script>
</head>
<body>
<h1>Fibonacci Numbers</h1>
<p>Enter a number:</p>
<input type="text" id="inputNumber">
<button onclick="generateFibonacci()">Generate Fibonacci</button>
<p id="output"></p>
</body>
</html>



Q. 2) Consider any text paragraph. Remove the stopwords. Tokenize the paragraph to extract words and sentences. Calculate the word frequency distribution and plot the frequencies. Plot the wordcloud of the text.



Ans:


# Install the libraries
!pip install nltk matplotlib wordcloud

# Import the necessary modules
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.probability import FreqDist
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Download the tokenizer and stopwords corpora
nltk.download('punkt')
nltk.download('stopwords')

# Define the text paragraph
text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed tristique ante et velit vestibulum, vel pharetra orci iaculis. Nullam mattis risus quis augue tincidunt rhoncus. Morbi varius, arcu vitae scelerisque laoreet, magna est imperdiet quam, sit amet ultrices lectus justo id enim. Sed dictum suscipit commodo. Sed maximus consequat risus, nec pharetra nibh interdum quis. Etiam eget quam vel augue dictum dignissim sit amet nec elit. Nunc at sapien dolor. Nulla vitae iaculis lorem. Suspendisse potenti. Sed non ante turpis. Morbi consectetur, arcu a vestibulum suscipit, mauris eros convallis nibh, nec feugiat orci enim sit amet enim. Aliquam erat volutpat. Etiam vel nisi id neque viverra dapibus non non lectus."

# Tokenize the paragraph to extract words and sentences
words = word_tokenize(text.lower())
sentences = sent_tokenize(text)

# Remove the stopwords from the extracted words
stop_words = set(stopwords.words('english'))
filtered_words = [word for word in words if word.casefold() not in stop_words]

# Calculate the word frequency distribution and plot the frequencies using matplotlib
fdist = FreqDist(filtered_words)
fdist.plot(30, cumulative=False)
plt.show()

# Plot the wordcloud of the text using wordcloud
wordcloud = WordCloud(width=800, height=800,
                      background_color='white',
                      stopwords=stop_words,
                      min_font_size=10).generate(text)

# Plot the WordCloud image
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)

plt.show()



@Slip-19


Q. 1) Write a JavaScript program to validate the user name and password on the onSubmit event.

Ans:


<!DOCTYPE html>
<html>
  <head>
    <title>Validate User Name and Password</title>
    <script>
      function validateForm() {
        var username = document.forms["myForm"]["username"].value;
        var password = document.forms["myForm"]["password"].value;

        if (username == "") {
          alert("Username must be filled out");
          return false;
        }

        if (password == "") {
          alert("Password must be filled out");
          return false;
        }
      }
    </script>
  </head>
  <body>
    <h2>Validate User Name and Password</h2>
    <form name="myForm" onsubmit="return validateForm()" method="post">
      <label for="username">Username:</label>
      <input type="text" id="username" name="username"><br><br>
      <label for="password">Password:</label>
      <input type="password" id="password" name="password"><br><br>
      <input type="submit" value="Submit">
    </form>
  </body>
</html>


Q. 2) Download the movie_review.csv dataset from Kaggle using the following link: https://www.kaggle.com/nltkdata/movie-review/version/3?select=movie_review.csv. Perform sentiment analysis on the dataset and create a wordcloud.


Ans:


import pandas as pd
from textblob import TextBlob
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# Load the dataset
df = pd.read_csv('movie_review.csv')

# Add a column for sentiment analysis using TextBlob
# ('Review' is assumed to be the column holding the review text; adjust it to match the CSV header)
df['Sentiment'] = df['Review'].apply(lambda x: TextBlob(x).sentiment.polarity)

# Create a new dataframe for positive reviews only
pos_df = df[df['Sentiment'] > 0.2]

# Create a wordcloud for positive reviews
wordcloud = WordCloud(width=800, height=800,
                      background_color='white',
                      stopwords=STOPWORDS,
                      min_font_size=10).generate(' '.join(pos_df['Review']))

# Plot the wordcloud
plt.figure(figsize=(8, 8), facecolor=None)
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)

plt.show()



@Slip-20


Q. 1) Create a student.xml file containing at least 5 students' information.

Ans:


<?xml version="1.0"?>

<students>

  <student>

    <name>John Doe</name>

    <age>21</age>

    <gender>Male</gender>

    <major>Computer Science</major>

    <gpa>3.8</gpa>

  </student>

  <student>

    <name>Jane Smith</name>

    <age>19</age>

    <gender>Female</gender>

    <major>Business</major>

    <gpa>3.5</gpa>

  </student>

  <student>

    <name>Tom Johnson</name>

    <age>20</age>

    <gender>Male</gender>

    <major>Engineering</major>

    <gpa>3.2</gpa>

  </student>

  <student>

    <name>Sara Lee</name>

    <age>22</age>

    <gender>Female</gender>

    <major>Psychology</major>

    <gpa>3.6</gpa>

  </student>

  <student>

    <name>Mike Brown</name>

    <age>18</age>

    <gender>Male</gender>

    <major>Education</major>

    <gpa>3.4</gpa>

  </student>

</students>



Q. 2) Consider the text paragraph: """Hello all, Welcome to Python Programming Academy. Python Programming Academy is a nice platform to learn new programming skills. It is difficult to get enrolled in this Academy.""" Remove the stopwords.

Ans:


import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('punkt')

# Text paragraph
text = "Hello all, Welcome to Python Programming Academy. Python Programming Academy is a nice platform to learn new programming skills. It is difficult to get enrolled in this Academy."

# Tokenize the text
tokens = nltk.word_tokenize(text)

# Remove stopwords
stop_words = set(stopwords.words('english'))
filtered_tokens = [word for word in tokens if not word.lower() in stop_words]

# Print the filtered tokens
print(filtered_tokens)
