This document summarizes a project report on an online movie ticket booking system developed by a group of students. It covers the scope and objectives, the system architecture built with tools such as ASP.NET and SQL Server, and the database design, presented through entity relationship diagrams and data dictionaries. It also describes the coding, testing, and screenshots of the system's interfaces for functions such as admin login, adding movies and shows, booking tickets, and user registration. The report details the system developed as part of a college course project.
Scaling Up: How Switching to Apache Spark Improved Performance, Reliability...Databricks
This document summarizes how switching from Hadoop to Spark for data science applications improved performance, reliability, and reduced costs at Salesforce. Some key issues addressed were handling large datasets across many S3 prefixes, efficiently computing segment overlap on skewed user data, and performing joins on highly skewed datasets. These changes resulted in applications that were 100x faster, used 10x less data, had fewer failures, and reduced infrastructure costs.
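The talk is summarized here without code, but the skew-handling idea it names, joining on highly skewed keys, is commonly implemented by salting the join key. A minimal PySpark sketch, assuming a large skewed_df and a smaller small_df joined on a hypothetical user_id column (none of these names are from the talk):

from pyspark.sql.functions import explode, floor, lit, rand, sequence

N = 16  # number of salt buckets; a tuning knob, not a value from the talk
# Spread each hot key across N buckets with a random salt...
skewed_salted = skewed_df.withColumn('salt', floor(rand() * N).cast('int'))
# ...and replicate every row of the small side across all N salts
small_replicated = small_df.withColumn('salt', explode(sequence(lit(0), lit(N - 1))))
# Joining on (user_id, salt) splits each hot key across N tasks
joined = skewed_salted.join(small_replicated, on=['user_id', 'salt'])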
This document contains solutions to questions from a computer science examination. It includes questions on topics like Python, Pandas, SQL, data visualization, and computer networks. The solutions demonstrate how to write Python code to create and manipulate dataframes, plot charts, and perform SQL queries. Examples of network topologies and devices like switches, modems, and gateways are also provided. The document aims to test students' understanding of key concepts in informatics practices.
The document describes Java programs to demonstrate various concepts:
1. Generating a WiFi password based on user input of name, city, age, and gender.
2. Finding the character with the second highest frequency in a string.
3. Finding the longest subsequence of the same character in a string.
4. Generating a WiFi key based on user input of name, city, age, and gender.
5. Deleting non-empty text files from a specified folder.
6. Checking if a character appears twice in two strings and returning the strings without that character.
7. Calculating the number of days between a given date and the current date.
8. Drawing shapes like
R is an open source statistical computing platform that is rapidly growing in popularity within academia. It allows for statistical analysis and data visualization. The document provides an introduction to basic R functions and syntax for assigning values, working with data frames, filtering data, plotting, and connecting to databases. More advanced techniques demonstrated include decision trees, random forests, and other data mining algorithms.
Need a detailed analysis of what this code-model is doing- Thanks #St.pdfactexerode
Need a detailed analysis of what this code/model is doing. Thanks
#Step 1: Import the required Python libraries:
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam, SGD
from keras.datasets import cifar10
#Step 2: Load the data.
#Loading the CIFAR10 data
(X, y), (_, _) = keras.datasets.cifar10.load_data()
#Selecting a single class of images
#The class label was chosen arbitrarily; any label
#between 0 and 9 can be used
X = X[y.flatten() == 8]
#Step 3: Define parameters to be used in later processes.
#Defining the Input shape
image_shape = (32, 32, 3)
latent_dimensions = 100
#Step 4: Define a utility function to build the generator.
def build_generator():
    model = Sequential()

    #Building the input layer
    model.add(Dense(128 * 8 * 8, activation="relu",
                    input_dim=latent_dimensions))
    model.add(Reshape((8, 8, 128)))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.78))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.78))
    model.add(Activation("relu"))
    model.add(Conv2D(3, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))

    #Generating the output image
    noise = Input(shape=(latent_dimensions,))
    image = model(noise)
    return Model(noise, image)
#Step 5: Define a utility function to build the discriminator.
def build_discriminator():
    #Building the convolutional layers
    #to classify whether an image is real or fake
    model = Sequential()
    model.add(Conv2D(32, kernel_size=3, strides=2,
                     input_shape=image_shape, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
    model.add(ZeroPadding2D(padding=((0,1),(0,1))))
    model.add(BatchNormalization(momentum=0.82))
    model.add(LeakyReLU(alpha=0.25))
    model.add(Dropout(0.25))
    model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    model.add(BatchNormalization(momentum=0.82))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.25))
    model.add(Dropout(0.25))

    #Building the output layer
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))

    image = Input(shape=image_shape)
    validity = model(image)
    return Model(image, validity)
#Step 6: Define a utility function to display the generated images.
def display_images():
    # Generate a batch of random noise
    noise = np.random.normal(0, 1, (16, latent_dimensions))
    # Generate images from the noise
    generated_images = generator.predict(noise)
    # Rescale the images to 0-1 (the generator's tanh output lies in -1..1)
    generated_images = 0.5 * generated_images + 0.5
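The excerpt cuts off inside display_images() and never shows the document's own training code. For orientation only, here is a minimal sketch of how these generator and discriminator builders are conventionally wired into a trainable GAN, reusing the imports from Step 1; none of this is from the original document:

# Hypothetical wiring sketch (not from the original PDF)
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(0.0002, 0.5),
                      metrics=['accuracy'])
generator = build_generator()
z = Input(shape=(latent_dimensions,))
fake_image = generator(z)
discriminator.trainable = False  # freeze the discriminator while training the generator
validity = discriminator(fake_image)
combined = Model(z, validity)
combined.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))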
Geometric and material nonlinearity analysis of 2D truss with force and duct...Salar Delavar Qashqai
This document describes a C program written by Salar Delavar Qashqai to analyze the geometric and material nonlinearity of a 2D truss structure under force and ductility damage index control. The program imports input data files, performs a pushover analysis using incremental loading, calculates member forces and displacements, and exports output to text, Excel, MATLAB, and HTML files. Key sections of code are presented to demonstrate how the program assembles and solves the system stiffness matrix, calculates element forces and stiffness, and checks for convergence at each iteration.
The programs are complete to the best of my knowledge, with zero compilation errors in the Bloodshed Dev-C++ IDE. They can be easily ported to any version of Visual Studio or Qt. If you need any guidance, please let me know via the comments, and always enjoy programming.
This document provides an introduction to the basics of R programming. It begins with quizzes to assess the reader's familiarity with R and related topics. It then covers key R concepts like data types, data structures, importing and exporting data, control flow, functions, and parallel computing. The document aims to equip readers with fundamental R skills and directs them to online resources for further learning.
This document loads various libraries and reads in multiple csv files containing transportation data. It then performs some data cleaning and preprocessing steps. Various outputs are defined to render tables and plots of subsets of the data. Plots are created to visualize relationships between weighted time, cost, and safety metrics. Interactive elements are added to output text describing user input from the plots. Maps and motion charts are also defined as outputs to visualize additional data aspects.
This document discusses time series analysis techniques in R, including decomposition, forecasting, clustering, and classification. It provides examples of decomposing the AirPassengers dataset, forecasting with ARIMA models, hierarchical clustering on synthetic control chart data using Euclidean and DTW distances, and classifying the control chart data using decision trees with DWT features. Accuracy of over 88% was achieved on the classification task.
The document discusses various concepts related to abstraction in software development including project architecture, code refactoring, enumerations, and the static keyword in Java. It describes how to split code into logical parts using methods and classes to improve readability, reuse code, and avoid repetition. Refactoring techniques like extracting methods and classes are presented to restructure code without changing behavior. Enumerations are covered as a way to represent numeric values from a fixed set as text. The static keyword is explained for use with classes, variables, methods, and blocks to belong to the class rather than object instances.
This document discusses using the C to Go translation tool c2go to translate C code implementing quicksort algorithms into Go code. It provides examples of translating simple quicksort C code, improving the translation by using a configuration file, and how c2go handles standard C functions like qsort by translating them to their Go equivalents. The examples demonstrate how c2go can generate valid Go code from C code but may require some manual fixes or configuration to handle certain data structures or language differences.
Here is a bpftrace program to measure the in-kernel latency between queuing an ICMP echo request for transmit and receiving the response packet:
#!/usr/local/bin/bpftrace
kprobe:icmp_send {
  @start[tid] = nsecs;
}

kprobe:__netif_receive_skb_core /@start[tid]/ {
  @diff[tid] = hist(nsecs - @start[tid]);
  delete(@start[tid]);
}

END {
  print(@diff);
  clear(@diff);
}
This traces the time between the icmp_send kernel function (when the packet is queued for transmit) and the __netif_receive_skb_core function (when the response packet is received). The
1. The document describes a midterm exam for a C++ programming course that consists of multiple choice questions and has a time limit of 2 hours.
2. It provides sample exam questions about C++ concepts like functions, variables, conditionals, and data types.
3. The student completed the exam, scoring 64 out of 100 points within the 2 hour time limit.
Powering Heap With PostgreSQL And CitusDB (PGConf Silicon Valley 2015)Dan Robinson
At Heap, we lean on PostgreSQL for all our backend heavy lifting. We support an expressive set of queries — conversion funnels with filtering and grouping, retention analysis, and behavioral cohorting to name a few — across billions of users and tens of billions of events. Results need to come back in a matter of seconds and reflect up-to-the-minute data.
This talk will discuss these challenges, with a particular focus on:
- Using CitusDB for interactive analysis across 50 terabytes of data and counting.
- PostgreSQL and Kafka: two great tastes that taste great together.
- UDFs in C and PL/pgSQL, partial indexes for pre-aggregation, and other tricks up our sleeves.
Similar to LeetCode Database problems solved using PySpark.pdf (20)
Maintaining asset integrity, maximising performance, and reaching peak production outcomes all depend on effective defect elimination management.
This article examines the difficulties of locating, understanding, and resolving asset flaws, with a focus on the importance of fully understanding their nature, implications, and possible outcomes.
As we delve into the subtleties of defect analysis, it becomes clear that maintenance departments must create a framework for integrating defect elimination and bad actor analysis into their proactive maintenance strategies, enabling them to make more informed decisions.
Defects in an asset are characterised as flaws, inadequacies, or departures from standard operating characteristics or design specifications.
The kind of defect, where it is located on the assembly of the asset, and what happens when it exists determine how one defect impacts the performance of the asset as a whole.
For example, a minor corrosion patch on a panel that is readily replaceable and does not pose a structural risk to the asset overall is unlikely to have an adverse effect on the asset's longevity, performance, or safety.
Effective Defect Elimination Management is critical for maintaining asset integrity, optimizing performance, and achieving peak production results. It involves identifying, understanding, and addressing asset defects systematically.
Understanding asset defects requires accurate identification and comprehensive documentation in the CMMS, including risk assessments that evaluate both the consequence and likelihood of defects leading to failures.
Defect Elimination Management (DEM) is a comprehensive approach that goes beyond traditional maintenance practices, focusing on root cause analysis and implementing long-term solutions to prevent defect recurrence.
"Bad Actors" in defect elimination refer to equipment, systems, or components that consistently underperform, require frequent maintenance, or cause repeated operational reliability or quality issues.
Advanced diagnostic tools and technologies, such as vibration analysis systems, infrared thermography, and AI-based analytics, have transformed the way asset defects are identified and managed.
Quality and timely repairs and clear business processes for managing defects are crucial, along with maintaining quality maintenance history data to provide valuable insights for future defect elimination processes.
To learn more, you can read my article via this link: https://www.cmmssuccess.com/defect-elimination-management/
Vijay Engineering and Machinery Company (VEMC) is a leading company in the field of electromechanical engineering products and services, with over 70 years of experience.
Modified O-RAN 5G Edge Reference Architecture using RNNijwmn
Paper Title
Modified O-RAN 5G Edge Reference Architecture using RNN
Authors
M.V.S Phani Narasimham (Wipro Technologies, India) and Y.V.S Sai Pragathi (Stanley College of Engineering & Technology for Women (Autonomous), India)
Abstract
This paper explores the implementation of 6G/5G standards by network providers using cloud-native technologies such as Kubernetes. The primary focus is on proposing algorithms to improve the quality of user parameters for advanced networks like car as cloud and automated guided vehicle. The study involves a survey of AI algorithm modifications suggested by researchers to enhance the 5G and 6G core. Additionally, the paper introduces a modified edge architecture that seamlessly integrates the RNN technologies into O-RAN, aiming to provide end users with optimal performance experiences. The authors propose a selection of cutting-edge technologies to facilitate easy implementation of these modifications by developers.
Keywords
5G O-RAN, 5G-Core, AI Modelling, RNN, Tensor Flow, MEC Host, Edge Applications.
Volume URL: https://airccse.org/journal/jwmn_current24.html
Abstract URL: https://aircconline.com/abstract/ijwmn/v16n3/16324ijwmn01.html
Youtube URL: https://youtu.be/rIYGvf478Oc
Pdf URL: https://aircconline.com/ijwmn/V16N3/16324ijwmn01.pdf
Here's where you can reach us : ijwmn@airccse.org or ijwmn@aircconline.com
The Transformation Risk-Benefit Model of Artificial Intelligence: Balancing R...gerogepatton
This paper summarizes the most cogent advantages and risks associated with Artificial Intelligence from an in-depth review of the literature. Then the authors synthesize the salient risk-related models currently being used in AI, technology and business-related scenarios. Next, in view of an updated context of AI along with the theories, models, and expanded constructs reviewed, the writers propose a new framework called "The Transformation Risk-Benefit Model of Artificial Intelligence" to address the increasing fears and levels of AI risk. Using the model characteristics, the article emphasizes practical and innovative solutions where benefits outweigh risks, with three use cases in healthcare, climate change/environment and cyber security illustrating the unique interplay of principles, dimensions and processes of this powerful AI transformational model.
In the global energy equation, the IT industry is not yet a major contributor to global warming, but it is increasingly significant. From an engineering standpoint we can achieve huge energy savings by replacing electronic signal processing with optical techniques for routing and switching, whilst longer fibre spans in the local loop offer further reductions. The mobile industry, on the other hand, has engineered 5G systems demanding ~10kW/tower due to signal processing and beam steering technologies. This sees some countries (e.g. China) closing cell sites at night to save money. So, what of 6G? The assumption that all surfaces can be smart signal regenerators with beam steering looks to be a step too far, and it may be time for a rethink!
On the extreme end of the scale we have AWS planning to colocate their latest AI data centre (at 1GW power consumption) alongside two nuclear reactors because it needs 40% of their joint output. Google and Microsoft are following the AWS approach and are reportedly in negotiation with nuclear plant owners. Needless to say, AI training sessions and usage have risen to dominate the top of the IT demand curve. At this time, there appears to be no limit to the projected energy demands of AI, but there is a further contender in this technology race, and that is the IoT. In order to satisfy the ecological demands of Industry 4.0/Society 5.0 we need to instrument and tag ‘Things’ by the Trillion, and not ~100 Billion as previously thought!
Now let’s see: Trillions of devices connected to the internet with 5G, 4G, WiFi, BlueTooth, LoRaWAN et al, each using >100mW, demands more power plants…
Structural Dynamics and Earthquake Engineeringtushardatta
The slides are prepared with a lot of text material to help young teachers teach the course for the first time, and they include solved problems. They can be used to teach a first course on structural dynamics and earthquake engineering. The lecture notes on which the slides are based are available on SCRIBD.
Artificial Intelligence Imaging - medical imagingNeeluPari
10 stages of Artificial Intelligence.
Artificial intelligence (AI) has made significant advancements in the field of medical imaging, offering valuable tools and capabilities to improve diagnostics, treatment planning, and patient care. Here are several ways AI is used in medical imaging.
)
select distinct num
from temp
where num = num1
and num = num2''').show()
+---+
|num|
+---+
| 1|
+---+
23/08/03 15:05:56 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
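The top of this spark.sql call is cut off in the extract. Judging from the num1/num2 columns in the WHERE clause and the lead() windows used in the DataFrame version below, a plausible reconstruction of the full statement looks like the following; the source table name logs is a hypothetical stand-in:

spark.sql('''
    with temp as (
        select num,
               lead(num, 1) over (order by id) as num1,
               lead(num, 2) over (order by id) as num2
        from logs
    )
    select distinct num
    from temp
    where num = num1
    and num = num2''').show()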
(df.withColumn('num1', lead('num', 1).over(Window.orderBy('id')))
   .withColumn('num2', lead('num', 2).over(Window.orderBy('id')))
   .filter((col('num') == col('num1')) & (col('num') == col('num2')))
   .select('id').distinct().show())
+---+
| id|
+---+
| 1|
+---+
23/08/03 15:56:26 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
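The WindowExec warning appears because Window.orderBy without partitionBy imposes a global ordering, which forces Spark to move every row into a single partition; that is harmless on these toy datasets but costly at scale. If the data had a natural grouping column, partitioning the window would keep the work distributed; a sketch with a hypothetical group_id column:

from pyspark.sql import Window
from pyspark.sql.functions import lead

# Ordering within each group avoids shuffling everything to one task
w = Window.partitionBy('group_id').orderBy('id')
df.withColumn('num1', lead('num', 1).over(w))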
Students and classes example
schema = StructType([StructField('class', IntegerType(), True),
                     StructField('student_id', IntegerType(), True),
                     StructField('term', IntegerType(), True),
                     StructField('subject', StringType(), True),
                     StructField('marks', IntegerType(), True)])
df = spark.createDataFrame([[2, 1, 1, 'maths', 10],
+---+-------+
| id|Student|
+---+-------+
| 1| Doris|
| 2| Abbot|
| 3| Green|
| 4|Emerson|
| 5| Jeames|
+---+-------+
23/08/09 17:41:37 WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
Tree Node
https://leetcode.com/problems/tree-node/
schema = StructType([StructField('id', IntegerType(), False),
                     StructField('p_id', IntegerType(), True)])
df = spark.createDataFrame(
    [[1, None],
     [2, 1],
     [3, 1],
     [4, 2],
     [5, 2]], schema=schema)
df.show()
+---+----+
| id|p_id|
+---+----+
| 1|null|
| 2| 1|
| 3| 1|
| 4| 2|
| 5| 2|
+---+----+
distinct_parent_ids = df.select(
    'p_id').distinct().rdd.flatMap(lambda x: x).collect()
df.select('id',
          when(col('p_id').isNull(), 'Root')
          .when(col('p_id').isNotNull() & col('id').isin(distinct_parent_ids), 'Inner')
          .otherwise('Leaf').alias('type')).show()
+---+-----+
| id| type|
+---+-----+
| 1| Root|
| 2|Inner|
| 3| Leaf|
| 4| Leaf|
| 5| Leaf|
+---+-----+
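If the table were too large to collect() the parent ids to the driver, the same labels can be derived with a join instead; a sketch along those lines, not from the original PDF:

from pyspark.sql.functions import col, lit, when

# Distinct non-null parent ids, flagged so the left join below marks inner nodes
parents = (df.select(col('p_id').alias('id'))
             .where(col('p_id').isNotNull())
             .distinct()
             .withColumn('is_parent', lit(True)))
(df.join(parents, on='id', how='left')
   .select('id',
           when(col('p_id').isNull(), 'Root')
           .when(col('is_parent'), 'Inner')
           .otherwise('Leaf').alias('type'))
   .show())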
Pandas
pdf = df.toPandas()
pdf
   id  p_id
0   1   NaN
1   2   1.0
2   3   1.0
3   4   2.0
4   5   2.0
import numpy as np
pdf['type'] = np.where(pdf['p_id'].isna(),
                       'Root',
                       np.where(pdf['id'].isin(pdf['p_id'].unique()) & pdf['p_id'].notna(),
                                'Inner',
                                'Leaf'))
pdf[['id', 'type']]
If transactions_df is too large to run collect(), we can use a left join and pick the records with a null transaction_id:
(visits_df.join(transactions_df, on='visit_id', how='left')
          .filter(col('transaction_id').isNull())
          .groupBy('customer_id').agg(count('visit_id'))
          .withColumnRenamed('count(visit_id)', 'count_no_trans')
          .show())
+-----------+--------------+
|customer_id|count_no_trans|
+-----------+--------------+
| 54| 2|
| 96| 1|
| 30| 1|
+-----------+--------------+
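An equivalent formulation uses a left anti join, which keeps only the visits that have no matching transaction and avoids the explicit null filter:

from pyspark.sql.functions import count

(visits_df.join(transactions_df, on='visit_id', how='left_anti')
          .groupBy('customer_id')
          .agg(count('visit_id').alias('count_no_trans'))
          .show())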
Product Price at a Given Date
https://leetcode.com/problems/product-price-at-a-given-date/description/
product_schema = StructType([StructField('product_id', IntegerType(), False),
                             StructField('new_price', IntegerType(), True),
                             StructField('change_date', StringType(), True)])
product_df = spark.createDataFrame([[1, 20, '2019-08-14'],
                                    [2, 50, '2019-08-14'],
                                    [1, 30, '2019-08-15'],
                                    [1, 35, '2019-08-16'],
                                    [2, 65, '2019-08-17'],
                                    [3, 20, '2019-08-18']],
                                   schema=product_schema).select(
    'product_id', 'new_price', col('change_date').cast('date'))
product_df.show()
+----------+---------+-----------+
|product_id|new_price|change_date|
+----------+---------+-----------+
| 1| 20| 2019-08-14|
| 2| 50| 2019-08-14|
| 1| 30| 2019-08-15|
| 1| 35| 2019-08-16|
| 2| 65| 2019-08-17|
| 3| 20| 2019-08-18|
+----------+---------+-----------+
from pyspark.sql.functions import min as min_
## note - it is important to alias the PySpark min function on import, since an unaliased import would shadow Python's built-in min() and break the code
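The PDF's own solution is cut off after this import. For illustration, a sketch of one way to finish the problem (the price of each product on 2019-08-16, defaulting to 10 for products with no change by then); it happens to use max rather than the min just imported, so it is an assumption-laden illustration, not the document's code:

from pyspark.sql.functions import coalesce, col, lit, max as max_

# Latest change on or before the target date, per product
latest = (product_df.filter(col('change_date') <= '2019-08-16')
                    .groupBy('product_id')
                    .agg(max_('change_date').alias('change_date')))
priced = latest.join(product_df, on=['product_id', 'change_date'])
# Products with no change by the date fall back to the default price 10
(product_df.select('product_id').distinct()
           .join(priced.select('product_id', 'new_price'), on='product_id', how='left')
           .select('product_id', coalesce(col('new_price'), lit(10)).alias('price'))
           .show())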