Context
An art dealer friend, whose sales representatives sell various products across four US states, wants to use the data he has collected to make insight-driven decisions.
Objective
The dealer wants to understand sales performance across his products over the last three years
Strategies
1. Combine sales data from multiple CSV files and add product and sales rep data to the data model
2. Create a date table with a column for the last day of each month for the purpose of conducting time-based analysis
3. Establish relationships between sales rep, product, sales, and date tables
4. Calculate total net sales excluding discounts
5. Create a pivot table showing total sales and YOY% change for regions, excluding subtotals and individual regions
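The net-sales and year-over-year calculations in strategies 4-5 are done with Power Pivot/DAX in the original; a minimal Python sketch of the same logic, using invented CSV extracts and column names:

```python
import csv
import io
from collections import defaultdict

# Hypothetical yearly CSV extracts standing in for the dealer's sales files.
files = {
    2021: "product,gross,discount\nPrints,1000,100\nSculptures,2000,200\n",
    2022: "product,gross,discount\nPrints,1200,100\nSculptures,1800,300\n",
}

net_by_year = defaultdict(float)
for year, text in files.items():
    for row in csv.DictReader(io.StringIO(text)):
        # Net sales = gross sales minus discounts (strategy 4).
        net_by_year[year] += float(row["gross"]) - float(row["discount"])

# Year-over-year % change in total net sales (strategy 5).
yoy_pct = (net_by_year[2022] - net_by_year[2021]) / net_by_year[2021] * 100
```

In Power Pivot the same result would come from a measure over the combined sales table, sliced by the date table's year column.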
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
This document provides guidance on creating an effective digital marketing analytics dashboard using Power BI. It recommends connecting to Google Analytics as a primary data source and including visualizations of key performance indicators (KPIs) like impressions, clicks, and spending over time. The dashboard should allow users to interact with the data by selecting specific time periods to analyze and compare metrics. Color coding and tooltips can also help users understand relationships in the data and drill down into further details.
MBA 7200 Financial Analysis Paper: Company Name (todd271)
Running head: COMPANY NAME 1
MBA 7200 Financial Analysis Paper: Company Name
Student name
Date
Wilmington University
Outline for paper
Notes:
no abstract is needed for this paper
double spacing is required. The outline is presented in single space for presentation
purposes.
Important point: in the appendices you present financial data and your ratio analysis calculations.
Within the narrative sections, you are to analyze the data and describe what the data is indicating.
What do the numbers mean? What are the trends and how, based on the analysis, is the firm
performing for its owners (stockholders) and within its industry? Use the data to prepare a
financial analysis.
Simply regurgitating the financial numbers in your narrative is not analysis and is not sufficient
to receive a passing grade for this project.
Outline of paper
1. Page 2: Description of corporation, major products, industries, markets served, and any
significant developments over the past three years.
a. Prepare a concise summary of your company using declarative sentences. The
purpose of this section is to provide the reader with basic information on the
company. Distilling your company down to one page of essential information
should not be easy. Eliminate any extraneous “fluff” and avoid providing any
interpretations or analysis. Numerical analysis is not part of this section; it is the
only part of the paper in which numerical analysis should not be included. This is a
factual section. Assume the reader is a business professional.
2. Page 3: Overall descriptive analysis of the financials for the last three years
a. In this section you can now present key financial highlights of your company. At
a minimum you should discuss sales and net income performance and any
significant financial factors related to your company over the last three years. Use
concise $ figures. For example, use $7.8m or $7.8b instead of $7,800,000 or
writing $7.8 million. There is more key financial information than one can easily
fit into one page so you must determine what is most important for the reader to
understand the financial picture of the corporation as of the most recent financials.
b. If your firm has two or more published quarterly statements since the last annual
report, be sure to prepare your analysis using the most recent quarterly data.
3. Pages 4-6: Descriptive analysis of the firm's financials and ratio analysis
a. In this section the writer now gets into the financial details of the firm. The
narrative in this section is based on the financials of the firm (Appendix A) and
the ratio analysis (Appendix B).
b. What are the trends in your ratio analysis? What are the trends for the company
as a whole and in comparison to key competitors and industry as a whole?
c. Ratios to include are Debt/Equity, ROI, ROE, ROA, and the current ratio.
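As a quick reminder of how these ratios are computed, a sketch with invented figures (not from any real company):

```python
# Illustrative balance-sheet and income-statement figures, in $m (made up).
total_debt, equity = 400.0, 800.0
net_income, total_assets = 96.0, 1200.0
current_assets, current_liabilities = 300.0, 150.0

debt_to_equity = total_debt / equity                   # leverage
roe = net_income / equity                              # return on equity
roa = net_income / total_assets                        # return on assets
current_ratio = current_assets / current_liabilities   # short-term liquidity
```

The analysis the assignment asks for is the trend in these values over three years and against competitors, not the single-period numbers themselves.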
Real-time data visualization using business intelligence techniques to make faster decisions on sales data (MD Owes Quruny Shubho)
Business Intelligence (BI) is a way of gaining business advantage from data. This data can be user information, stock information, sales reports, or any other source related to the business. From a large amount of data, business intelligence mines information and converts it into knowledge that feeds the decision support system. BI is a highly effective way to make data-driven decisions: it visualises data, giving us a visual view that can be easily understood.
The document discusses Hyperion's product suite which includes tools for business intelligence, planning, performance management, and data management. It provides an overview of Essbase, a multidimensional database that allows users to analyze business data from multiple perspectives and levels. Key concepts covered include multidimensional data modeling, OLAP operations for analyzing data (e.g. drill-down, drill-up, slice and dice), and comparing multi-dimensional and relational database approaches.
Create Data Model & Conduct Visualisation in Power BI Desktop (ThinkInnovation)
Context
A global agency (an ex-coaching client) goes through a yearly budgeting process, where it evaluates the costs incurred by various departments and uses that information to forecast expenses.
Objectives
The agency would like to improve its budgeting and forecasting based on the actual costs incurred.
Strategies
1. Load & combine data from multiple Excel & .csv files into Power BI, removing any unnecessary columns
2. Establish relationships between tables to connect data from the ‘Dimension’ Table to the ‘Forecast’ and ‘Budget’ Tables
3. Create a ‘Calendar’ Table using DAX, add it to the data model, & establish relationships with other tables
4. Visualise the budget by region using a chart, and create a line graph to compare monthly budget & forecast
5. Analyse budget distribution by business area using a pie chart
6. Create linked stacked column charts to visualise budget breakdown by cost element group and IT area
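Strategy 3 builds the 'Calendar' table in DAX; the month-end logic can be sketched in stdlib Python (the 2023 date range is an assumption for illustration):

```python
import calendar
from datetime import date

def month_end_dates(start_year, end_year):
    """Return the last day of each month, as a DAX-style 'Calendar' table would."""
    out = []
    for y in range(start_year, end_year + 1):
        for m in range(1, 13):
            # monthrange returns (weekday of day 1, number of days in month).
            last_day = calendar.monthrange(y, m)[1]
            out.append(date(y, m, last_day))
    return out

cal = month_end_dates(2023, 2023)
```

In Power BI the equivalent column is typically derived with DAX date functions, then related to the Forecast and Budget tables on the month-end date.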
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
Financial analysis refers to the assessment of a business in terms of stability, viability, profitability, and other important financial and non-financial factors. It is done through several different techniques, ratios, and charts, with the purpose of transforming the static numbers in financial statements into added value for decision-makers. Usually, the analysed information and the analysis results are presented as a report or a dashboard.
A dashboard (or data visualization) is used to present all indicators at once to help owners, investors, or managers make efficient decisions by identifying specific actions that should be taken to reach future targets or goals.
Chapter 2 - Steps for Building a Financial Model Assumptions to Valuation.pptx (JiaLing34)
1) The document outlines the 9 key steps for building a financial model including discussing assumptions with management, building assumptions, constructing a template, inputting historical data, projecting financial statements, additional schedules, cash flow statements, ratio analysis, and valuation.
2) Important aspects of discussions with management are determining scope, corroborating information from department heads, and ensuring understanding of the business and expectations.
3) Building assumptions, templates, and projections involves determining growth drivers, standardizing the model, and considering inputs from management discussions.
The document discusses dimensional modeling and star schemas for data warehousing. It describes how dimensional modeling focuses on multiple levels of detail and refinement when designing a data warehouse. The key aspects of dimensional modeling include fact tables containing measures in the center connected through foreign keys to dimension tables containing attributes. Dimensional modeling is optimized for queries across dimensions. Star schemas divide data into facts and dimensions and are a popular design for data warehouses.
The document discusses a data strategy proposal for RealDirect, a real estate brokerage. The team analyzed real estate sales data from 2012-2013 across New York City boroughs. Graphs of attributes like building age, sales over time, area, and units sold showed patterns. Not all attributes were useful to collect - some were redundant or had no clear use. The team recommends RealDirect collect additional customer preference data and add a review/rating system and recommendation engine to better match customers to properties. This could increase users, sales, and investments from builders.
M tech inv_portal_with_dynamics_ax _2018_v_10.00 (ovais99)
This document discusses an integrated solution for real estate and project management using Microsoft SharePoint and Dynamics AX 2012. It describes setting up a holding company structure with operational, regional, and franchisee levels to run and control business operations. It also outlines using SharePoint portals for franchisee operations and Dynamics AX for back office functions like financials. The solution aims to provide flexibility, visibility, controls and analytics across the business.
This document discusses data visualization techniques. It begins by defining data visualization and its importance for analyzing large datasets. It then discusses the advantages of data visualization, including how visuals help people quickly understand trends and outliers. The document also covers the importance of data visualization for business decision making. It lists several benefits, such as enabling better analysis, identifying patterns, and exploring insights. Finally, it categorizes and provides examples of different types of charts for visualizing data, including charts for showing change over time, comparing categories, ranking items, part-to-whole relationships, distributions, flows, and relationships.
FINC 3304 Business Finance: Work the Web Project Part 2 (arnit1)
This document provides instructions for the Work the Web project part 2. Students are asked to analyze the financial performance of a company over multiple years by collecting historical financial data, performing trend and ratio analyses, and calculating DuPont analysis ratios. Specifically, students will analyze trends in profitability ratios, compare the company's ratios to competitors' across various categories, and assess the company's return on assets and return on equity based on DuPont analysis components versus industry levels.
Health Care Research Project
By:
Dr. Joseph Foy, CPA
Dr. Frimette Kass, CPA
Overview
This project is designed to have many learning outcomes. Some of the learning outcomes include:
· team building skills
· leadership skills
· accounting and auditing research
· identifying and correcting weak/non-existent controls
· performing financial statement analysis
· improving/developing report-writing skills
To accomplish this project you will be divided into teams of four or five. Each team will choose a publicly traded hospital corporation. You will then perform certain audit techniques on the team’s chosen corporation and write short papers about what you have discovered.
Step by Step Description of the Project
This project is broken into various steps. Each step will have its own due date. By dividing the project into steps it will be easier for you to accomplish the project over the course of a semester.
Step One: Selection of Teams and corporations
On, or about, February 4 (the last day to add a course) you will be randomly assigned to teams of three to five. You will be able to determine your team by searching in BB under the ‘Groups’ tab.
Each team will then do research to find a publicly traded hospital corporation. The benefit of using publicly traded corporations is that their financial statements are publicly available online. Each team must choose a different corporation. Do some internet research. When you find a company, post it on the discussion board area that is set up in BB for this purpose. The corporations will be assigned on a first come, first served basis.
For each Deliverable (assignment to be submitted via Blackboard), teams will choose a different leader from among the group. The job of the team leader is to break down the work for that deliverable, assign the work to team members, organize peer review of the assignment, and upload the assignment in a timely fashion.
Step Two/Deliverable One: Finding and Analyzing F/S
Deliverable One: Create Excel Spreadsheet, Financial Analysis
In this first deliverable, you are going to work with Excel spreadsheets to become familiar with the financial information published by your corporation.
Deliverable One Objectives:
1. Demonstrate an understanding of how to use various features of Excel.
2. Understand financial analytical tools to help make business decisions.
3. Demonstrate an understanding of various types of accounts public companies utilize.
4. Demonstrate how to organize data.
Deliverable One Requirements:
1. Collaborate with other students in groups.
2. Excel spreadsheet data set up.
3. Horizontal and vertical analysis.
4. Financial ratio analysis.
5. Chart results.
6. Upload the document via Blackboard.
Requirement #1: Collaborate with Group
Students will continue to collaborate within their assigned groups to complete this deliverable, but the work product will be graded individually. You are to elect a new group leader to centralize group communications ...
The document provides a showcase of reports created by Garth Wilson in Microsoft Excel to analyze purchasing, inventory, and financial data for managers. The reports were designed to extract meaningful insights from vast data arrays compiled from various software sources. Examples shown include comprehensive reports on inventory levels, costs, and sales across different time periods, locations, and product categories. The reports automatically update when new raw data is imported, transforming it into clear, visual summaries and analyses.
Leveraging IBM Cognos TM1 for Merchandise Planning at Tractor Supply Company ... (QueBIT Consulting)
AGENDA:
Introductions and Company Overviews
TSC Merchandise Planning Solution Overview
Prior State
Solution and Implementation
Tips & Tricks for TM1 Perspectives Templates
Q&A
The document provides an overview of the Financial Analytics 7.9.6.3 product from Oracle. It summarizes the key features including prebuilt dashboards, reports, metrics and subject areas covering general ledger, profitability, receivables, payables, and US federal financial performance. Example dashboard pages and their purposes are described for areas like balance sheet, cash flow, budget vs actual, asset usage and liquidity. Common business roles that would benefit from using the tool are also listed.
The document describes the prebuilt dashboards and subject areas available in the Oracle Financial Analytics product. It provides an overview of the different dashboard pages available for key areas like general ledger, profitability, receivables, payables, and US federal financial performance. It also summarizes the types of reports and analyses that users can perform using the subject areas for accounts payable, accounts receivable, and other financial data. Finally, it highlights how the prebuilt content and subject areas can be used to perform unlimited analyses and build additional reports with little incremental effort.
Best structure of taxonomies for the different purposes of analysis (Chie Mitsui)
Nomura Research Institute discussed best practices for taxonomy design based on different user needs. There are difficulties because users have varying purposes for analysis. They focused on two main types - allowing users to re-calculate data using original elements, and emphasizing relationships between line items and totals to automatically check for inconsistencies. While one taxonomy can't serve all needs, design should prioritize disclosure purposes like validation and correct data handling. Different layers may be required to fully support different analysis objectives.
Similar to Using DAX & Time-based Analysis in Data Warehouse
Difference in Differences - Does Strict Speed Limit Restrictions Reduce Road ... (ThinkInnovation)
Objective
To identify the impact of speed limit restrictions in different constituencies over the years, using the difference-in-differences (DID) technique, to conclude whether strict speed limit restrictions can help reduce the increasing number of road accidents on weekends.
Context*
Generally, on weekends people tend to spend time with family and friends and go for outings, parties, shopping, etc., which results in an increased number of vehicles and crowds on the roads.
Over the years, the Government observed a rapid increase in road casualties on weekends.
In 2005, the Government wanted to identify the impact of road safety laws, especially speed limit restrictions, in different states with the help of government records for the past 10 years (1995-2004). The objective was to introduce/revive road safety laws accordingly for all states to reduce the increasing number of road casualties on weekends.
* Speed limit restrictions can be observed before 2000 as well, but the strict speed limit rule was implemented from 2000; the analysis compares the two periods to understand its impact.
Strategies
Observe the Difference in Differences between ‘year’ >= 2000 & ‘year’ <2000
Observe the outcome from multiple linear regression by considering all the independent variables & the interaction term
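The DID estimate in strategy 1 is a double difference of group means; a toy sketch with invented weekend-accident figures:

```python
# Synthetic mean weekend-accident counts (illustrative, not the government data).
# 'treated' = constituencies with strict speed limits from 2000 onward;
# 'control' = constituencies without them.
treated_pre, treated_post = 120.0, 100.0   # means before / after 2000
control_pre, control_post = 110.0, 115.0

# Difference-in-differences estimate of the policy effect:
# change in the treated group net of the change in the control group.
did = (treated_post - treated_pre) - (control_post - control_pre)
```

In the regression formulation of strategy 2, the coefficient on the treated x post-2000 interaction term equals this same double difference.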
Identify Rules that Predict Patient’s Heart Disease - An Application of Decis... (ThinkInnovation)
Context
1. Make Insight-informed Decisions: Clinic collected data on heart disease diagnosis and other patient information, and wants to use the data to make insight-informed decisions.
Objective
2. Predict Patient’s Well-being: To identify the rules that will predict whether a patient will have heart disease in the future, based on the data collected on him/her.
Strategy
3. Deploy Decision Tree Model: Create a Decision Tree Model, with rules, to predict whether a patient will have a heart disease in the future based on collected data.
3.1 To train and evaluate the model
3.2 Boost the model’s performance
3.3 Conduct predictions
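A Decision Tree derives its rules by repeatedly choosing the split that most reduces impurity. A minimal Gini-based split finder, over hypothetical patient rows (features and values are made up, not the Clinic's data):

```python
def gini(labels):
    """Gini impurity of a list of binary class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = labels.count(1) / n
    return 1.0 - p1 * p1 - (1 - p1) * (1 - p1)

def best_split(rows, labels):
    """Find the (feature, threshold) that minimises weighted Gini impurity."""
    best = (None, None, float("inf"))
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best[2]:
                best = (f, t, score)
    return best

# Hypothetical patient rows: [age, cholesterol]; label 1 = heart disease.
rows = [[45, 180], [52, 240], [61, 260], [39, 170]]
labels = [0, 1, 1, 0]
feature, threshold, score = best_split(rows, labels)
```

Applied recursively, each chosen split becomes one rule of the tree (e.g. "age <= 45"); boosting (step 3.2) would combine many such trees.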
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
Identify Customer Segments to Create Customer Offers for Each Segment - Appli... (ThinkInnovation)
Context
1. Social Enterprise collected data on customers & wants to make insight-informed decisions.
Objective
2. To identify customer segments in order to create customised offers for each segment.
Strategy
3. Explore & Clean data for analysis.
4. Perform K-Means Clustering, in Orange, to find possible segments in the customer data.
5. Tune the model to improve its performance.
6. Visualise the findings, share conclusions, and give insight-driven recommendations.
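Step 4 runs K-Means in Orange; the same idea can be sketched with the stdlib. The two-segment customer data and fixed initial centroids below are assumptions for reproducibility:

```python
import math

def kmeans(points, init, iters=20):
    """Plain K-Means: assign each point to its nearest centroid, then recompute."""
    centroids = list(init)
    k = len(centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical customer data (monthly spend, visits) with two obvious segments.
points = [(1.0, 1.0), (1.2, 0.8), (0.8, 1.1), (8.0, 8.0), (8.2, 7.9), (7.9, 8.1)]
centroids, clusters = kmeans(points, init=[points[0], points[3]])
```

Tuning (step 5) usually means trying several k values and initialisations and comparing a score such as silhouette, which Orange reports directly.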
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange (ThinkInnovation)
Context
1. Housing Agent collected resale prices on HDB apartments in Singapore.
Objective
2. To predict resale prices in order to advise his potential clients.
Strategies
3. Explore & Clean data for analysis.
4. Perform Linear Regression, in Orange, to model resale prices from the housing data.
5. Tune the model to improve its performance.
6. Visualise the findings, share conclusions, and give insight-driven recommendations.
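The linear-regression analysis named in the title reduces to least squares. A stdlib sketch with invented floor-area/price figures (a perfectly linear toy set, not actual HDB transactions):

```python
# Illustrative HDB-style data: floor area (sqm) vs. resale price ($k).
areas = [70.0, 85.0, 95.0, 110.0, 120.0]
prices = [350.0, 425.0, 475.0, 550.0, 600.0]

n = len(areas)
mean_x = sum(areas) / n
mean_y = sum(prices) / n

# Least-squares slope: beta = cov(x, y) / var(x); intercept from the means.
beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, prices))
        / sum((x - mean_x) ** 2 for x in areas))
alpha = mean_y - beta * mean_x

# Prediction for a hypothetical 100 sqm flat.
predicted_100sqm = alpha + beta * 100.0
```

Orange fits the same model (with more predictors) and reports the coefficients and fit metrics used in step 5's tuning.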
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
Project Primary Goal
1. To identify factors influencing medical expenses, given the variables, while removing the endogeneity issue
Context
2. Good health insurance is one that can cover a maximum amount of medical expenses so that people don't have to worry about paying medical bills.
3. As a health insurance company, the firm saw its sales fall significantly over time, which is causing concern.
4. It is the firm's intention to analyse factors that determine medical expenses in order to improve their sales in the coming fiscal year.
5. By conducting the study, they will have a better understanding of their customers' needs and be able to develop their marketing strategies accordingly.
Modelling Strategies
6. OLS REGRESSION: Observed outcomes from OLS regression using independent variables.
7. STAGE-2 REGRESSION:
7.1 Observe outcomes from Stage 1 Regression with the endogenous variable as the target variable.
7.2 Observe Stage 2 Regression using predicted endogenous variable.
8. INSIGHTS: Form insights from the results extracted from the OLS regression and Stage-2 regression.
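The two-stage strategy above can be sketched as explicit 2SLS. The instrument z, regressor x, and outcome y below are invented, noiseless illustrations:

```python
def cov(a, b):
    """Population covariance of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

# Made-up data: z is an instrument correlated with the endogenous regressor x
# but (by assumption) not with the error in y.
z = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [2.0, 4.0, 6.0, 8.0, 10.0]     # x = 2z (endogenous in the real study)
y = [5.0, 9.0, 13.0, 17.0, 21.0]   # y = 2x + 1

# Stage 1: regress x on z to get fitted values x_hat.
b1 = cov(z, x) / cov(z, z)
a1 = sum(x) / len(x) - b1 * sum(z) / len(z)
x_hat = [a1 + b1 * zi for zi in z]

# Stage 2: regress y on x_hat — the 2SLS estimate of x's effect on y.
b2 = cov(x_hat, y) / cov(x_hat, x_hat)
```

Comparing b2 against the plain OLS slope of y on x is exactly the OLS-vs-Stage-2 comparison the insights step calls for; they differ when x is genuinely endogenous.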
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
Predictive Analysis - Using Insight-informed Data to Determine Factors Drivin... (ThinkInnovation)
Primary Goals
1. To determine what factors are driving the lead conversion process.
2. To identify which leads are more likely to convert to paid customers.
Data Description
3. Dataset consists of 4613 rows and 15 columns.
Modelling Strategies
4. Plan
4.1 Perform Dummy Encoding
4.2 List Variables for Modeling
4.3 Identify metric of interest to judge model's performance
5. Build
5.1 Build Logistic Regression Model (Preliminary Model)
5.2 Observe the metrics of the model
6. Improve
6.1 Identify the significant variables
6.2 Rebuild model
6.3 Observe the metrics of the models
7. Decide
7.1 Compare the results of Logistic Regression model (Base model) and Decision Tree Model
7.2 Conclude on best model for this project
8. Recommend
8.1 Determine factors driving the lead conversion process
8.2 Recommend actions that may help identify which leads are more likely to convert to paying customers
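Step 5's logistic regression can be sketched with plain gradient descent; the single 'site visits' feature and labels below are invented, not the 4613-row dataset:

```python
import math

# Tiny made-up lead dataset: feature = number of site visits, label = converted.
X = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
y = [0, 0, 0, 1, 1, 1]

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Batch gradient descent on the logistic (cross-entropy) loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    grad_w = sum((sigmoid(w * xi + b) - yi) * xi for xi, yi in zip(X, y)) / len(X)
    grad_b = sum((sigmoid(w * xi + b) - yi) for xi, yi in zip(X, y)) / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = sum((sigmoid(w * xi + b) > 0.5) == bool(yi)
               for xi, yi in zip(X, y)) / len(X)
```

In the full project, each categorical column would first be dummy-encoded (step 4.1) into its own 0/1 feature with its own weight, and the chosen metric (step 4.3) would be evaluated on held-out data rather than the training set.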
Author: Anthony Mok
Date: 16 Nov 2023
Email: xxiaohao@yahoo.com
Decision Making Under Uncertainty - Predict the Chances of a Person Suffering... (ThinkInnovation)
Project Goal
1. Use Naive Bayes’ Classifier to Predict Heart Attacks Based on Patient’s Symptoms.
Context
2. After completing the project to identify the rules that predict patient’s heart disease, the Clinic reached out again wanting to know who is likely to have a heart attack based on his/her symptoms.
Dataset
3. The dataset was explored for its relationships and patterns, and it’s found that, through its univariate, bivariate and multivariate analysis, the data is highly correlated and suitable for modelling.
Strategies For Modelling & Data Analysis
4. Data Preparation: Three new categorical features were created.
5. Train Model: PivotTables are created for the features and probabilities calculated.
6. Findings & Conclusions: Given her attributes, Patient A has a 53.66% probability of having a heart attack, compared to Patient B, whose probability of experiencing a heart attack, given his attributes, is merely 9.79%.
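The PivotTable probabilities in step 5 feed a standard Naive Bayes calculation; a sketch with assumed priors and conditionals (not the Clinic's actual figures):

```python
# Assumed prior and per-symptom conditional probabilities (illustrative only).
p_attack = 0.2                 # prior P(heart attack)
p_no = 0.8                     # prior P(no heart attack)
likelihood_attack = {"chest_pain": 0.9, "high_bp": 0.8}
likelihood_no = {"chest_pain": 0.2, "high_bp": 0.4}

symptoms = ["chest_pain", "high_bp"]  # the patient's observed symptoms

# Naive Bayes: multiply the prior by each conditional (independence assumed),
# then normalise the two scores into a posterior probability.
score_attack, score_no = p_attack, p_no
for s in symptoms:
    score_attack *= likelihood_attack[s]
    score_no *= likelihood_no[s]

posterior = score_attack / (score_attack + score_no)
```

The per-patient figures quoted in the findings come from exactly this kind of calculation, with the conditionals read off the PivotTables instead of assumed.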
Author: Anthony Mok
Date: 16 Nov 2023
Email: xxiaohao@yahoo.com
Decision Making Under Uncertainty - Is It Better Off Joining a Partnership or... (ThinkInnovation)
Monte Carlo Simulation
1. Simulation is the process of creating a virtual environment that mimics the behavior of a real-world system.
2. This virtual environment is used to train Machine Learning Models, test new algorithms, and explore the behavior of complex systems.
3. It provides a safe and controlled space to test different options, predict outcomes, and make insight-informed decisions.
Project Objective
4. Which is better: joining a partnership or starting own business?
Strategies For Modelling & Data Analysis
5. Simulate Number of Deliveries Made/Month
6. Simulate Labour Cost
7. Calculate Revenue Per Delivery
8. Calculate the Monthly Total Revenue & Profit, Calculate Estimated Average Profit & Variances
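Strategies 5-8 can be sketched as a plain Monte Carlo loop; the delivery and labour-cost distributions below are placeholder assumptions, not the case-study figures:

```python
import random
import statistics

rng = random.Random(7)  # fixed seed so the sketch is reproducible
N = 10_000

# Assumed input distributions (placeholders for the real estimates).
profits = []
for _ in range(N):
    deliveries = rng.randint(80, 120)          # deliveries per month
    labour_cost = rng.uniform(3000.0, 4000.0)  # monthly labour cost ($)
    revenue_per_delivery = 50.0                # assumed flat rate ($)
    revenue = deliveries * revenue_per_delivery
    profits.append(revenue - labour_cost)

avg_profit = statistics.mean(profits)
profit_var = statistics.variance(profits)
```

Running the same loop under the partnership's assumptions and comparing the two average-profit/variance pairs is the decision the project objective asks about.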
Author: Anthony Mok
Date: 16 Nov 2023
Email: xxiaohao@yahoo.com
Predictive Analysis - Using Insight-informed Data to Plan Inventory in Next 6... (ThinkInnovation)
Project’s Primary Goals
1. To analyse past sales data to generate insights into which features of mobile phones drive sales.
2. To use these insights to efficiently plan the inventory in the next 6 months.
Data Description
3. Dataset consists of sales and product-related features.
4. Dataset contains descriptions of the top 5 most popular mobile brands.
5. Dataset consists of 418 row-instances and 16 column-features.
Strategies Deployed for Modelling
6. Check for, and treat with suitable methods, missing values in dataset.
7. Observe for, and take suitable steps to treat, outliers.
8. Check for multicollinearity amongst variables and use suitable steps to treat highly correlated variables.
9. Build a Linear Regression Model to predict the sales of mobile phones.
10. Report on the metrics of the models.
11. Identify the significant variables, and rebuild and report on the model using only these variables.
12. Based on the final model outcomes, determine the features driving mobile phone sales.
13. List down the recommendations to help in the inventory planning for the next 6 months.
Author: Anthony Mok
Date: 16 Nov 2023
Email: xxiaohao@yahoo.com
Decision Making Under Uncertainty - Decide Whether Or Not to Take Precautions (ThinkInnovation)
Context
1. Company A, a sports company based in Country B, signed a deal worth $5.5 million with Company C to install sports courts and golf courses.
2. Z, the lead manager, is confident in Company A's ability to meet Company C’s expectations, but is concerned with the risks of installation faults.
Track Record
3. Company A’s past experience suggests that 95% of project failures occur during the final installation phase.
4. Under normal production techniques, Company A can produce and install all products for $5 million, but there is a 6% chance of not meeting measurement specifications.
Rework Costs & Losses
5. If the products fail to meet specifications, they must be returned to Country B for modification and reinstallation at a cost of $600k.
6. For an additional $250k, Company A could test its products prior to installation to ensure no errors.
7. If Company A fails to meet customer expectations, it may lose $200k in goodwill and reputation.
Simulation
8. The Test and Evaluation Manager approached Y to look into using simulation to predict the possibility of failure before deciding on spending on additional precautions.
9. Building a simulation model will cost $33k, which will give a positive or negative rating.
10. If the product is all right, the chance of testing Positive is 90%. If it is not all right, the chance of testing Negative is 65%.
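Point 10's test accuracies plug straight into Bayes' rule; a sketch using the probabilities stated above (the expected-cost line is an assumed illustration of how the rework and goodwill figures combine):

```python
# Figures from the case above.
p_fault = 0.06                 # chance the install misses specifications
p_pos_given_ok = 0.90          # simulation rates a good product Positive
p_neg_given_fault = 0.65       # simulation rates a faulty product Negative

p_ok = 1.0 - p_fault
p_pos_given_fault = 1.0 - p_neg_given_fault

# Total probability of a Positive rating, then Bayes' rule:
# how likely is a fault even after the model says Positive?
p_pos = p_pos_given_ok * p_ok + p_pos_given_fault * p_fault
p_fault_given_pos = p_pos_given_fault * p_fault / p_pos

# Assumed illustration: expected extra cost of installing without precautions,
# combining the $600k rework and $200k goodwill loss when specs are missed.
expected_fault_cost = p_fault * (600_000 + 200_000)
```

Comparing this expected cost against the $250k testing cost and the $33k model cost (adjusted by the posterior fault probabilities) is the core of the decision analysis.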
Author: Anthony Mok
Date: 16 Nov 2023
Email: xxiaohao@yahoo.com
Optimal Decision Making - Cost Reduction in Logistics (ThinkInnovation)
Background
1. Invited by a local Social Enterprise (SE), as skills-based volunteerism, to solve a logistical problem.
Problem
2. Determine optimal quantity of Product X to be delivered from each SE’s outlets to different retailers at minimum transportation cost.
SE’s Outlets
3. Delivers from 3 outlets - in Jurong, Alexandra, & Tuas.
Retailers
4. Delivers to 7 retailers - in Jurong, Alexandra, Tampines, Yishun, Changi, Bishan, & Woodlands.
Request
5. Applies linear optimisation modelling to find the optimal quantity of Product X to be delivered so as to minimise the transportation cost significantly, which will result in increased profitability.
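The transportation problem can be sketched by brute force on a toy balanced instance; the 2x2 costs and quantities below are assumptions (the real problem has 3 outlets and more retailers, and would normally use a linear-programming solver):

```python
# Toy 2-outlet x 2-retailer instance (unit costs and quantities are assumed).
supply = [30, 20]          # units available at each outlet
demand = [25, 25]          # units required by each retailer
cost = [[4.0, 6.0],        # cost[i][j] = unit cost, outlet i -> retailer j
        [5.0, 3.0]]

best_cost, best_plan = float("inf"), None
# In a balanced 2x2 problem, choosing x00 fixes the other three shipments.
for x00 in range(min(supply[0], demand[0]) + 1):
    x01 = supply[0] - x00          # rest of outlet 0's stock to retailer 1
    x10 = demand[0] - x00          # rest of retailer 0's demand from outlet 1
    x11 = supply[1] - x10          # remainder of outlet 1's stock
    if min(x01, x10, x11) < 0 or x01 > demand[1]:
        continue                   # infeasible allocation
    total = (x00 * cost[0][0] + x01 * cost[0][1]
             + x10 * cost[1][0] + x11 * cost[1][1])
    if total < best_cost:
        best_cost, best_plan = total, (x00, x01, x10, x11)
```

At realistic scale the same model (minimise total cost subject to supply and demand constraints) is handed to a solver such as Excel Solver or a simplex-based LP library rather than enumerated.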
Author: Anthony Mok
Date: 16 Nov 2023
Email: xxiaohao@yahoo.com
Creating Data Warehouse Using Power Query & Power Pivot (ThinkInnovation)
Context
A Social Enterprise from a neighboring country, which provides ambulatory services, has collected data on road accidents and is keen to use the data to inform its resource deployment. It has stored the data in three files: ‘Accidents.xlsx’, ‘Casualties.xlsx’ and ‘Vehicles.txt’
Objective
Create a data warehouse containing meaningful information on road accidents
Strategies
1. Import the files and transform the data
2. Create queries as new tables
3. Merge these tables
4. Create a summary table
5. Load the data into Power Pivot and create a data model
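The merge-and-summarise flow of steps 3-4 can be mimicked with plain Python dicts (the column names and rows below are invented stand-ins for the three files):

```python
# Invented miniature versions of two of the source tables.
accidents = [
    {"accident_id": 1, "severity": "Slight"},
    {"accident_id": 2, "severity": "Serious"},
]
casualties = [
    {"accident_id": 1, "casualties": 1},
    {"accident_id": 2, "casualties": 3},
]

# 'Merge queries' in Power Query = join the tables on the shared key.
by_id = {row["accident_id"]: row for row in casualties}
merged = [{**a, **by_id[a["accident_id"]]} for a in accidents]

# 'Summary table' = group and aggregate, as a PivotTable over the model would.
total_casualties = sum(row["casualties"] for row in merged)
```

In Power Pivot the same join is expressed as a relationship on the accident key, and the aggregation as a measure.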
Unlocking New Insights Into the World of European Soccer Through the European... (ThinkInnovation)
Exploring Datasets With SQLite
Context
The European Soccer Database (ESD) is used to study team dynamics and identify the factors that lead to player and team success.
Objective
Run queries to inspect its structure through SQLite
Strategies
1. Import the European Soccer Database file into DB Browser (SQLite) and find the total number of tables in the database
2. Using the ‘Country’ table, run a SQL query to show the list of countries in descending order (Z-A) based on the country name
3. Display the specified columns from the ‘Team_Attributes’ table with filtered rows based on ‘buildUpPlaySpeed’
4. List all the players with the specified conditions in a table with the specified columns
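The same queries can be tried outside DB Browser via Python's sqlite3 module; the miniature 'Country' table below is invented stand-in data, not the full ESD:

```python
import sqlite3

# In-memory stand-in for the European Soccer Database's 'Country' table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Country (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO Country (id, name) VALUES (?, ?)",
    [(1, "Belgium"), (2, "England"), (3, "France"), (4, "Germany")],
)

# Strategy 2: list the countries in descending (Z-A) order by name.
rows = conn.execute("SELECT name FROM Country ORDER BY name DESC").fetchall()
names_desc = [r[0] for r in rows]

# Strategy 1: count the tables in the database via the sqlite_master catalog.
n_tables = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE type = 'table'"
).fetchone()[0]
conn.close()
```

The filtered-rows queries in strategies 3-4 follow the same pattern with WHERE clauses on the relevant columns.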
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
The document discusses managing projects and project management. It covers the importance of managing projects well given global trends. It describes the characteristics of modern projects, including established objectives, defined lifespans, involvement of multiple teams, and doing something never done before. It also discusses failures in project management and best practices through collaboration.
The document discusses the "Thinking Outside the Box" series which aims to help people think unconventionally. It describes the SCAMPER method, created by Bob Eberle, which provides a checklist for refining existing products and services by substituting, combining, adapting, modifying, putting to other uses, eliminating, or reversing elements. SCAMPER stands for these techniques and the document provides examples of applying each letter of the acronym to different products or services.
Created by Bob Eberle in the 1970’s, SCAMPER, which comes in the form of a checklist of idea-spurring questions, helps you think outside-of-the-box when you encounter a challenge.
SCAMPER is based on the notion that everything is a new translation of something that has already existed. Each letter in the acronym – SCAMPER, represents a way the characteristics of the challenge are manipulated until new ideas are created.
After years of teaching others how to think creatively, I find the best way to answer these questions is through learning and using the creative tools to experience what thinking outside the box really means.
The Assumption Reversal Method, in which ideas are triggered by reversing the assumptions currently ruling the situation, is an excellent ideation technique that enables us to obtain such enlightenment.
Psyche of Facilitation - The New Language of Facilitating Conversations (ThinkInnovation)
Not every participant in an interaction will respond in the same way to the facilitator.
Some language of facilitation may attract the participants to the conversation. Others may cause them to stay away.
So, by combining the sciences presented and described in this slideshare, I have created a framework that provides a guide on how the language could be better fine-tuned to enrich the collective learning and wisdom of the group.
Visual Connection - Ideation Through Word Association (ThinkInnovation)
This document discusses techniques for thinking creatively outside the box, including visual connection, an ideation technique where words associated with images are used to trigger new ideas. It provides an example of visual connection, using news about decreased business in Chinatown after new road tolls to formulate a challenge statement: "In what ways might we increase the volume of business in Chinatown?". Words from a list of sensory perceptions related to an image are then used to generate potential solutions to the challenge.
Introduction to Data Science
1.1 What is Data Science, importance of data science,
1.2 Big data and data Science, the current Scenario,
1.3 Industry Perspective Types of Data: Structured vs. Unstructured Data,
1.4 Quantitative vs. Categorical Data,
1.5 Big Data vs. Little Data, Data science process
1.6 Role of Data Scientist
Annex K RBF's The World Game pdf documentSteven McGee
Signals & Telemetry Annex K for RBF's The World Game / Trade Federations / USPTO 13/573,002 Heart Beacon Cycle Time - Space Time Chain meters, metrics, standards. Adaptive Procedural template framework structured data derived from DoD / NATO's system of systems engineering tech framework
Solution Manual for First Course in Abstract Algebra A, 8th Edition by John B...rightmanforbloodline
Solution Manual for First Course in Abstract Algebra A, 8th Edition by John B. Fraleigh, Verified Chapters 1 - 56,.pdf
Solution Manual for First Course in Abstract Algebra A, 8th Edition by John B. Fraleigh, Verified Chapters 1 - 56,.pdf
Towards an Analysis-Ready, Cloud-Optimised service for FAIR fusion dataSamuel Jackson
We present our work to improve data accessibility and performance for data-intensive tasks within the fusion research community. Our primary goal is to develop services that facilitate efficient access for data-intensive applications while ensuring compliance with FAIR principles [1], as well as adoption of interoperable tools, methods and standards.
The major outcome of our work is the successful creation and deployment of a data service for the MAST (Mega Ampere Spherical Tokamak) experiment [2], leading to substantial enhancements in data discoverability, accessibility, and overall data retrieval performance, particularly in scenarios involving large-scale data access. Our work follows the principles of Analysis-Ready, Cloud Optimised (ARCO) data [3] by using cloud optimised data formats for fusion data.
Our system consists of a query-able metadata catalogue, complemented with an object storage system for publicly serving data from the MAST experiment. We will show how our solution integrates with the Pandata stack [4] to enable data analysis and processing at scales that would have previously been intractable, paving the way for data-intensive workflows running routinely with minimal pre-processing on the part of the researcher. By using a cloud-optimised file format such as zarr [5] we can enable interactive data analysis and visualisation while avoiding large data transfers. Our solution integrates with common python data analysis libraries for large, complex scientific data such as xarray [6] for complex data structures and dask [7] for parallel computation and lazily working with larger that memory datasets.
The incorporation of these technologies is vital for advancing simulation, design, and enabling emerging technologies like machine learning and foundation models, all of which rely on efficient access to extensive repositories of high-quality data. Relying on the FAIR guiding principles for data stewardship not only enhances data findability, accessibility, and reusability, but also fosters international cooperation on the interoperability of data and tools, driving fusion research into new realms and ensuring its relevance in an era characterised by advanced technologies in data science.
[1] Wilkinson, M., Dumontier, M., Aalbersberg, I. et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 3, 160018 (2016) https://doi.org/10.1038/sdata.2016.18
[2] M Cox, The Mega Amp Spherical Tokamak, Fusion Engineering and Design, Volume 46, Issues 2–4, 1999, Pages 397-404, ISSN 0920-3796, https://doi.org/10.1016/S0920-3796(99)00031-9
[3] Stern, Charles, et al. "Pangeo forge: crowdsourcing analysis-ready, cloud optimized data production." Frontiers in Climate 3 (2022): 782909.
[4] Bednar, James A., and Martin Durant. "The Pandata Scalable Open-Source Analysis Stack." (2023).
[5] Alistair Miles (2024) ‘zarr-developers/zarr-python: v2.17.1’. Zenodo. doi: 10.5281/zenodo.10790679
[6] Hoyer, S. & Hamman, J., (20
Combined supervised and unsupervised neural networks for pulse shape discrimi...Samuel Jackson
Our methodology for pulse shape discrimination is split into two steps. Firstly, we learn a model to discriminate between pulses using "clean" low-rate examples by removing pile-up & saturated events. In addition to traditional tail sum discrimination, we investigate three different choices for discrimination between γ-pulses, fast, thermal neutrons. We consider clustering the pulses directly using Gaussian Mixture Modelling (GMM), using variational autoencoders to learn a representation of the pulses and then clustering the learned representation (VAE+GMM) and using density ratio estimation to discriminate between a mixed (γ + neutron) and pure (γ only) sources using a multi-layer perceptron (MLP) as a supervised learning problem.
Secondly, we aim to classify and recover pile-up events in the < 150 ns regime by training a single unified multi-label MLP. To frame the problem as a multi-label supervised learning method, we first simulate pile-up events with known components. Then, using the simulated data and combining it with single event data, we train a final multi-label MLP to output a binary code indicating both how many and which type of events are present within an event window.
Harnessing Wild and Untamed (Publicly Available) Data for the Cost efficient ...weiwchu
We recently discovered that models trained with large-scale speech datasets sourced from the web could achieve superior accuracy and potentially lower cost than traditionally human-labeled or simulated speech datasets. We developed a customizable AI-driven data labeling system. It infers word-level transcriptions with confidence scores, enabling supervised ASR training. It also robustly generates phone-level timestamps even in the presence of transcription or recognition errors, facilitating the training of TTS models. Moreover, It automatically assigns labels such as scenario, accent, language, and topic tags to the data, enabling the selection of task-specific data for training a model tailored to that particular task. We assessed the effectiveness of the datasets by fine-tuning open-source large speech models such as Whisper and SeamlessM4T and analyzing the resulting metrics. In addition to openly-available data, our data handling system can also be tailored to provide reliable labels for proprietary data from certain vertical domains. This customization enables supervised training of domain-specific models without the need for human labelers, eliminating data breach risks and significantly reducing data labeling cost.
1. USING DAX & TIME-BASED
ANALYSIS IN DATA WAREHOUSE
Understand Sales Performance Across
Various Products Over the Last Three Years
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com
2. WHAT ARE...?
Data Analysis Expressions (DAX)
A formula language used in Power Pivot in Excel.
It allows users to create custom calculations for
calculated columns and measures
Time-based Analysis
Used to make informed business
decisions by identifying trends,
patterns, and anomalies in data
Measures
Are calculated fields that are used to perform
aggregations and calculations on data. They are
used in PivotTables and PivotCharts to
summarise and analyse data
3. PROJECT’S CONTEXT, OBJECTIVE & STRATEGIES
Context
An art dealer friend, who has multiple
sales representatives who sell various
products across four different states in the
US, likes to use the data he has collected to
make insight-driven decisions
Strategies
• Combine sales data from multiple CSV files and add
product and sales rep data to the data model
• Create a date table with a column for the last day of
each month for the purpose of conducting time-based
analysis
• Establish relationships between sales rep, product,
sales, and date tables
• Calculate total net sales excluding discounts
• Create a pivot table showing total sales and YOY%
change for regions, excluding subtotals and individual
regions
Objective
Dealer wants to understand the
sales performance across
various products over the last
three years
4. CREATE & ADD DATA MODEL TO POWER PIVOT
▪ .csv files for ‘Sales 21’, ‘Sales 22’ and ‘Sales 23’ were imported into Power Query
▪ These were combined into a single file: ‘Merged Sales 2021-2023’
▪ Together with the ‘Product’ and ‘Sales Rep’ Tables, from the ‘Product and SalesRep.xlsx’ file, these were added to the data model in Power Pivot
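The combine step above can be sketched outside Power Query as well. Below is a minimal pandas equivalent, assuming (hypothetically) that the three yearly CSV files share identical column layouts; the file names and columns are illustrative, not taken from the actual project files.

```python
# Sketch: concatenate yearly sales CSVs into one merged table,
# mirroring the Power Query append step described above.
import pandas as pd

def merge_sales(paths):
    """Read each yearly sales CSV and stack them into one frame."""
    frames = [pd.read_csv(p) for p in paths]
    # ignore_index renumbers rows so the merged table has a clean index
    return pd.concat(frames, ignore_index=True)
```

In Power Query the same result comes from Append Queries; the sketch simply shows the data shape the rest of the model expects.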
5. MORE DATA MODELS & USING DAX FORMULAE*
▪ To prepare the data model for time-based analysis, a new table, with dates ranging from 01 January 2021 to 31 December 2023, was created in Power Pivot
▪ The new date table was renamed ‘Date’
▪ An additional column, containing the last day of the month for any given date, was created in this table
▪ DAX formula used to create this column:
EOM:=FORMAT(EOMONTH('Date'[Date], 0), "dd/mm/yyyy")
▪ This new column was renamed ‘EOM’
* This is one of many DAX formulae created in this project. The rest can be found in the report, which the Art Dealer has requested not to be released
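The date-table logic above (daily dates plus an end-of-month column) can be illustrated with a small pandas sketch. This is not the project's DAX; it is an assumed-equivalent of EOMONTH(..., 0) formatted as "dd/mm/yyyy", with hypothetical column names.

```python
# Sketch: build a 'Date' table with an 'EOM' (end-of-month) column,
# mirroring EOMONTH('Date'[Date], 0) formatted as dd/mm/yyyy.
import pandas as pd

def build_date_table(start="2021-01-01", end="2023-12-31"):
    table = pd.DataFrame({"Date": pd.date_range(start, end, freq="D")})
    # MonthEnd(0) rolls each date forward to its month end
    # (dates already on a month end stay unchanged, like EOMONTH(d, 0))
    eom = table["Date"] + pd.offsets.MonthEnd(0)
    table["EOM"] = eom.dt.strftime("%d/%m/%Y")
    return table
```

The EOM column then serves the same grouping role in time-based analysis as the DAX column in the slide.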
6. FINALISING DATA MODEL WITH RELATIONSHIPS
▪ The relationships between the ‘Sales Rep’, ‘Product’, ‘Sales’ and ‘Date’ tables were created after examining them for their common keys across tables
▪ ‘Date’, ‘ProductID’ and ‘SalesRepID’ are the keys used to connect the tables to create the final data model
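The key relationships above behave like left joins from the sales fact table onto the lookup tables. A rough pandas equivalent, with hypothetical column names matching the keys named in the slide:

```python
# Sketch: express the data-model relationships as left joins,
# using the same keys named in the slide (Date, ProductID, SalesRepID).
import pandas as pd

def join_model(sales, product, sales_rep, date_table):
    """Left-join each lookup table onto the sales table via its key."""
    out = sales.merge(product, on="ProductID", how="left")
    out = out.merge(sales_rep, on="SalesRepID", how="left")
    out = out.merge(date_table, on="Date", how="left")
    return out
```

In Power Pivot the joins stay virtual (relationships resolved at query time); the sketch just shows which key connects which pair of tables.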
7. GENERATING MEASURES* IN POWER PIVOT
Measure to get the ‘Total Net Sales’ was created from the ‘Merged Sales 2021-2023’ Table in Power Pivot:
Steps
▪ Inserted a column between the ‘SalesRepID’ and ‘Units’ columns, and used the DAX “RELATED()” function to bring the data from the ‘Price’ column, in the ‘Product’ Table, into the ‘Merged Sales 2021-2023’ Table
▪ Created two additional columns in the ‘Merged Sales 2021-2023’ Table to store the calculated ‘Total Discount’ and ‘Sales’ data
▪ Created the measures for “Net Sales = Total Sales – Discounts”, the “Sum of Total Discount” and the “Sum of Sales”, and formatted these in Singapore Dollars
▪ Finally, ‘Net Sales’ (or ‘Sum of Net Sales’) was created as the final measure in the data model in Power Pivot, and formatted in Singapore Dollars
* This is one of many Measures in this project. The rest can be found in the report, which the Art Dealer has requested not to be released
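The Net Sales steps above can be sketched in pandas. Since the actual DAX is in the withheld report, this is only an assumed reading of the slide: Units and Price come from the merged table (Price pulled across like RELATED()), and Discount is assumed to be a per-row fractional discount rate; all three column names are hypothetical.

```python
# Sketch: compute 'Sales', 'Total Discount' and the
# 'Net Sales = Total Sales - Discounts' measure described above.
import pandas as pd

def add_net_sales(merged):
    merged = merged.copy()
    merged["Sales"] = merged["Units"] * merged["Price"]
    # Assumption: Discount is a fractional rate applied to the sale value
    merged["Total Discount"] = merged["Sales"] * merged["Discount"]
    # Measure: Net Sales = Total Sales - Discounts
    net_sales = merged["Sales"].sum() - merged["Total Discount"].sum()
    return merged, net_sales
```

Currency formatting (Singapore Dollars in the deck) would be applied at display time, not in the calculation itself.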
8. CREATE PIVOT TABLE FOR TIME-BASED ANALYSIS
Created a PivotTable with total sales and YOY% change in sales across regions (AL, CO, CA, FL) to conduct time-based analysis
* These are just five of many Measures created in this project for analysis
* The rest can be found in the report, which the Art Dealer has requested not to be released
Two new Power Pivot Measures were created in the ‘Merged Sales 2021-2023’ Table in Power Pivot:
▪ ‘Year Sales Last Year’; and
▪ ‘YOY % Change in Sales’
The PivotTable, shown in the next slide, was configured through the PivotTable Fields to produce the outcome of Task 5
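The two measures above ('Year Sales Last Year' and 'YOY % Change in Sales') feed a region-by-year pivot. A hedged pandas sketch of the same shape, with hypothetical column names Region, Year and NetSales standing in for the model's fields:

```python
# Sketch: total net sales per region per year, plus the
# 'YOY % Change in Sales' measure as a year-over-year percentage.
import pandas as pd

def yoy_pivot(df):
    """Pivot net sales by Year (rows) and Region (columns), then YOY %."""
    totals = df.pivot_table(index="Year", columns="Region",
                            values="NetSales", aggfunc="sum")
    # pct_change compares each year against the prior year per region
    yoy = totals.pct_change() * 100
    return totals, yoy
```

In Power Pivot the prior-year figure would typically come from a time-intelligence function over the ‘Date’ table; the sketch only reproduces the arithmetic of the YOY% measure.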
9. FINDINGS, CONCLUSIONS & RECOMMENDATIONS*
▪ In absolute sales-dollar terms, FL and WA contributed 85% of the Total Sum of Net Sales in the 3-year period of 2021 to 2023
▪ As at 31 December 2023, the AL region performed the best in Total YOY % Change in Sales over this 3-year period, against the other three regions, but it suffered the most significant drop of 43% in sales performance in 2022, against 2021, before recovering drastically in the following year
▪ This pattern seems to have repeated in the CA region, with a drop of 12% in 2023 as compared to 2022
▪ Overall, the total YOY % change in sales over this 3-year period for all four regions was 65%, which could have been higher if the year-on-year sales performance for the AL and CA regions had been more consistent with the previous year(s)
▪ This suggests that the data needs to be further explored and mined for the possible key contributors to the inconsistent performance of the AL and CA regions, and to see, through the data, whether there are signs of these recurring in these regions or in those regions that consistently contribute to the overall sales performance of the company
* These are some of many findings, conclusions and recommendations in this project.
* The rest can be found in the report, which the Art Dealer has requested not to be released
10. USING DAX & TIME-BASED
ANALYSIS IN DATA WAREHOUSE
Understand Sales Performance Across
Various Products Over the Last Three Years
Author: Anthony Mok
Date: 18 Nov 2023
Email: xxiaohao@yahoo.com