# Quadratic Discriminant Analysis (QDA)

QDA is closely related to linear discriminant analysis (LDA): both assume that the measurements from each class are normally distributed. Unlike LDA, however, QDA does not assume that the covariance matrices of the classes are identical. Estimating the parameters required for quadratic discrimination therefore takes more computation and more data than linear discrimination. If the group covariance matrices differ little, linear discrimination will perform as well as quadratic discrimination. Quadratic discrimination is the general form of Bayesian discrimination.
Discriminant analysis is used to determine which variables discriminate between two or more naturally occurring groups. For example, an educational researcher may want to investigate which variables discriminate between high-school graduates who decide (1) to go to college and (2) not to go to college. For that purpose the researcher could collect data on numerous variables prior to students' graduation. After graduation, most students naturally fall into one of the two categories, and discriminant analysis could then be used to determine which variable(s) are the best predictors of students' subsequent educational choice. Computationally, discriminant function analysis is very similar to analysis of variance (ANOVA). In the same graduation scenario, we could have measured students' stated intention to continue on to college one year prior to graduation. If the means for the two groups (those who actually went to college and those who did not) differ, then the stated intention allows us to discriminate between those who are and are not college bound (information that career counselors could use to provide appropriate guidance). The basic idea underlying discriminant analysis is to determine whether groups differ with regard to the mean of a variable, and then to use that variable to predict group membership (e.g. of new cases).
Discriminant analysis may be used for two objectives: either we want to assess the adequacy of a classification, given the group memberships of the objects under study, or we wish to assign objects to one of a number of known groups. Discriminant analysis may thus have a descriptive or a predictive objective. In both cases, some group assignments must be known before carrying out the analysis. Such group assignments, or labels, may be arrived at in any way, so discriminant analysis can be employed as a useful complement to cluster analysis (in order to judge the results of the latter) or principal components analysis.
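As a minimal sketch of the LDA/QDA contrast described above (this is not the churn model we build below; it uses the built-in iris data and the MASS package, which ships with R):

```r
# Minimal LDA vs QDA sketch on the built-in iris data.
# LDA pools a single covariance matrix across classes;
# QDA estimates a separate covariance matrix per class.
library(MASS)

lda_fit <- lda(Species ~ ., data = iris)  # shared covariance
qda_fit <- qda(Species ~ ., data = iris)  # class-specific covariances

# Training accuracy of each model
mean(predict(lda_fit, iris)$class == iris$Species)
mean(predict(qda_fit, iris)$class == iris$Species)
```

When the true class covariances really do differ, the extra parameters QDA estimates buy a more flexible (quadratic) decision boundary; when they are similar, LDA's pooled estimate is the safer choice.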

Here we are going to implement QDA using the Telecom Churn dataset.

In [2]:
library(DBI)
library(corrgram)
library(caret) # model training interface; the qda() function itself comes from MASS
library(gridExtra)
library(ggpubr)


## 1. Setting up code parallelization

Nowadays it is good practice to parallelize your code. The common motivation behind parallel computing is that some computation is taking too long. For many people that means any computation taking more than 3 minutes, since parallelization is remarkably simple and most time-consuming tasks are embarrassingly parallel. Here are a few common tasks that fit the description:

• Bootstrapping
• Cross-validation
• Multivariate Imputation by Chained Equations (MICE)
• Fitting multiple regression models
You can find out more about parallelizing your computations in R online.

### For Windows users

In [ ]:
# process in parallel on Windows
library(doParallel)
cl <- makeCluster(detectCores(), type='PSOCK')
registerDoParallel(cl)


### For Mac OSX and Unix like systems users

In [5]:
# process in parallel on Mac OSX and UNIX like systems
library(doMC)
registerDoMC(cores = 4)
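Once a backend is registered, packages such as caret pick it up automatically. The same idea can be sketched directly with base R's `parallel` package (the worker count of 2 and the toy bootstrap on `mtcars` are purely illustrative):

```r
# Sketch of what the workers buy you: a toy bootstrap farmed out
# across a 2-worker PSOCK cluster using base R's 'parallel' package.
library(parallel)

cl <- makeCluster(2, type = "PSOCK")
boot_means <- parLapply(cl, 1:8, function(i, x) {
  set.seed(i)                       # reproducible resample per task
  mean(sample(x, replace = TRUE))   # one bootstrap mean of mpg
}, x = mtcars$mpg)
stopCluster(cl)

length(boot_means)   # 8 bootstrap replicates
```

Each of the 8 resamples is independent of the others, which is exactly what makes bootstrapping (and cross-validation) embarrassingly parallel.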


## 2. Importing Data

In [8]:
#Set working directory where CSV is located

#getwd()
#setwd("...YOUR WORKING DIRECTORY WITH A DATASET...")
#getwd()

In [6]:
# Load the dataset (file name is hypothetical; adjust to your CSV):
dataSet <- read.csv("Telecom_Churn.csv")
colnames(dataSet) # Check the data frame's column names

1. 'Account_Length'
2. 'Vmail_Message'
3. 'Day_Mins'
4. 'Eve_Mins'
5. 'Night_Mins'
6. 'Intl_Mins'
7. 'CustServ_Calls'
8. 'Churn'
9. 'Intl_Plan'
10. 'Vmail_Plan'
11. 'Day_Calls'
12. 'Day_Charge'
13. 'Eve_Calls'
14. 'Eve_Charge'
15. 'Night_Calls'
16. 'Night_Charge'
17. 'Intl_Calls'
18. 'Intl_Charge'
19. 'State'
20. 'Area_Code'
21. 'Phone'

## 3. Exploring the dataset

In [7]:
# Print the top 10 rows of the dataSet
head(dataSet, 10)

(output: a data.frame, 10 × 21; the first 10 rows of the dataset, with Churn, Intl_Plan, Vmail_Plan, State and Phone stored as factors and the remaining columns as integers or doubles)
In [8]:
# Print last 10 rows in the dataSet
tail(dataSet, 10)

(output: a data.frame, 10 × 21; the last 10 rows of the dataset, rows 3324 to 3333)
In [9]:
# Dimension of the dataset
dim(dataSet)

1. 3333
2. 21
In [10]:
# Check Data types of each column
table(unlist(lapply(dataSet, class)))

 factor integer numeric
5       8       8 
In [11]:
# Check the data type of each individual column
data.class(dataSet$Account_Length)
data.class(dataSet$Vmail_Message)
data.class(dataSet$Day_Mins)
data.class(dataSet$Eve_Mins)
data.class(dataSet$Night_Mins)
data.class(dataSet$Intl_Mins)
data.class(dataSet$CustServ_Calls)
data.class(dataSet$Intl_Plan)
data.class(dataSet$Vmail_Plan)
data.class(dataSet$Day_Calls)
data.class(dataSet$Day_Charge)
data.class(dataSet$Eve_Calls)
data.class(dataSet$Eve_Charge)
data.class(dataSet$Night_Calls)
data.class(dataSet$Night_Charge)
data.class(dataSet$Intl_Calls)
data.class(dataSet$Intl_Charge)
data.class(dataSet$State)
data.class(dataSet$Phone)
data.class(dataSet$Churn)

'numeric'
'numeric'
'numeric'
'numeric'
'numeric'
'numeric'
'numeric'
'factor'
'factor'
'numeric'
'numeric'
'numeric'
'numeric'
'numeric'
'numeric'
'numeric'
'numeric'
'factor'
'factor'
'factor'

#### Converting variables Intl_Plan, Vmail_Plan, State to numeric data type.

In [12]:
dataSet$Intl_Plan <- as.numeric(dataSet$Intl_Plan)
dataSet$Vmail_Plan <- as.numeric(dataSet$Vmail_Plan)
dataSet$State <- as.numeric(dataSet$State)
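One caveat worth keeping in mind: `as.numeric()` on a factor returns the internal level codes (1, 2, ..., in level order), not the labels. That is why Intl_Plan and Vmail_Plan range over 1 to 2 in the summary statistics later on. A tiny illustration:

```r
# as.numeric() on a factor yields the 1-based level codes, not the labels
f <- factor(c("no", "yes", "no"))
levels(f)        # "no"  "yes"  (alphabetical by default)
as.numeric(f)    # 1 2 1
```

For factors whose labels are themselves numbers, `as.numeric(as.character(f))` would be needed instead; for yes/no flags like these, the level codes are fine.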

In [13]:
# Check Data types of each column
table(unlist(lapply(dataSet, class)))

 factor integer numeric
2       8      11 

## 4. Exploring or Summarising dataset with descriptive statistics

In [14]:
# Find out if there is missing value in rows
rowSums(is.na(dataSet))

(output: one value per row, all 0; no row contains a missing value. The display was truncated at row 400 of 3333.)
In [15]:
# Find out if there is missing value in columns
colSums(is.na(dataSet))

(output: 0 missing values for each of the 21 columns, Account_Length through Phone)
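Since `is.na()` returns a logical matrix, a single `sum()` gives the total count of missing cells in one call, a quick sanity check before reaching for heavier tools. A sketch on a toy data frame (the churn data has no NAs to show):

```r
# One-number missingness check on a toy frame with two deliberate NAs
df <- data.frame(a = c(1, NA, 3), b = c("x", "y", NA))
sum(is.na(df))       # 2: total missing cells
colSums(is.na(df))   # a: 1, b: 1
```

Applied to our data, `sum(is.na(dataSet))` simply returns 0.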

### Missing value checking using different packages (mice and VIM)

In [16]:
#Checking missing value with the mice package
library(mice)
md.pattern(dataSet)

Attaching package: ‘mice’

The following objects are masked from ‘package:base’:

cbind, rbind


 /\     /\
{  `---'  }
{  O   O  }
==>  V <==  No need for mice. This data set is completely observed.
 \  \|/  /
  `-----'


(output: a 2 × 22 pattern matrix; one row of 1s for all 3333 observations, meaning every variable is observed, and a final row of 0s, meaning no column has missing entries)
In [17]:
#Checking missing value with the VIM package
library(VIM)
mice_plot <- aggr(dataSet, col = c('navyblue', 'yellow'),
                  numbers = TRUE, sortVars = TRUE,
                  labels = names(dataSet[1:21]), cex.axis = .9,
                  gap = 3, ylab = c("Missing data", "Pattern"))

Loading required package: colorspace

Suggestions and bug-reports can be submitted at: https://github.com/statistikat/VIM/issues

Attaching package: ‘VIM’

The following object is masked from ‘package:datasets’:

sleep


 Variables sorted by number of missings:
Variable Count
Account_Length     0
Vmail_Message     0
Day_Mins     0
Eve_Mins     0
Night_Mins     0
Intl_Mins     0
CustServ_Calls     0
Churn     0
Intl_Plan     0
Vmail_Plan     0
Day_Calls     0
Day_Charge     0
Eve_Calls     0
Eve_Charge     0
Night_Calls     0
Night_Charge     0
Intl_Calls     0
Intl_Charge     0
State     0
Area_Code     0
Phone     0


Based on these checks, we can conclude that the dataset contains no missing values.

### Summary of dataset

In [18]:
# Select only the numeric columns (everything except Churn and Phone)
numericalCols <- colnames(dataSet[c(1:7, 9:20)])


Difference between the lapply and sapply functions (we will use both in the next cells):
• lapply applies a function to each element of a list in turn and returns a list.
• sapply does the same, but simplifies the result to a vector (or matrix) where possible, rather than returning a list.
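A two-line illustration of the difference:

```r
# Same computation, two return shapes
x <- list(a = 1:3, b = 4:6)
lapply(x, sum)   # a list:  $a 6, $b 15
sapply(x, sum)   # a named integer vector: a 6, b 15
```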

#### Finding statistics metrics with lapply function

In [19]:
#Sum
lapply(dataSet[numericalCols], FUN = sum)

(output: a named list of per-column sums; the values match the sapply sums in the next subsection)

In [20]:
#Mean
lapply(dataSet[numericalCols], FUN = mean)

(output: a named list of per-column means; values as in the sapply means below)
In [21]:
#median
lapply(dataSet[numericalCols], FUN = median)

(output: a named list of per-column medians; values as in the sapply medians below)

In [22]:
#Min
lapply(dataSet[numericalCols], FUN = min)

(output: a named list of per-column minimums; values as in the sapply minimums below)
In [23]:
#Max
lapply(dataSet[numericalCols], FUN = max)

(output: a named list of per-column maximums; values as in the sapply maximums below)

In [24]:
#Length
lapply(dataSet[numericalCols], FUN = length)

(output: a named list in which every element is 3333, the number of rows)

#### Finding statistics metrics with sapply function

In [25]:
# Sum
sapply(dataSet[numericalCols], FUN = sum)

Account_Length
336849
Vmail_Message
26994
Day_Mins
599190.4
Eve_Mins
669867.5
Night_Mins
669506.5
Intl_Mins
34120.9
CustServ_Calls
5209
Intl_Plan
3656
Vmail_Plan
4255
Day_Calls
334752
Day_Charge
101864.17
Eve_Calls
333681
Eve_Charge
56939.44
Night_Calls
333659
Night_Charge
30128.07
Intl_Calls
14930
Intl_Charge
9214.35
State
90189
Area_Code
1457129
In [26]:
# Mean
sapply(dataSet[numericalCols], FUN = mean)

Account_Length
101.064806480648
Vmail_Message
8.0990099009901
Day_Mins
179.775097509751
Eve_Mins
200.980348034803
Night_Mins
200.87203720372
Intl_Mins
10.2372937293729
CustServ_Calls
1.56285628562856
Intl_Plan
1.0969096909691
Vmail_Plan
1.27662766276628
Day_Calls
100.435643564356
Day_Charge
30.5623072307231
Eve_Calls
100.114311431143
Eve_Charge
17.0835403540354
Night_Calls
100.107710771077
Night_Charge
9.03932493249325
Intl_Calls
4.47944794479448
Intl_Charge
2.76458145814581
State
27.0594059405941
Area_Code
437.182418241824
In [27]:
# Median
sapply(dataSet[numericalCols], FUN = median)

Account_Length
101
Vmail_Message
0
Day_Mins
179.4
Eve_Mins
201.4
Night_Mins
201.2
Intl_Mins
10.3
CustServ_Calls
1
Intl_Plan
1
Vmail_Plan
1
Day_Calls
101
Day_Charge
30.5
Eve_Calls
100
Eve_Charge
17.12
Night_Calls
100
Night_Charge
9.05
Intl_Calls
4
Intl_Charge
2.78
State
27
Area_Code
415
In [28]:
# Min
sapply(dataSet[numericalCols], FUN = min)

Account_Length
1
Vmail_Message
0
Day_Mins
0
Eve_Mins
0
Night_Mins
23.2
Intl_Mins
0
CustServ_Calls
0
Intl_Plan
1
Vmail_Plan
1
Day_Calls
0
Day_Charge
0
Eve_Calls
0
Eve_Charge
0
Night_Calls
33
Night_Charge
1.04
Intl_Calls
0
Intl_Charge
0
State
1
Area_Code
408
In [29]:
# Max
sapply(dataSet[numericalCols], FUN = max)

Account_Length
243
Vmail_Message
51
Day_Mins
350.8
Eve_Mins
363.7
Night_Mins
395
Intl_Mins
20
CustServ_Calls
9
Intl_Plan
2
Vmail_Plan
2
Day_Calls
165
Day_Charge
59.64
Eve_Calls
170
Eve_Charge
30.91
Night_Calls
175
Night_Charge
17.77
Intl_Calls
20
Intl_Charge
5.4
State
51
Area_Code
510
In [30]:
# Length
sapply(dataSet[numericalCols], FUN = length)

Account_Length
3333
Vmail_Message
3333
Day_Mins
3333
Eve_Mins
3333
Night_Mins
3333
Intl_Mins
3333
CustServ_Calls
3333
Intl_Plan
3333
Vmail_Plan
3333
Day_Calls
3333
Day_Charge
3333
Eve_Calls
3333
Eve_Charge
3333
Night_Calls
3333
Night_Charge
3333
Intl_Calls
3333
Intl_Charge
3333
State
3333
Area_Code
3333

The next few cells show three different ways to aggregate the data.

In [31]:
# OPTION 1: (Using Aggregate FUNCTION - all variables together)
aggregate(dataSet[numericalCols], list(dataSet$Churn), summary)

(output: a 2 × 20 data frame giving, for each Churn group, the min, 1st quartile, median, mean, 3rd quartile and max of every numeric column)

In [32]:
# OPTION 2: (Using Aggregate FUNCTION - variables separately)
aggregate(dataSet$Intl_Mins, list(dataSet$Churn), summary)
aggregate(dataSet$Day_Mins, list(dataSet$Churn), summary)
aggregate(dataSet$Night_Mins, list(dataSet$Churn), summary)

(output: the same six-number summaries of Intl_Mins, Day_Mins and Night_Mins by Churn group)

In [33]:
# OPTION 3: (Using "by" FUNCTION instead of "Aggregate" FUNCTION)
by(dataSet$Intl_Mins, dataSet[8], FUN = summary)
by(dataSet$Day_Mins, dataSet[8], FUN = summary)
by(dataSet$Night_Mins, dataSet[8], FUN = summary)

Churn: no
Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.00    8.40   10.20   10.16   12.00   18.90
------------------------------------------------------------
Churn: yes
Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
2.0     8.8    10.6    10.7    12.8    20.0 
Churn: no
Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.0   142.8   177.2   175.2   210.3   315.6
------------------------------------------------------------
Churn: yes
Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.0   153.2   217.6   206.9   265.9   350.8 
Churn: no
Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
23.2   165.9   200.2   200.1   234.9   395.0
------------------------------------------------------------
Churn: yes
Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
47.4   171.2   204.8   205.2   239.8   354.9 

#### Find out correlation

In [34]:
# Correlations/covariances among numeric variables
library(Hmisc)
cor(dataSet[c(2,5,11,13,16,18)], use="complete.obs", method="kendall")
cov(dataSet[c(2,5,11,13,16,18)], use="complete.obs")

Loading required package: survival

Attaching package: ‘survival’

The following object is masked from ‘package:caret’:

cluster

Attaching package: ‘Hmisc’

The following objects are masked from ‘package:base’:

format.pval, units


A matrix: 6 × 6 of type dbl
              Vmail_Message   Night_Mins    Day_Calls    Eve_Calls Night_Charge  Intl_Charge
Vmail_Message   1.000000000  0.003718463 -0.009573189 -0.005382921  0.003710434 -0.001263503
Night_Mins      0.003718463  1.000000000  0.012550159  0.003291091  0.999625309 -0.007103399
Day_Calls      -0.009573189  0.012550159  1.000000000  0.009253492  0.012531632  0.010386309
Eve_Calls      -0.005382921  0.003291091  0.009253492  1.000000000  0.003310838 -0.000095361
Night_Charge    0.003710434  0.999625309  0.012531632  0.003310838  1.000000000 -0.007097366
Intl_Charge    -0.001263503 -0.007103399  0.010386309 -0.000095361 -0.007097366  1.000000000

A matrix: 6 × 6 of type dbl
              Vmail_Message   Night_Mins   Day_Calls    Eve_Calls Night_Charge Intl_Charge
Vmail_Message  187.37134656    5.3174453  -2.6229779  -1.59925653   0.23873433  0.02975334
Night_Mins       5.31744529 2557.7140018  23.2812431  -2.10859729 115.09955435 -0.57867377
Day_Calls       -2.62297790   23.2812431 402.7681409   2.58373944   1.04716693  0.32775442
Eve_Calls       -1.59925653   -2.1085973   2.5837394 396.91099860  -0.09322113  0.13025644
Night_Charge     0.23873433  115.0995543   1.0471669  -0.09322113   5.17959717 -0.02605168
Intl_Charge      0.02975334   -0.5786738   0.3277544   0.13025644  -0.02605168  0.56817315
In [35]:
# Correlations with significance levels
rcorr(as.matrix(dataSet[c(2,5,11,13,16,18)]), type="pearson")

              Vmail_Message Night_Mins Day_Calls Eve_Calls Night_Charge
Vmail_Message          1.00       0.01     -0.01     -0.01         0.01
Night_Mins             0.01       1.00      0.02      0.00         1.00
Day_Calls             -0.01       0.02      1.00      0.01         0.02
Eve_Calls             -0.01       0.00      0.01      1.00         0.00
Night_Charge           0.01       1.00      0.02      0.00         1.00
Intl_Charge            0.00      -0.02      0.02      0.01        -0.02
Intl_Charge
Vmail_Message        0.00
Night_Mins          -0.02
Day_Calls            0.02
Eve_Calls            0.01
Night_Charge        -0.02
Intl_Charge          1.00

n= 3333

P
Vmail_Message Night_Mins Day_Calls Eve_Calls Night_Charge
Vmail_Message               0.6576     0.5816    0.7350    0.6583
Night_Mins    0.6576                   0.1855    0.9039    0.0000
Day_Calls     0.5816        0.1855               0.7092    0.1857
Eve_Calls     0.7350        0.9039     0.7092              0.9056
Night_Charge  0.6583        0.0000     0.1857    0.9056
Intl_Charge   0.8678        0.3810     0.2111    0.6167    0.3808
Intl_Charge
Vmail_Message 0.8678
Night_Mins    0.3810
Day_Calls     0.2111
Eve_Calls     0.6167
Night_Charge  0.3808
Intl_Charge              
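For a single pair of variables, base R's `cor.test()` reports the same kind of Pearson correlation together with its p-value. A sketch on the built-in `mtcars` data (any two churn columns would work identically):

```r
# Pearson correlation with a significance test for one pair of variables
ct <- cor.test(mtcars$mpg, mtcars$wt, method = "pearson")
ct$estimate   # the correlation (strongly negative: heavier cars, lower mpg)
ct$p.value    # the corresponding significance level
```

rcorr() above is essentially this test applied to every pair of columns at once, which is why its P matrix leaves the diagonal blank.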

## 5. Visualising the dataset

In [36]:
# Pie chart from data
mytable <- table(dataSet$Churn)
lbls <- paste(names(mytable), "\n", mytable, sep="")
pie(mytable, labels = lbls, col = rainbow(length(lbls)),
    main = "Pie Chart of Classes\n (with sample sizes)")

In [37]:
# Barplots of the categorical outcome
par(mfrow=c(1,1))
barplot(table(dataSet$Churn), ylab = "Count",
        col=c("darkblue","red"))
barplot(prop.table(table(dataSet$Churn)), ylab = "Proportion",
        col=c("darkblue","red"))
barplot(table(dataSet$Churn), xlab = "Count", horiz = TRUE,
        col=c("darkblue","red"))
barplot(prop.table(table(dataSet$Churn)), xlab = "Proportion", horiz = TRUE,

In [38]:
# Scatterplot matrices from the gclus package
library(gclus)
dta <- dataSet[c(2,5,11,13,16,18)] # get data
dta.r <- abs(cor(dta)) # get correlations
dta.col <- dmat.color(dta.r) # get colors
# reorder variables so those with highest correlation are closest to the diagonal
dta.o <- order.single(dta.r)
cpairs(dta, dta.o, panel.colors = dta.col, gap = .5,
       main = "Variables Ordered and Colored by Correlation")

Loading required package: cluster