
Can Peripheral Blood-Derived Gene Expressions Characterize Individuals at Ultra-high Risk for Psychosis?

Open Access | Dec 2017

Figures & Tables

Table 1. 

Study design and factor description of study sample

Status            N    Subgroups                 Mean age (years)   Gender, N (%)        Ethnicity, N (%)
UHR subjects      56   APS: 43 (76.8%)           22.1               Male: 21 (75.0%)     Chinese: 21 (75.0%)
                       BLIPS: 3 (5.4%)                              Female: 7 (25.0%)    Malay: 7 (25.0%)
                       Vulnerable: 15 (26.8%)
Healthy controls  28   None                      22.5               Male: 21 (75.0%)     Chinese: 21 (75.0%)
                                                                    Female: 7 (25.0%)    Malay: 7 (25.0%)
Figure 1. 

Preliminary variance-based analysis. A) PCA scatterplots demonstrating that data normalization can improve the signal-to-noise ratio, enhancing discrimination between sample classes. No feature selection is performed here. We compare four processing options: None, Quantile, GFS, and SVA. GFS and SVA appear to boost the class-discrimination signal the most. B) Distribution of variance at each PC level, shown as a series of bar plots in which the first bar corresponds to PC1, the second to PC2, and so on. Note that in “None,” without any form of normalization, most of the variance is concentrated in PC1. A high concentration of variance in the first PC is usually indicative of a large amount of technical artifact. All normalization methods appear to spread the variance more evenly across the subsequent PCs, but note also that the scale of the remaining variance after GFS and SVA processing is much smaller than for log-converted data.
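For readers who want to reproduce the kind of per-PC variance breakdown shown in panel B, the sketch below runs a standard PCA on a log-converted and on a quantile-normalized matrix and prints the explained-variance ratios. It is a minimal illustration only: the expression matrix `expr` is simulated, the samples-by-genes orientation is an assumption, and the GFS and SVA steps used in the paper are not reproduced here.

```python
# Sketch: compare per-PC variance before and after quantile normalization.
# `expr` is a hypothetical samples-x-genes intensity matrix; GFS and SVA
# (as used in the paper) are not reproduced here.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expr = rng.lognormal(mean=8, sigma=1, size=(84, 2000))   # placeholder data

def log_transform(x):
    return np.log2(x + 1.0)

def quantile_normalize(x):
    # Force every sample (row) to share the same value distribution:
    # replace each row's values by the mean sorted profile at matching ranks.
    ranks = np.argsort(np.argsort(x, axis=1), axis=1)
    mean_profile = np.sort(x, axis=1).mean(axis=0)
    return mean_profile[ranks]

for label, mat in [("None (log only)", log_transform(expr)),
                   ("Quantile", quantile_normalize(log_transform(expr)))]:
    pca = PCA(n_components=10).fit(mat)
    print(label, np.round(pca.explained_variance_ratio_, 3))
```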

Table 2. 

Significant association between data factors (class, gender, and ethnicity) and each principal component (1–10)

              Class                            Gender                           Ethnicity
PC    None   Quantile  GFS    SVA      None   Quantile  GFS    SVA      None   Quantile  GFS    SVA
 1    0.00*  0.38      0.00*  0.38     0.36   0.02*     0.56   0.34     0.35   0.01*     0.91   0.97
 2    0.16   0.14      0.00*  0.00*    0.01*  0.81      0.85   0.20     0.12   0.69      0.69   0.85
 3    0.23   0.05      0.10   0.22     0.25   0.09      0.01*  0.85     0.05*  0.95      0.01*  0.37
 4    0.24   0.00*     0.21   0.73     0.24   0.19      0.36   0.20     0.76   0.86      0.66   0.82
 5    0.00*  0.00*     0.13   0.21     0.09   0.78      0.16   0.03*    0.92   0.91      0.95   0.23
 6    0.70   0.02*     0.14   0.03*    0.25   0.72      1.00   0.00*    0.95   0.30      0.37   0.64
 7    0.03*  0.27      0.33   0.06     0.87   0.59      0.07   0.30     0.07   0.79      0.11   0.20
 8    0.12   0.02*     0.98   0.14     0.34   0.19      0.94   0.36     0.33   0.62      0.31   0.01*
 9    0.30   0.22      0.08   0.87     0.23   0.00*     0.22   0.01*    0.77   0.88      0.90   0.13
10    0.59   0.23      0.86   0.79     0.01*  0.42      0.74   0.31     0.59   0.65      0.05*  0.50

Note. An asterisk (*) indicates significance below 0.05.
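The associations in Table 2 can be screened, for example, with a one-way ANOVA of each PC's score vector against each categorical factor; the exact test used in the paper may differ. In the sketch below, the PCA scores, the factor labels, and the group sizes are placeholders.

```python
# Sketch: screen each PC score vector against a categorical factor with a
# one-way ANOVA (the paper's exact test may differ). `scores` would be the
# samples-x-PCs matrix from the PCA above; `factors` are hypothetical labels.
import numpy as np
from scipy.stats import f_oneway
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
data = rng.normal(size=(84, 2000))                 # placeholder normalized matrix
scores = PCA(n_components=10).fit_transform(data)  # samples x 10 PC scores

factors = {
    "class":     np.array(["UHR"] * 56 + ["control"] * 28),
    "gender":    rng.choice(["male", "female"], size=84),
    "ethnicity": rng.choice(["Chinese", "Malay"], size=84),
}

for name, labels in factors.items():
    for pc in range(scores.shape[1]):
        groups = [scores[labels == g, pc] for g in np.unique(labels)]
        p = f_oneway(*groups).pvalue
        flag = " *" if p < 0.05 else ""
        print(f"{name:>9}  PC{pc + 1:<2}  p = {p:.2f}{flag}")
```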

Figure 2. 

How normalization affects statistical feature selection and prediction modeling. A) Histograms showing the p value distributions (x axis) after feature selection (based on the F test) and correction for multiple testing via the Benjamini–Hochberg (BH) procedure. Data are processed in four ways (None, Quantile, GFS, and SVA). The importance of normalization is obvious here: with simple log-conversion alone, most gene features are reported as significant, and many of these are expected to be false positives. The p value distributions for Quantile and SVA are closer to expectation, whereas GFS is highly conservative here. B) Significant-feature overlap at a cutoff of 0.01. None, Quantile, GFS, and SVA report a total of 5,877, 256, 5, and 556 significant genes, respectively. Among these, only one gene (MAGEB16) is common to all four methods. The GFS-selected genes overlap more strongly with those from Quantile and SVA. C) Distributions of p values (based on SVA’s set of p values following the F test and BH correction) showing that the genes common to Quantile, GFS, and SVA are more significant than those that are not shared among them. We disregarded the 5,482 significant genes in None, as they are quite likely to be false positives anyway. D) Cross-validation tests demonstrating that GFS, followed by SVA, tends to pick more relevant genes and build better models using the shrunken-centroid classifier. Data are split evenly into training and validation sets, and all features are used to train the classifier. Cross-validation accuracy is the proportion of correctly predicted class labels (control and subject) in the validation dataset, where 0 means no class labels were correctly predicted and 1 means all were. This is repeated 1,000 times to generate the violin plots shown.
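A minimal sketch of the panel A and D steps, assuming a samples-by-genes matrix X and binary class labels y: a per-gene F test, BH adjustment, and repeated 50/50 splits using scikit-learn's NearestCentroid as a stand-in for the shrunken-centroid (PAM-style) classifier. The simulated data, the shrinkage threshold, and the way the 0.01 cutoff is applied here are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of the Figure 2 workflow: F test per gene, BH correction, then repeated
# half-split validation with a shrunken-centroid-style classifier.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
X = rng.normal(size=(84, 2000))                    # placeholder normalized matrix
y = np.array([1] * 56 + [0] * 28)                  # 1 = UHR subject, 0 = control

# Per-gene F test, then Benjamini-Hochberg adjustment of the p values.
_, pvals = f_classif(X, y)
adjusted = multipletests(pvals, method="fdr_bh")[1]
significant = adjusted < 0.01
print("significant genes:", significant.sum())

# Repeated 50/50 split: train on half, score accuracy on the held-out half.
accuracies = []
for seed in range(1000):
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    clf = NearestCentroid(shrink_threshold=0.5)    # shrunken-centroid classifier
    clf.fit(X_tr, y_tr)
    accuracies.append(clf.score(X_va, y_va))
print("mean accuracy:", np.mean(accuracies))
```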

Figure 3. 

Gene fuzzy scoring-based gene signature is functionally relevant. A) Unsupervised hierarchical clustering (Euclidean distance and average linkage) on the set of significant GFS features (those in bold are the original five at a cutoff of 0.01; the additional seven are included based on a cutoff of 0.05), yielding good separation between our sample classes. The cutoff was loosened to 0.05 to include more genes and boost sensitivity in the functional analysis. B) Functional network (derived from GeneMANIA) among the significant GFS genes, pointing toward neurological functions and a high level of interconnectivity with other undetected genes. Despite its strong presence as a significant feature, MAGEB16 does not appear to be functionally associated with the other genes. C) Half of the samples are used for training following statistical feature selection (yielding the signature), and the remaining half for validation. The cross-validation prediction accuracy is the proportion of correctly predicted validation class labels. In each round, a random signature of the same size as the inferred signature is also generated, and its cross-validation performance is evaluated in the same way. Although classifier accuracy fell for GFS (compare Figure 2D), it strongly outperforms random signatures, suggesting that signatures inferred from GFS are more likely to be meaningful or relevant. This is not the case for the other normalization methods (compare Goh et al., 2017, Supplementary Figure 1).
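The clustering in panel A and the random-signature benchmark in panel C can be sketched as below. The 12-gene signature is drawn at random here purely as a stand-in for the GFS-selected genes, and the classifier and split scheme mirror the earlier sketch rather than the paper's exact implementation.

```python
# Sketch of the Figure 3 checks: (A) average-linkage hierarchical clustering of
# samples on a small gene signature, and (C) comparing a selected signature
# against size-matched random signatures. All inputs are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(84, 2000))
y = np.array([1] * 56 + [0] * 28)
signature = rng.choice(X.shape[1], size=12, replace=False)  # stand-in for 12 GFS genes

# (A) Hierarchical clustering with Euclidean distance and average linkage.
tree = linkage(X[:, signature], method="average", metric="euclidean")
clusters = fcluster(tree, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(clusters))

# (C) Half-split validation: inferred signature vs a size-matched random one.
def half_split_accuracy(genes, seed):
    X_tr, X_va, y_tr, y_va = train_test_split(
        X[:, genes], y, test_size=0.5, stratify=y, random_state=seed)
    return NearestCentroid().fit(X_tr, y_tr).score(X_va, y_va)

sig_acc, rand_acc = [], []
for seed in range(200):
    sig_acc.append(half_split_accuracy(signature, seed))
    random_genes = rng.choice(X.shape[1], size=len(signature), replace=False)
    rand_acc.append(half_split_accuracy(random_genes, seed))
print("signature:", np.mean(sig_acc), " random:", np.mean(rand_acc))
```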

Language: English
Submitted on: Mar 20, 2017
Accepted on: Jun 7, 2017
Published on: Dec 1, 2017
Published by: MIT Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2017 Wilson Wen Bin Goh, Judy Chia-Ghee Sng, Jie Yin Yee, Yuen Mei See, Tih-Shih Lee, Limsoon Wong, Jimmy Lee, published by MIT Press
This work is licensed under the Creative Commons Attribution 4.0 License.