The studied communities were located within 95 km (severe neurological infectious disease) and 62 km (fatal respiratory infectious disease) of a surveillance hospital. In these communities, 76 of 426 severe neurological disease cases (18%, 95% CI 14%–22%) and 234 of 1,630 fatal respiratory disease cases (14%, 95% CI 13%–16%) attended a surveillance hospital. Adjusting for distance, the case detection probability was nearly twice as high among severe neurological disease cases as among fatal respiratory disease cases (risk ratio 1.8, 95% CI 1.4–2.3; p < 0.001). At 10 km distance, an estimated 26% (95% CI 18%–33%) of severe neurological disease cases and 18% (95% CI 16%–21%) of fatal respiratory disease cases were detected by the hospital-based surveillance. The detection probability decreased with distance from the surveillance hospital, and the decline was faster for fatal respiratory disease than for severe neurological disease. A 10 km increase in distance resulted in a 12% (95% CI 4%–19%; p = 0.003) relative reduction in case detection probability for severe neurological disease but a 36% (95% CI 29%–43%; p < 0.001) relative reduction for fatal respiratory disease (Fig 2C). Including more complex functional forms of distance in the log-binomial regression models did not improve model fit based on AIC (Table A and Figs. B and C in S1 Text).
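As a rough illustration of how such distance-adjusted risk ratios can be estimated, the sketch below fits a log-binomial model (binomial family, log link) to synthetic detection data; the variable names, the simulated detection probabilities, and the use of statsmodels are assumptions for illustration, not the study's code.

```python
# Minimal log-binomial sketch (synthetic data): with a log link, exp(coefficient)
# is a risk ratio, matching the distance-adjusted analysis described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
distance_km = rng.uniform(0, 40, n)              # distance to the surveillance hospital
p_detect = 0.30 * np.exp(-0.03 * distance_km)    # assumed "true" detection probability
detected = rng.binomial(1, p_detect)

X = sm.add_constant(distance_km)
model = sm.GLM(detected, X, family=sm.families.Binomial(link=sm.families.links.Log()))
result = model.fit()

rr_per_10km = np.exp(10 * result.params[1])      # relative change in detection per 10 km
print(result.summary())
print(f"Estimated relative change per 10 km: {rr_per_10km:.2f}")
```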
The probability of detecting an outbreak of exactly three cases (if a single detected case was considered an outbreak) dropped below 50% at distances greater than 26 km for severe neurological disease and at distances greater than 7 km for fatal respiratory disease (Fig 3A). Fig 3B and 3C show the minimum number of cases required for surveillance to detect outbreaks with a probability of ≥90% if different outbreak thresholds are applied. For outbreaks defined as detection of at least one case, we found that an outbreak of fatal respiratory disease required 12 cases (95% CI 11–13) to be detected with 90% probability at 10 km from a surveillance hospital, but 30 cases (95% CI 24–39) to be detected at 30 km. In contrast, the impact of distance on the outbreak size requirement was much more limited for severe neurological disease: eight cases (95% CI 6–12) at 10 km and 11 cases (95% CI 9–14) at 30 km. For outbreaks defined as detection of at least two cases, 14 severe neurological disease cases (95% CI 11–20) and 20 fatal respiratory disease cases (95% CI 18–23) would be necessary for an outbreak to be detected at 10 km distance, and 19 severe neurological disease cases (95% CI 15–24) and 51 fatal respiratory disease cases (95% CI 41–66) at 30 km. The necessary outbreak sizes increased further when a five-case threshold was applied, so that 28 severe neurological disease cases (95% CI 21–39) and 39 fatal respiratory disease cases (95% CI 35–44) would need to occur for an outbreak to be detected at 10 km distance, and 36 cases (95% CI 30–46) and 97 cases (95% CI 79–128), respectively, at 30 km.
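The underlying calculation is a simple binomial one: if each case is detected independently with probability p, an outbreak of n cases is detected under a k-case threshold when at least k of them reach a surveillance hospital. The sketch below assumes independence and uses the approximate 10 km detection probabilities quoted above; it roughly reproduces the point estimates of 8 and 12 cases for a one-case threshold, while the exact values for higher thresholds depend on the fitted detection probabilities.

```python
# Sketch of the outbreak-detection logic described above (assumed independence
# between cases): if each case is detected with probability p, an outbreak of
# n cases is detected when at least k of them reach a surveillance hospital.
from scipy.stats import binom

def min_outbreak_size(p, k=1, target=0.90, n_max=1000):
    """Smallest outbreak size n such that P(detected cases >= k) >= target."""
    for n in range(k, n_max + 1):
        # P(X >= k) = 1 - P(X <= k-1) for X ~ Binomial(n, p)
        if 1.0 - binom.cdf(k - 1, n, p) >= target:
            return n
    return None

# Illustrative detection probabilities, roughly in the range reported at 10 km.
for label, p in [("severe neurological disease", 0.26), ("fatal respiratory disease", 0.18)]:
    for k in (1, 2, 5):
        print(label, f"threshold k={k}:", min_outbreak_size(p, k))
```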
Surveillance hospital attendance among community cases varied by case characteristics, leading sometimes to biased disease statistics among surveillance cases (Table B in S1 Text). For severe neurological disease, individuals aged <5 y represented 48% of community cases but only 29% of surveillance cases (p < 0.001). Additionally, the proportion of cases in the lowest socioeconomic group was lower among surveillance cases than among community cases (43% versus 57%; p = 0.012), while the proportion of individuals aged 15–59 y was higher (43% versus 29%; p = 0.005) (Fig 4A). For fatal respiratory disease, the proportion of individuals aged ≥60 y (47% versus 62%; p < 0.001) was lower among surveillance cases than among community cases, while the proportion of individuals aged <5 y (24% versus 18%; p = 0.020), individuals aged 15–59 y (27% versus 18%; p < 0.001), and cases in the highest socioeconomic group (43% versus 37%; p = 0.022) was higher (Fig 4B). We observed a slight difference in the proportion of females for fatal respiratory disease (34% among surveillance cases versus 38% among community cases; p = 0.108), but not for severe neurological disease (39% versus 40%; p = 0.861). Results were consistent in sensitivity analyses with age as a continuous variable and socioeconomic status classified into quintiles (Figs. D and E in S1 Text).
A substantial proportion of cases (severe neurological disease 42% [95% CI 38%–47%]; fatal respiratory disease 26% [95% CI 24%–28%]) visited multiple healthcare providers during their illness. Forty-eight percent (95% CI 44%–53%) of severe neurological disease cases and 31% (95% CI 29%–34%) of fatal respiratory disease cases attended any hospital, including surveillance hospitals (Fig 5). Including other hospitals that were attended by cases in the surveillance system could have increased the overall case detection probability by 31% (absolute increase) for severe neurological disease cases and 17% for fatal respiratory disease cases. The capacity to detect outbreaks would have increased, so that outbreaks containing four severe neurological or eight fatal respiratory disease cases would have been detected with ≥90% probability for any distance in the range 0–40 km from the original surveillance hospital, compared to 13 and 47 cases, respectively, with the current system (Fig. F in S1 Text). However, since individuals who attended any hospital had similar characteristics in terms of sex, age, and socioeconomic status as those attending surveillance hospitals (Fig. G in S1 Text), this expansion would not have increased disease detection in key groups such as the lowest socioeconomic group. Only with the informal sector incorporated in the surveillance system would cases in such groups be detected.
Table 1 presents summary profile characteristics and the univariate analysis of the categorical variables in the dataset, together with age. A total of 959 MERS cases were recorded in KSA during the study period, with 317 (33%) deaths; 67 (7%) had contact with camels or camel products, 126 (13%) were health-care workers, and 52.7% had some kind of comorbidity (Table 1). Of the 630 male patients, 28% died as a result of MERS-CoV, while 36% of the females died from the disease (Table 1). The median age for males was 53.5 years (interquartile range 39-66) while the median age for females was 48 years (interquartile range 32-63).
Not all of the comorbidities were equally prevalent. While most of the patients in this study had some kind of underlying comorbidity (52.7% had at least one), around 38% of all patients had more than one comorbidity, the most common being obesity, diabetes and hypertension (which occurred in more than 50% of those with any underlying comorbidity) (Table 1). Other comorbidities were heart disease, respiratory disease, pneumonia, renal/kidney disease and asthma.
Pearson’s chi-square test of health outcomes between subgroups shows significant differences by gender, comorbidity, health-care worker status, clinical outcome, contact type and secondary contact (Table 1). About 3 out of every 10 males died of MERS, compared to 28% of the females. The percentage of health-care workers who died of MERS (8.73%) was much lower than that of non-health-care workers (36.5%), while 46.14% of persons with comorbidity died of MERS compared with 17.05% of those without comorbidity. The effect of comorbidity on mortality from MERS-CoV was thus significant; patients who died of the disease were more likely to have one or more comorbidities, with odds ratios of 3.4 and 4.7, respectively.
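For readers wanting to reproduce this kind of comparison, the sketch below runs a Pearson chi-square test on a 2x2 table of approximate counts implied by the percentages quoted above; these counts are illustrative only and are not the source data.

```python
# Pearson chi-square test on a 2x2 table of deaths by health-care worker status.
# Counts are approximate values implied by the percentages quoted above and are
# used here for illustration only; they are not the source data.
from scipy.stats import chi2_contingency

#           died  survived
table = [[ 11,  115],    # health-care workers (~8.7% died)
         [306,  527]]    # non-health-care workers (~36.7% died)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3g}")
```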
Fig 1 shows the study area and the distribution of the number of infected people and the number of people who died of the disease in the 13 provinces of the KSA. Most of the MERS cases occurred in Ar Riyad (38%) and Makkah (34%) provinces. Fig 2 shows pyramids of the distribution of mortality status for the 13 regions based on comorbidity status (upper part) and whether or not the individual was a health worker (lower part). From the pyramids, it is clear that the highest number of cases occurred in Ar Riyad, followed by Makkah. The incidence of comorbidities was significantly higher among patients in Ar Riyad, Makkah and Ash Sharqiyah (about half of the cases with comorbidities occurred in these three regions). Al Bahah had the fewest infected individuals. Similarly, Ar Riyad, Makkah and Ash Sharqiyah recorded the highest numbers of infected health-care workers (Fig 2, bottom). The proportion of health-care workers who died of MERS-CoV was smaller than the proportion of non-health-care workers who died of the disease.
SaTScan local cluster detection identified the area of Al Qasim as the primary high-rate cluster after adjusting for all explanatory variables (relative risk [RR] = 1.83, p < 0.0001) and the area of Aseer and Jizan as the primary low-rate cluster (RR = 0.093, p < 0.0001), while Al Jawf, Riyadh and Hail formed a secondary low-rate cluster (RR = 0.51, p < 0.0001). Wang's q-statistic for global stratified spatial heterogeneity, computed using the geographical detector method [30, 31], was 0.2285. The spatial stratified heterogeneity analysis therefore indicated no significant stratified spatial heterogeneity of district-level MERS incidence (q = 0.2285, p = 0.9444).
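The q-statistic itself has a simple form, q = 1 − (Σ_h N_h σ_h²)/(N σ²), comparing within-stratum variance to total variance. The sketch below computes it for synthetic district incidence values grouped into illustrative strata; the significance test reported above is not reproduced.

```python
# Minimal sketch of the geographical-detector q-statistic for stratified spatial
# heterogeneity: q = 1 - (sum_h N_h * var_h) / (N * var_total).
# District incidence values and strata below are synthetic.
import numpy as np

def q_statistic(values, strata):
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    n, total_var = len(values), values.var()           # population variance
    ssw = sum(len(values[strata == s]) * values[strata == s].var()
              for s in np.unique(strata))
    return 1.0 - ssw / (n * total_var)

incidence = [1.2, 0.9, 1.1, 3.4, 2.9, 0.2, 0.3, 0.4]   # per-district incidence (synthetic)
region = ["A", "A", "A", "B", "B", "C", "C", "C"]      # stratification variable
print(f"q = {q_statistic(incidence, region):.3f}")     # 0 ~ no stratified heterogeneity, 1 ~ complete
```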
The estimated posterior odds ratio of mortality from MERS disease and corresponding 95% credibility intervals are shown in Table 2. The results reveal that individuals with comorbidities were twice as likely to have died from MERS-CoV compared with those without comorbidities (OR = 2.071; CI: 1.307, 3.263). Estimates for those individuals that had animal or camel contact, those with secondary contact and results based on gender were not significant. However, individuals who were health-care workers were significantly less likely to have died from the disease compared with non-health workers (OR = 0.372, CI: 0.151, 0.827). Compared with patients who had fatal clinical experience, those with clinical and subclinical experiences were equally less likely to have died from the disease.
Fig 3 shows the estimated effect of age (a) and the estimated effect of comorbidity as it varies smoothly over age (the interaction between comorbidity and age). Individuals aged 25 years or younger who suffered from MERS-CoV were less likely to have died. However, the odds of dying from the disease tended to increase as age increased beyond 25 years and were much higher for individuals with any underlying comorbidities.
The estimated total spatial variation in mortality due to MERS-CoV is presented in Fig 4. Individuals from provinces with red shading were less likely to have died of MERS-CoV, and mortality risk increases as the shading moves towards the green colour. This implies evidence of significant geographical variation and clustering of mortality from MERS-CoV, with lower risk (after adjusting for other variables) occurring in Riyadh, Ar’ar, Al Jawf and Jizan, and higher risk in Al Qasim.
All individual-based simulations start from a complete network of size N = 100. Disease spreading and network evolution proceed together under asynchronous updating. Disease update events take place with a probability set by the relative time scales of disease and network dynamics; network update events occur otherwise. For network update events, we randomly draw two nodes from the population. If they are connected, the link disappears with a probability given by the respective b_pq; otherwise, a new link appears with probability c. When a disease update event occurs, either a recovery event or an infection event is selected; in both cases, an individual j is drawn randomly from the population. If j is infected and a recovery event has been selected, then j will become susceptible (or recovered, depending on the model) with probability δ. If j is susceptible and an infection event occurs, then j will become infected with probability λ if a randomly chosen neighbor of j is infected. The quasi-stationary distributions shown in Figure 1 are computed as the fraction of time the population spends in each configuration (i.e., number of infected individuals) during 10^9 disease update events (10^7 generations). The average number of infected individuals and the mean average degree of the network observed during these 10^7 generations are shown in Figure 3. The results reported are independent of the initial number of infected individuals in the network. Finally, the disease progression in time, shown in Figure 4b, is calculated from 10^4 independent simulations, each starting with one infected individual. The reported results correspond to the average amount of time after which the population reaches a state with i infected.
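A minimal sketch of this co-evolution loop (SIS variant) is given below. Because the exact event-selection probabilities and the state-dependent link-breaking probabilities b_pq are not reproduced here, the parameters w_disease, b_break, and c_new are illustrative stand-ins, and the recovery/infection selection is simplified to checking the state of the randomly drawn individual.

```python
# Hedged sketch of the asynchronous co-evolution of disease and network described
# above (SIS variant); parameters and event selection are simplified assumptions.
import random
import networkx as nx

N, LAMBDA, DELTA = 100, 0.2, 0.1                 # population size, infection and recovery probabilities
w_disease, b_break, c_new = 0.5, 0.3, 0.05       # assumed time-scale and rewiring parameters

G = nx.complete_graph(N)
infected = {0}                                   # start with a single infected node

for _ in range(200_000):
    if random.random() < w_disease:              # disease update event
        j = random.randrange(N)
        if j in infected:
            if random.random() < DELTA:          # recovery (SIS: back to susceptible)
                infected.discard(j)
        else:
            nbrs = list(G.neighbors(j))
            if nbrs and random.choice(nbrs) in infected and random.random() < LAMBDA:
                infected.add(j)                  # infection through a random neighbor
    else:                                        # network update event
        a, b = random.sample(range(N), 2)
        if G.has_edge(a, b):
            if random.random() < b_break:        # an existing link may disappear
                G.remove_edge(a, b)
        elif random.random() < c_new:            # otherwise a new link may appear
            G.add_edge(a, b)

print(len(infected), "infected; mean degree:", 2 * G.number_of_edges() / N)
```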
Based on the collected healthcare utilization data, we evaluated how the sensitivity and representativeness of a surveillance system may be improved by integrating other healthcare providers. We classified healthcare providers as (i) surveillance hospitals, (ii) other hospitals (government and private clinics), (iii) qualified private practitioners, and (iv) the informal sector (unqualified practitioners such as traditional healers, village doctors, homeopaths, and pharmacies). We estimated the proportion of cases attending each healthcare provider class, with exact binomial confidence intervals, and estimated outbreak detection probabilities based on proportions attending the surveillance hospital plus (i) other hospitals, (ii) qualified private practitioners, or (iii) informal healthcare providers. Furthermore, we compared the proportion of cases with each characteristic (sex, age, and socioeconomic group) among community cases to the proportion among those attending each healthcare provider class and quantified absolute differences in proportions with 95% CIs and p-values using bootstrapping (2,000 bootstrap iterations).
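The bootstrap comparison described here can be sketched as follows for a single characteristic and provider class; the indicator data are synthetic and the percentile CI is one reasonable choice (the study's exact bootstrap scheme and p-value calculation are not reproduced).

```python
# Sketch of a bootstrap comparison of proportions with 2,000 iterations,
# as described above. Synthetic indicator data; not the study dataset.
import numpy as np

rng = np.random.default_rng(1)
community = rng.binomial(1, 0.48, size=426)   # e.g., indicator of age <5 y among all community cases
attending = rng.binomial(1, 0.29, size=76)    # same indicator among surveillance-hospital attendees

observed_diff = attending.mean() - community.mean()
boot = np.array([rng.choice(attending, attending.size).mean()
                 - rng.choice(community, community.size).mean()
                 for _ in range(2000)])        # 2,000 bootstrap iterations
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"absolute difference = {observed_diff:.3f}, 95% CI ({ci_low:.3f}, {ci_high:.3f})")
```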
All statistical analyses and graphics were implemented in the R computing environment; maps were created using QGIS software.
Internet-based surveillance systems have broader applicability for the monitoring of infectious diseases than is currently recognised. Furthermore, internet-based surveillance systems have a potential role in forecasting of emerging infectious disease events.
We implement a susceptible-infected-recovered (SIR) model, adapted for network-based modeling [16, 17]. Each individual $i$'s disease state at time $t$ is represented by $X_t^i \in \{S, I, R\}$, where S = susceptible, I = infected, and R = recovered. Infection is transmitted through pair-wise contact with infected neighbors on the disease network. At time $t$, an infected individual infects each of her susceptible neighbors, independently, with probability $p_t$. Thus, if $X_t^i = I$, $X_t^j = S$, and $i$ and $j$ are neighbors on the disease network, then:
$$X_{t+1}^j = \begin{cases} I & \text{with probability } p_t \\ S & \text{with probability } 1 - p_t. \end{cases} \quad (5)$$
Following infection, individuals recover after $T_R$ time periods. Therefore, if $X_{t-1}^i = S$ and $X_t^i = I$, then:
$$X_t^i = \cdots = X_{t+T_R-1}^i = I \quad \text{and} \quad X_{t+T_R}^i = R. \quad (6)$$
When a vaccine is available, we implement an imperfect vaccine with a delay of $d$ time units before it becomes effective. For influenza, the delay before full immunity is approximately two weeks. Vaccines are distributed randomly among the susceptible population, according to the estimated number of vaccines administered during the week. Let $\eta$ be the vaccine efficacy. Then, if susceptible individual $i$ is vaccinated at time $t$:
$$X_{t+d}^i = \begin{cases} R & \text{with probability } \eta \\ S & \text{with probability } 1 - \eta. \end{cases} \quad (7)$$
It has been shown that formulating imperfect vaccination as an all-or-nothing effect with probability of vaccination success equal to the vaccine efficacy correctly estimates the direct effectiveness of the vaccine.
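A minimal sketch of the network SIR dynamics in Eqs (5)-(7), including an imperfect vaccine that takes effect d steps after administration, is given below. The random network, the constant per-contact probability p_t, and the number of doses administered per step are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of network SIR dynamics (Eqs 5-6) with a delayed, all-or-nothing
# imperfect vaccine (Eq 7). Network and parameter choices are illustrative.
import random
import networkx as nx

T_R, DELAY_D, ETA = 4, 14, 0.6                 # recovery period, vaccine delay, vaccine efficacy
G = nx.erdos_renyi_graph(500, 0.02, seed=2)
state = {i: "S" for i in G}                    # X_t^i in {S, I, R}
time_infected = {}
pending_vaccine = []                           # (time at which the dose becomes effective, individual)

rng = random.Random(2)
seed_case = rng.choice(list(G))
state[seed_case], time_infected[seed_case] = "I", 0

for t in range(1, 150):
    p_t = 0.06                                 # per-contact transmission probability (Eq 5)
    # administer a few doses; each becomes effective at t + d with success probability eta (Eq 7)
    susceptibles = [n for n in G if state[n] == "S"]
    for i in rng.sample(susceptibles, k=min(5, len(susceptibles))):
        pending_vaccine.append((t + DELAY_D, i))
    for t_eff, i in [pv for pv in pending_vaccine if pv[0] == t]:
        if state[i] == "S" and rng.random() < ETA:
            state[i] = "R"                     # all-or-nothing vaccine success
    # recovery T_R steps after infection (Eq 6)
    for i in [i for i, t0 in time_infected.items() if state[i] == "I" and t - t0 >= T_R]:
        state[i] = "R"
    # independent transmission attempts from infected nodes to susceptible neighbors (Eq 5)
    newly_infected = {j for i in G if state[i] == "I"
                      for j in G.neighbors(i)
                      if state[j] == "S" and rng.random() < p_t}
    for j in newly_infected:
        state[j], time_infected[j] = "I", t
    if not any(s == "I" for s in state.values()):
        break

print({s: sum(v == s for v in state.values()) for s in "SIR"})
```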
In practice, spatiotemporal disease modeling is performed under uncertain conditions, e.g., erratic disease observations and incomplete prior knowledge of disease transmission and recovery rates. In most cases of SIR modeling, only rather incomplete knowledge of the in situ susceptible and recovered population fractions is possible. The numerical study shows that when the disease observations are uncertain and follow a non-Gaussian law, the BME method in Eq. (3) [28] can further improve updating in the SIR model by representing erratic observations in the form of probabilistic data and by incorporating transmission and recovery rate uncertainties. In the BME framework, the core or general KB (G) includes the SIR equations and the associated disease covariance models, whereas the site-specific KB (S) includes the uncertain infected observations and the initial conditions of the transmission and recovery rates. The SIR states are then formulated in matrix form (Eq. 8), where the state vectors contain the current and predicted states of infected disease counts and the rates of the SIR model, i.e., the recovery and transmission rates, respectively. The transition and Jacobian matrices characterize the dynamics of the SIR model, and a noise term models the uncertainty of infected states across space, which cannot be represented by SIR modeling and is characterized by its covariance matrix. The observation matrix contains only zeros and ones indicating data presence across space, and the formulation also involves the current, predicted and updated state covariances. Equation (8) involves the general KB containing the stochastic properties of disease dynamics (details of the matrix formulation are given as Equation (S8) in File S1). Concerning the site-specific KB, for estimation purposes the hard (accurate) data are randomly sampled over time at 44 spatial cells of the disease grid mentioned earlier. In addition, soft (uncertain) data that follow uniform probability distributions with uncertain ranges are sampled from another set of 29 cells. The sample locations are shown in Fig. 13.
Numerical comparisons between the simulation results for model prediction and parameter estimation by the BME-SIR method are shown in Figs. 14, 15, 16. For comparison purposes, the results obtained using the extended Kalman filter (exKF) are also shown (technical details of the exKF SIR model are described elsewhere). Figure 14 shows that both methods predict the infected population fractions at different times t almost equally well (mean-square errors: 23.93 for BME-SIR and 28.92 for exKF). Improvements in estimation uncertainty gained by using BME-SIR over the exKF method are also shown. Similar results were obtained for the susceptible and recovered population fractions. Figs. 15, 16 demonstrate the performance of the two methods in estimating the transmission and recovery rates at different times t. Both methods provide effective estimates of the SIR recovery rates at all times, but the corresponding estimates of the transmission rates for large times t (i.e., t > 50) are poor. The changes in the recovery and transmission rates show some interesting temporal patterns. When t is small (e.g., t < 10), the estimated rates are closely associated with their initial guesses, and therefore the deviations of both the BME-SIR and exKF rate estimates are large. Rate estimation improves over time. The transmission rate estimation accuracy obtained by both methods is low when t > 40. This is due to the low proportion of susceptibles after time t = 40 (i.e., the percentage of susceptible population after t > 40 is less than 3%), which makes the transmission rate estimates insensitive to observations. However, even in the case of low infected population fractions, both the BME-SIR and exKF methods produce accurate recovery rate estimates. Clearly, the SIR model is more sensitive to changes in the recovery rate than in the transmission rate (Figs. 15–16). As a result, real-time data assimilation should lead to better estimates of real-time transmission rates.
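The exKF comparison can be illustrated with a compact, self-contained sketch that augments the SIR state with the transmission and recovery rates and assimilates noisy observations of the infected fraction. This is a simplified illustration (single location, Euler time stepping, illustrative noise levels), not the BME-SIR method or the exact exKF implementation compared above.

```python
# Hedged sketch of extended-Kalman-filter assimilation for an SIR model with an
# augmented state [S, I, beta, gamma]; all numbers are illustrative.
import numpy as np

def step(x, dt=1.0):
    S, I, beta, gamma = x
    return np.array([S - beta * S * I * dt,
                     I + (beta * S * I - gamma * I) * dt,
                     beta, gamma])

def jacobian(x, dt=1.0):
    S, I, beta, gamma = x
    return np.array([[1 - beta * I * dt, -beta * S * dt, -S * I * dt, 0.0],
                     [beta * I * dt, 1 + (beta * S - gamma) * dt, S * I * dt, -I * dt],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])

rng = np.random.default_rng(3)
x_true = np.array([0.99, 0.01, 0.4, 0.1])      # true fractions and rates
x_est = np.array([0.99, 0.01, 0.6, 0.2])       # initial guesses for the rates
P = np.diag([1e-4, 1e-4, 0.1, 0.05])           # initial state covariance
Q = np.diag([1e-6, 1e-6, 1e-5, 1e-5])          # process noise
R = np.array([[1e-4]])                         # observation noise (infected fraction)
H = np.array([[0.0, 1.0, 0.0, 0.0]])           # only the infected fraction is observed

for t in range(60):
    x_true = step(x_true)
    y = x_true[1] + rng.normal(0, 1e-2)        # noisy observation of the infected fraction
    F = jacobian(x_est)                        # predict
    x_est = step(x_est)
    P = F @ P @ F.T + Q
    S_k = H @ P @ H.T + R                      # update
    K = P @ H.T @ np.linalg.inv(S_k)
    x_est = x_est + (K @ (np.array([[y]]) - H @ x_est.reshape(-1, 1))).ravel()
    P = (np.eye(4) - K @ H) @ P

print("estimated beta, gamma:", x_est[2:].round(3), "true:", x_true[2:])
```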
When the time scale for network update is much smaller than the one for disease spreading, the number of infected increases at the rate given in Equation (M4). The effect of the network dynamics becomes apparent in the third factor of this rate, which represents the probability that a randomly selected neighbor of a susceptible is infected. In addition, Equation (M1) remains valid, as the linking dynamics does not affect the rate at which the number of infected decreases. As in the case of static networks, one can show that the disease remains endemic in the SIS model whenever the corresponding threshold condition is satisfied. A similar result holds for the SIR model (see Text S1). It is noteworthy that Equation (M4) can be rewritten as Equation (M5), which shows that disease spreading in an adaptive network is equivalent to that in a well-mixed population with a frequency-dependent average degree and a transmission probability that is rescaled according to Equation (M6); this holds for the SIR, SIS and SI models.
In 2018, the MOHW allocated USD 435.1 million for major R&D funds, corresponding to about 3% of the government's 2018 regular budget. Leaving out the non-disease portion of the budget, the MOHW allocated 35.9% of the remainder of its R&D budget to communicable diseases, 64.1% to non-communicable diseases, and 0% to injuries and violence. Fig. 2 shows the distribution of the 2018 MOHW fund allocations across the disease categories. As the figure shows, R&D funds and the burden of disease in Korea were not correlated. For example, a large part of the MOHW R&D funds was allocated to upgrading the national system for responding to communicable diseases, focusing on prevention of variant infectious disease inflow and spread and on-site responses. However, communicable diseases in Korea only accounted for 2.8% of DALYs in 2015.7 Non-communicable diseases, on the other hand, accounted for more than 87% of Korean DALYs but were only allocated 64.1% of the MOHW R&D funds. The most salient findings, though, were in the injuries and violence disease group. There was no MOHW funding whatsoever for this disease group. However, injuries and violence such as self-harm, interpersonal injury, or transport injury accounted for 10.1% of the total 2015 DALYs in the KNBD study, even higher than the share of the communicable disease category.6
To further investigate whether the economic burden and funds allocation are aligned, we also compared the differences between them. We noted that the funds were also skewed when compared to economic burden. The economic burden of injury and violence was 9.1%, but injury and violence did not receive any portion of the MOHW total funding (Fig. 2).
We undertook a sub-analysis to investigate the relationship between DALYs for level 2 disease groups and MOHW allocation. Budget that could not be assigned to a specific level 2 disease group was not subclassified. Table 2 shows the R&D groups that correspond to the DALYs of each level 2 disease group. Within these groups, only some had their own R&D budgets. R&D funds for non-communicable disease were sorted into neoplasms, neurological disorders, and mental and behavioral disorders (ranked in that order). This differs from the pattern of DALYs in the KNBD study. For example, DALYs from musculoskeletal disorders were the highest, followed by cardiovascular and circulatory diseases. However, the allocation of R&D funding for neoplasms was more than nine times that of musculoskeletal disorders. Neurological disorders, which include Alzheimer's dementia, ranked second after neoplasms in the MOHW R&D funds.
In prioritizing health resources, risk factors should also be considered.4 Some of the R&D budget was directly targeted at risk factors in 2018. Fig. 3 shows the R&D budget for risk factors. We compared the four risk categories in the DALYs of the 2013 KNBD study (behavioral, socio-economic, environmental, and metabolic) with R&D budgetary allocations. Mismatches between R&D budget allocations and DALY distributions were again found. For example, no funding for socioeconomic risks could be identified in the MOHW R&D budget. In the behavioral risk category, even though tobacco smoking was the most dangerous risk factor, only “alcohol use” had its own funding.
Results of the cross correlations are shown in Figure 3. Cross correlation results should be interpreted as product–moment correlations between the two time series; they allow dependence between two time series to be identified over a series of temporal offsets, referred to as lags. Lag values indicate the degree and direction of associations. A lag value of −1 indicates that correlations were performed using time series data for which the first series (Google Trends’ data) has been shifted backwards one unit (a month). Conversely, a lag value of 1 indicates that the primary series had been shifted forward one unit. Significant positive correlations at lag values of 1 or above are of most interest in the context of this study, as they indicate a positive relationship between the two time series with Google Trends data leading the notifications (a prerequisite for Google Trends data to be a suitable early warning tool). It should also be noted that seasonal differencing was applied before cross correlations were computed, to remove cyclic seasonal trends.
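The sketch below illustrates this kind of analysis: seasonal (12-month) differencing of both monthly series, followed by product-moment correlations over a range of lags. The series are synthetic stand-ins for Google Trends search frequencies and disease notifications, with the search series constructed to lead the notifications by one month.

```python
# Sketch of lagged cross-correlation after seasonal differencing, as described above.
# Synthetic monthly series; not the study data.
import numpy as np

rng = np.random.default_rng(4)
months = np.arange(120)
seasonal = 20 * np.sin(2 * np.pi * months / 12)                      # recurring seasonal cycle
outbreak = 30 * np.exp(-0.5 * ((months - 60) / 4) ** 2)              # one-off outbreak signal
notifications = 50 + seasonal + outbreak + rng.normal(0, 3, months.size)
searches = 50 + seasonal + np.roll(outbreak, -1) + rng.normal(0, 3, months.size)  # leads by 1 month

def seasonal_diff(x, period=12):
    return x[period:] - x[:-period]

def cross_corr(x, y, lag):
    """Correlation of x against y shifted by `lag` months (positive lag: x leads y)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

xs, ys = seasonal_diff(searches), seasonal_diff(notifications)
for lag in range(-3, 4):
    print(f"lag {lag:+d}: r = {cross_corr(xs, ys, lag):.2f}")
```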
Disease notifications positively correlated at a lag of one month (lag 1) with search term frequency for 12 of the 17 diseases that exhibited significant Spearman’s rank correlations. Overall, 15 of the 64 notifiable diseases exhibited significant, positive correlations at lag of one month. Significant positive associations were observed for four of the nine vector-borne diseases (Barmah Forest virus infection, Dengue virus infection, Murray Valley encephalitis virus infection and Ross River virus infection), six of the 14 vaccine preventable diseases (Haemophilus influenzae type b, influenza, pertussis, pneumococcal disease and varicella zoster (chickenpox and shingles)), two of the six blood-borne diseases (hepatitis B (unspecified) and C (unspecified)), two of 11 gastrointestinal diseases (campylobacteriosis and cryptosporidiosis) and one zoonosis (leptospirosis). Positive significant correlations were not observed at a lag of one month for any of the quarantinable diseases (n = 6), sexually transmissible infections (n = 6) or other bacterial infections (n = 4). It should be noted that positive significant correlations were observed at lags of over one month (but not at lag 1) for two of the top ranked 18 diseases (gonococcal infection and meningococcal disease) and 16 diseases overall (see Additional file 1). Additionally, the terms “haemolytic uraemic syndrome” and “leprosy” exhibited significant negative correlations with the respective disease notifications at a lag of one month.
The compact letters in Figure 3 show that there are some significant differences in the distributions of incidence totals across scenarios. All scenarios except the “22.5% averse” one, which was only 10% less risk averse than the baseline scenario, have distributions significantly different from the baseline scenario. Generally, the scenarios with lower percentages of risk-averse producer agents (12.5% and 17.5% averse; compact letter “a”) had more simulation runs that produced relatively high incidence totals (larger interquartile ranges) compared to the other scenarios. In contrast, scenario runs with higher proportions of risk-averse producer agents (32.5% and 37.5% averse; compact letter “d”) led to significantly different distributions of incidence totals characterized by lower medians and narrower ranges. All the scenarios appear to be right-skewed with some outlying values, indicating that every Monte Carlo experiment included simulations in which the system became very vulnerable to high PEDv infection. This is particularly evident for the 12.5% and 17.5% averse scenarios. Overall, the scenarios indicate that the ABM is significantly sensitive to risk attitude shifts as small as 10% of producer agents moving from being risk tolerant to being risk averse. Therefore, the total incidence indicator responds to the risk attitude distribution within the population.
The comparative box plots provided an unexpected result when analyzed in relation to the observed total incidence (Figure 3, dashed black line): the ABM tends to underestimate the total incidence. While all scenarios produced some realizations with total incidence close to the observed one, none had its median aligned with the observed total incidence. The scenario with the most risk-tolerant producers (12.5% averse) provided the highest number of simulation runs close to the observations in terms of total incidence. These results may suggest that we need to adopt a baseline model calibrated on an initial population of producers with a relatively higher percentage of risk-tolerant agents. Alternatively, the current baseline model could be correct and the observed data could represent a rare case that happened to be actualized in reality. Only independent data on risk attitude collected from a sample of producers can help answer this question.
The term "comorbidity" refers to the coexistence of multiple diseases or disorders in relation to a primary disease or disorder in an individual
[1]. A comorbidity relationship between two diseases exists whenever they appear simultaneously in a patient more often than expected by chance alone
[2]. It represents the co-occurrence of diseases or the presence of different medical conditions one after another in the same patient
[1, 3]. Some diseases or infections can coexist in an individual by coincidence, with no pathological association among them. However, in most cases, multiple diseases (acute or chronic events) occur together in a patient because of the associations among them. These comorbidity associations can be due to direct or indirect causal relationships and to shared risk factors among diseases
[4]. For instance, a type of genetic abnormality linked to cancer is more common in patients with type 2 diabetes than in other people
[5]. Examples of comorbidity studies are many, often referring to chronic obstructive pulmonary disease (COPD)
[6, 7], obesity
[8], mental disorders
[9], immune-related diseases
[10], cancer
[11] etc.
Comorbidity can be attributed to the disease connections on the molecular level, such as dysregulated genes, PPIs (protein–protein interactions), and metabolic pathways as potential causes of comorbidity
[1, 3, 12, 13]. From a genetic perspective, a pair of diseases is connected because they have both been associated with the same dysregulated genes
[14, 15], whereas from a proteomics perspective phenotypically similar diseases are related via biological modules such as PPIs or molecular pathways
[16, 17].
Population-based disease association is important in conjunction with molecular and genetic data to uncover the molecular origins of diseases and disease comorbidities. Patient medical records contain important information regarding the co-occurrences of diseases affecting the same patient
[2]. During the last few years, several researchers have conducted disease comorbidity analyses to understand the origins of many diseases
[1, 12, 18]. Goh, Cusick, Valle, Childs, Vidal, Barabasi et al. and Feldman, Rzhetsky, Vitkup et al. built networks of gene-disease associations by connecting diseases that have been associated with the same genes
[14, 15], whereas Lee, Park, Kay, Christakis, Oltvai and Barabási et al. constructed a network in which two diseases are linked if they are associated through shared metabolic reactions
[13]. Disease association studies from proteomic point of view have been studied by Rual, Venkatesan, Hao, Hirozane-Kishikawa, Dricot, Li, Berriz, Gibbons, Dreze, Ayivi-Guedehoussou et al. and Stelzl, Worm, Lalowski, Haenig, Brembeck, Goehler, Stroedicke, Zenkner, Schoenherr, Koeppen et al.
[19, 20]. Rzhetsky, Wajngurt, Park and Zheng et al. inferred the comorbidity links between 161 disorders from the disease history of 1.5 million patients
[12]. However, all of these efforts have focused on the role of a single molecular or phenotypic measure to capture disease–disease relationships. In our work we have used disease–gene associations, PPIs, molecular pathways and clinical information to obtain statistically significant associations and comorbidity risks among diseases.
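As background for how clinical co-occurrence can be quantified, the sketch below computes two commonly used population-level comorbidity measures, the relative risk and the phi correlation of co-occurrence. These are standard measures from the comorbidity literature and are shown with synthetic counts; they are not necessarily the exact statistics used in this work.

```python
# Common population-level comorbidity measures from patient co-occurrence counts.
# c_ij: patients with both diseases; p_i, p_j: patients with each disease; n: total patients.
# Counts below are synthetic and illustrative only.
import math

def comorbidity_scores(c_ij, p_i, p_j, n):
    rr = (c_ij * n) / (p_i * p_j)                                   # >1: co-occur more than chance
    phi = (c_ij * n - p_i * p_j) / math.sqrt(p_i * p_j * (n - p_i) * (n - p_j))
    return rr, phi

rr, phi = comorbidity_scores(c_ij=120, p_i=800, p_j=1500, n=100_000)
print(f"relative risk = {rr:.1f}, phi = {phi:.3f}")
```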
Inflammation is a hallmark of many serious human infectious diseases associated with a wide variety of infections, such as HIV-1
[21]. UK doctor Max Pemberton says "I’d rather have HIV than diabetes", as life expectancy among diabetes patients is lower than that of HIV patients
[22]. However, HIV has a comorbidity impact on diabetes. Influenza, too, can cause complications, including bacterial pneumonia, or the worsening of chronic health problems. Asthma is the most common comorbidity in patients hospitalized for swine influenza (H1N1) infection
[23]. Dengue can cause myocardial impairment, arrhythmias and, occasionally, fulminant myocarditis
[24]. Chronic medical conditions, such as heart disease, lung disease, diabetes, renal disease, rheumatologic disease, dementia, and stroke are risk factors for influenza complications
[25]. Common chronic infections such as periodontitis or infection with Helicobacter pylori may also increase stroke risk
[26]. Moreover, the severity of pneumonia in patients coinfected with influenza virus and bacteria is significantly higher than in those infected with bacteria alone. The incidence of flu is higher in children and younger adults than in older individuals, but influenza-associated morbidity and mortality increase with age, especially for individuals with underlying medical conditions such as chronic cardiovascular diseases
[27]. During the ageing process the immune system becomes compromised, which causes increasing inflammation
[28]. In particular, chronic inflammation (inflammageing) and metabolic function are strongly affected by the ageing process
[29]. The ageing of populations is leading to an unprecedented increase in diseases such as cancer and in associated fatalities. It is reported that 80% of the elderly population has three or more chronic conditions
[30].
On the other hand, respiratory viruses are an emerging threat to global health security and have led to worldwide epidemics with substantial morbidity and mortality
[31]. Coronaviruses (CoVs) cause respiratory and enteric diseases in humans and other animals and can induce fatal respiratory, gastrointestinal and neurological disease. Severe acute respiratory syndrome (SARS) is an epidemic human disease caused by a coronavirus (CoV) called SARS-associated coronavirus (SARS-CoV)
[32]. SARS patients may present with a spectrum of disease severity ranging from flu-like symptoms and viral pneumonia to acute respiratory distress syndrome and death
[33]. Most of the deaths were attributed to complications related to sepsis, ARDS and multiorgan failure, which occurred commonly in elderly patients with comorbidities
[34]. Age and comorbidity (e.g. diabetes mellitus, heart disease) were consistently found to be significant independent predictors of various adverse outcomes in SARS
[35]. Children with SARS have better prognosis than adults
[34]. Advanced age and comorbidities were significantly associated with increased risk of SARS-CoV related death, due to acute respiratory distress syndrome
[35]. Mild degree of anaemia is common in the SARS infected patients and patients who have recovered from SARS show symptoms of psychological trauma
[34]. Another novel coronavirus, MERS-CoV, which is a new threat to public health, has clinical characteristics similar to SARS-CoV, but comorbidity is a key aspect underlying their different impacts
[36, 37]. MERS-CoV causes respiratory infections of varying severity and sometimes fatal infections in humans including kidney failure and severe acute pneumonia
[38]. Despite sharing some clinical similarities with SARS (e.g., fever, cough, incubation period), there are also some important differences, such as the rapid progression to respiratory failure, which we have studied from a comorbidity point of view.
Infection with the human immunodeficiency virus-1 (HIV) and the resulting acquired immune deficiency syndrome (AIDS) affects cellular immune regulation
[40]. HIV infection severely impacts the immune system, causing phenotypic changes in peripheral cells and dysregulating the innate immune system
[41]. A significant number of HIV-1 infected patients exhibit osteopenia and osteoporosis, leading to a higher incidence of weak and fragile bones during the course of disease
[41]. HIV has also been associated with an increased risk of developing both diabetes and cardiovascular disease
[42]. Infection with HIV weakens the immune system and reduces the body’s ability to fight infections that may lead to cancer
[43, 44]. People infected with human immunodeficiency virus (HIV) have a higher risk of some types of cancer (Kaposi sarcoma, non-Hodgkin lymphoma, cervical cancer, anal, liver, lung cancer, and Hodgkin lymphoma) than uninfected people
[45]. Many people infected with HIV are also infected with other viruses that cause certain cancers
[46, 47]. HIV infection even when controlled by highly active antiretroviral therapy (HAART) is being linked to chronic inflammation
[48]. People with HIV-1 infection appear to have a markedly higher rate of chronic kidney disease than the general public
[49]. This is because some of the risk factors associated with HIV-1 acquisition are the same as those that lead to kidney disease, and because of the virus itself and some therapies (e.g., HAART). Antiretroviral therapy for HIV may increase the risk of developing metabolic syndrome (abdominal obesity, hyperglycaemia, dyslipidaemia and hypertension) and thus predispose to type 2 diabetes and cardiovascular disease. Many of the biologic factors thought to be causally associated with inflammation in HIV disease are also thought to be causally associated with the inflammation of ageing
[50].
Infections (acute and chronic conditions) are often associated with comorbidities that increase the risk of further medical conditions, which can lead to additional morbidity and mortality. Comorbidities related to flu have been recently investigated
[51]. Comorbidities for tuberculosis have also been studied recently
[52, 53]. To understand the overall mechanism we have studied the comorbidity associations of SARS and HIV infections. Both HIV and SARS are emerging infectious diseases in the modern world; each of these diseases has caused global societal and economic impact related to unexpected illnesses and deaths
[54]. SARS is a significant public health threat and HIV is a long-term chronic infection. Since these two infections are associated with high mortality rates and there are no clinically approved antiviral treatments or vaccines available for either of them, we selected these two infections for our study. Centred on the SARS and HIV-1 infections, we have investigated highly heterogeneous disease comorbidity networks using disease–gene associations, the PPI subnetwork, molecular pathways and clinical information.
Evolutionary dynamics play an important role in why and how we should manage wildlife disease (Hudson et al. 2002; Karesh et al. 2012). In an increasingly connected world, the threat of spreading existing and emerging pathogens is growing (Daszak et al. 2000) and in some cases devastating (e.g. Jensen et al. 2002; Jancovich et al. 2004; Fenton 2012). When a pathogen is transported across natural barriers by human actions, it can often have significant negative impacts upon naïve hosts for which it may represent an entirely new selective pressure (Daszak et al. 2000) with the potential to cause extinction (De Castro and Bolker 2005). Moreover, growing movements of people and international trade in livestock and food products will inevitably increase the spread of exotic diseases (Olden et al. 2004). Therefore, managing wildlife diseases, particularly those of ecological or socio-economic concern, is an increasing challenge. Significant advances have been made to incorporate ecological principles into the study of infectious disease in wildlife (Tompkins et al. 2011), and increasingly, theory is guiding wildlife disease management (Joseph et al. 2013). However, apart from landscape genetics (Real and Biek 2007), the application of evolutionarily enlightened management (Ashley et al. 2003) to wildlife disease remains underexploited (Vander Wal et al. 2014).
The lack of integration of evolutionary principles is surprising given that infectious disease dynamics are an evolutionary interaction between two or more species: host(s) and pathogen(s) (Karesh et al. 2012). Hosts evolve to reduce the costs of infection in three ways: changing behaviours (e.g. avoidance), resistance (i.e. limiting the pathogen burden) or tolerance (i.e. limiting the damage caused by the pathogen burden, Medzhitov et al. 2012). Each of these tactics has different evolutionary implications. For instance, while resistance has a negative effect on the pathogen, creating selective pressure, tolerance does not (Raberg et al. 2009). Where pathogens are exposed to selection, however, they must evolve to continue to exploit their hosts (Hudson et al. 2002). For the pathogen, this ensures that the basic reproductive rate (R0) remains >1; that is, prior to a host’s death, it will infect at least one new susceptible individual. As such, even among the most virulent pathogens, evolution of reduced virulence (Boots and Mealor 2007) is one of the hallmarks of pathogen evolution following the infection of a naïve host population. Some emerging pathogens now coexist within their host [e.g. myxoma virus in European rabbits, Oryctolagus cuniculus (Fenner 2010), chytrid fungus and amphibians (Phillips and Puschendorf 2013)]. However, when hosts fail to adapt rapidly enough to novel pathogens or pathogens fail to evolve lower virulence, the threat of host extinction remains [e.g. devil facial tumour disease (McCallum 2008), white-nose syndrome (Blehert et al. 2009), Fig. 1]. Predominantly, our evolutionary lens has been focused on pathogen evolution – typically thought to occur on shorter timescales than host evolution (Grenfell et al. 2004). We argue, however, that in addition to historical timescales, mounting evidence for rapid evolution (Hairston et al. 2005) suggests that evolutionary principles provide insights for the management of host, pathogen and host–pathogen dynamics. These insights include inferences into the origins of emergent diseases, into rates of local or landscape-scale disease spread, and into pathogens or environments as selective agents and their downstream effects on population dynamics as a function of changing host life history.
In many practical instances, the management of wildlife diseases has involved collaboration between clinical veterinarians, veterinary epidemiologists and, at times, wildlife managers. Yet our understanding of these diseases has largely been shaped by evolutionary ecologists such as May and Anderson (1983). This distinction reinforces the separations outlined in Tinbergen’s Four Questions (Tinbergen 1963; Nesse and Stearns 2008). The former group of professionals focuses on proximate mechanisms of disease (i.e. ‘causation’ and ‘ontogeny’), for example aetiology or pathogenesis. The latter concentrate instead on the ultimate or evolutionary causes of disease (i.e. ‘survival value’ and ‘evolution’ or phylogeny). To understand the ultimate causes of disease spread, we must answer such questions as how pathogens can increase R0 or how hosts adapt to emergent diseases. Where proximate methods are important for diagnosing and treating individuals, the primary focus of wildlife managers is population-level indices of ‘health’, such as population growth, which can be affected by disease. The need remains for a more comprehensive synthesis of our understanding of wildlife disease from individual hosts to ecosystems (Tompkins et al. 2011). For instance, aspects of disease that link different parts of an ecosystem include pathogen transmission that varies within individual hosts due to heterogeneity in contact rates or immunity, that occurs among multiple hosts and involves multiple pathogens, and that takes place in environments with successional trajectories affecting which hosts reside within them (Fig. 2). Ultimately, evolutionary principles can inform management strategies (Ashley et al. 2003; Hendry et al. 2011) and should help predict how species may or may not adapt when facing the selective pressures imposed by novel infectious pathogens, changing environments or management interventions.
In this review, we first introduce the eco-evo epidemiological triangle, an update of the epidemiological triangle that can be used as a rubric for including evolutionarily enlightened principles in the study and management of wildlife disease. Within this framework, we then explore the evolutionary implications of environment–disease interactions in the light of climate change and rapid anthropogenic changes to landscapes. Next, we synthesize areas where applied evolution can be employed in wildlife disease management. Finally, we discuss some future directions and challenges that exist for evolutionarily enlightened wildlife disease management.
Associations between HDI, health workforce, international travel, IHR scores and disease control outcomes are shown in Table 3. In the analysis using the 2016 IHR scores for all cases, HDI, international travel, total health expenditure and IHR average scores were significantly associated with disease control outcomes. Cases occurring in high HDI (OR = 2.23) and low HDI (OR = 1.84) countries had a higher risk of bad disease control outcomes than cases in very high HDI countries. Cases occurring in high international travel volume countries had about twice the risk of bad disease control outcomes compared with cases in low international travel volume countries (OR = 2.19). Cases occurring in low total health expenditure countries had nearly four times the risk of bad disease control outcomes compared with countries with high health expenditure (OR = 3.99). Cases occurring in countries with low IHR average scores had nearly eight times the risk (OR = 7.83) of bad disease control outcomes compared with countries with high IHR average scores.
For human cases only, the associations of HDI, total health expenditure and 2016 IHR average scores with disease control outcomes were statistically significant. Cases occurring in middle to low HDI countries had more than twice the risk of bad disease control outcomes compared with those in very high HDI countries (OR = 2.65). Cases occurring in low total health expenditure countries had nearly three times the risk of a bad disease control outcome compared with countries with high health expenditure (OR = 2.84). Cases occurring in countries with low IHR average scores had about 11 times the risk (OR = 11.16) of bad disease control outcomes compared with countries with high IHR average scores.
In the analysis using the 2017 IHR scores for all cases, HDI, international travel, health workforce density, total health expenditure and IHR average scores were all significantly associated with disease control outcomes. Cases occurring in high HDI (OR = 4.71), middle-low HDI (OR = 2.29) and low HDI (OR = 3.59) countries had a higher risk of bad disease control outcomes than cases in very high HDI countries. Cases occurring in high international travel volume countries had nearly three times the risk of bad disease control outcomes compared with cases in low international travel volume countries (OR = 2.97). Cases occurring in countries with middle health workforce density had more than twice the risk of bad disease outcomes compared with countries with high health workforce density (OR = 2.59). Cases occurring in low total health expenditure countries had nearly three times the risk of bad disease control outcomes compared with countries with high health expenditure (OR = 2.79). And cases occurring in countries with low IHR average scores had about twice the risk (OR = 2.23) of bad disease control outcomes compared with countries with high IHR average scores.
Similarly, for human cases only, the associations of HDI, international travel, health workforce density, total health expenditure and IHR average scores with disease control outcomes were all statistically significant. Cases occurring in countries with low IHR average scores had more than three times the risk (OR = 3.45) of bad disease control outcomes compared with countries with high IHR average scores.
The odds ratio for the 2017 IHR score was lower than that for the 2016 IHR score.
We explicitly investigated the importance of capturing human behavior with interaction and feedbacks between humans and the environment. The producer agents in our ABM have adaptive capabilities and are reactive in that they do not learn but simply respond to signals from other agents and the environment. In the model, a population of veterinarian agents is encoded, each with its own network of hog producers. Within the network, the veterinarian tracks the number of hog producers affected by disease and reports it back weekly. The producer agents are encoded with a set of rules to simulate decisions to alter biosecurity at their facility in response to the disease status in their veterinarian network. Our goal was to explore the influence of reactive behaviors on biosecurity and ultimately disease incidence.
To reflect heterogeneity in human risk attitudes and allow the evaluation of a variety of human behaviors, the ABM has underlying human processes with parameters for risk attitude, biosecurity investment, responsiveness to disease, and psychological distancing. In particular, an agent's risk attitude is directly linked to its response to disease by determining the threshold number of neighboring infected production premises necessary for the agent to react and increase its biosecurity with a probability >0.9. We associate risk aversion with a higher propensity to adopt biosecurity. For example, risk-averse agents almost always increase biosecurity as soon as there are three infected production premises in their veterinarian network. At the opposite side of the risk spectrum, risk-tolerant agents increase their biosecurity with near certainty only when they know that there are nine or more infected production premises in their veterinarian network. In summary, the ABM agent behavior originates from a risk attitude distribution with four categories (risk averse, risk opportunists, risk neutral, and risk tolerant); four forms of disease response, one for each risk attitude category, are used to simulate biosecurity response-to-disease strategies; and a utility function for psychological distancing simulates the waning of biosecurity compliance since an infection event. The detailed description of parameters and methods for the ABM human behavioral component is provided in the Supplementary Material section The agent-based model's human behavioral component.
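A threshold-style response rule of this kind can be sketched as follows: the probability of increasing biosecurity grows with the number of infected premises reported in an agent's veterinarian network and crosses roughly 0.9 at an attitude-specific threshold. The logistic form, the steepness constant, and the threshold values are assumptions for illustration, not the model's actual parameterization.

```python
# Hedged sketch of an attitude-dependent biosecurity response: probability of
# increasing biosecurity crosses ~0.9 at the attitude-specific threshold.
import math

THRESHOLDS = {"risk averse": 3, "risk opportunist": 5, "risk neutral": 7, "risk tolerant": 9}

def p_increase_biosecurity(n_reported, risk_attitude, steepness=2.5):
    """Probability of adopting increased biosecurity this week (illustrative logistic rule)."""
    n0 = THRESHOLDS[risk_attitude]
    center = n0 - math.log(9) / steepness      # places the 0.9 crossing at the threshold n0
    return 1.0 / (1.0 + math.exp(-steepness * (n_reported - center)))

for attitude in THRESHOLDS:
    probs = [round(p_increase_biosecurity(n, attitude), 2) for n in range(12)]
    print(f"{attitude:>16}: {probs}")
```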
We ran the algorithm on each outbreak year using only patients who tested negative for all four of influenza, RSV, parainfluenza, and hMPV. Because they were tested, we assume they have an ILI. However, since all of their tests were negative, their diagnoses were indeterminate. That is, they have some kind of ILI but do not have any of the modeled diseases.
Fig 3 shows the (logarithm of the) daily odds of the presence of an unmodeled disease in the monitor window for outbreak year 2014-2015. DUDE begins computing odds on day 93 (September 1). The odds of the presence of an unmodeled disease slowly increased and was greater than 1 on day 106 (September 14, 2014) indicating that it was more likely than not that an unmodeled disease was present. After day 106 the odds of the presence of an unmodeled disease increased dramatically. An examination of records in the monitor window at that time showed a prevalence of patients with wheezing, chest wall retractions, runny nose, respiratory distress, crackles, tachypnea, abnormal breath sounds, headache, stuffy nose, and dyspnea. (These are the findings that were at least 25% more likely to occur in a patient in the monitor window than one in the baseline window and were present in at least 10% of the patients in the monitor window).
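The screening rule quoted in parentheses above can be sketched as follows; the patient records are synthetic and the DUDE odds computation itself is not reproduced, only the frequency comparison between the monitor and baseline windows.

```python
# Sketch of the finding-screening rule quoted above: report findings whose frequency
# in the monitor window is at least 25% higher than in the baseline window and that
# occur in at least 10% of monitor-window patients. Synthetic patient records.
def flag_findings(monitor_patients, baseline_patients,
                  min_relative_increase=0.25, min_monitor_frequency=0.10):
    findings = {f for p in monitor_patients for f in p}
    flagged = []
    for f in findings:
        p_mon = sum(f in p for p in monitor_patients) / len(monitor_patients)
        p_base = sum(f in p for p in baseline_patients) / len(baseline_patients)
        if p_mon >= min_monitor_frequency and p_mon >= (1 + min_relative_increase) * p_base:
            flagged.append((f, p_mon, p_base))
    return sorted(flagged, key=lambda x: -x[1])

monitor = [{"wheezing", "runny nose"}, {"wheezing", "dyspnea"}, {"cough"}, {"wheezing"}]
baseline = [{"cough"}, {"fever"}, {"cough", "runny nose"}, {"fever", "headache"}]
print(flag_findings(monitor, baseline))
```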
During this time period, the CDC identified an outbreak of Enterovirus D68 (EV-D68). In mid-August 2014, hospitals in Missouri and Illinois notified the CDC of an increase in admissions of children with severe respiratory illness. By September 8, 2014 officials at Primary Children’s Hospital in Salt Lake City, Utah suspected the presence of EV-D68, and by September 23, 2014 the CDC confirmed the existence of EV-D68 in Utah. From August 2014, the CDC and states increased testing for EV-D68 and found that it was causing severe respiratory illness in almost all states. Symptoms of EV-D68 include wheezing, difficulty breathing, runny nose, sneezing, cough, body aches, and muscle aches. (Severe symptoms of EV-D68 may also include acute flaccid paralysis, but this is not among the symptoms used by DUDE.)
Fig 4 shows the results of running the algorithm on patients who tested negative for all four of influenza, RSV, parainfluenza, and hMPV, for outbreak years 2010-2011 (top) through 2014-2015 (bottom).
We present a new journal selection for surveying articles on infectious disease research. The 100 selected journals support a quantitative survey of research articles not only in international journals but also in regional and non-English journals, with little bias among countries and regions. We suggest that surveying these 100 journals is more beneficial than using the SCI Infectious Disease Category, because it identifies more research articles and avoids underestimating the number of articles in regional and non-English journals. Our survey method may require further development; nevertheless, it provides an effective tool for grasping overall trends in infectious disease research around the world.
While the risk averseness constant c1 does not show up in any of our bounds for R0 and , it can play a critical role in who takes preemptive measures, as illustrated by the example in Fig. 2. Thus, the equilibrium infectivity level can depend on c1. Our first result regarding the risk averseness constant shows that when the empathy constant c2 is zero, we obtain the same outbreak threshold condition as when the behavior response is not accounted for in a disease spread model, i.e., for any . This threshold is analytically obtained by first approximating the Markov chain dynamics by an n-state differential equation and then linearizing the approximate model around its trivial fixed point, the origin – see Supplementary Section K. The derivation is similar to the derivation of the threshold for disease dynamics over networks without behavior response14. This result implies that no matter how risk averse the susceptible individuals are, they cannot eradicate the disease with certainty without the empathy of infected individuals in the stochastic disease network game.
Both this analytical result and the bounds for R0 and assume that initially only a single individual is infected. Hence, these conditions might not be accurate when the number of initially infected individuals is large. In Fig. 6 (bottom) and in Supplementary Section J, we consider numerical simulations where the expected number of initially infected is {5%, 20%, 50%, 100%}. These numerical simulations confirm that the disease cannot be eradicated at any c1 value if the initial number of infected is large and the empathy constant is zero. For values of closer to 1, a high enough risk aversion helps to eliminate the disease (Fig. 6, left). Finally, we observe that the frequency of disease eradication increases as we increase the risk aversion constant c1 when c2 is positive in Fig. 6.
This observation implies that even a little empathy can go a long way toward eradication of the disease given risk-averse susceptible individuals. In other words, when the empathy term is a positive value c2 > 0, there exists a sufficiently high risk averseness constant that is likely to eradicate the disease. While risk averseness by itself may not be able to eradicate the disease, we observe that it reduces the average number of infected individuals when the disease is endemic (see Supplementary Section H for corresponding figures). That is, risk averseness has longer-term effects on the disease dynamics comparable to those of empathy.
Understanding infectious disease patterns (i.e., space-time variations and/or changes) has always been challenging. Disease diffusion can vary significantly from place to place and from time to time for a number of reasons, including heterogeneity of hosts and pathogens, physical and social environments, and interactions across space and time. Moreover, uncertainties linked to population movement and to the records of infected individuals can increase the difficulty of understanding the spatiotemporal spread of an infectious disease. A number of key studies have shown that infectious disease spread depends significantly on the spatial features of a population [5], whereas major benefits of spatial disease modeling include the assessment of disease intervention and control strategies (e.g., border control and quarantine). Accordingly, several models have been proposed to quantify spatial disease features at both population and individual scales [8]. Among the best-known are the gravity, spatial micro-simulation, and network models. Most of these models focus primarily on interactions between the susceptible and infected populations across geographical locations, without considering the continuous local population dynamics of disease evolution. This is especially the case for the gravity model, in which the geographical distribution and interaction patterns of populations are discretized into separate locations. Stochastic susceptible-infected-recovered (SIR) models [13] have been widely implemented to represent disease evolution of populations over time. Spatial metapopulation approaches extend SIR models to explicitly account for local or global population movements between different geographical locations, in terms of patches or networks with deterministic or stochastic characteristics [16].
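As a concrete illustration of the class of models just described, the following minimal sketch advances a stochastic metapopulation SIR system by one time step, assuming binomial transitions within each patch and a fixed row-stochastic mobility matrix M; it is not a reproduction of any of the cited gravity, micro-simulation, or network formulations.

import numpy as np

def sir_metapop_step(S, I, R, M, beta=0.3, gamma=0.1, rng=None):
    """Advance integer patch-level S, I, R counts by one time step."""
    rng = rng or np.random.default_rng()
    N = S + I + R
    # Effective infectious pressure felt in each patch after mixing through M
    I_eff = M.T @ I
    N_eff = M.T @ N
    force = beta * I_eff / np.maximum(N_eff, 1)     # per-capita infection hazard
    new_inf = rng.binomial(S, 1 - np.exp(-force))   # S -> I transitions
    new_rec = rng.binomial(I, 1 - np.exp(-gamma))   # I -> R transitions
    return S - new_inf, I + new_inf - new_rec, R + new_rec

Iterating this step per patch and per day yields patch-level epidemic curves; the deterministic metapopulation variants mentioned above effectively replace the binomial draws with their expected values.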
The present study proposes a realistic space-time extension of a purely temporal SIR model (i.e., a metapopulation model) in the context of Bayesian maximum entropy (BME) theory. The space-time BME-SIR model has certain attractive features: (1) it represents the population dynamics of infectious diseases within and across localities; (2) it takes into consideration the composite space-time variation of disease features; (3) it accounts for observation uncertainties (e.g., in the records of infected individuals); (4) in addition to the susceptible-infected-recovered disease dynamics, it integrates different sources of knowledge (e.g., hard and soft disease data together with epidemic models and physical laws); and (5) it updates the space-time model parameters in real time.
This study was approved by the Institutional Review Board (IRB) of Korea University (IRB No. KU-IRB-18-EX-51-A-1). Informed consent was waived by the board.
The training cohort consisted of 104 children (overall inclusion rate 58%; of all patients who were contacted and asked, informed consent was given by 77%; see supplemental flowchart) (table 1). Children with severe disease were significantly younger and more often had siblings than patients in the mild and moderate groups. The number of children below 2 months of age was 5, 12, and 21 in the mild, moderate, and severe groups, respectively. Duration of hospitalisation increased significantly with disease severity. Two per cent of the patients were not hospitalised (2/104), whereas 17% of the hospitalised patients had only mild disease. RSV was detected in the majority of patients (65%), and viral coinfections were present in 43%. The highest proportion of RSV mono-infections was seen in children with a severe course of disease (p<0.001), as previously published by our group.35
The authors declare that they have no competing interests.
In late 2002, Chinese health officials reported an unusual number of atypical pneumonia cases in Guangdong Province, and within 2 months, the World Health Organization (WHO) was alerted of an outbreak widespread throughout the province [1]. By March 2003, the illness designated severe acute respiratory syndrome (SARS) had spread to Hong Kong, Singapore, Vietnam, and Toronto, Canada [2]. The global SARS outbreak ended that July, but during its existence the disease caused 774 fatalities and had a significant economic impact on Southeast Asia [3–5]. A special concern was its predilection for nosocomial spread, as 21% of SARS cases occurred in healthcare workers. In certain local outbreaks, hospital staff accounted for over 50% of cases, and nosocomial spread to other patients or family members accounted for a significant proportion of SARS cases [6].
Developing adequate animal models for SARS is a high research priority. Coronaviruses tend to have a limited range of host species, but several animal species have been found to support SARS-associated coronavirus (SARS-CoV) replication [7]. BALB/c mice were shown to support high levels of viral replication in the respiratory tract after intranasal challenge, and this model has been used to test SARS vaccines [8–10]. Infection of an additional rodent model (the strain 129SvEv mouse) showed infection of the respiratory epithelium after intranasal SARS-CoV challenge [11]. Domestic cats and ferrets support viral replication after intratracheal challenge, and ferret models have been used to study active and passive immunization against SARS [12–14]. Research has focused on several animal species as potential natural reservoirs for SARS-CoV. Himalayan palm civets (Paguma larvata) and raccoon dogs (Nyctereutes procyonoides) were found to be susceptible to infection with a virus closely related to human SARS-CoV [15]. Experimental infection of civets produced clinical illness and histopathological evidence of pneumonia [16]. Chickens and pigs challenged with SARS-CoV had viral RNA in blood during the first week postinfection, but neither species appeared to support significant viral replication or manifested clinical illness [17]. Recently, Li et al. [18] reported that several species of wild bats in China are carriers of a coronavirus closely related to SARS-CoV. No studies have evaluated animal model infection or pathogenesis of recombinant infectious clone SARS-CoV (icSARS-CoV) derived from a molecular clone [19].
Nonhuman primate (NHP) models of SARS-CoV infection have yielded absent to moderate observable disease that has not replicated the severity of human SARS [20–25]. Fever was notably absent in all studies, except for one African green monkey on day 3 postinfection [20]. All studies detected SARS-CoV replication in one or several monkey species and documented seroconversion, thereby confirming established infection. Aside from observable clinical symptoms, these studies relied on virus shedding and histopathology specimens from necropsy as objective markers of disease. Most studies euthanized animals during the course of infection to document histopathological disease. Only two studies followed animals for more than 14 d after infection [20,22]. No study has examined radiographic evidence of pulmonary disease, which is one of the most prominent features of SARS in humans.
In adult humans, SARS presents as a severe febrile pneumonia [1]. It has been characterized as a three-phase illness: a first phase consisting of a flu-like illness, followed by a phase of lower respiratory tract disease, with a third phase of clinical deterioration in a process resembling adult respiratory distress syndrome [26]. Disease progression can be somewhat slow, with onset of severe respiratory disease occurring anywhere from 1 to 2 wk after initial symptoms [27]. Pulmonary radiographic abnormalities are almost universally reported in SARS cases [28]. However, early radiographs may be normal, and there is clear evidence of infection without radiographic abnormality in a small number of cases [29,30]. Multifocal disease is present in 30–50% of initial radiographs, and the majority of persons progress to multifocal disease that peaks between 8 and 14 d after symptom onset [28,31–34]. Severe disease develops in up to 30% of patients, with the most ill developing diffuse or confluent airspace consolidation consistent with adult respiratory distress syndrome [28,31,33].
In contrast to adults, SARS in young children tends to be a relatively mild disease [35]. Adolescents can experience significant respiratory disease similar to adults, but younger children generally do not [36–39]. Constitutional symptoms such as myalgias, chills, and headache that are common in adults are usually absent in children [35,40]. Children have a shorter course of illness, most being afebrile by 7 d, and generally do not develop pulmonary disease significant enough to require assisted ventilation or even supplemental oxygen [36–39,41]. As a result, the WHO diagnostic criteria were not reliable in identifying SARS in pediatric patients [38]. Some experts have recommended the term “mild acute respiratory syndrome” for SARS-CoV infection in children [35].
Radiographic findings in children with SARS are also less significant than in adults, in both presentation and progression [40]. Up to 50% of children have normal initial chest radiographs [35]. In children with abnormal radiographs, unilateral, focal airspace disease predominates [36,37,39]. Most children have worsening of radiographic disease as illness progresses, with multifocal or bilateral lung involvement developing in 20–50% of cases [39,42]. Radiographic abnormalities in children generally resolve quickly, within 6–14 d [37,42,43].
In this report, we document the results of observational studies of SARS-CoV and icSARS-CoV infection in nonhuman primates. These studies focused on clinical and virologic parameters associated with infection in cynomolgus macaques in an attempt to examine the underlying mechanism of disease and to study a potential animal model for SARS.
The acute phase of the diseases had the highest impact on the total burden (76%; see Supplement 4). This was the result of the outcome trees, which modelled the case fatality proportion (CFP) as a risk applied directly to the acute infection. The high share of YLLs (72% of total DALYs, see Table 2) compared with YLDs was due to the limited amount of time lived with a disability, which is typical for infectious diseases.
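For reference, the relationship assumed in the discussion of YLLs and YLDs is the standard burden-of-disease decomposition sketched below; the symbols (case and death counts n, disability weight DW, durations L) are generic and do not capture the authors' full outcome-tree calculation.

% Standard DALY decomposition (generic notation, not the authors' outcome-tree model):
% n_deaths = number of deaths, n_cases = number of cases,
% L = remaining life expectancy or duration of disability, DW = disability weight.
\begin{align}
  \mathrm{DALY} &= \mathrm{YLL} + \mathrm{YLD}, \\
  \mathrm{YLL}  &= n_{\mathrm{deaths}} \times L_{\mathrm{remaining}}, \\
  \mathrm{YLD}  &= n_{\mathrm{cases}} \times DW \times L_{\mathrm{disability}}.
\end{align}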