Saturday, March 3, 2012

News and Events - 04 Mar 2012




02.03.2012 11:04:00

DOMINIC COYLE

Blockbuster drugs coming off patent will knock a major hole in our export figures and tax revenues

PHARMACEUTICALS HAVE been a driving force for Ireland’s export success in recent years. Even through the darkest days of our financial collapse and recession, the sector, dominated by the large multinational players, continued to deliver export growth and a glimmer of hope of economic recovery.

However, the most recent trade figures point to a looming problem for the Government. Reporting a 9 per cent fall in exports in December, the Central Statistics Office was unusually frank and detailed in stating that “a substantial part of the decline in the value of exports was due to a high value product in the chemicals and related products sector coming off patent”.

The drug is Lipitor, Pfizer’s blockbuster cholesterol lowering therapy and the world’s best-selling drug in recent years, accounting for revenues of $10.7 billion in 2010. Pfizer’s Cork plant produces 100 per cent of the company’s global requirements for the active pharmaceutical ingredient in the drug and a significant portion of the finished tablets.

Coming off patent will knock a major hole in the future revenues Pfizer can expect to get from the drug as generic competition kicks in. As a rule, loss of patent protection can hit the value of sales by anything between 40 and 70 per cent over time – and not much time at that.

For Ireland, the concern in the December figures was that, for now, generic competition to Lipitor is limited. If that was enough to skew the export figures so dramatically, the worry is what damage future, more intense competition will do to our trade balance.

And Lipitor is just one of a number of key drugs in which Ireland has a commercial interest and which are coming off patent. Chris van Egeraat, a lecturer in economic geography at NUI Maynooth, says seven of the 10 largest-selling drugs worldwide which are losing patent protection are currently produced in Ireland. They include the best-selling drug worldwide in 2010 after Lipitor: Sanofi/Bristol-Myers Squibb’s anti-clotting treatment Plavix, with sales of $9.43 billion. It comes off patent in May.

Globally, it is estimated that as much as $100 billion in sales will be lost to drug companies between 2009 and 2014 as a result of drugs coming off patent. Expected pipeline delivery in terms of market revenue over the same time amounts to about $30 billion, Dr van Egeraat says.

While he doesn’t expect the loss of patents to lead to huge imminent job losses, it does highlight the ambition of the Government’s new Action Plan for Jobs, which has targeted the health and life sciences sector for significant growth in the coming years to help reach the Government’s 100,000 job target.

However, loss of market sales will clearly impact on trade figures and tax revenues. The Irish Pharmaceutical Healthcare Association (IPHA) notes that the pharmaceuticals sector accounts for roughly half of all exports and is the largest contributor to corporation tax, accounting for roughly 50 per cent of the €3.5 billion collected last year.

In employment terms, IPHA president and Pfizer country manager David Gallagher says that about 25,000 people are employed directly in the industry, with a roughly similar number working in related sectors. He notes that pharmaceuticals has been more resilient than other sectors of the economy during recent “economically challenging times”.

The message is pointed, especially at a time when the sector is locked in a dispute with the Government over access to market for its new drugs and the contribution it can make to savings in the health budget sought by the State.

David Gallagher noted recently that an increasing number of innovative medicines are currently not being reimbursed by the Department of Health, despite being approved by regulators and meeting health technology assessments.

He recently accused the department of acting in bad faith by refusing to approve drugs for reimbursement as provided for under the industry’s current pricing agreement with the Department, even though the industry had delivered savings of about €540 million over the past five years, a figure he says equates to a 20 per cent cut.

Even before the latest row, the IPHA said the delay between approval and market access had jumped by over 50 per cent to 157 days in recent years, and only 64 per cent of drugs that received market authorisation in the EU between 2007 and 2009 were made available to patients here.

Matt Moran, director of Ibec group PharmaChem Ireland, said Government policy “needs to urgently recognise the very serious challenges facing the industry”.

“A number of blockbuster drugs are coming off patent and healthcare spending in Ireland has been cut by €600 million in the last five years,” he said. “The future success of the sector must not be taken for granted.”

In a speech last year, Gallagher said further price concessions were “simply untenable”, citing preliminary 2011 figures pointing to a 5.2 per cent decline in the value of the Irish market. “There is a limit to the amount which can be taken out of a market without its effective operation and employment being jeopardised,” he said.

Ireland is not alone. The commercial prospects for big pharma were also thrown into sharp focus with a report on the UK’s pharmaceuticals price regulation scheme, which reported collective industry losses of £142 million in 2009 despite rising sales.

For its part, the Department of Health needs to find cuts in its budget. In a recent report on pharma pricing, the Economic and Social Research Institute (ESRI) said that drug costs account for about 17.5 per cent of public health expenditure in Ireland, up from 14 per cent in 2000.

In 2009, the ESRI says, spending per head of population in Ireland on pharmaceuticals was “amongst the highest in the OECD”.

It is understood the Department of Health is targeting a saving of about €112 million from the drugs bill – either in terms of pricing and access for new medicines or pricing of generics.

The ESRI report recommended a number of approaches. These included pricing drugs on the basis of the lowest cost in a basket of European markets rather than the average, and more regular price updates to capture the impact of falling prices earlier.

The industry says that, despite the small physical size of the Irish market, such a move would be negative in two ways. First, Ireland is itself a component of pricing baskets in eight other larger EU markets. A “match the lowest” price here will inevitably further eat into prices in other more important markets.

Secondly, the industry points to Ireland’s importance as a base of operations for most of the main players in the sector. An increasingly adversarial approach with the State will only damage the prospects for future investment, they say, with one industry source saying the recent approach of the department to market access for new drugs was creating a very poor impression in a number of important boardrooms Stateside.

The seriousness with which the pharmaceutical sector views the current price negotiations in Ireland – where eight of the top 10 global players have operations – is highlighted by the engagement of some of the industry’s leading figures with the Government.

The chief executive of one major global player is making a point of briefly visiting Ireland next week. The message in his first visit to the State will not be lost on ministers. The following week, leading executives from another top 10 drug manufacturer gather in Dublin for a meeting at which the attitude of the State to the sector is certain to figure.

On the Government’s side, there is concern too at any adverse impact on such a major employer and contributor to the exchequer. Taoiseach Enda Kenny has recently engaged directly in private meetings with top industry figures here to assure them of the Government’s support despite the ongoing budgetary squeeze.

For their part, the drug companies say that current pricing pressures are restricting innovations. Without adequate compensation, they say, companies simply will not be able to invest in new products given the costs involved and the risk of failure.

This isn’t unique to Ireland. Reporting annual results earlier this month, Bayer chief executive Marijn Dekkers expressed concern “about the side-effects” of health service reforms taking place around the world “because the money we earn from today’s medicines pays for the development of tomorrow’s medicines”.

Pointing to the €2 billion research and development cost of Xarelto, a new drug developed with Johnson & Johnson to prevent blood clotting, he said: “We need innovative pharmaceuticals more than ever, because so many known diseases still cannot be treated adequately, or at all, with medicines.”

But that’s part of the problem for the major pharmaceutical companies. Many of the easy treatment areas are now well catered for, and a good portion of the drugs that serve them are shortly coming off patent and easily accessible to generic competition.

The opportunities of the future lie in increasingly niche conditions or very high risk areas such as oncology and, especially, neurology. Added to this is the move to biologics and the trend towards more personalised medicines.

The challenge is evident in the fact that, last year, the US drug regulator, the Food and Drug Administration, licensed just a handful of new pharmaceutical therapies. Getting this more select group of drugs to as many markets as possible is increasingly critical for big pharma.

The age of the blockbuster is fading, along with the fat profit margins it offered. That presents major issues for the sector. Over time, through consolidation and acquisition, the big drug companies have grown into massive, unwieldy entities with poorly directed research failing to deliver a sufficient pipeline.

In recent years, much effort has been devoted to streamlining operations and increasing productivity, especially on the research side. Thousands of jobs have been shed worldwide, and greater emphasis placed on outsourcing much of the early-stage R&D work.

A case in point is Elan’s prospective Alzheimer’s treatment bapineuzumab. Originally developed by the company in association with Wyeth, it is now controlled by Pfizer (which acquired Wyeth to fill a perceived weakness in its biopharmaceuticals operations) and Johnson & Johnson, which bought an 18.4 per cent stake in Elan in 2009 in a deal valued at $1 billion. Its interest was driven largely by the Irish company’s pipeline – particularly bapineuzumab, which is seen as one of the more promising candidates to address a disease with limited treatment options at present and which reports critical Phase III trial data later this year.

The second focus is on developing new markets. But that presents its own problems, not least with business practices that have reflected poorly on the industry.

Several of the largest drug companies have been implicated in an ongoing legal action in Serbia in which a group of 10 doctors and drug company officials were charged with taking, or offering, more than €500,000 in bribes to use specific products. While all deny guilt in this case, an examination of US Securities and Exchange Commission (SEC) filings by the world’s top 10 drug companies has found that eight of them recently warned of potential costs related to charges of corruption in overseas markets.

Life is unlikely to get any easier for the sector over the coming two or three years. That raises the stakes in the ongoing price negotiations. The new accord was due to come into force yesterday and, as the leaked Commission assessment this week illustrated, pressure on the Government to deliver the savings necessary to meet its budgetary projections is only likely to intensify.





03.03.2012 8:49:00



Why Most Published Research Findings Are False

John P. A. Ioannidis



Summary

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

Citation: Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

Published: August 30, 2005

Copyright: © 2005 John P. A. Ioannidis. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Competing interests: The author has declared that no competing interests exist.

Abbreviation: PPV, positive predictive value

John P. A. Ioannidis is in the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece, and Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts-New England Medical Center, Tufts University School of Medicine, Boston, Massachusetts, United States of America. E-mail: jioannid@cc.uoi.gr

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [ 1–3] to the most modern molecular research [ 4, 5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [ 6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.


Modeling the Framework for False Positive Findings

Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

It can be proven that most claimed research findings are false

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10, 11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 − β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 − β)R/(R − βR + α). A research finding is thus more likely true than false if (1 − β)R > α. Since usually the vast majority of investigators depend on α = 0.05, this means that a research finding is more likely true than false if (1 − β)R > 0.05.
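
As a quick numerical illustration of the formula above, the following minimal Python sketch (an addition for illustration; the function name and example values are ours, not part of the original essay) computes the PPV from the pre-study odds R, the significance level α and the Type II error rate β.

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Post-study probability that a claimed finding is true:
    PPV = (1 - beta) * R / (R - beta * R + alpha)."""
    return (1 - beta) * R / (R - beta * R + alpha)

# A finding is more likely true than false only when (1 - beta) * R > alpha.
print(ppv(R=1.0))    # 1:1 pre-study odds, 80% power   -> PPV ~ 0.94
print(ppv(R=0.01))   # 1:100 pre-study odds, 80% power -> PPV ~ 0.14
```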



Table 1. Research Findings and True Relationships

doi:10.1371/journal.pmed.0020124.t001


What is less well appreciated is that bias and the extent of repeated independent testing by different teams of investigators around the globe may further distort this picture and may lead to even smaller probabilities of the research findings being indeed true. We will try to model these two factors in the context of similar 2 × 2 tables.


Bias

First, let us define bias as the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced. Let u be the proportion of probed analyses that would not have been “research findings,” but nevertheless end up presented and reported as such, because of bias. Bias should not be confused with chance variability that causes some findings to be false by chance even though the study design, data, analysis, and presentation are perfect. Bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias. We may assume that u does not depend on whether a true relationship exists or not. This is not an unreasonable assumption, since typically it is impossible to know which relationships are indeed true. In the presence of bias (Table 2), one gets PPV = ([1 − β]R + uβR)/(R + α − βR + u − uα + uβR), and PPV decreases with increasing u, unless 1 − β ≤ α, i.e., 1 − β ≤ 0.05 for most situations. Thus, with increasing bias, the chances that a research finding is true diminish considerably. This is shown for different levels of power and for different pre-study odds in Figure 1. Conversely, true research findings may occasionally be annulled because of reverse bias. For example, with large measurement errors relationships are lost in noise [12], or investigators use data inefficiently or fail to notice statistically significant relationships, or there may be conflicts of interest that tend to “bury” significant findings [13]. There is no good large-scale empirical evidence on how frequently such reverse bias may occur across diverse research fields. However, it is probably fair to say that reverse bias is not as common. Moreover, measurement errors and inefficient use of data are probably becoming less frequent problems, since measurement error has decreased with technological advances in the molecular era and investigators are becoming increasingly sophisticated about their data. Regardless, reverse bias may be modeled in the same way as bias above. Also, reverse bias should not be confused with chance variability that may lead to missing a true relationship because of chance.
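
Under the same conventions, here is a short sketch of the biased-PPV expression above (again an illustrative addition, with the default α and β values assumed by us):

```python
def ppv_with_bias(R, u, alpha=0.05, beta=0.20):
    """PPV in the presence of bias u (the Table 2 expression):
    ([1 - beta]R + u*beta*R) / (R + alpha - beta*R + u - u*alpha + u*beta*R)."""
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

# With 1:1 pre-study odds and 80% power, PPV falls from ~0.94 at u = 0
# to ~0.85 at u = 0.10 and ~0.72 at u = 0.30.
for u in (0.0, 0.10, 0.30):
    print(u, round(ppv_with_bias(R=1.0, u=u), 2))
```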



Figure 1. PPV (Probability That a Research Finding Is True) as a Function of the Pre-Study Odds for Various Levels of Bias, u

Panels correspond to power of 0.20, 0.50, and 0.80.

doi:10.1371/journal.pmed.0020124.g001




Table 2. Research Findings and True Relationships in the Presence of Bias

doi:10.1371/journal.pmed.0020124.t002


Testing by Several Independent Teams

Several independent teams may be addressing the same sets of research questions. As research efforts are globalized, it is practically the rule that several research teams, often dozens of them, may probe the same or similar questions. Unfortunately, in some areas, the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation. An increasing number of questions have at least one study claiming a research finding, and this receives unilateral attention. The probability that at least one study, among several done on the same question, claims a statistically significant research finding is easy to estimate. For n independent studies of equal power, the 2 × 2 table is shown in Table 3: PPV = R(1 − βⁿ)/(R + 1 − [1 − α]ⁿ − Rβⁿ) (not considering bias). With increasing number of independent studies, PPV tends to decrease, unless 1 − β < α, i.e., typically 1 − β < 0.05. This is shown for different levels of power and for different pre-study odds in Figure 2. For n studies of different power, the term βⁿ is replaced by the product of the terms βᵢ for i = 1 to n, but inferences are similar.
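
The multiple-teams expression can be sketched the same way (an illustrative addition; the parameter defaults are assumptions, not values taken from the essay):

```python
def ppv_multiple_teams(R, n, alpha=0.05, beta=0.20):
    """PPV when n independent, equally powered studies probe the same question
    and at least one claims significance (the Table 3 expression):
    R*(1 - beta**n) / (R + 1 - (1 - alpha)**n - R*beta**n)."""
    num = R * (1 - beta ** n)
    den = R + 1 - (1 - alpha) ** n - R * beta ** n
    return num / den

# With 1:1 pre-study odds and 80% power, PPV erodes as more teams chase
# the same question: n = 1 -> ~0.94, n = 5 -> ~0.82, n = 10 -> ~0.71.
for n in (1, 5, 10):
    print(n, round(ppv_multiple_teams(R=1.0, n=n), 2))
```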



Figure 2. PPV (Probability That a Research Finding Is True) as a Function of the Pre-Study Odds for Various Numbers of Conducted Studies, n

Panels correspond to power of 0.20, 0.50, and 0.80.

doi:10.1371/journal.pmed.0020124.g002




Table 3. Research Findings and True Relationships in the Presence of Multiple Studies

doi:10.1371/journal.pmed.0020124.t003


Corollaries

A practical example is shown in Box 1. Based on the above considerations, one may deduce several interesting corollaries about the probability that a research finding is indeed true.


Box 1. An Example: Science at Low Pre-Study Odds

Let us assume that a team of investigators performs a whole genome association study to test whether any of 100,000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease, it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia, with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100,000 = 10⁻⁴, and the pre-study probability for any polymorphism to be associated with schizophrenia is also R/(R + 1) = 10⁻⁴. Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at α = 0.05. Then it can be estimated that if a statistically significant association is found with the p-value barely crossing the 0.05 threshold, the post-study probability that this is true increases about 12-fold compared with the pre-study probability, but it is still only 12 × 10⁻⁴.

Now let us suppose that the investigators manipulate their design, analyses, and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results, strictly according to the original study plan. Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results. Commercially available “data mining” packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10, the post-study probability that a research finding is true is only 4.4 × 10⁻⁴. Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 × 10⁻⁴, hardly any higher than the probability we had before any of this extensive research was undertaken!
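
The first figures in Box 1 can be reproduced directly from the formulas above; the short check below is an illustrative addition using the parameters stated in the box (R = 10⁻⁴, 60% power, α = 0.05, bias u = 0.10).

```python
R, alpha, beta, u = 1e-4, 0.05, 0.40, 0.10   # Box 1: 60% power => beta = 0.40

pre_study = R / (R + 1)                                    # ~1e-4
ppv_plain = (1 - beta) * R / (R - beta * R + alpha)        # ~12e-4, a ~12-fold gain
ppv_biased = ((1 - beta) * R + u * beta * R) / (
    R + alpha - beta * R + u - u * alpha + u * beta * R)   # ~4.4e-4 with bias u = 0.10

print(pre_study, ppv_plain, ppv_plain / pre_study, ppv_biased)
```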

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. Small sample size means smaller power and, for all functions above, the PPV for a true research finding decreases as power decreases towards 1 − β = 0.05. Thus, other factors being equal, research findings are more likely true in scientific fields that undertake large studies, such as randomized controlled trials in cardiology (several thousand subjects randomized) [14] than in scientific fields with small studies, such as most research of molecular predictors (sample sizes 100-fold smaller) [15].

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. Power is also related to the effect size. Thus research findings are more likely true in scientific fields with large effects, such as the impact of smoking on cancer or cardiovascular disease (relative risks 3–20), than in scientific fields where postulated effects are small, such as genetic risk factors for multigenetic diseases (relative risks 1.1–1.5) [7]. Modern epidemiology is increasingly obliged to target smaller effect sizes [16]. Consequently, the proportion of true research findings is expected to decrease. In the same line of thinking, if the true effect sizes are very small in a scientific field, this field is likely to be plagued by almost ubiquitous false positive claims. For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors.

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. As shown above, the post-study probability that a finding is true (PPV) depends a lot on the pre-study odds (R). Thus, research findings are more likely true in confirmatory designs, such as large phase III randomized controlled trials, or meta-analyses thereof, than in hypothesis-generating experiments. Fields considered highly informative and creative given the wealth of the assembled and tested information, such as microarrays and other high-throughput discovery-oriented research [4, 8, 17], should have extremely low PPV.

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results, i.e., bias, u. For several research designs, e.g., randomized controlled trials [18–20] or meta-analyses [21, 22], there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes) [23]. Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) [24] may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem. For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials [25]. Simply abolishing selective publication would not make this problem go away.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u . Conflicts of interest are very common in biomedical research [ 26], and typically they are inadequately and sparsely reported [ 26, 27]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [ 28].

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This seemingly paradoxical corollary follows because, as stated above, the PPV of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results. “Negative” results may become attractive for dissemination only if some other team has found a “positive” association on the same question. In that case, it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations [29]. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics [29].

These corollaries consider each factor separately, but these factors often influence each other. For example, investigators working in fields where true effect sizes are perceived to be small may be more likely to perform large studies than investigators working in fields where true effect sizes are perceived to be large. Or prejudice may prevail in a hot scientific field, further undermining the predictive value of its research findings. Highly prejudiced stakeholders may even create a barrier that aborts efforts at obtaining and disseminating opposing results. Conversely, the fact that a field is hot or has strong invested interests may sometimes promote larger studies and improved standards of research, enhancing the predictive value of its research findings. Or massive discovery-oriented testing may result in such a large yield of significant relationships that investigators have enough to report and search further and thus refrain from data dredging and manipulation.


Most Research Findings Are False for Most Research Designs and for Most Fields

In the described framework, a PPV exceeding 50% is quite difficult to get. Table 4 provides the results of simulations using the formulas developed for the influence of power, ratio of true to non-true relationships, and bias, for various types of situations that may be characteristic of specific study designs and settings. A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time. A fairly similar performance is expected of a confirmatory meta-analysis of good-quality randomized trials: potential bias probably increases, but power and pre-test chances are higher compared to a single randomized trial. Conversely, a meta-analytic finding from inconclusive studies where pooling is used to “correct” the low power of single studies, is probably false if R ≤ 1:3. Research findings from underpowered, early-phase clinical trials would be true about one in four times, or even less frequently if bias is present. Epidemiological studies of an exploratory nature perform even worse, especially when underpowered, but even well-powered epidemiological studies may have only a one in five chance of being true, if R = 1:10. Finally, in discovery-oriented research with massive testing, where tested relationships exceed true ones 1,000-fold (e.g., 30,000 genes tested, of which 30 may be the true culprits) [30, 31], PPV for each claimed relationship is extremely low, even with considerable standardization of laboratory and statistical methods, outcomes, and reporting thereof to minimize bias.
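
For example, the randomized-trial and early-phase-trial scenarios described above can be checked with the biased-PPV expression. The power, pre-study odds and bias values below are the ones we understand Table 4 to use for those rows, so treat this as an illustrative sketch rather than a reproduction of the full table.

```python
def ppv_with_bias(R, u, alpha=0.05, beta=0.20):
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

# Adequately powered RCT, 1:1 pre-study odds, little bias (u = 0.10) -> ~0.85
print(round(ppv_with_bias(R=1.0, u=0.10), 2))

# Underpowered early-phase trial: power 0.20, 1:5 pre-study odds, u = 0.20 -> ~0.23
print(round(ppv_with_bias(R=0.2, u=0.20, beta=0.80), 2))
```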



Table 4. PPV of Research Findings for Various Combinations of Power (1 − β), Ratio of True to Not-True Relationships (R), and Bias (u)

doi:10.1371/journal.pmed.0020124.t004


Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias

As shown, the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings. Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a “null field,” one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias.

For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance are simply those that have sustained the worst biases.

For fields with very low PPV, the few true relationships would not distort this overall picture much. Even if a few relationships are true, the shape of the distribution of the observed effects would still yield a clear measure of the biases involved in the field. This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results.

Of course, investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a “null field.” However, other lines of evidence, or advances in technology and experimentation, may lead eventually to the dismantling of a scientific field. Obtaining measures of the net bias in one field may also be useful for obtaining insight into what might be the range of bias operating in other fields where similar analytical methods, technologies, and conflicts may be operating.


How Can We Improve the Situation?

Is it unavoidable that most research findings are false, or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable. However, there are several approaches to improve the post-study probability.

Better powered evidence, e.g., large studies or low-bias meta-analyses, may help, as it comes closer to the unknown “gold” standard. However, large studies may still have biases and these should be acknowledged and avoided. Moreover, large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high, so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow, specific questions. A negative finding can then refute not only a specific proposed claim, but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria, such as the marketing promotion of a specific drug, is largely wasted research. Moreover, one should be cautious that extremely large studies may be more likely to find a formally statistically significant difference for a trivial effect that is not really meaningfully different from the null [32–34].

Second, most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve. In some research designs, efforts may also be more successful with upfront registration of studies, e.g., randomized trials [ 35]. Registration would pose a challenge for hypothesis-generating research. Some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment. Regardless, even if we do not see a great deal of progress with registration of studies in other fields, the principles of developing and adhering to a protocol could be more widely borrowed from randomized controlled trials.

Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values—the pre-study odds—where research efforts operate [ 10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test [ 36].

Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [ 37], usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context.


References

  1. Ioannidis JP, Haidich AB, Lau J (2001) Any casualties in the clash of randomised and observational evidence? BMJ 322: 879–880.
  2. Lawlor DA, Davey Smith G, Kundu D, Bruckdorfer KR, Ebrahim S (2004) Those confounded vitamins: What can we learn from the differences between observational versus randomised trial evidence? Lancet 363: 1724–1727.
  3. Vandenbroucke JP (2004) When are observational studies as credible as randomised trials? Lancet 363: 1728–1731.
  4. Michiels S, Koscielny S, Hill C (2005) Prediction of cancer outcome with microarrays: A multiple random validation strategy. Lancet 365: 488–492.
  5. Ioannidis JPA, Ntzani EE, Trikalinos TA, Contopoulos-Ioannidis DG (2001) Replication validity of genetic association studies. Nat Genet 29: 306–309.
  6. Colhoun HM, McKeigue PM, Davey Smith G (2003) Problems of reporting genetic associations with complex outcomes. Lancet 361: 865–872.
  7. Ioannidis JP (2003) Genetic associations: False or true? Trends Mol Med 9: 135–138.
  8. Ioannidis JPA (2005) Microarrays and molecular research: Noise discovery? Lancet 365: 454–455.
  9. Sterne JA, Davey Smith G (2001) Sifting the evidence—What's wrong with significance tests. BMJ 322: 226–231.
  10. Wacholder S, Chanock S, Garcia-Closas M, Elghormli L, Rothman N (2004) Assessing the probability that a positive report is false: An approach for molecular epidemiology studies. J Natl Cancer Inst 96: 434–442.
  11. Risch NJ (2000) Searching for genetic determinants in the new millennium. Nature 405: 847–856.
  12. Kelsey JL, Whittemore AS, Evans AS, Thompson WD (1996) Methods in observational epidemiology, 2nd ed. New York: Oxford U Press. 432 p.
  13. Topol EJ (2004) Failing the public health—Rofecoxib, Merck, and the FDA. N Engl J Med 351: 1707–1709.
  14. Yusuf S, Collins R, Peto R (1984) Why do we need some large, simple randomized trials? Stat Med 3: 409–422.
  15. Altman DG, Royston P (2000) What do we mean by validating a prognostic model? Stat Med 19: 453–473.
  16. Taubes G (1995) Epidemiology faces its limits. Science 269: 164–169.
  17. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, et al. (1999) Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286: 531–537.
  18. Moher D, Schulz KF, Altman DG (2001) The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 357: 1191–1194.
  19. Ioannidis JP, Evans SJ, Gotzsche PC, O'Neill RT, Altman DG, et al. (2004) Better reporting of harms in randomized trials: An extension of the CONSORT statement. Ann Intern Med 141: 781–788.
  20. International Conference on Harmonisation E9 Expert Working Group (1999) ICH Harmonised Tripartite Guideline. Statistical principles for clinical trials. Stat Med 18: 1905–1942.
  21. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al. (1999) Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 354: 1896–1900.
  22. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, et al. (2000) Meta-analysis of observational studies in epidemiology: A proposal for reporting. Meta-analysis of Observational Studies in Epidemiology (MOOSE) group. JAMA 283: 2008–2012.
  23. Marshall M, Lockwood A, Bradley C, Adams C, Joy C, et al. (2000) Unpublished rating scales: A major source of bias in randomised controlled trials of treatments for schizophrenia. Br J Psychiatry 176: 249–252.
  24. Altman DG, Goodman SN (1994) Transfer of technology from statistical journals to the biomedical literature. Past trends and future predictions. JAMA 272: 129–132.
  25. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA 291: 2457–2465.
  26. Krimsky S, Rothenberg LS, Stott P, Kyle G (1998) Scientific journals and their authors' financial interests: A pilot study. Psychother Psychosom 67: 194–201.
  27. Papanikolaou GN, Baltogianni MS, Contopoulos-Ioannidis DG, Haidich AB, Giannakakis IA, et al. (2001) Reporting of conflicts of interest in guidelines of preventive and therapeutic interventions. BMC Med Res Methodol 1: 3.
  28. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992) A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 268: 240–248.
  29. Ioannidis JP, Trikalinos TA (2005) Early extreme contradictory estimates may appear in published research: The Proteus phenomenon in molecular genetics research and randomized trials. J Clin Epidemiol 58: 543–549.
  30. Ntzani EE, Ioannidis JP (2003) Predictive ability of DNA microarrays for cancer outcomes and correlates: An empirical assessment. Lancet 362: 1439–1444.
  31. Ransohoff DF (2004) Rules of evidence for cancer molecular-marker discovery and validation. Nat Rev Cancer 4: 309–314.
  32. Lindley DV (1957) A statistical paradox. Biometrika 44: 187–192.
  33. Bartlett MS (1957) A comment on D.V. Lindley's statistical paradox. Biometrika 44: 533–534.
  34. Senn SJ (2001) Two cheers for P-values. J Epidemiol Biostat 6: 193–204.
  35. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical trial registration: A statement from the International Committee of Medical Journal Editors. N Engl J Med 351: 1250–1251.
  36. Ioannidis JPA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218–228.
  37. Hsueh HM, Chen JJ, Kodell RL (2003) Comparison of methods for estimating the number of true null hypotheses in multiplicity testing. J Biopharm Stat 13: 675–689.





Pharma International's US Correspondent
02.03.2012 12:59:54

A modified osteoporosis treatment drug might be able to tackle human malaria infections, according to new US research.

Based at the University of Illinois, a team spearheaded by Professor Eric Oldfield has discovered that this drug, in adapted form, can enter malaria-infected red blood cells and destroy the parasite – although, so far, only in mice.

Crucially, though, no significant side effects were recorded as far as the mice's health was concerned.

Modified Osteoporosis Drug

The modified osteoporosis drug draws on chemically-altered variants of zoledronate (marketed as Zometa) and risedronate (marketed as Actonel). These, in their original forms, can't traverse the membrane that surrounds red blood cells but, with the addition of new features like an extended lipid structure, they're able to get through.

"We found that compounds that were really active had a very long hydrocarbon chain", Oldfield explained, in a statement. "These compounds can cross the cell membrane and work at very low concentrations."

Malaria Drug Treatments

According to the World Health Organization, it's possible that malaria infections claimed the lives of over one million victims in 2008, the majority of them located in Asia and Sub-Saharan Africa. The parasite that causes malaria is ever-changing, which means that, in theory, malaria drug treatment developers are caught in a constant game of catch-up.

There's a constant stream of emergent malaria strains that no treatments presently in production can tackle and that's why, according to Oldfield: "it's important to find new drug targets because malaria drugs last only a few years, maybe 10 years, before you start to get resistance. The parasites mutate and then you lose your malaria drug."

Further details of this new malaria treatment research have been published by the Proceedings of the National Academy of Sciences.

Image copyright US Centers for Disease Control and Prevention




03.03.2012 6:18:25


The institute offers an MSc in 'Bioinformatics and Biotechnology'. The course will provide competence in both Biotechnology and computational biology/Bioinformatics by providing training in the areas of cell and molecular biology, computer science, statistics, Bioinformatics and so on (syllabus available below).

 

IBAB has been running a Postgraduate Diploma in Bioinformatics since 2002, and the 11th batch is currently doing its coursework. It has also been running a Postgraduate Diploma in Laboratory Biotechniques since 2004, and the 8th batch has completed its coursework. The two popular programmes are now being discontinued to enable the faculty to focus on the MSc programme.

Every student is provided

  • Access to a fully loaded high-end PC and extensive access to the internet for the entire duration of 2 years.
  • Lectures are delivered by in-house faculty, distinguished academics and industry scientists from leading institutions/companies within the country or from abroad.
  • A significant amount of time is devoted to learning through material available on-line, journals, seminars, tutorials, group discussions etc.
  • Thus the extensive lecture/tutorial program is supplemented by an enabling environment and associated self-learning through assignments and project work.

The 2 year MSc course is structured into 4 semesters, as below.

1. ADMISSION PROCEDURE

  • Application processing fees (non-refundable):

    • Rs. 500/- for students from India and non-industrialized countries
    • Rs. 1000/- for NRI/Foreign students from industrialized countries

A Demand Draft for the above-mentioned amount in favour of 'Institute of Bioinformatics and Applied Biotechnology', payable at Bengaluru, needs to be sent to the institute (please see the next section for other documents that also have to be submitted).

2. Application form

Fill up all details online and separately send the DD, 2 passport-size photos and photocopies of marksheets to IBAB by post/courier. THIS IS THE ONLY OPTION FOR APPLYING. There is no hardcopy form available for download or on request.

The online form will be processed only on receipt of the DD.

3. List of necessary documents

  • Regular candidates:
  • Photocopies of all certificates/marksheets
  • NRI/Foreign candidates:
  • Photocopies of all certificates/marksheets
  • Proof of NRI/Foreign status (stamped page from passport)

DO NOT send any original certificates/marksheets.

4. Important information:
a. Date of online test: To be announced later.

b. Date of Interview: You will be given a date between 16-19 July 2012. In case you have a problem with your allotted date, write to
msc2012@ibab.ac.in or call 080-285 289 00 or 01, 9611 872 946, 900 829 6547 between Monday and Friday from 10:00am to 5:30pm.

c. If a seat is offered, the last date for accepting it is 5 days after being intimated. At this time a token sum of Rs. 20,000/- (adjustable against the fees due at the time of joining) has to be paid to the institute.

d. Course commences: 18 August 2012.

5. Eligibility for the course

a. A recognised BSc degree in any branch of science or technology, viz. life sciences (Agriculture, Biotechnology, etc.), physics, chemistry, mathematics, statistics, engineering, pharma, veterinary or medicine, with a minimum of 55% marks. Thus, the following degree holders may apply: BSc, BTech, BE, MBBS, BDS, etc. Please contact IBAB in case you have a doubt about whether you are eligible.

b. Students awaiting their results can also apply, provided their average percentage of previous semesters/years is at least 55%.

c. There is no age limit.

Fees

1. Application processing fees (non-refundable):
Rs. 500/- for students from India or non-industrialized countries
Rs. 1000/- for NRI/Foreign students from industrialized countries

A DD for the above-mentioned amount in favour of “Institute of Bioinformatics and Applied Biotechnology”, payable at Bengaluru, should be sent to IBAB by post/courier or submitted in person after filling up the form online. Please note the other documents that also have to be submitted (2 passport photos and photocopies of marksheets).

2. Course fees per semester:

  • For Indian students and students from non-industrialized countries:
  • Academic fees: Rs. 50,000/- per semester. This includes tuition fees, laboratory fees, software fees and library fees. This has to be paid within a week of the joining date for each semester. In addition, there is a security deposit of Rs. 20,000/- (refundable only on completion of the course) to be paid upon joining.


Please download the details and schedule of payment of course fees and hostel fees.

Partial waiver of fees through scholarships in case of a few deserving students may be feasible.

For NRI/Foreign students from industrialized countries

  • Academic fees: Rs. 75,000/- per semester. This includes tuition fees, laboratory fees, software fees and library fees. This has to be paid within a week of the joining date for each semester. In addition, there is a security deposit of Rs. 30,000/- (refundable only on completion of the course) to be paid upon joining.

Please download the details and schedule of payment of course fees and hostel fees.

Contact for further queries (between 10:00am and 5:30pm, Mon-Fri): 080 - 285 289 00/01/02.
Mobile: 9611 872 946 / 900 829 6547

You can also send an email to
msc2012@ibab.ac.in

Syllabus

  • The syllabus is liable to change over time. Please download the current syllabus from here. 

http://www.biotecnika.org/featured/ibab-announces-admissions-msc-biotech-bioinfo-95-real-placement-record#comments



03.03.2012 4:53:04

NATIONAL CONFERENCE ON 
Current Trends in Medicinal, Aromatic plants and Plant Products 
17th & 18th March 2012
Organized by
Osmania University, Hyderabad

OSMANIA UNIVERSITY - A BRIEF PROFILE:
Osmania University, established in the year 1918, is the seventh oldest university in the country, the third oldest in South India and the oldest in the state of Andhra Pradesh. It was founded by His Exalted Highness Mir Osman Ali Khan, the Seventh Nizam of Hyderabad State. It was the first university to impart higher education through Urdu as the medium of instruction. It is the largest affiliating university in Asia, with close to 800 affiliated colleges spread over 3 districts of Telangana (Hyderabad, Ranga Reddy and Medak), providing academic and research facilities for nearly five lakh students. It was accredited with a ‘FIVE STAR’ rating by the NAAC in the year 2001 and reaccredited with the highest grade ‘A’ in 2008. It has been ranked 7th among



BOTANY DEPARTMENT, OSMANIA UNIVERSITY:

The Department of Botany, Osmania University, is one of the oldest and pioneering departments in the country, with decades of academic contributions of excellence. The department has excellent infrastructural facilities and is carrying out research in cutting-edge areas of plant sciences. The department has received grants from several national funding agencies such as UGC (in the form of SAP & COSIST), CSIR, DBT, DST, MoENF and APNL. DST has identified the department under the FIST programme.

The department offers the following specializations of contemporary relevance at the M.Sc. level:
1. Applied Mycology and Molecular Plant Pathology
2. Applied Physiology and Molecular Biology
3. Biodiversity of Angiosperms, Phytochemistry and Biodiversity of Medicinal Plants
4. Cytogenetics, Genetics and Molecular Genetics
5. Applied Palynology and Palaeophytology



ABOUT THE CONFERENCE:
Medicinal and aromatic plants have been used in many ways to overcome various health problems. Phytochemicals derived from medicinal plants are being used in different systems of medicine such as Ayurveda, Siddha, Unani, Homeopathy, Folklore and Allopathy. The majority of the drugs used in Allopathy are derived from medicinal plants. Many pharmaceutical and agrochemical industries are currently engaged actively in research and development of eco-friendly organisms, phytochemicals, and plant products such as biofertilizers, biopesticides, nutraceuticals and dietary supplements. Nearly 70-80% of the population of developing countries relies on products of plant origin. This trend indicates the growing demand for medicinal and aromatic plants in the present market. The conference will therefore review, discuss and deliberate on these areas, and allow participants to share their research experience for the benefit of students, teachers, researchers and the common man.

TECHNICAL PROGRAMME:
The seminar will review, discuss and deliberate on the following areas:
1. BIODIVERSITY AND CONSERVATION OF MEDICINAL PLANTS
2. PHYTOCHEMISTRY, SECONDARY METABOLITES AND NATURAL PRODUCTS
3. CULTIVATION, MICROPROPAGATION AND BIOTECHNOLOGY OF MEDICINAL PLANTS
4. PHARMACOLOGY, PHARMACOGNOSY AND ETHNOBOTANY
5. HERBAL PRODUCTS, EFFECT OF BOTANICALS ON MICROBES AND INSECTS
6. MEDICINAL PLANTS AND MICROBIAL INTERACTIONS

SUBMISSION OF ABSTRACTS:
The official language of the conference will be English. There will be oral and poster presentations. Oral presentations will be by invitation through the respective technical session conveners. Contributed papers will be presented as posters. Abstracts of both invited and contributed papers will be printed and distributed to registered participants. Abstracts should be typed electronically on A4-size paper with 3 cm margins on all sides and should not exceed 300 words. Those interested in participating in the conference are requested to fill in the enclosed registration form and send it on or before 7th March 2012.

REGISTRATION FEES:
For participants: Rs 600. For students: Rs 300.
Poster size: 4 ft × 3 ft

Please send the registration fee and abstract on or before 7th March 2012, positively, to the following address:

Prof. G. Bagyanarayana
Convener
Department of Botany
Osmania University, Hyderabad-500007
Email: gbagyan@gmail.com

Prof. S. Gangadhar Rao
Organizing Secretary
Osmania University, Hyderabad-500007
Email: gangadharrao53@gmail.com

Deadline : 07.03.12

View Original Notification



http://www.biotecnika.org/content/march-2012/national-conference-current-trends-medicinal-aromatic-plants-and-plant-products-o#comments



02.03.2012 10:17:41
Sao Paulo, Brazil—August 26, 2010—Merck Millipore, the Life Science division of Merck KGaA of Germany, today announced the inauguration of a Latin American Technology Center (LATC) in Alphaville, Greater Sao Paulo. The new center, which integrates state-of-the-art research labs and training facilities, will meet the growing validation needs of the region’s pharmaceutical and vaccine laboratories, helping ensure the safety and efficacy of their processes.
