The majority of pro-inflammatory cytokines are regulated by NFκB, and it has recently been found that p38 MAPK partly regulates NFκB-driven gene expression by increasing the association of the basal transcription factor, TATA-binding protein, with the C-terminus of the p65 subunit of NFκB and by increasing the binding of TATA-binding protein to the TATA box (Carter et al., 1999). In this study, we observed partial inhibition of NFκB with 5 μM of the IKK inhibitor BMS345541, which was not sufficient to significantly lower production of the pro-inflammatory cytokine IL-6. Activated, phosphorylated NFκB (P-NFκB) was detected in similar amounts in thrombocytes from control, LPS-treated, and LPS + PD98059 samples. Although only the LPS + BMS345541-treated samples were expected to show reduced P-NFκB, samples treated with the two other inhibitors also showed lower amounts of P-NFκB. It is unclear why control samples showed a similar level of P-NFκB.
Introduction

Scientists and engineers often use computational modeling to replace (or augment) physical experimentation. For the remainder of this paper, we will refer to the software created by these scientists and engineers as scientific software. The following examples illustrate some of the key reasons why computational models are becoming increasingly important in science and engineering domains. First, computational models allow scientists to react to events in near real-time. In meteorology, computational models allow scientists to adjust their forecasts based upon current conditions and to analyze the potential effects of changing conditions. Without such models, meteorologists would have to extrapolate from historical data, which is too slow for real-time forecasting. Second, computational models allow scientists to study phenomena that occur at a very slow pace in reality. In climate science or geology, the slow pace of many natural phenomena makes it infeasible for scientists to rely solely on empirical observations to draw conclusions. Computational models allow scientists to study these phenomena at a much more rapid pace. Third, computational models allow scientists to study phenomena that are too subtle for manual observation. In astronomy and astrophysics, software models combined with advances in digital imaging systems have allowed scientists to discover new solar systems that are too faint for human detection. Finally, computational models allow scientists to study phenomena that are too dangerous to study experimentally. In astrophysics, it is much safer for scientists to use computational models to explore the effects of various types of nuclear reactions than to conduct physical experiments. As these examples highlight, scientists and engineers are increasingly reliant on the results of computational modeling to inform their decision-making processes.
Because of this reliance, it is vital for the software to return accurate results in a timely fashion. While the correctness of the scientific and mathematical models that underlie the software is a key factor in the accuracy of results, the correctness and quality of the software that implements those models is also highly important. Additionally, the software's performance must be fast enough to provide results within the desired time window. Complicating these requirements, scientific software is typically complex, large, and long-lived. The primary factor influencing the complexity is that scientific software must conform to sophisticated mathematical models. The size of the programs also increases the complexity, as scientific software can contain more than 100,000 lines of code. Finally, the longevity of these projects is problematic due to developer turnover and the requirement to maintain large existing codebases while developing new code. Section 2 provides more details about these characteristics of scientific software. In the more traditional software world, software engineering researchers have developed various practices that can help teams address these factors so that the resulting software will have fewer defects and higher overall quality. For example, documentation and design patterns help development teams manage large, complex software projects. Version control is useful in long-lived projects as a means of helping development teams manage multiple software versions and track changes over time. Finally, peer code reviews support software quality and longevity by helping teams identify faults early in the process (software quality) and by providing an avenue for knowledge transfer that reduces the knowledge loss resulting from developer turnover (longevity).