Studies on Robinet Testard – Summary:
Robinet Testard is a book illuminator of the late 15th century whom scholarship locates in western France and who, holding the rank of "valet de chambre", illuminated a large part of his works for Charles d'Angoulême and his wife Louise de Savoie. On the basis of characteristic stylistic features, 33 manuscripts can currently be attributed to him with certainty. Alongside books for liturgical use, he produced a broad spectrum of secular works, which are generally dated between 1470 and 1500, although he is documented as a person until 1531. Starting from Testard's principal work, the Book of Hours of Charles d'Angoulême (Paris, BnF, Ms. lat. 1173), the present study examines a selection of his illuminated manuscripts from a style-critical perspective, with a view to the oeuvre as a whole. A recurring question concerns the use of printed sources, which was very rare in France at this time. It could be shown that, in addition to German and Netherlandish engravings, Testard made extensive use of Italian models, and that he drew on this material to a far greater extent than previously assumed. Numerous artworks of other genres also served him as models for his compositions. The analysis of quality differences within individual works, which are clearly apparent in some pieces, points to Testard's position of responsibility in a workshop of his own. Given his position at the princely court, it is likely that after the death of Charles d'Angoulême he also worked for Louise de Savoie in Amboise; his period of activity can thus be extended to around 1515. Before his appointment as court painter, he collaborated on several manuscripts in Poitiers; his hand can be clearly distinguished from that of the other illuminators, the Master of Walters 222 and the Master of Yvon du Fou.
The assignment of pictorial subjects reflects a growing appreciation of his abilities. In several places, connections to the workshop of the Master of the Échevinage de Rouen are suggested. With regard to the question of Testard's artistic origins, several of his early manuscripts were examined. The previously assumed relationship to the Master of Poitiers 30 cannot be substantiated by comparison. By contrast, numerous connections to the Master of the Geneva Boccaccio can be demonstrated. Under chronological and regional aspects, the manuscripts could be grouped in ways that open up a more differentiated view of Testard's style and artistic self-awareness. In addition, two further manuscripts could be attributed to the artist in the course of the investigation. On the basis of these results, a reorganisation of his oeuvre and starting points for further research are proposed.
Major Depressive Disorder (MDD) is often accompanied by cognitive impairments, including concentration problems and attention deficits. These issues are related to the construct of working memory (WM). Additionally, a reduction in hippocampal volume is frequently observed in MDD. There is substantial evidence suggesting that physical exercise training can have positive effects on depressive and cognitive symptoms in MDD. This dissertation aims to integrate these areas of study to investigate the positive effects of physical fitness and exercise on WM in MDD patients, leading to a better understanding of the pathophysiology of the disease and its treatment through physical exercise training. The dissertation comprises three empirical studies that are part of the SPeED study (Sport/Exercise Therapy and Psychotherapy – evaluating treatment Effects in Depressive patients). In Study I (Heinzel et al. 2022), we examine whether a prior exercise intervention enhances the success of subsequent cognitive-behavioral therapy (CBT) and whether this effect is associated with specific physiological changes. Study II (Schwefel et al. 2023) analyzes neural activity and physical fitness in depressive patients during a WM task. Study III (Schwefel et al., sub) focuses on functional and structural neural changes following physical exercise training, with particular emphasis on the hippocampus. The n-back paradigm was used to measure WM function during functional magnetic resonance imaging (fMRI). The physical exercise intervention lasted 12 weeks and was supervised by sports therapists. The results indicate that physical fitness can be improved through training and, surprisingly, that depressive symptoms improved in all groups. However, high exercise intensity did not produce a general boosting effect for CBT. Nonetheless, regression analyses revealed that improvement in individual fitness predicted the success of CBT.
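The logic of the n-back task can be made concrete with a short sketch: a stimulus counts as a target when it matches the stimulus presented n steps earlier. The stream, n, and scoring below are illustrative only, not the SPeED study's actual task parameters.

```python
def nback_targets(stream, n):
    """Indices at which the stimulus matches the one presented n steps back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

def score_responses(stream, n, responses):
    """Hits, misses and false alarms for a set of response indices."""
    targets = set(nback_targets(stream, n))
    hits = len(targets & responses)
    return {"hits": hits,
            "misses": len(targets) - hits,
            "false_alarms": len(responses - targets)}

stream = list("ABABCACCA")
print(nback_targets(stream, 2))               # [2, 3, 6]
print(score_responses(stream, 2, {2, 6, 7}))  # 2 hits, 1 miss, 1 false alarm
```

Higher n (e.g. 2-back or 3-back vs. 0-back) raises the WM load, which is the "high demands" contrast referred to in the results.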
MDD patients exhibited a specific activation pattern in frontoparietal brain regions, associated with longer reaction times and poorer performance under high WM task demands compared to healthy controls. Additionally, a parietal fitness correlate was identified in the depressive sample. After the training intervention, improved performance and shorter reaction times were observed, particularly under high WM task demands. Furthermore, in patients who underwent intensive training, increased activation of the hippocampus was observed as a result of the training. No structural changes in hippocampal volume were detected. The findings suggest that physical training holds promise as a treatment option for improving WM function in MDD patients. These insights may serve as a foundation for future research on the effects of physical fitness and exercise on mental health and cognition, offering valuable supplements to optimized physical exercise therapies for the treatment of MDD.
Freshwater ecosystems, including rivers, lakes and wetlands, provide critical habitats for diverse species and support essential ecosystem services for human well-being. At the same time, they are under growing threat from various sources, including overexploitation, habitat degradation, flow modification, water pollution and invasive species. Consequently, monitored populations of freshwater vertebrates declined by an average of 83% from 1970 to 2016. Freshwater megafauna (i.e., animals that spend a crucial part of their life cycle in freshwater or brackish ecosystems and have a maximum reported body mass of at least 30 kg) are particularly susceptible to anthropogenic impacts. Previous studies have mainly focused on the impacts of overexploitation and dam construction on these large animals, while the impacts of alien species on freshwater megafauna have been largely overlooked. Moreover, although impacts of alien species on native freshwater megafauna are documented, many freshwater megafauna species, such as sturgeons, Asian carps, the American beaver, the hippo, crocodilians and the Chinese giant salamander, have themselves been introduced outside their native distributions. However, their impacts on native species and human well-being in the introduced regions have yet to be systematically investigated at a global scale. This thesis aims to provide a comprehensive understanding of alien-species impacts related to freshwater megafauna. I used the Environmental Impact Classification for Alien Taxa (EICAT) framework to assess the environmental impacts of alien species on native freshwater megafauna. I then took a different angle and focused on the environmental impacts of alien freshwater megafauna themselves, using megafish as an example and considering both negative and positive aspects with the EICAT and EICAT+ frameworks. Moreover, I adapted the approach of nature's contributions to people (NCP) and assessed the beneficial and detrimental impacts of alien freshwater megafauna on humans.
Freshwater megafauna have been affected by a wide array of alien species groups from freshwater and terrestrial ecosystems, including both vertebrates and invertebrates. Alien species impact native freshwater megafauna through mechanisms such as predation, competition and hybridization, leading to declines in individual performance and population abundance, or even local extinction of native freshwater megafauna. In addition, native freshwater megafauna showed distinct susceptibility to alien-species impacts across life-cycle stages. Meanwhile, almost half of the 134 extant freshwater megafish species have been introduced to new freshwater ecosystems, with almost 70% of the introduced species establishing self-sustaining alien populations. These alien megafish caused negative impacts through nine different mechanisms. Predation is the most frequently reported mechanism, followed by herbivory and competition. More than half of the alien megafish species with sufficient data for assessing impact magnitudes caused population declines of native species, or even species extirpation. A broad range of beneficial NCP categories has been documented for 59 alien freshwater megafauna species in 430 records, with food supply being the most frequently reported category (58%), followed by physical and psychological experiences (20%) and materials and companionship (12%). Far fewer records (154) were identified for detrimental NCP associated with 25 alien freshwater megafauna species, covering four categories: reduced food resources, damage to property, reduced physical and psychological experiences, and risks to health and safety. This thesis emphasizes the vulnerability of native freshwater megafauna to alien species and demonstrates the profound environmental and socio-economic impacts of alien freshwater megafauna. Additionally, it highlights gaps in long-term monitoring and biases in geographical and taxonomic coverage.
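The EICAT scheme behind these assessments ranks impact records on an ordinal scale from Minimal Concern (MC) to Massive (MV), and a species' overall classification is usually taken as its highest-magnitude reliable impact. A minimal sketch of that aggregation step (the example records are invented, and the actual assessment protocol also involves confidence ratings not shown here):

```python
# EICAT impact magnitudes in ascending order; MO (Moderate) and above
# imply declines in native populations.
EICAT_ORDER = ["MC", "MN", "MO", "MR", "MV"]

def overall_classification(records):
    """Aggregate per-record magnitudes: the species' class is the maximum."""
    return max(records, key=EICAT_ORDER.index)

# Invented impact records for one hypothetical alien megafish species:
print(overall_classification(["MN", "MO", "MC"]))  # MO
```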
Long-term monitoring studies are deemed critical for a comprehensive assessment of alien-species impacts, given that short-term studies may underestimate potentially severe, population-level effects. Furthermore, there is an urgent need to monitor introductions and assess the impacts of alien species in the Global South. Considering the high economic value of freshwater megafauna in aquaculture, recreational fishing and the pet trade, it is anticipated that more species will be introduced and become established outside their native ranges. Strict biosecurity requirements, mandatory risk assessments and management plans should be implemented when introducing alien freshwater megafauna for such activities. These measures will help reduce the risk of escapes or releases into natural waterbodies and help safeguard freshwater biodiversity and human well-being.
Compartmentalization is one of the decisive characteristics of cellular life, enabling the cell to build a complex network of enzymatic reactions and metabolic pathways. Drivers of this cellular organization are organelles and the vesicles and tubular structures that connect them, all of which are separated from the cytoplasm by a lipid membrane. Intracellular transport, especially secretion to the extracellular space or to the cell membrane, is a highly regulated and organized process. In this work, we focused on the secretion of large protein cargo such as chylomicrons and collagens. We aimed to analyze the role of alternative splicing as a regulatory mechanism for the secretion of large cargo. Alternative splicing is a mechanism in gene expression whereby different combinations of exons are included in or excluded from mRNA transcripts, leading to the production of multiple protein isoforms from a single gene. We comprehensively analyzed RNA sequencing data from human tissues as well as B-lymphocytes. We found a previously uncharacterized exon in the gene SEC31A, which codes for the outer layer of coat protein complex II (COPII), a complex responsible for transport from the endoplasmic reticulum (ER) to the Golgi apparatus. Using a correlation analysis of the inclusion levels of the exon and gene expression across the human tissue data, we obtained leads regarding the functionality and regulation of this alternatively spliced exon. We could show that inclusion of the exon enhances the transport of lipids in polar differentiated Caco-2 cells. Minigene experiments demonstrated that the inclusion is regulated by the splicing factor RBM47. Finally, AlphaFold structure prediction with the resulting alternative protein isoforms revealed a change in interaction with the COPII partner SEC23A. We also investigated a previously characterized alternative splice site in SEC31A, which leads to a shortened exon.
This shortened exon decreased the secretion efficiency of glycosylphosphatidylinositol (GPI)-anchored and E-cadherin cargo in the RUSH assay but enhanced collagen secretion. In a fluorescence recovery after photobleaching (FRAP) experiment, we observed a reduced mobile fraction of COPII puncta with the shortened exon. Lastly, we analyzed RNA sequencing data of differentiating B-lymphocytes as a model for the regulatory effect of the transcriptome on intracellular trafficking and secretion. B-lymphocytes show rather extensive changes in gene expression during their transition from memory cells to plasmablasts and plasma cells, but alternative splicing changes are most abundant in memory cells. We found multiple splice events unique to memory cells, particularly in PICALM and KLHL12, which could potentially have a significant impact on the function of the proteins. In summary, we were able to showcase the mechanism by which the structure and function of COPII are altered via alternative splicing of SEC31A, and we demonstrated similar mechanisms in B-lymphocytes. With these results, we deepened our knowledge of the role that alternative splicing plays in secretory specificity and the regulation of large-cargo secretion.
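Exon inclusion levels of the kind analysed here are commonly quantified from RNA-seq as a percent-spliced-in (PSI) value, the fraction of informative junction reads supporting inclusion. A minimal sketch with invented read counts (the summary does not specify the actual quantification pipeline used):

```python
def psi(inclusion_reads, exclusion_reads):
    """Percent spliced-in: share of junction reads supporting exon inclusion."""
    total = inclusion_reads + exclusion_reads
    if total == 0:
        raise ValueError("no informative junction reads")
    return inclusion_reads / total

print(psi(30, 10))  # 0.75
```

Correlating per-tissue PSI values with gene expression is then a straightforward column-wise correlation over samples.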
Artificial tick feeding systems (ATFS) offer a promising alternative to animal experiments. They can be used for studies of tick biology and physiology, for experiments investigating the tick microbiome, novel tick control strategies and the tick-(microbiome)-pathogen interface, as well as to facilitate tick rearing. At a time when globalization and global warming are affecting the prevalence, distribution and host encounters of ticks, research on ticks and tick-borne diseases is particularly important. The junior research group "Tick-borne Zoonoses" was founded to develop innovative and practical approaches to studying the vector biology of Europe's most common hard tick species, Ixodes ricinus. This PhD project focused specifically on the use of ATFS for feeding I. ricinus ticks. Chapter 1 provides an introduction to the biology and relevance of I. ricinus in Europe, as well as an explanation of the different components and applications of ATFS. In Chapter 2, results from our first study are used to compare tick feeding across multiple consecutive life stages of I. ricinus between in vitro feeding in the ATFS and in vivo feeding on live cattle. Findings showed that artificially fed ticks were generally inferior to ticks fed on live cattle, and all life stages showed significantly longer feeding durations in vitro. However, in larvae, higher engorgement and molting proportions indicated that the ATFS was more effective than in vivo feeding. Furthermore, the feeding and fecundity parameters of F1 adults improved after B-vitamin supplementation and the use of a water bath system. The prolonged feeding durations commonly observed in the ATFS are associated with an increased risk of contamination, even if the blood meal is supplemented with the antibiotic gentamicin. However, both bacterial contamination and antibiotic treatment have been linked to negative effects on feeding success.
To further investigate the effects of using gentamicin in the ATFS on the tick microbiome, ticks were consecutively fed in an ATFS on cattle blood with and without gentamicin. A multimethod approach was used, involving amplicon sequencing of the 16S rRNA gene and quantification of common bacteria by qPCR. Chapter 3 describes the results and compares the findings to ticks fed on live cattle. Despite challenges in extracting and sequencing DNA from individual ticks, we were able to show that in female ticks fed on live cattle the dominant symbiont was Candidatus Midichloria mitochondrii, whereas in female ticks fed in the ATFS on a gentamicin-treated blood meal, the most abundant bacterium was Rickettsia helvetica. Hence, we were able to deduce that both fecundity and microbial composition are likely to be influenced by the ATFS. These findings should be taken into account in future studies using the ATFS.
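Dominance statements like the one above are typically derived from 16S amplicon read counts normalised to relative abundances per sample. A minimal sketch with invented counts (the actual sequencing depths and taxa proportions are not given in this summary):

```python
def relative_abundance(counts):
    """Convert per-taxon read counts into relative abundances."""
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

# Invented read counts for one female tick sample:
sample = {"Ca. Midichloria mitochondrii": 800,
          "Rickettsia helvetica": 150,
          "other": 50}
abundances = relative_abundance(sample)
dominant = max(abundances, key=abundances.get)
print(dominant, abundances[dominant])  # Ca. Midichloria mitochondrii 0.8
```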
Fluorine is almost entirely excluded from the biotic world. Although largely ignored by nature, life based on fluorine is both an interesting concept and an entirely conceivable scenario. Fluorine-containing building blocks have been used extensively to investigate and modify proteins and their interactions. However, the adaptation of living organisms represents a step forward in exploring and understanding the consequences of global fluorination. Hence, a reliable platform and methodologies for the experimental evolution of novel organisms with fluorine as a bioelement were established by mimicking natural selection in the laboratory. In this study, adaptive laboratory evolution (ALE) was used to force tryptophan-auxotrophic Escherichia coli to adapt fully and metabolically to grow and proliferate indefinitely on 6-fluoroindole (6Fi) and 7-fluoroindole (7Fi) as substitutes for indole (Ind). During long-term cultivation, fluorinated indole precursors were supplied and converted in situ into the corresponding fluorinated amino acids, 6-fluorotryptophan (6FTrp) and 7-fluorotryptophan (7FTrp), which were incorporated into the proteome in response to the UGG codon. As a consequence of the imposed selection pressure, the cells relinquished their dependence on canonical Trp and instead acquired the ability to use either of the fluorinated counterparts as an integrated part of their metabolism. In the course of this process, five independent descendant E. coli lineages evolved that converted their entire lifestyle to exist on these unnatural molecules. Adaptation to 6FTrp proved superior to adaptation to 7FTrp: while tendencies towards an indole-rejecting phenotype were even observed under 6FTrp, basic adaptation to 7FTrp required extraordinary perseverance. The effects of extensive fluorination, and how living organisms cope with it, were investigated through comprehensive analyses at the genomic, proteomic, and metabolomic levels.
One major strategy identified involves dampening of the stress response system, effectively ignoring the detrimental effects caused by global fluorine integration. This study establishes a strong foundation for further exploration of the mechanisms underlying fluorine-based life and of how a former stressor (fluorinated indole) becomes a vital nutrient.
This thesis aims to explore and critically rethink the concept of labour, which is central to the 19th-century social science paradigm and continues to organise the production of social-scientific knowledge. Against this background, and focusing on the Marxist political economy of labour, I problematise the classical narratives of capitalist labour and working-class formation theoretically, epistemologically, temporally, and spatially. The questions I propose to answer are, first, whether the concept of labour, transformed into the norm of capitalist modernity, can accommodate the social categories that do not fit this defining norm, particularly coerced (unfree) labour (waged or not) from the geographical and social peripheries as configured in historical capitalism. The second question asks whether incorporating peripheral knowledge production can contribute to the renovation of the dominant concept. It will be argued that, by defining capitalism through "free" wage labour, this 19th-century paradigm, in which the Marxist political economy is rooted, is limited in its ability to take coerced (unfree) labour (waged or not) into account in the conceptualisation of capitalist labour and working-class formation. It considers neither the complex and multi-layered experience of labour-force formation in the periphery nor that in the core countries of capitalism. In other words, by defining capitalism through "free" wage labour, or the 19th-century urban industrial proletarian in England, orthodox Marxism has relegated modern slavery and other forms of coerced (unfree) labour (waged or not), as instances of primitive accumulation, to the past. Theoretically, free labour and unfree labour are seen as antithetical and incompatible in the capitalist formation, and unfree labour has been left outside the ambit of capital.
These limitations in incorporating peripheral realities into global sociological theory-making have partly to do with the enduring asymmetry in the global production and circulation of knowledge. Hence, this dissertation has two main aims. Theoretically, the aim is to contribute to the renewal of the Marxist political economy of labour by critically rethinking the concepts of labour and the working class from the viewpoint of globally intersected processes of expropriation, exploitation, commodification, and coercion as a common class basis of diverse dependent and subordinated labourers. The epistemological aim implies resorting to Brazilian historical social science as an instance of peripheral knowledge and evaluating its potential to contribute to renovating the Marxist political economy of labour and, thereby, global sociology. In the first chapter, I review the critical theoretical approaches within the Marxist political economy produced in the Global North, the relationships between them, and their contributions to the critical theory of capitalism from the standpoint of a broadened notion of the social organisation of labour. This includes investigating Marx's ambiguities regarding the role of enslaved and other forms of compulsory labour within capitalism and his non-linear account of history. Moreover, I explore the older and newer criticisms of the Marxian notion of the "so-called primitive accumulation". These show that, far from being pre-historic, its component of "expropriation" through violent means is integral to the capitalist logic and functional to the capitalist form of capital accumulation based on the exploitation of wage labour.
Whereas World-Systems Analysis permits understanding various forms of commodified and value-producing labour (free or unfree) relationally within the hierarchical capitalist world economy since the 16th century, Global Labour History contributes to defining the coerced commodification of labour-power as a common class basis of all subaltern workers. With the proposition to decentre "free" wage labour as the defining form and moment of capitalism, and freedom as the norm of wage labour, the second chapter analyses the wage form from the viewpoint of coercion, dependency and subordination. By investigating the recent literature on the history of wage labour in England, it will be demonstrated that "coerced contract labour" or "restricted wage labour" was integral to the development of the wage form until the mid-19th century. Moreover, colonial slavery was historically related to metropolitan wage labour through global value relations. Still, they also shared a common history of capital's struggle to fix and discipline the available expropriated workers for value- and capital-producing labour in different modalities of exploitation, which makes it plausible to argue that capitalism was born out of "dependent labour". "Free" wage labour itself, presented as "normal employment", is a mode of labour control in a commodity-determined society, subject to double economic coercion and subordination. In the third chapter, I investigate the works of Brazilian historical social science focused on the relationship between modern colonial slavery and capitalism and identify its critical theoretical-historical formulations, which have regarded the Brazilian path to capitalism from the viewpoint of structural transregional entanglements as a "unit of contradiction" rather than a "relation of exteriority".
It will be demonstrated how these works have contributed to rethinking such notions as private property, expropriation, and the labour-power commodity, central to the Marxian theory of capital, from the viewpoint of colonial slavery, the "necessary other" of the development of free labour in historical capitalism. In that light, they have contributed to understanding the general social organisation of labour, and of value and capital production based on totally expropriated labour, at the specific moment of capitalist history marked by the birth and expansion of the capitalist world economy. The examination of Brazilian historiography in the fourth chapter will show how modern colonial slavery was reconstituted as the totally expropriated, violently open and legally guaranteed value- and capital-producing form of labour control in the South Atlantic system of colonial exploitation, determined by the emergent capitalist world economy. This bi-polar system of exploitation, integrated into the world system, is shown to be constituted, on the one hand, of the transatlantic slave trade as the first global (unfree) labour market, rooted in the violent expropriation of people from the African continent through wars, and, on the other, of the colonial economic system of production, understood as a "capitalist enterprise" due to its labour organisation and as sustaining the production of "slave-mercantile capital". The fifth chapter critically scrutinises the contributions of the Brazilian so-called "new historiography" of 19th-century labour relations to questioning the "transition" argument about the evolutionary development from unfree to "free" labour followed by proletarianisation.
It will be demonstrated that, instead of the substitution of enslaved labour by the typical doubly free proletarian, the previous "slavery regimes" were transformed, in different sectors and regions, into new mixes of enslaved, coercive, and semi-proletarian labour arrangements by the middle of the 19th century. Hence, under the qualitatively different world capitalism dominated by global value relations in the 19th century, plantation slavery in the new frontiers of commercial agriculture went through a qualitative and quantitative metamorphosis. At the same time, the ongoing employment of enslaved labour was enabled by the legal and discursive reframing of slavery in terms of the liberal principles of the right to private property and the free market. Moreover, the so-called intermediary stages between slavery and freedom took the form of coercive labour of free persons. In gold mining, slavery developed into other modalities, such as the slave-hiring system. Hence, 19th-century labour mixes express a complex "permutation of labour relations" and social and legal ambiguities that the "narrow free labour ideology" fails to take into account.
Rock weathering is a fundamental geological process that primes fresh rock to form soil through chemical, physical and biological processes. However, despite the importance of weathering, little is known about deep weathering and its dependence on climatic factors. In this thesis, deep weathering was investigated at four study sites in the Chilean Coastal Cordillera. For this purpose, drilling campaigns were performed to obtain undisturbed weathering profiles from fresh bedrock to soil. The sites are located in four climatically different settings (arid, semi-arid, mediterranean, humid) and thus permit investigation of the climate dependence of weathering depth and degree. The drill cores extend to a depth of ca. 90 m at the arid, semi-arid and mediterranean sites; at the humid site, two ca. 50 m deep drill cores were obtained. In these five drill cores, deep weathering was analysed using geochemical methods, including weathering parameters that indicate chemical mass loss. In addition, a new indicator for the retention of reactive elements in secondary weathering products is developed in this thesis using a sequential extraction procedure. Moreover, denudation rates (i.e. the rate of chemical and physical removal of material from a landscape) were determined with a method not previously applied in this context: the isotope ratio (10Bemet/9Be) of meteoric beryllium-10 (10Bemet), a cosmogenic radionuclide, and the stable isotope beryllium-9 (9Be). The weathering indicators define weathering profiles that differ in depth and degree along the climate gradient. Deep weathering was identified at the semi-arid and mediterranean sites. At the arid site, no weathering was observed due to the lack of precipitation; rather, alterations caused by hydrothermal overprinting were present. This drill core was therefore taken as a reference for hydrothermal alterations that have not been overprinted by meteoric processes.
At the humid site, the two drill cores show weathering only in the upper 15 m of the weathering profile. Primary mineral dissolution, and the concentration of extracted reactive elements such as aluminium and iron incorporated into secondary weathering products, are highest at this site. The comparison between the four sites therefore indicates that deep weathering is controlled by a variety of factors. One main factor is the transport of water and gases (e.g. O2) to depth, which occurs via diffusion through pore spaces or advection through open fractures in the rock. Primary mineral dissolution creates secondary porosity and introduces new transport pathways for water and gases. Tectonic fractures connect the Earth's surface with the subsurface and enable fast advective transport of water to depth. At the semi-arid site, deep weathering was observed to a depth of ca. 77 m, mainly located along tectonic fractures, but a continuous weathering gradient was found only in the upper 10 m. Therefore, the advective transport of water and gases through fractures facilitates deep weathering at this site. Tectonic fractures also enable deep weathering at the mediterranean site (to at least 75 m), where a continuous weathering gradient was observed down to a depth of ca. 42 m. At the humid site, there are fewer open fractures and thus fewer direct transport pathways for water to depth. In addition, the high water availability results in intense precipitation of secondary weathering products that clog the porosity and prevent water flow to depth. The pre-conditioning of bedrock can further promote deep weathering. Different processes can prime the minerals in bedrock: hydrothermal overprinting, post-magmatic cooling processes, mineral dissolution due to deep groundwater flow, and reactions with deeply diffused oxygen. These processes facilitate the mobilisation of elements from primary minerals.
With sufficient water flow through the weathering profile, these mobilised and soluble elements can be removed and lost to the dissolved phase. The sequential extraction of reactive elements from secondary weathering products applied here is a promising new indicator for this potential pre-conditioning of primary minerals. Another factor is the time available for weathering processes. This time was determined from denudation rates derived from cosmogenic beryllium-10. At the semi-arid site, the removal of the upper 10 m of the weathering profile takes ca. 900 000 years; hence, deep weathering along fractures is possible even with the minute water flow at this site. At the mediterranean site, the denudation rate is significantly higher, such that the removal of 10 m takes only ca. 200 000 years. Despite the shorter time available for weathering processes, a deep and continuous weathering gradient has formed there due to the higher water flow in combination with tectonic fracturing, porosity formation and the pre-conditioning of primary minerals identified in this thesis by means of extractable elements. At the humid site, the removal of the upper 10 m of the weathering profile requires ca. 700 000 years. This time enables intensive near-surface weathering, as indicated by high primary mineral dissolution and precipitation of secondary phases, though the clogging of porosity prevents deep weathering. The 10Bemet/9Be isotope system was developed for calculating denudation rates from surface samples and was applied here as an indicator for the time available for weathering processes and for the depth dependence of weathering. In this thesis, it was found that below an annual precipitation of 400 mm the depositional flux of cosmogenic 10Bemet produced in the atmosphere cannot be determined with a global climate model, as these models overestimate precipitation and thus the deposition of the nuclide.
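The quoted residence times follow directly from dividing profile thickness by denudation rate. The sketch below uses illustrative rates back-calculated to reproduce the removal times stated in the text; the actual 10Bemet-derived rates in the thesis may differ in detail.

```python
def removal_time_years(thickness_m, rate_mm_per_kyr):
    """Time to denude a given thickness at a steady denudation rate.

    A rate of 1 mm/kyr equals 1e-6 m/yr.
    """
    return thickness_m / (rate_mm_per_kyr * 1e-6)

# Illustrative rates chosen to roughly match ~900/200/700 kyr per 10 m:
for site, rate in [("semi-arid", 11.1), ("mediterranean", 50.0), ("humid", 14.3)]:
    print(f"{site}: {removal_time_years(10, rate):,.0f} years per 10 m")
```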
Above this precipitation limit, denudation rates determined from 10Bemet/9Be agree well with the more established method that uses in situ 10Be produced in quartz. Furthermore, the depth distribution of both isotopes shows that reactive 9Be can be used as an indicator of strong alteration, either by hydrothermal overprinting or by intense primary mineral dissolution, even at depth. 10Bemet only infiltrates into the upper meters of a weathering profile and adsorbs onto reactive weathering products, which prevents infiltration to depth. This thesis presents new findings on deep weathering and tests new indicators to identify it. At two of the four sites, the semi-arid and the mediterranean, deep weathering was identified in the drill cores. Tectonic fractures facilitate the transport of water and gases to depth at both sites. High water availability and the resulting high vegetation density enable intensive weathering, especially in the uppermost meters below the surface. As fractures connect the surface with the subsurface, microorganisms as well as water and gases can potentially migrate to depth. Moreover, the chemical extractions indicate that mineral nutrients and inorganic reaction partners for microbial growth are also present at great depth when minerals are pre-conditioned.
In the laboratory experiments conducted here, the inactivation of viral pathogens on germ carriers made of different wood species was investigated. Different approaches were used to assess, on the one hand, hygienization by a disinfectant and, on the other hand, the tenacity of the pathogens over a defined period. The disinfection tests with basic chemicals showed that intact, finely sawn construction timber with low surface roughness can be disinfected effectively. Across all experiments, the pure substance peracetic acid proved to be the most effective disinfectant. In conclusion, regardless of pathogen and wood species, a concentration of 0.1 % can be recommended at a temperature of 10 °C and a contact time of one hour. At a temperature of −10 °C, a concentration of 0.75 % is recommended. The basic chemical formic acid also showed good virucidal efficacy. At an application temperature of 10 °C and a contact time of one hour, a concentration of 2 % can be recommended regardless of wood species and viral pathogen. At a temperature of −10 °C, this concentration increased to 5 %, which exceeds common standards in practice. For the basic substance glutaraldehyde, only limited virucidal efficacy was found. For the disinfection of enveloped viruses, a concentration of 1.5 % is recommended at a contact time of one hour and a temperature of 10 °C. In the long-term observations, significant differences were found between the tested woods with low surface roughness, as well as differences from the steel controls. In the experiments with the non-enveloped virus, Douglas fir wood inactivated the pathogen fastest, whereas for the enveloped virus this was the case for pine wood. 
These positive effects of the wooden germ carriers can presumably be attributed to their hygroscopic properties and to substances contained in the wood such as tannic acids, resins and tannins. These effects offer potential for further investigations, including tests with other extractive-rich wood species such as oak. Further studies should also conclusively verify the disinfectant tests in practical or field trials in order to ensure that the results of the standardized laboratory experiments can be transferred to real barn conditions.
Dynamic hydrogen bonds (H-bonds) and H-bond networks govern essential biomolecular processes. From providing the conformational flexibility needed for the functioning of proteins and governing the fluidity, stability and permeability of cell membranes, to serving as proton transfer pathways, H-bonds are observed abundantly in nature. It is thus crucial to understand the dynamics of H-bond networks in biological systems to guide drug discovery. In this thesis, I focus on characterizing and identifying the dynamic H-bond networks in mainly two biomolecular systems: (i) lipid membranes containing anionic lipids and (ii) the human voltage-gated proton channel Hv1. I use atomistic molecular dynamics simulations along with a graph-theory-based approach for efficient computation of dynamic H-bonds and H-bond networks of proteins and lipid membranes.
At the lipid bilayer interface, dynamic H-bonding can give rise to local lipid clusters of interest for reactions. The dynamics of these H-bonded lipid clusters can depend on the nature of the lipid headgroups. To dive deeper into the role of lipid headgroups in H-bonded lipid clusters, I use a previously developed graph-theory-based approach to analyze the topology of lipid clusters in zwitterionic and anionic lipid membranes, including bacterial cell membranes. To understand the dynamics of anionic membranes, I further perform a topology analysis of phosphatidylserine bilayers containing varying concentrations of cholesterol. I find that the presence of cholesterol can hinder the formation of extended water-mediated H-bond networks in phosphatidylserine membranes.
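The graph view used here can be illustrated with a minimal sketch: lipids are nodes, H-bonds (direct or water-mediated) are edges in a given trajectory frame, and clusters are the connected components of that graph. The lipid labels and edge list below are invented for illustration; the thesis uses a previously developed, more elaborate protocol:

```python
from collections import defaultdict, deque

def hbond_clusters(nodes, edges):
    """Connected components of a lipid H-bond graph.

    nodes: iterable of lipid identifiers
    edges: (a, b) pairs meaning an H-bond (direct or water-mediated)
           links lipids a and b in one trajectory frame.
    """
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, queue = set(), deque([start])   # breadth-first search
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Hypothetical single-frame example: five lipids, three H-bonds.
lipids = ["PS1", "PS2", "PS3", "PE1", "PE2"]
bonds = [("PS1", "PS2"), ("PS2", "PS3"), ("PE1", "PE2")]
print([sorted(c) for c in hbond_clusters(lipids, bonds)])
# → [['PS1', 'PS2', 'PS3'], ['PE1', 'PE2']]
```

Cluster-size distributions over many frames then characterize how headgroup chemistry (or cholesterol content) promotes or hinders extended networks.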
H-bond networks formed by clusters of carboxylate and histidine protein sidechains or anionic lipid headgroups can form pathways for proton transfer across and along lipid membranes. To understand the functioning mechanism of proton transporters, it is crucial to identify and characterize these proton-binding clusters and the H-bond pathways between them. To this aim, I developed a graph-theory-based protocol to find the most frequently sampled water-mediated H-bond paths formed by titratable sidechains of transmembrane proteins and/or lipid headgroups. I implement this protocol to identify potential proton antennas of the human voltage-gated proton channel Hv1. The functioning of Hv1 is regulated by a network of H-bonds formed between the titratable sidechains of the transmembrane protein. How the pH and the lipid composition of the membrane affect the H-bond network of Hv1 remains an open question. I apply the newly developed protocol to study the protonation-coupled and lipid-coupled H-bond dynamics of Hv1. I find that, depending on the location of the protonated carboxylate or histidine, the H-bond network extends or collapses on either the intracellular or extracellular side. A continuous H-bond network spanning the proton channel is sampled only in a phosphatidylserine bilayer, in contrast to bacterial and zwitterionic bilayers. This suggests a role of lipid composition in regulating the H-bond network dynamics of Hv1. In this thesis, I also present work done towards characterizing the impact of Hv1 inhibitors on the H-bond networks of Hv1.
This dissertation makes a contingent argument for reviewing existent historiographical practices in the discourse around student movements – socio-political struggles and protests that are initiated and led by students and youth. The proposal for the revision comes from an acknowledgement that academic writing and the discursive flavour around student and campus movements make use of a particular repertoire that is connected to the “global moment” of student unrest in 1968, and this leads to universalist generalizations with respect to the repertoire that contemporary student movements may or may not deploy. The thesis argues that a close reading of contemporary student movements through their performances of protest – both within the sociopolitical and cultural spheres – reveals many obvious and non-obvious aspects of contemporary student struggles, such as changes in the university system through neoliberal privatization, changes in the relationship between students and the university, and changes in the relationship of the university with the state and state apparatuses such as the police. If one takes these changes into account within historiographical practice, there is a possibility of opening up not only to more viscerally and experientially situated histories of student movements (and political movements in general), but also to confronting questions of ethics within history writing and the historical representation of resistant political groups. Bringing in examples from contemporary insurgent youth politics and artistic practice of the Global South and its diaspora, the work views historiography as a political practice of arranging spatio-temporality with clear relationships to existent power structures, and interrogates the extent to which the insertion of the resisting body within this practice, through the discipline of theater and performance historiography, can threaten and break such arrangements. 
In this process of interrogation, the work in turn questions the efficacy of traditional historiographical frameworks such as “source”, “archive”, “subject”, “event”, by bringing into the conversation contemporary academic methodologies from Theater and Performance Studies, Social Movement Theory, Psychology, Trauma Studies, Disability Studies, Praxeology, Critical Legal Theory, Culture Studies and Education Studies.
Legionella pneumophila, an intracellular pathogen and a common cause of community-acquired pneumonia, employs sophisticated strategies to infect and replicate in alveolar macrophages (AMs). These strategies include translocating over 300 bacterial effector proteins into the host cytosol, where they manipulate various cellular pathways. However, host cells have developed numerous strategies to detect and counteract the infection. The outcome of the host-pathogen interaction, whether it results in bacterial clearance or an extensive infection, depends on the balance between the pathogen’s virulence strategies and the host’s defense mechanisms. Several studies have investigated the host-pathogen interaction in vitro, mainly using murine and human hematopoietic cell culture systems. However, the exact mechanisms of bacterial detection by the innate immune system are incompletely explored, and little is known about how tissue-resident AMs respond to the infection. In the first part of this study, the role in the immune response of the C-type lectin receptor CLEC12A, which binds to L. pneumophila, was examined. The findings from infection experiments in a murine in vivo model and in murine and human macrophages in vitro indicate that the receptor has no significant impact on the outcome of infection with L. pneumophila. The second part of the study investigated the response of in vivo L. pneumophila-infected and uninfected bystander AMs. Transcriptome analysis revealed a robust upregulation of various proinflammatory and immunoregulatory genes in infected AMs, while uninfected bystander cells seem to be activated only towards the end of the first replication cycle of L. pneumophila (20 h post infection). 
Proteome analyses further indicate that the translation of several proinflammatory proteins (e.g., IL-1β, CCL6, CCL9) is impaired in AMs infected with virulent L. pneumophila and that only a limited number of proteins, including IL-1⍺, ATF3, GDF15, and A20, were found to be expressed at the protein level in infected AMs. Furthermore, L. pneumophila seems to affect the cholesterol homeostasis of AMs in vivo. In conclusion, this study enables a deeper understanding of the immune response against L. pneumophila and provides a unique view of the overall cellular response of tissue-resident AMs towards the infection in vivo.
Large-scale sensor networks form an important part of the Industrial Internet of Things. To maintain the operation of such networks over time, the quality of the sensor readings needs to be ensured. This leads to the development of a metrologically traceable in-situ calibration method based on a Bayesian framework which leverages local sensor redundancy. Furthermore, automation of such in-situ calibration tasks is a key feature. To this end, an extension of existing sensor-related ontologies is proposed to cover relevant metrological terms. Sensor self-descriptions based on these knowledge representations support in-situ calibration by finding suitable reference sensors and initializing the mathematical method presented here. The mathematical method is evaluated in simulation studies against a state-of-the-art in-situ calibration approach. The evaluation results show good estimation performance in cases of time-dependent input signals or sensors of comparable uncertainty levels, but also reveal higher computational costs. The developed ontologies are evaluated by a corpus comparison, ontology metrics, as well as logical checks of the taxonomic backbone, and indicate good agreement with existing ontology quality standards.
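The core idea of redundancy-based Bayesian in-situ calibration can be sketched in a few lines: treat the mean of redundant co-located sensors as a reference and update a Gaussian belief about the drifting sensor's additive offset. The conjugate normal-normal update below, and all numbers, are illustrative assumptions, not the method developed in the thesis:

```python
import statistics

def update_offset(prior_mean, prior_var, readings, ref_value, noise_var):
    """Conjugate normal-normal update of a sensor's additive offset.

    Each reading of the sensor under calibration is compared with
    ref_value (e.g. the mean of redundant co-located sensors); the
    differences are treated as noisy observations of the offset.
    """
    post_mean, post_var = prior_mean, prior_var
    for r in readings:
        obs = r - ref_value                    # noisy observation of offset
        k = post_var / (post_var + noise_var)  # Kalman-style gain
        post_mean = post_mean + k * (obs - post_mean)
        post_var = (1 - k) * post_var
    return post_mean, post_var

# Illustrative: three redundant neighbors agree on ~20.0 degC, while
# the sensor under test reads high by ~0.5 degC.
neighbors = [19.98, 20.01, 20.02]
ref = statistics.mean(neighbors)
mean, var = update_offset(0.0, 1.0, [20.52, 20.48, 20.51], ref, 0.01)
print(f"estimated offset: {mean:.3f} +/- {var ** 0.5:.3f}")
```

With each comparison the posterior variance shrinks, so the estimate of the offset tightens as more redundant observations arrive.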
Interactions between metal products fixed in the mouth or on the skull in the course of dental and maxillofacial therapy and magnetic resonance imaging (MRI) have been the subject of numerous studies by both radiological and dental authors for decades. Nevertheless, considerable uncertainty is still noticeable in the everyday clinical practice of both disciplines when it comes to assessing possible diagnostic interference from such products, which essentially depends on the degree of their magnetizability. Undesired interactions also include potential alterations of the intrinsic field of small, medically used permanent magnets in the MR scanner. Because the magnetic attachments used for implant-supported dental prostheses or facial epitheses are rigidly fixed, in certain positions they are exposed directly antiparallel to the much stronger main field B0. If relevant demagnetization occurred, these coupling elements would immediately become insufficient. Three scenarios conceivable in this overall context would be vexing, because they entail unnecessary costs, unnecessary loss of time and unnecessary discomfort for patients: • a relevant diagnostic limitation due to unexpected susceptibility artifacts around highly magnetizable objects, requiring repeat imaging after removal of the material; • routine removal of even non-magnetizable metal objects, causing superfluous costs, additional iatrogenic damage to dental hard tissue and the associated treatment stress from debonding and subsequent rebonding; • demagnetization or even polarity reversal of the magnetic attachments in an unfavorable position relative to B0 and the resulting restoration costs. To prevent these situations, valid predictors of the occurrence and strength of these interactions are needed. 
Among the material properties of metals, magnetizability is suitable for this purpose; it can be quantified by the alloy-specific relative permeability. However, owing to an exemption in the European Medical Device Regulation, there is no obligation to declare the behavior of dental and maxillofacial metal products in magnetic fields. This has far-reaching consequences, because it renders other information sources useless and because many of the relevant studies were therefore unable to specify exactly which metal products they examined. The present work is devoted to the prediction of susceptibility artifacts around intraoral metal objects and to the description of position- and material-dependent alterations of the intrinsic field of dental magnetic attachments in MRI. First, we sought to determine how MRI physicians assess the safety and compatibility problems associated with dental products, which information sources they rely on in cases of doubt, and how they rate their continuing-education options on the subject. Our survey was addressed via various channels to roughly 2,000 active radiologists in Germany, corresponding to about 35 % of the colleagues working in this country. However, only a barely evaluable response rate of 3.7 % was achieved, which casts a spotlight on the general interest in this supposedly fully researched issue. The participating radiologists tended to overestimate the dangers posed by intraoral metal objects. At the same time, they overrated the effectiveness of a typical artifact-reducing sequence. They rated the sources available for researching unknown metal implants as insufficient, and nearly all of them (97.3 %) demanded that manufacturers be obliged to declare the magnetic properties of such products. 
The question of whether direct intraoral permeability measurement should be implemented in everyday clinical practice in order to better assess potential interactions was answered affirmatively by 40.5 % of participants. A majority of 63 % considered a time expenditure of at most 2 min acceptable, presumably guided by the effort of an in-house “magnet test”. Only 15.5 % considered the more realistic time span of up to 15 min appropriate. A total of 78 % expressed a clear desire for further training on the topic. To test in vitro for a correlation between the artifact extent around orthodontic products in TSE and GRE sequences in a 1.5 T main field and their relative permeability, we first used custom-made test specimens of defined size with ascending µr values, representing the transition zone from the paramagnetic to the clearly ferromagnetic range. The value pairs of artifact radii measured in the phantom in the coronal and sagittal planes and the associated µr values yielded a relationship corresponding to a square-root function. With the GRE sequence, the rise of the curves was somewhat steeper. The value pairs of the subsequently examined real orthodontic products, collected analogously, consistently lined up below this curve according to the µr values measured on them. This demonstrated that the permeability measured on metal objects of this size is a suitable predictor of the artifact radii occurring with standard sequences. The gap found between the values of the real objects and those of the test specimens corresponds to a safety margin for the decision to be made in the clinical case. After this successful in vitro experiment, the suitability for clinical use of the measuring device already employed, the “Ferromaster” (Stefan Mayer Instruments, Dinslaken, Germany), was to be tested. 
For this purpose, the manufacturer modified the probe of our device so that it can also be used in the posterior tooth region and, above all, is easy to disinfect. First, the relevance of three usage restrictions listed by the manufacturer was examined: application (I) only on flat surfaces, (II) only on demagnetized objects and (III) only on objects of a minimum size. The precision of the measurement proved to be high even on objects with rugged surfaces. In addition, using brackets selected as examples of medically used stainless steels, we showed that the remanent fields induced at saturation magnetization do not exceed 0.62 mT. This rules out an alteration of the permanent magnets installed in the Ferromaster probe head for the measuring process. Finally, we determined the accuracy of the Ferromaster measurement on undersized objects as well, which, as expected, was inadequate. Using mathematically derived correction factors for each degree of undersize, the correct target values could be calculated from the measured actual values. For test specimens with µr ≤ 0.002, the measured values agreed with the true values, so this method is suitable as a screening instrument for detecting the smallest non-magnetizable metal objects. In a subsequent clinical measurement on intraorally fixed orthodontic products by one group each of dentists and radiographers (MTRA), both inter- and intra-rater reliability proved to be excellent. The results of the dentist group were only marginally better. When validity was tested, the dentist group proved significantly better, although the Pearson correlation of both groups showed very strong positive associations. This method can therefore be performed validly even by examiners with little intraoral experience. 
On medically used permanent magnets, this permeability measurement would be contraindicated, because their intrinsic field of about 130 mT is more than 200 times stronger than the remanent fields that can be induced in individual orthodontic products. Nor would it be necessary, because such magnets always produce the comparatively most extended artifact in any acquired sequence. Its absolute size then depends primarily on their volume. Our phantom study on this question was primarily intended to clarify how large the distance between a mini-magnet and the anatomical structures to be assessed must be for SE and GRE sequences at 1.5 T and 3 T MRI to allow interference-free diagnostics. It was also to be determined whether, at comparable volumes, relevant differences occur between permanent-magnetic and soft-magnetic objects or between open and closed field designs. No clinically relevant differences were found with respect to either the MRI main field strength or the characteristics of the test specimens. The artifacts tended to be larger in the image plane parallel to B0 than in the plane perpendicular to B0. The clearest difference arose between the sequences: with GRE, the measured maximum artifact radii were about 2 cm larger than with SE sequences. Most striking, however, was the difference in artifact configuration: with GRE, large mushroom-shaped signal-void zones occurred. With SE, by contrast, smaller but multiple artifacts were found: in addition to the central signal-void zone, a second, spherical artifact zone appeared in which a central signal void was combined with peripheral signal enhancement. However, because the artifact-inducing objects are not oriented in a standardized way, the expected asymmetry of these artifacts is scarcely predictable. 
For safety reasons, therefore, the maximum radius must also be used here, which does not differ from those of the GRE sequences. Exposure of small permanent magnets in MRI raises a further compatibility problem: the weakening of the intrinsic field of the medical magnets by the considerably stronger MR main field. Having found in an earlier study a polarity reversal of the intrinsic field in all examined dental magnets after antiparallel exposure in 3 T MRI, we now tested an SmCo duo-magnet system with increased coercivity intended for use on dental implants. During MRI exposure, the usual orientations to B0 were tested with the aid of a three-dimensionally adjustable holder: this allowed implant positions in the upper and lower jaw, the glabella and the mastoid to be simulated with a straight head position and with the head reclined and inclined by 45° in each case. The modified magnets did indeed exhibit greater resistance to opposing fields; in the 1.5 T main field, demagnetization remained mostly clinically acceptable, and even in the antiparallel position the loss of field strength was only 5 % of the initial value. Surprisingly, the greatest field losses of 10 % occurred in the antiparallel position combined with an angled head. When the phantom lay in the “feet first” position with the magnet oriented antiparallel in the scanner, a field loss of 7 % arose. In the 3 T main field, the same distribution appeared, but the field losses in the critical positions were considerably higher: 72 % for antiparallelism, and up to 96 % with additional head angulation. Remagnetization performed after each exposure completely restored the original field strength. This showed that even dental magnets accidentally exposed at 3 T in an unfavorable orientation to B0 can continue to be used with manageable effort.
Mental health is one of the central challenges for the health-care system of the 21st century. Against the background of globally high prevalence rates of mental illness and simultaneously unmet treatment needs, coupled with limitations in the efficacy and safety of psychopharmacological interventions, it is clear that there is a continuing need for more effective, safer, more affordable and more accessible therapeutic approaches. In this context, integrative medicine has also found its way into psychiatry, psychosomatics and psychotherapy and is correspondingly in high demand among patients. Mind-body medicine plays a special role here: multimodal interventions bring physical, mental, emotional, social and spiritual factors to bear in order to promote an individual health-oriented lifestyle in multiple dimensions. Mindfulness-based therapeutic approaches and body-oriented yoga in particular are now scientifically well studied, and their effects are increasingly well documented overall. For some years, a second generation of mindfulness-based interventions has been emerging that goes beyond the previous decontextualization and secularization of traditional Far Eastern practices and explicitly addresses the ethical and spiritual content of the respective interventions. The Meditation Based Lifestyle Modification (MBLM) program presented in this work, which is based on traditional yoga, belongs in this context. Initial results show its positive effects in healthy people as well as in the indications of mild to moderate depression and chronic pain disorders of various origins. In view of the quantitative and qualitative results, the ethical component of the program plays a special role. For the wider application and dissemination of already well-studied first-generation mind-body interventions, methods of health services research are increasingly relevant, including in the areas of prevention and health maintenance. For further research into the promising second-generation interventions, basic research on the specific and non-specific effects of the respective intervention as well as further clinical research and health services research are relevant, drawing on modern study designs with mixed-methods approaches and observing a whole-medical-research approach, in order to arrive at further insights.
The concept of barrier crossing can describe many biological systems. Rare events, such as protein folding or chemical reactions, can be modeled as systems that must cross a barrier potential to change their states, e.g., folded or unfolded states. These systems do not occur in isolation but are rather coupled with their environment. Typically, we choose a reaction coordinate and project out all the other orthogonal degrees of freedom. If the orthogonal degrees of freedom relax on time scales comparable to that of the reaction coordinate, non-Markovian memory effects must be taken into account in order to describe the dynamics accurately. This thesis uses the generalized Langevin equation to study the mean first-passage time (MFPT) for various non-Markovian systems. We begin by considering a multi-exponential memory kernel exhibiting various memory times and friction coefficients. We then propose a heuristic formula showing that the MFPT is dominated by the single memory exponential with a short memory time and a large amplitude. Following this, we relax the equilibrium condition and therefore consider a generalized Langevin equation out of equilibrium. In this case, too, we suggest a formula that takes the effect of non-equilibrium into account, showing non-Arrhenius behavior. Since many chemical and biological systems exhibit asymmetric free-energy profiles, we next consider an asymmetric potential. From simulation data, we gather evidence that the dynamics in one well are independent of the other. Therefore, we describe the dynamics via the mean first-passage times τL and τR needed to reach the barrier from the left and right wells, respectively. In the final section, we focus on another important factor that characterizes the process of barrier crossing: the mean transition path time. Again, we concentrate on a non-Markovian system and, with the help of simulations, derive a heuristic formula. 
Contrary to the mean first-passage time, the mean transition path time reaches its maximum in the Markovian case; for intermediate memory times it decreases, particularly for smaller masses, and settles on a constant value for large memory times.
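The generalized Langevin equation referred to above has, in standard textbook notation (not copied from the thesis), the form

```latex
m\,\ddot{x}(t) = -U'(x(t)) - \int_0^{t} \Gamma(t - t')\,\dot{x}(t')\,\mathrm{d}t' + \eta(t),
\qquad
\Gamma(t) = \sum_{i} \frac{\gamma_i}{\tau_i}\, e^{-t/\tau_i},
```

where the multi-exponential kernel Γ(t) collects the friction coefficients γ_i and memory times τ_i mentioned in the abstract. In equilibrium, the random force η(t) obeys the fluctuation-dissipation relation ⟨η(t)η(t′)⟩ = k_B T Γ(|t − t′|); it is this relation that is relaxed in the out-of-equilibrium part of the work.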
Many mammals can control the timing of birth by temporarily suspending development, a state marked by reduced metabolic activity. This interruption of development, called diapause, is specific to blastocyst-stage embryos and is an apparent response to tide over adverse environmental and nutritional conditions. The establishment of diapause is an active process involving extensive rewiring of the epigenetic, transcriptomic and metabolic landscape of the embryo. How these three cellular processes are coordinately rewired during dormancy entry is not known. Here I show that the regulatory function of miRNAs is indispensable for mouse embryos entering the diapause state. Without miRNA function, mouse ESCs and embryos suffer developmental collapse upon mTORi-mediated diapause induction. Small RNA sequencing of single mouse embryos showed specific miRNAs to be upregulated during diapause induction. An in silico miRNA-protein network of diapause was developed by integrating the small RNA sequencing data with computationally predicted miRNA targets. The network revealed miRNA-mediated regulation of nuclear and cytoplasmic bodies along with RNA splicing, processes that are perturbed in miRNA-null mouse ESCs. The study also shows the nutrient and autophagy regulator TFE3 to be the upstream regulator of the expression of dormancy-associated miRNAs, linking cytoplasmic mTOR activity to nuclear miRNA biogenesis.
It is unknown whether the capacity to pause is a conserved trait across mammals, and more specifically whether it exists in humans. Mice and humans show similar patterns of mTOR expression during pre-implantation development, suggesting involvement of this primordial pathway in blastocyst development and timing in both species. Here I show that human blastoids and pluripotent cells in naïve and naïve-like states retain the capacity to pause via mTOR inhibition and that the pausing is functionally reversible, even at the molecular level.
Taken together, the above findings suggest that the development of human embryos may be controllable and that miRNAs play a critical regulatory role in bringing about the transcriptional rewiring required for mouse embryos to successfully enter dormancy.
Acute kidney injury (AKI) is a common complication in hospitalized patients, affecting approximately 10–15% of them and almost 50% of patients in the intensive care unit. Chronic kidney disease (CKD), in turn, is ranked 18th in the list of causes of total deaths according to the 2010 Global Burden of Disease study and affects an estimated 800 million people worldwide. Hypertension is the second leading cause of end-stage renal disease and a significant risk factor for developing CKD. Since both AKI and hypertension-associated CKD have limited treatment options (mainly supportive, or reduced to treating complications and consequences of renal function loss), there is an urgent need to find therapeutic targets. We developed novel mouse models and applied existing ones to discover and investigate novel signaling pathways and mediators in the kidney that could influence the outcome of ischemic AKI, and inflammatory signaling pathways as novel mediators of hypertension-induced chronic renal damage. The presented experimental works shed light on the importance of renal tubule-specific activation of nuclear factor kappa-light-chain-enhancer of activated B cells (NF-kB) as an important initiator of inflammatory processes in the course of AKI. On the other hand, lack of the canonical transient receptor potential 6 (TRPC6) ion channel, which has recently been identified as a cause of a familial form of focal segmental glomerulosclerosis, or reduced levels of the gasotransmitter hydrogen sulfide, which has been shown to be protective in diverse kidney damage models, did not influence the immediate outcome of AKI. Investigating hypertension-induced renal damage, we identified the protein Bcl10 as part of a complex bridging the angiotensin II receptor and NF-kB; this complex mediates cell infiltration and renal fibrosis but is, on the other hand, indispensable for podocyte health in this model. 
Finally, we shed light on the role of T helper (Th)1 and Th17 immune cells in the development of angiotensin II-induced target-organ damage.
Cardiovascular diseases such as atherosclerosis and hypertension are characterized by a persistent inflammatory state, the severity of which can be influenced by both intrinsic cellular processes and extrinsic microbial factors. In the present study, the proteasome, the cell's central protein degradation system, served as the cellular target structure, and its function in the context of atherosclerosis was modulated by applying a proteasome inhibitor. Given the central role of the proteasome and in order to avoid toxic effects, low concentrations of the proteasome inhibitor bortezomib were used to influence experimental atherosclerosis in LDLR -/- mice. An early stage of atherosclerosis was favorably influenced by low-dose proteasome inhibition; mechanistically, anti-oxidative and anti-inflammatory effects of low-dose proteasome inhibition were identified. In addition, the influence of a genetic deficiency of the immunoproteasomal subunit β5i/LMP7 was investigated in LDLR -/- mice, but this affected neither early nor late atherosclerosis in the mouse model. These studies suggest that although the proteasome could be a potential target for future therapies, more specific targeting of components of the ubiquitin-proteasome system is required to counteract inflammation in a more targeted manner with fewer side effects. Cardiovascular diseases are particularly dependent on environmental factors such as diet. The intestinal microbiota reacts sensitively to the environment and diet and interacts with the immune system, and many diseases have already been shown to be influenced by the microbiome. The work presented here investigates the role of the microbiome in hypertension. It was shown that hypertensive renal and cardiac damage is aggravated in germ-free mice lacking a microbiome, which could indicate the absence of protective bacterial metabolites. 
In addition, the influence of a high-salt diet on the microbiome was investigated, and Lactobacillus was identified as a salt-sensitive intestinal bacterium that regulates Th17-dependent inflammation and blood pressure by producing a metabolite. Finally, in another study, the short-chain fatty acid propionate was identified as a protective bacterial metabolite that guards against hypertensive cardiac damage via partly Treg-dependent mechanisms. In summary, these studies highlight the microbiome as a promising target for organ-protective therapies in hypertension.
Despite significant advances in cancer therapy and improved survival for most cancer patients, the treatment of pancreatic carcinoma remains a major challenge today. A hallmark of PDAC is its chemoresistance to standard chemotherapeutic agents, so that current drug-based therapy options contribute only marginally to prolonged survival. The desmoplastic tumor stroma is a major cause of chemoresistance in the majority of PDACs. The IL-6/gp130/STAT3 signaling pathway plays a decisive role in tumor progression in PDAC and in the development of the fibrotic tumor microenvironment that is responsible for the chemoresistance of PDAC. IL-6 is well studied as a key cytokine of PDAC; its expression is stimulated by proinflammatory processes. For example, IL-6 expression is stimulated by the cytokine IFN-α via IFIT3. The identification of IFIT3 as a prognostic marker for PDAC, described in this work, confirms the central role of inflammatory processes in PDAC. Inhibition of the IL-6 signaling pathway as a potential therapeutic approach had not previously been explored for PDAC. To inhibit the IL-6/gp130 pathway, this work identified the estrogen receptor modulator raloxifene, which additionally inhibits the IL-6/gp130 interaction, and the small-molecule direct gp130 inhibitor SC144. The innovative approach of direct gp130 inhibition rests on the fact that gp130 is a membrane-bound signal transducer for all cytokines of the IL-6 family (IL-6, OSM, IL-11, IL-27, LIF, CNTF, CT-1, CLC). In contrast to isolated IL-6 inhibition as pursued to date, global gp130 blockade also suppresses STAT3 phosphorylation induced by the other cytokines of the IL-6 family. 
In addition, the therapeutic effect of the above substances in combination with the first-line chemotherapeutic paclitaxel was investigated. The aim of these studies was to open up new therapeutic perspectives for patients with pancreatic carcinoma. The present work demonstrated in human PDAC cell lines that raloxifene and SC144 suppress IL-6- and OSM-induced STAT3 phosphorylation in the IL-6/gp130/STAT3 pathway. This research project elucidated fundamental mechanisms and the role of the IL-6/gp130/STAT3 pathway in PDAC. Furthermore, the efficacy of the estrogen receptor modulator raloxifene as a potential IL-6/gp130 inhibitor and of the small-molecule gp130 inhibitor SC144 was confirmed with respect to tumor cell proliferation in vitro and tumor growth in an orthotopic PDAC mouse model. Moreover, the combination of an IL-6/gp130 inhibitor (SC144 or raloxifene) with the first-line chemotherapeutic paclitaxel was shown to significantly reduce orthotopic tumor growth and increase apoptosis. Since liver metastasis plays a decisive role in the prognosis of PDAC patients and precludes oncological resection of the pancreas, near-infrared fluorescence imaging was additionally integrated into our project and evaluated as a method for increasing the sensitivity of detecting possible liver metastases in PDAC. Although PDAC growth and liver metastases were visually confirmed in all animals, none of the liver metastases showed an increased fluorescence signal. Further studies at the molecular level are required to elucidate the mechanism underlying the insufficient ICG accumulation in PDAC liver metastases and at the margins of the liver lesions. These in vitro and in vivo results point to the IL-6/gp130 interaction as a potential therapeutic target for the treatment of pancreatic carcinoma. 
The combination of raloxifene/paclitaxel or SC144/paclitaxel shows a therapeutic advantage over paclitaxel alone in the mouse model. This project thus represents an important component of the preclinical investigation of IL-6/gp130 pathway inhibition as an additive therapeutic option for reducing paclitaxel chemoresistance in PDAC. These data could open a new perspective for individualized molecular diagnostics and targeted therapy of pancreatic carcinoma.