Changing measurements of changing movements? Detecting movement patterns across multiple generations of tracking technology
Leah R Johnson1, Philipp H Boersch-Supan1,2, Sadie J Ryan2 & Richard A Phillips3
1Integrative Biology, University of South Florida, Tampa, FL; 2Geography & Emerging Pathogens Institute, University of Florida, Gainesville, FL; 3British Antarctic Survey, Cambridge, UK

Introduction
Animal movement patterns can yield information about foraging behavior and prey abundance and distribution. These are important components of understanding breeding behavior, individual survival, and ultimately population dynamics. Over time, sensor technology for measuring movement patterns has changed and improved. These changes render old technologies obsolete, but the older data are still valuable, especially if new and old data can be compared to check whether behaviors have changed over time. However, technologies that have aggregated measurements at different temporal scales make comparisons challenging, and it is unclear to what extent this affects the conclusions that can be drawn from the data.

Data sources
Devices: immersion loggers were attached to albatrosses, creating series of trip data, which represent dry periods (flights) between wet periods (water landings). Data were aggregated by the devices using a fixed time interval, which was dictated by logger design and deployment duration. We use data collected a decade apart[3,4]. A selection of sampling characteristics is given below.

Study species
Our models focus on two species of albatross that breed on Bird Island, South Georgia, in the South Atlantic. With a wingspan of up to 3.7 m, the wandering albatross is the largest of all seabirds. Wanderers feed on squid, fish and carrion, and some birds follow fishing vessels to feed on bait and offal. The black-browed albatross is an annual breeder. It has a wingspan of c. 2.3 m and nests in dense hillside colonies.
Chicks are fed a diet dominated by krill. Both species have a circumpolar distribution, breeding on a number of subantarctic islands. The tracking data for this study come from colonies on South Georgia in the South Atlantic, which have been studied since the 1950s. Some of the first bio-logging devices were deployed here[2,3].

References
[1] Edwards AM, Phillips RA, Watkins NW, et al. 2007 Nature 449, 1044-1048
[2] Prince PA & Francis MD 1984 The Condor 86, 297-300
[3] Afanasyev V & Prince PA 1993 Ornis Scandinavica 24, 243-246
[4] Phalan B, Phillips RA et al. 2007 Marine Ecology Progress Series 340, 271-286

Follow our project: http://leah.johnson-gramacy.com/albatross
Contact: lrjohnson0@gmail.com, pboesu@gmail.com, @pboesu

Acknowledgements
This project is funded by an NSF grant (PLR-1341649) to LRJ and SJR. The British Antarctic Survey developed and deployed the immersion loggers. We thank Andrew Edwards for sharing his MLE code. Interested in Bayesian inference for differential equation models? Come to our talk about the R package deBInfer, Friday 1.30pm, HUB250. More info at https://github.com/pboesu/debinfer

Approach
Wandering albatross Diomedea exulans ©Jamie Coleman
Black-browed albatross Thalassarche melanophris ©British Antarctic Survey
Foraging tracks of wandering albatrosses breeding on South Georgia.
A 1990s-type immersion sensor is deployed on an albatross. ©British Antarctic Survey

Conclusions
Our approach allows us to: 1) examine the impact of the nature of the observational method on scientific conclusions; 2) make suggestions for comparing and combining disparate data sets, to maximize their value. We were not able to identify the process model underlying the earliest observations. Thus, they cannot be used as baseline data to address whether this particular aspect of the birds' foraging strategy has changed over the past two decades.
We combine a simulation study with an analysis of two suites of data on water landings collected a decade apart, in 1992-1993 and 2002-2004, respectively.
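The aggregation problem described above can be illustrated with a small simulation: generate alternating dry/wet bouts and store only the wet fraction per fixed logger interval, as an immersion logger would. This is an illustrative Python sketch, not the authors' simulation code; the exponential bout durations, the interval lengths, and the helper names `simulate_bouts` and `aggregate` are assumptions for the example.

```python
import numpy as np

def simulate_bouts(rng, n_bouts=100, mean_dry=3600.0, mean_wet=600.0):
    """Simulate alternating dry (flight) and wet (water landing) bout
    durations in seconds, drawn from exponential distributions."""
    dry = rng.exponential(mean_dry, n_bouts)
    wet = rng.exponential(mean_wet, n_bouts)
    durations = np.column_stack([dry, wet]).ravel()  # dry, wet, dry, wet, ...
    states = np.tile([0, 1], n_bouts)                # 0 = dry, 1 = wet
    return durations, states

def aggregate(durations, states, interval):
    """Aggregate the bout record into fixed time bins, returning the wet
    fraction per bin -- mimicking what a logger stores at that interval."""
    edges = np.arange(0.0, durations.sum(), interval)
    starts = np.concatenate([[0.0], np.cumsum(durations)[:-1]])
    wet_frac = np.zeros(len(edges))
    for s, d, st in zip(starts, durations, states):
        if st == 0:
            continue
        # overlap of this wet bout [s, s+d] with every bin [edge, edge+interval]
        lo = np.clip(edges, s, s + d)
        hi = np.clip(edges + interval, s, s + d)
        wet_frac += np.maximum(hi - lo, 0.0) / interval
    return wet_frac

rng = np.random.default_rng(1)
durations, states = simulate_bouts(rng)
coarse = aggregate(durations, states, 900.0)  # 15-min logger interval
fine = aggregate(durations, states, 10.0)     # 10-s logger interval
# Coarse bins blur short wet bouts: landings shorter than the interval
# cannot be resolved, so fewer bins register as fully dry.
print(f"fraction of fully-dry 15-min bins: {(coarse == 0).mean():.2f}")
print(f"fraction of fully-dry 10-s bins:  {(fine == 0).mean():.2f}")
```

Fitting the same bout-duration model to records aggregated at both intervals shows directly how much information the coarser technology discards.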
Comparative analysis of cytogenetic effects of the neonicotinoid insecticides Nuprid 200 SL and Calypso 480 SC on Allium cepa L. cells
Ivan Y. Stoyanov*, Penka L. Vasileva, Teodora A. Staykova, Magdalena L. Simidareva, Petya I. Rusinova, Iliyana I. Ilieva, Evgeniya N. Ivanova
University of Plovdiv ”Paisii Hilendarski”, Faculty of Biology, Department of Developmental Biology, Section of Genetics, 24 Tsar Asen Str., Plovdiv 4000. *Corresponding author: ivan100yanov@abv.bg

Introduction
Neonicotinoids are a major class of insecticides developed over the last 30 years that are used to increase crop yields. Their use is associated with a number of adverse environmental effects. The potential genetic risk of neonicotinoids requires extensive study of their genotoxicity, using a variety of approaches and test systems. In this study, we evaluated the cytostatic and mutagenic effects of the neonicotinoid insecticides Nuprid 200 SL (active substance imidacloprid) and Calypso 480 SC (active substance thiacloprid) on cells of the root meristem of Allium cepa L.

Material and methods
For the purposes of our study, bulbs of Allium cepa provided by the Maritsa Institute of Vegetable Crops, Plovdiv, were used. From the tested pesticides, stock working solutions (BS) were prepared according to the producers' instructions, at the concentration recommended for use in agricultural practice. Working solutions with lower concentrations (75%, 50% and 25% of the stock solution of each pesticide) were prepared from the stock solutions. The Allium test system (according to Fiskesjö, 1985) allows a combined approach to cytotoxicity and mutagenicity studies by applying genetic methods. The cytostatic action of the pesticide solutions at different concentrations was determined by calculating the total mitotic index and phase indices. Genotoxicity was assessed by determining the frequency of chromosomal aberrations in Allium cepa meristem cells using an anaphase method and a micronucleus test.

Results
Data from the present study indicate that the insecticides Nuprid 200 SL and Calypso 480 SC have a cytostatic effect, inhibiting cell division in the root apical meristem of Allium cepa. The mitotic index is lower than in the control at all tested pesticide concentrations (Table 1). The comparative analysis of the mutagenic action of the studied pesticides shows a higher genotoxic potential for Calypso 480 SC.

Table 1. Mitotic indices (in %) in Allium cepa treated with different concentrations of Nuprid 200 SL and Calypso 480 SC.

  Sample       Treatment                 Mitotic index
  Control      -                         51.58 ±1.50
  Nuprid 25    25 mg/L imidacloprid      47.32 ±7.82
  Nuprid 50    50 mg/L imidacloprid      45.71 ±4.05***
  Nuprid 75    75 mg/L imidacloprid      46.54 ±1.83***
  Nuprid BS    100 mg/L imidacloprid     46.55 ±2.46***
  Calypso 25   24 mg/L thiacloprid       49.85 ±6.56
  Calypso 50   48 mg/L thiacloprid       49.04 ±10.79
  Calypso 75   72 mg/L thiacloprid       48.16 ±4.39
  Calypso BS   96 mg/L thiacloprid       46.60 ±4.35*

  p < 0.05*; p < 0.01**; p < 0.001***

Table 2. Frequency of chromosome aberrations analyzed by the Allium test.

  Sample       Treatment                 Total frequency, CAI (%)   Among dividing cells (%)
  Control      -                         0.27 ±0.24                 0.53 ±0.29
  Nuprid 25    25 mg/L imidacloprid      0.45 ±0.15                 0.97 ±0.27*
  Nuprid 50    50 mg/L imidacloprid      0.48 ±0.20                 1.05 ±0.41*
  Nuprid 75    75 mg/L imidacloprid      0.65 ±0.27*                1.41 ±0.60**
  Nuprid BS    100 mg/L imidacloprid     0.70 ±0.41***              1.49 ±0.84***
  Calypso 25   24 mg/L thiacloprid       0.48 ±0.29                 0.92 ±0.49
  Calypso 50   48 mg/L thiacloprid       0.54 ±0.19*                1.17 ±0.47*
  Calypso 75   72 mg/L thiacloprid       0.55 ±0.21*                1.12 ±0.41*
  Calypso BS   96 mg/L thiacloprid       1.07 ±0.38***              2.29 ±0.77***

  p < 0.05*; p < 0.01**; p < 0.001***
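The two indices tabulated in the Allium test are simple ratios; a minimal Python sketch with hypothetical cell counts (not the study's raw data) shows how they relate:

```python
# Sketch of the indices used in the Allium test, with made-up
# illustrative counts (NOT the study's raw data).
def mitotic_index(dividing, total):
    """Percentage of cells in mitosis among all scored cells."""
    return 100.0 * dividing / total

def aberration_indices(aberrant, dividing, total):
    """Total chromosomal aberration index (CAI, % of all cells) and the
    aberration frequency among dividing cells (%)."""
    cai = 100.0 * aberrant / total
    among_dividing = 100.0 * aberrant / dividing
    return cai, among_dividing

total_cells = 4000      # hypothetical number of scored cells
dividing_cells = 1900   # hypothetical cells in mitosis
aberrant_cells = 28     # hypothetical cells with aberrations

mi = mitotic_index(dividing_cells, total_cells)
cai, freq = aberration_indices(aberrant_cells, dividing_cells, total_cells)
print(f"mitotic index: {mi:.2f}%")   # 47.50%
print(f"CAI: {cai:.2f}%, among dividing cells: {freq:.2f}%")
```

Note why the second aberration measure is always larger than the CAI: it divides the same aberrant-cell count by the smaller denominator of dividing cells only.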
Stock solutions of both pesticides induced chromosomal abnormalities in Allium cepa cells with a much higher frequency than the control, but the chromosomal aberration i
[Diagram: stakeholders (researchers, managers, consultants, service labs, PI labs) connected through the wiSamples App, wiLabs App, and wiDB API to the wiDB, community portals (Waterisotopes.org, IsoMAP.org), and distributed CI (DataONE, EarthCube)]

Compressing the data lifecycle: Can laboratories be a bridge to open data in the stable isotope research community?
Gabriel J. Bowen, Geology & Geophysics, University of Utah; gabe.bowen@utah.edu, @bumbanian

Stakeholder: Data-Producing Isotope Scientist
Challenges: Workload of data management plan; organizing and securing data for large projects; information loss throughout project lifetime; needs control of data and credit for effort
Solutions: wiSamples App streamlines collection and storage of robust, standardized metadata from project start; integration with wi information infrastructure supports data security and removes burden of end-of-project archival; community portals create exposure for project while allowing scientist-contributors to control data from their projects

Archival and sharing of data is increasingly emphasized as a critical component of the science endeavor and lies at the heart of the 'big data' revolution. As traditionally structured, however, this process is arduous and protracted. The data lifecycle begins with project conception and extends through data collection, analysis, and publication prior to archival. In this model, archival becomes an afterthought and added workload on the researcher, and the archived products are often incomplete or poorly structured, reflecting the outcomes of a specific research project rather than the nature of the samples and observations themselves. Widespread reliance on centralized laboratories and a limited suite of analytical equipment in fields such as stable isotope biogeoscience offers potential to 'compress' the data lifecycle. All laboratories utilize some type of formal or informal data management system.
Data generated in the lab are formatted and organized during the analysis and post-analysis data reduction process, offering the ability to 'capture' data in a standardized format at this early lifecycle stage. Early capture ensures that data are preserved with minimal information loss and reduces workload in later stages of research. It maximizes data security and can support a variety of models for open data and open science, depending on investigator preference. We describe a model system developed for direct capture and distribution of water isotope data from analytical laboratories, and how the implementation of the system provides benefits to multiple stakeholders. We note cultural and technical challenges related to the development and adoption of such a system, and describe design decisions made in order to address these. We argue that the potential benefits of such an approach in stable isotope biogeosciences are significant, stimulate discussion of future efforts in this domain, and solicit partners who are interested in exploring the potential of a lab-centric model to condense the data lifecycle and support the growth of open science in our field.

wiSamples App:
Organize sample collection by projects to coordinate sampling teams or standardize metadata for individual sampling bouts.
Online and offline modes obtain and display existing collection sites, allowing the collector to associate new samples with existing sites.
Sample properties are set leveraging mobile device data (e.g., location, date and time) to reduce the collector's workload.
Sample lists can be created on-the-go using custom or standardized, auto-generated IDs.
Stakeholder: Laboratory Manager
Challenges: Establishing and maintaining a data management system supporting QA/QC activities, data reduction and reporting, and secure data storage, often with little to no support; a key piece of the data pipeline but not involved in traditional data preservation efforts
Solutions: wiLabs App provides tools supporting lab data management demands and 'plugs in' to d
Floppy Disk Magazines: Preserving Forgotten Digital Heritage

Yahoo! #1, MS-DOS, English, 1994-1995. Demozine with built-in database about the demoscene and its history.
Optron #16, ZX Spectrum, Russian, 1997-2001. Ukrainian non-commercial disk magazine, developed and published by various creators from Lviv.
Bad Mag #2, Amstrad/Schneider CPC, English/German, 1992-1993. Bilingual German-British-Irish zine of the international democoder scene and the only diskmag on 3" disk.

Project Progress
1. Data collection
• Prehistory: a number of fan databases and collections of diskmags and related information
• Next step: applying data scraping techniques to gather data and references
• Outcome: five comprehensive datasets with 2,000+ unique magazine titles and references to 20,000+ issues
2. Data preparation
• Measure 1: design a cohesive structure for the available datasets
• Measure 2: data cleaning and standardization with OpenRefine
• Measure 3: transformation of single datasets into an object-relational data model
3. Data publication
• Configure a Semantic MediaWiki instance and open it to members of the diskmags community to create and edit records
• Configure a FactGrid instance to generate authority records for diskmag titles and related entities
• Include binary data of original disks where legally and technically possible
4. Further steps
• Next project stage: create a text corpus of German diskmags
• Extract text from binary disk images (requires decompression and unicodification)
• Collect and digitize images of printed material (covers, disks, manuals)
• Study the collection and the corpus with computational methods

Coming soon! Project Homepage: http://diskmags.de
Project Manager: Dr. phil. Torsten Roeder • torsten.roeder@uni-wuerzburg.de
Participants: Johannes Leitgeb, Madlin Marenec, Tomash Shtohryn, Yannik Herbst
Center for Philology and Digitality at University of Würzburg • https://www.uni-wuerzburg.de/zpd

Disk Magazine publications across systems
Magic Disk #1, Commodore 64, German, 1987-1993. Popular commercial diskmag on hardware and software, especially for gamers and hobby programmers.
Digital Talk #113, Commodore 64, English/German, 1993-now. Active fanzine of the retrocomputing community, still published on original floppy disks.
Maggie #20, ATARI ST, English, 1990-2016. Multimedia diskmag which also contained poetry and other literary forms... and a poem about diskmags!
Warp #2, ATARI ST, English, 1995-1996. Star Trek fanzine with a fascinating menu.
Softdisk #2, APPLE II, English, 1981-1995. The first magazine published on 5.25" floppy disk.
Le Petit Amiga Illustre #8, Amiga, French, 1992-1996. Francophone diskmag with a wide variety of scene contents and a colorful menu system.

About
Diskmags were born-digital journals published on electronic media in the 1980s and 1990s. Some were commercial products; many were created by young enthusiasts for communication in the home computer scene. Today, they give insight into a highly creative, early digital and multi-medial culture in the first decades of microcomputing. But the state of preservation is poor, as neither libraries nor archives collected diskmags systematically. This one-year project, funded in 2023, aims at preserving this valuable cultural heritage by creating a scientific collection and a digital text corpus for further research.

DOI: 10.5281/zenodo.8158532
The Parameterization of Sampling Errors in Infrared Sea Surface Temperatures
Yang Liu(1), Toshio Chin(2), Peter Minnett(1)
(1) Department of Ocean Sciences, Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, Florida, USA
(2) Jet Propulsion Laboratory, NASA, Pasadena, California, USA

Introduction
Clouds and inter-swath gaps are the primary reasons for incomplete coverage of satellite infrared (IR) measurements of the Earth's surface, and yield sampling errors in averaged IR sea-surface temperature (SST) fields. In a recent paper (Liu & Minnett, 2016; hereafter LM16) we found that the MODIS (Moderate Resolution Imaging Spectroradiometer; Esaias et al. 1998) monthly SST sampling error referenced to MUR SSTs (Multi-scale Ultrahigh Resolution; Chin et al. 2010) is up to O(1 K), which far exceeds the error threshold needed for climate research. The next question to ask is: can the sampling errors be predicted? In reality, when assessing sampling errors in satellite-derived IR SST fields, we do not have an appropriate reference field at the various temporal and spatial averaging intervals. However, we do have access to a number of relevant variables that can be used with the results of LM16 to estimate the sampling errors, for example the local SST difference from a reference, the gap fraction, the cloud persistence (the number of consecutive days during which a location is detected to be cloudy), or the season and region.

Figure 3. Global seasonal sampling errors quantified using MUR SSTs (red) and MUR SST anomalies (blue).
Figure 2. Sampling errors generated using OISST climatology. Upper: temporal averaging of [0.25º, mon]; lower: spatial averaging of [5, 1d]. Boreal seasons are denoted on the Asian continent.
Error Parameterization
As a preliminary exploration, we assume the error function can take the form:

  ε_m = α0·ε_clim + (α1·f^α2 + α3·p̂^α4)·σ + α5

where α0, α1, α2, α3, α4, α5 are coefficients found by a Levenberg-Marquardt algorithm for a non-linear least-squares fit; α3 = 0 in spatial sampling error estimates; f is the gap fraction (0 < f < 1); p̂ is the normalized cloud persistence (0 < p̂ < 1); and σ is the standard deviation of the SSTs in the averaging grid cell from MUR (σ_MUR). We also test the model using the SST standard deviation of the climatology (σ_clim). When σ_clim is applied, the model computes the sampling error estimates ε′_m without any inputs from a Level 4 reference field and thus is predictive.

Figure 1. Sampling errors generated using MUR SST. Upper: temporal averaging of [0.25, mon]; lower: spatial averaging of [5, 1d]. Boreal seasons are denoted on the Asian continent.

SST Sampling Errors
The sampling errors quantified using MUR SST anomalies, calculated using the OISST climatology (Banzon et al. 2014) as the reference, are smaller, especially in spatial averages, where both the mean error and RMSE are substantially reduced (>90% of the mean error and >50% of the RMSE) after the seasonal signals are removed. In temporal averages, however, the RMSE and the mean error barely change magnitude after the removal of the climatology.

References
Banzon, V.F., Reynolds, R.W., Stokes, D., & Xue, Y. (2014). A 1/4°-Spatial-Resolution Daily Sea Surface Temperature Climatology Based on a Blended Satellite and in situ Analysis. Journal of Climate, 27, 8221-8228.
Chin, T.M., Jorge, V., & Armstrong, E. (2010). Algorithm Theoretic Basis Document: Multi-scale, motion-compensated analysis of sea surface temperature, Version 1.1.
Liu, Y., & Minnett, P.J. (2016). Sampling errors in satellite-derived infrared sea-surface temperatures. Part I: Global and regional MODIS fields. Remote Sensing of Environment, 177, 48-64.

The Climatology Component
Figure 4.
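A fit of this error model can be sketched as follows. This is an illustrative Python example on synthetic data, using scipy's Levenberg-Marquardt solver; the coefficient values and predictor distributions are made up for the demonstration and are not the authors' code or results.

```python
import numpy as np
from scipy.optimize import least_squares

def error_model(alpha, eps_clim, f, p_hat, sigma):
    """eps_m = a0*eps_clim + (a1*f**a2 + a3*p_hat**a4)*sigma + a5"""
    a0, a1, a2, a3, a4, a5 = alpha
    return a0 * eps_clim + (a1 * f**a2 + a3 * p_hat**a4) * sigma + a5

# Synthetic "observations" of the predictors (illustrative ranges)
rng = np.random.default_rng(0)
n = 2000
eps_clim = rng.normal(0.0, 0.2, n)    # climatological error component (K)
f = rng.uniform(0.05, 0.95, n)        # gap fraction
p_hat = rng.uniform(0.05, 0.95, n)    # normalized cloud persistence
sigma = rng.uniform(0.1, 1.5, n)      # SST std. dev. in the grid cell (K)

true_alpha = np.array([0.8, 0.5, 1.5, 0.3, 2.0, 0.02])  # made-up "truth"
eps_m = error_model(true_alpha, eps_clim, f, p_hat, sigma)
eps_m += rng.normal(0.0, 0.01, n)     # observation noise

def residuals(alpha):
    return error_model(alpha, eps_clim, f, p_hat, sigma) - eps_m

# Levenberg-Marquardt nonlinear least squares, as named in the poster
fit = least_squares(residuals, x0=np.ones(6), method="lm")
print(np.round(fit.x, 2))
```

For the spatial-averaging case the poster fixes α3 = 0, which in this sketch would amount to dropping the p̂ term from `error_model` and fitting five coefficients instead of six.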
The 60ºN-80ºN summer month error estimates using ε_clim (first column), ε′_m (second column), and ε_m (third column). r denotes the correlation coefficients; N is the total number of grids at the resolution [0.25º, mon]; RMSE and the fitting line are de
†, June Young Lee, Jinhyuk Yun, Sejung Ahn
Korea Institute of Science and Technology Information, 66 Hoegi-ro, Dongdaemoon-gu, Seoul 02456, Korea
23rd International Conference on Science and Technology Indicators

INTRODUCTION
The application of scientometric indicators is crucial both to research evaluation and to the advance of science itself (Chen & Song, 2017). In keeping with these factors, the Korea Institute of Science and Technology Information (KISTI) developed the 'Insightful Integrated Indicators Metrics (i*Metrics)' to calculate scientometric indicators for journal publications and patents. In this study, we investigate the research profiling of gender studies for 11 countries using journal papers published from 1999 to 2017, analysed with the KISTI i*Metrics system using 11 indicators.

DATA & METHODS
01 Using the SCOPUS Database developed at KISTI from the SCOPUS XML Custom Data provided by Elsevier
02 Creating the dataset by applying the ASJC classification system of SCOPUS for 'gender studies'
03 Analysing 29,219 research publications with document types of article, conference paper, and review, published by 11 countries (1999-2016)

11 INDICATORS
Number of Publications (NP), Activity Index (AI), Compound Annual Growth Rate (CAGR), Times Cited (TC), Citations per Paper (CPP), Attractivity Index (AAI), Mean Normalized Citation Score (MNCS), Excellent Journal Rate (EJR), Paper Collaboration Size (PCS), Diffusion Rate to Others (DRO), Absorption Rate from Others (ARO)

REFERENCES
Chen, C., & Song, M. (2017). Measuring Scholarly Impact. In Representing Scientific Knowledge: The Role of Uncertainty (pp. 139-204). Springer.
Tsay, M. Y., & Li, C. N. (2017). Bibliometric analysis of the journal literature on women's studies. Scientometrics, 113(2), 705-734.

SCIENTOMETRIC INDICATORS OF GENDER STUDIES BY COUNTRY
The table shows the overall results of the scientometric indicator analysis of gender studies in 11 countries.
All figures for each indicator, excluding NP and TC, are averages of the yearly calculated values. The USA has the highest NP and AI, and Sweden and the Netherlands are strong in EJR and MNCS values. Meanwhile, Asian countries such as India and South Korea were particularly weak in productivity and in influence & excellence indicators such as NP, AI, CPP, and so forth.

RESEARCH ACTIVITIES ANALYSIS OF 11 COUNTRIES
The USA published the most journal papers, with a share of 57.45% of total NP, and also had the highest values for the AI and AAI indicators. This shows that the USA has played a leading role in research activities in gender studies. Analysing the qualitative aspects of the publications using MNCS and EJR (Figure (a)), Sweden and the Netherlands were the outstanding countries. Figure (b) shows the degree of collaboration by discipline and country through DRO, ARO and PCS. Gender studies is generally regarded as an interdisciplinary field (Tsay & Li, 2017), and the analysis results partially support this view.

INDICATOR COMBINATION ANALYSIS OF 11 COUNTRIES

†Corresponding author: kisti0746@kisti.re.kr
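The Activity Index quoted above can be illustrated with its conventional definition: a country's share of world output in the field divided by its share of world output across all fields, so AI > 1 indicates above-average specialization. The publication counts in this Python sketch are hypothetical, chosen only so the field share mirrors the reported 57.45% NP share.

```python
def activity_index(c_field, c_total, w_field, w_total):
    """Activity Index: (country's share of the field's publications) /
    (country's share of all publications). AI > 1 = above-average activity."""
    return (c_field / w_field) / (c_total / w_total)

# Hypothetical counts for illustration (not the study's data):
# 16,787 / 29,219 reproduces the reported 57.45% field share; the
# all-fields totals are invented.
usa_ai = activity_index(c_field=16_787, c_total=6_000_000,
                        w_field=29_219, w_total=15_000_000)
print(f"AI = {usa_ai:.2f}")  # > 1: relatively specialized in the field
```

The other share-based indicators (e.g., the Attractivity Index, with citations in place of publications) follow the same ratio-of-shares pattern.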
Postinfectious Bronchiolitis Obliterans (PIBO) Among Children In Malaysia: A Retrospective Study
LUI Sze Chiang1, CHE DAUD Che Zubaidah1, MOHAMMAD N. Fafwati Faridatul Akmar1, KAMAL Maria1, MUSA Azizah1, SHARIBUDIN Nor Khailawati1, ZAINUDDIN Hafizah2, MOHD HAIRI Farizah3, GAN Eu Ann4, KAILASAM Pavithira Devi5, NG Eunice Chin Nien6, GAN Cheng Guang7, GANAPATHI Rasheila8, ABU OSMAN Mohd Shahrulfahmi9, A RAZAK Hasliza10, ABD RAHIM Siti Aishah11, NG Wen Ying12, KASSIM Asiah1
1. Hospital Tunku Azizah, Kuala Lumpur
2. UiTM Selayang Campus, Kuala Lumpur
3. Faculty of Medicine, University of Malaya
4. Hospital Melaka
5. Hospital Pakar Sultanah Fatimah, Muar
6. Hospital Sultanah Nora Ismail, Batu Pahat
7. Hospital Segamat
8. Hospital Enche’ Besar Hajjah Kalthom, Kluang
9. Hospital Duchess of Kent, Sandakan
10. Hospital Sultanah Aminah, Johor Bahru
11. Hospital Wanita dan Kanak-kanak, Likas, Kota Kinabalu
12. Hospital Sultan Ismail, Johor Bahru
Table 1: Demographic Characteristics of the Study Population
ID: P-115, NMRR-19-3699-49459

INTRODUCTION: Postinfectious bronchiolitis obliterans (PIBO) is an irreversible obstructive lung disease following lower respiratory tract infection (LRTI) in children, especially during the first three years of life1. PIBO diagnosis is based on clinical and radiological changes. It is estimated that 1% of LRTIs in children will develop into PIBO. However, local data are scarce.

OBJECTIVE: To describe the characteristics of children less than five years old diagnosed with PIBO from 2015 to 2019 (a 5-year period).

METHODOLOGY: A retrospective study involving ten hospitals under the MOH visited by a respiratory paediatrician. Children less than five years old diagnosed with PIBO between 1st January 2015 and 31st December 2019 were recruited. The diagnosis was made by the visiting respiratory paediatrician from clinical and radio-imaging findings. For the study, all radiological images were reported by a designated paediatric radiologist.
Data were entered into SPSS version 26 and analyzed.

Table 2: Clinical history among patients with PIBO in Malaysia
Figure 3: Pathogens causing LRTI and PIBO in children

DISCUSSION: In this study, children with PIBO were young (median age = 15 months) and predominantly male, and Adenovirus was the commonest pathogen, similar to previous studies2,3. Diagnosis of PIBO can be made as early as one month post lung infection (median = 2 months). Prematurity and a history of ventilation during the neonatal period were found in at least 28% and 16% of children, respectively, which may be due to abnormal postnatal growth of a premature lung. Environmental factors like tobacco smoke exposure, present in 50% of children, may increase the risk for the development of PIBO. At least 30% of them had a family history of atopy, which may be related to an immunological response following lung infection in this group of children. More than 70% of the children required respiratory support during their first respiratory infection leading to PIBO.

ACKNOWLEDGMENT: We would like to thank the Director-General of Health Malaysia for his permission to present this poster.

REFERENCES:
1. Yu J. Postinfectious bronchiolitis obliterans in children: lessons from bronchiolitis obliterans after lung transplantation and hematopoietic stem cell transplantation. Korean J Pediatr. 2015;58(12):459-465. doi:10.3345/kjp.2015.58.12.459.
2. Li YN, Liu L, Qiao HM, Cheng H, Cheng HJ. Post-infectious bronchiolitis obliterans in children: a review of 42 cases. BMC Pediatr. 2014;14:238. doi:10.1186/1471-2431-14-238.
3. Chan KC, Yu MW, Cheung TWY, et al. Childhood bronchiolitis obliterans in Hong Kong - case series over a 20-year period. Pediatr Pulmonol. 2021;56(1):153-161. doi:10.1002/ppul.25166.
4. Eber CD, Stark P, Bertozzi P. Bronchiolitis obliterans on high-resolution CT: a pattern of mosaic oligemia. J Comput Assist Tomogr. 1993;17(6):853-856. doi:10.1097/00004728-199311000-00003.
This may indicate the severity of infection, which
A Mathematical Model for Relapse Prediction in AML Patients Based on Continuing NPM1 Measurements
H. Hoffmann1, I. Glauche1, C. Thiede2, M. Bornhäuser2, M. Kramer2, C. Röllig2, I. Roeder1,3
1Institute for Medical Informatics and Biometry, Faculty of Medicine Carl Gustav Carus, TU Dresden, Germany
2Medizinische Klinik und Poliklinik I, Faculty of Medicine Carl Gustav Carus, TU Dresden, Germany
3National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
Contact: Helene Hoffmann, Institute for Medical Informatics and Biometry, Medizinische Fakultät Carl Gustav Carus, Technische Universität Dresden • helene.hoffmann@tu-dresden.de • http://tu-dresden.de/med/imb
HaematoOPT is funded via the BMBF.

[Figures: NPM1/Abl [%] time courses over time [days]; panels on the correlation of relapse time and parameters, measurement error and time, and relapse prediction]
[Model diagram: quiescent (A) and cycling (Ω) compartments of healthy and leukemic stem cells, with activation, inactivation, proliferation, differentiation, and therapy action]

Objective
In order to detect minimal residual disease (MRD) with high sensitivity in AML patients, it is necessary to monitor molecular leukemia markers, such as mutations in the NPM1 gene (Shayegi 2013, Blood), which occur in about one third of AML patients (Falini 2005, N Engl J Med). Today the relative NPM1 abundance in the bone marrow is routinely used for relapse detection (Gorello 2006, Leukemia; Papadaki 2009, Br J Haematol.). To obtain more precise relapse time predictions, and thereby improve individual treatment strategies and final outcomes, we developed a mathematical model describing the affected stem cell compartments in the bone marrow and used the NPM1 measurement data for individual relapse predictions. Furthermore, we analysed the impact of the measurement error and of the timing of measurements on the prediction accuracy.
Clinical data
Bone marrow NPM1 time courses of AML patients of the Study Alliance Leukemia.
Available data set for our study:
Number of patients: 271
Number of relapsed patients: 74
Median age: 53 (20-79)
Median number of measurements: 5 (3-21)
Median number of therapy cycles: 4 (1-10)
Clinical trials: AML2003 (NCT00180102), AML60+ (NCT00180167), AML Registry

Example fits

Conclusions
The developed mathematical model is able to reproduce the decline of leukemic burden, remission, and molecular relapse of AML patients. The individual patients' model parameters are strongly connected with their time of relapse, and they can be used for estimating relapse probabilities. Theoretical analyses showed that individual relapse time prediction can be strongly improved by reducing the measurement error and by increasing the measurement frequency.

Mathematical model
ẊA = Inactivation − Activation
ẊΩ = Activation − Inactivation + Proliferation − Differentiation − Therapy kill
to - activation rate; ta - inactivation rate; p - proliferation rate; c - therapeutic kill rate; d - differentiation rate; K - capacity

Model assumptions:
• two bone marrow compartments: one with quiescent (A) and one with cycling cells (Ω)
• cells switch between compartments
• therapy kill acts only on cells in Ω
• leukemic and healthy cells compete for bone marrow space
• leukemic cells proliferate faster, are activated faster, and are inactivated more slowly

High vs. low parameters:
[Figure: one-year relapse probability as a function of tlo/pl, using relapse information from the data]
Low activation and high proliferation of leukemic cells is connected to earlier relapse. Using the leukemic activation-to-proliferation ratio, a probability of relapsing within one year can be estimated.
[Figure: relapse time [days] for patients with low vs. high pl and low vs. high tlo; p-value < 0.001 in both comparisons (U-test)]
Fitting leukemic activation (tlo) and proliferation (pl) to the data: best fit to complete data; best fit to first 9 months; corresp. 95% confidence
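The compartment structure described in the model assumptions can be sketched as a small ODE system. This is an illustrative Python sketch using scipy with made-up parameter values (not the fitted patient parameters); the logistic crowding term and the single fixed therapy interval are simplifying assumptions for the demonstration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (NOT fitted values): act = activation
# (quiescent -> cycling), inact = inactivation, p = proliferation,
# d = differentiation, kill = therapeutic kill rate, K = capacity.
params = dict(
    healthy=dict(act=0.01, inact=0.05, p=0.2, d=0.1),
    leukemic=dict(act=0.05, inact=0.01, p=0.4, d=0.1),  # faster cycling
    K=1e6, kill=0.8, therapy_end=100.0,
)

def rhs(t, x):
    Ah, Oh, Al, Ol = x                          # quiescent/cycling, healthy/leukemic
    crowd = 1.0 - (Oh + Ol) / params["K"]       # shared bone-marrow space
    kill = params["kill"] if t < params["therapy_end"] else 0.0
    out = []
    for A, O, q, k in ((Ah, Oh, params["healthy"], 0.0),
                       (Al, Ol, params["leukemic"], kill)):
        dA = q["inact"] * O - q["act"] * A      # X_A = inactivation - activation
        dO = (q["act"] * A - q["inact"] * O     # activation - inactivation
              + q["p"] * O * crowd              # + proliferation (crowding-limited)
              - q["d"] * O - k * O)             # - differentiation - therapy kill
        out += [dA, dO]
    return out

x0 = [1e4, 1e5, 1e4, 8e5]                       # mostly leukemic at diagnosis
sol = solve_ivp(rhs, (0.0, 1500.0), x0, max_step=1.0)
Ah, Oh, Al, Ol = sol.y
burden = (Al + Ol) / (Ah + Oh + Al + Ol)        # cf. relative NPM1 abundance
print(f"burden at diagnosis {burden[0]:.2f}, nadir {burden.min():.2e}, "
      f"end {burden[-1]:.2f}")
```

Because therapy kills only cycling cells, residual leukemic cells surviving in the quiescent compartment can re-expand after therapy ends, reproducing the decline, remission, and relapse pattern qualitatively.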
Probing Nuclear Effects Using Transverse Kinematic Imbalance
Stephen Dolan (for the T2K Collaboration) stephen.dolan@llr.in2p3.fr

[Figure: data compared to RFG, LFG, and SF* models]
• Result disfavours an RFG-like cliff
• Clear requirement for a 2p2h contribution in the tail
• Preference for the SF model
• The "bump" is not predicted by simulation (but could just be a statistical fluctuation)

Measuring the Single Transverse Variables
• For simple ν-N scattering, any deviation from δpT = 0, δφT = 0 is caused by nuclear effects.
• Naive treatment of these nuclear effects is the cause of dominant systematics in ν oscillation analyses. Measuring STV provides a direct probe of nuclear effects.

Conclusions
• Measurements of the STV allow a novel probe of the effects most pertinent for ν-oscillation analyses:
• Fermi motion: rejection of the widely used RFG model
• 2p2h: contribution clearly required to describe the result, with a definite preference for the T = 0 initial state

Single Transverse Variables (STV) [Phys. Rev. C 94, 015503]
• T2K measures a fiducial differential CC0π + Np (N ≥ 1) cross section on CH using its near detector: pμ > 250 MeV, cos θμ > −0.6, pp > 450 MeV, pp < 1 GeV, cos θp > 0.4. arXiv: 1802.05078, 1804.09488
• Preference for slightly stronger FSI (NEUT, Super-K 1-ring μ-like selection)

But we measure Φ × σ. An oscillation analysis requires knowledge of the oscillated Φ, so we need a good idea of σ (Φ: ν flux; σ: ν-interaction cross section). Diagrams by Patrick Stowell. Reconstruction of Eν depends on the contribution of interaction modes (CCQE, 2p2h); even reconstructing Eν from simple CCQE requires knowledge of nuclear effects.

• The analysis aims to characterise 2p2h, understand the role of FSI, and distinguish different models of Fermi motion. RFG is the model used in the NOvA and T2K oscillation analyses. *SF: [Phys. Rev. C 62, 034304] arXiv 1802.05078

Characterising 2p2h with GiBUU [Phys. Rept. 512, 1-124]
• GiBUU is a theory framework that has been successful at predicting a wide variety of nuclear reactions.
• It describes 2p2h in νN interactions by likening them to known eN scattering.
• Link is via simple factors where the only free parameter is the isospin ( ) of the initial state. • The physical ground state of Carbon is =0, but old data (weakly) prefer (2p2h) = 1 = 0 Definite preference for =0 configuration arXiv 1804.09488 • FSI: slight preference for stronger FSI (in NEUT) =1. [Phys. Rev. C 94, 035502] GiBUU 2017 GiBUU 2017 [JPS Conf. Proc. 12, 010032] [JINST 12, P01016] Simulation comparisons made using NUISANCE Inlays are same plots on a log-scale Data ~6 × 1020 protons on target NEUT: Acta Physica Polonica B 40, 2477 NuWro: Phys. Rev. C 86, 015505
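The single transverse variables defined above can be sketched numerically. This is a minimal numpy illustration (not T2K analysis code) of the standard definitions from Phys. Rev. C 94, 015503: δpT is the magnitude of the summed muon and proton transverse momenta, and δφT the angle between −p⃗Tμ and p⃗Tp; taking the neutrino direction along z and simple 3-momenta in GeV are assumptions for the example.

```python
import numpy as np

def transverse_part(p, nu_dir):
    """Component of a 3-momentum transverse to the neutrino direction."""
    nu_dir = np.asarray(nu_dir, float) / np.linalg.norm(nu_dir)
    return p - np.dot(p, nu_dir) * nu_dir

def single_transverse_variables(p_mu, p_p, nu_dir=(0.0, 0.0, 1.0)):
    """Return (delta_pT, delta_phiT) in (momentum units, radians)."""
    pt_mu = transverse_part(np.asarray(p_mu, float), nu_dir)
    pt_p = transverse_part(np.asarray(p_p, float), nu_dir)
    delta_pt = np.linalg.norm(pt_mu + pt_p)
    # delta_phiT: angle between -pT(mu) and pT(p)
    cosphi = np.dot(-pt_mu, pt_p) / (np.linalg.norm(pt_mu) * np.linalg.norm(pt_p))
    delta_phi = np.arccos(np.clip(cosphi, -1.0, 1.0))
    return delta_pt, delta_phi

# A free, static target nucleon gives exactly balanced transverse momenta:
dpt, dphi = single_transverse_variables([0.3, 0.0, 0.8], [-0.3, 0.0, 0.5])
```

Any non-zero δpT or δφT for a measured event then signals Fermi motion, 2p2h or final-state interactions, which is what makes these variables a direct probe of nuclear effects.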
poster
@FACEITArctic @face_it_arctic The FACE-IT Project www.face-it-project.eu FACE-IT has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 869154. Experimental study suggests tolerance of Arctic kelp to future climate change. Cale A. Miller1, Anaïs Lebrun1, Jean-Pierre Gattuso1,2, Pierre Urrutti1, Samir Alliouane1, Frédéric Gazeau1, Steeve Comeau1. 1Sorbonne Université, CNRS, Laboratoire d'Océanographie de Villefranche, 06230 Villefranche-sur-Mer, France; 2Institute for Sustainable Development and International Relations, Sciences Po, Paris, France. cale.miller@imev-mer.fr. The fjords of Svalbard are undergoing rapid climate change due to incoming Atlantic water mixing with cold Arctic water, in combination with a warming atmosphere. Making this process worse is the rapid retreat of sea ice and glaciers, which historically have helped cool the region by reflecting sunlight. These environmental changes are restructuring the habitats of Arctic fjords, such as kelp (macroalgae) communities. To examine the response of habitat-forming kelp communities to a future Arctic, we experimentally changed the physical and chemical conditions to reflect a future Arctic ecosystem. Tanks of mixed kelp communities were maintained for 2 months in Ny-Ålesund, Svalbard, via a flow-through automated seawater delivery system that mixed heated and cooled seawater from Kongsfjorden. In each of 12 tanks, about 4 kg of kelp (3 species) and various small animals (e.g., sea urchins) were exposed to 4 different future scenarios: same conditions as the fjord; just higher temperature (+5 °C); higher temperature (+3 °C) + less light + less salty water; even higher temperature (+5 °C) + even less light + even less salty water. Photosynthesis did not change in response to decreased salinity or increased temperature.
Kelp production decreased slightly as a result of continuous low light conditions. Kelp photosynthesis and biomass production appear tolerant under future Arctic conditions but expansion of kelp may be reduced if light is perpetually reduced. Winged kelp Sugar kelp Digitate kelp Experimental set-up in Ny-Ålesund, Kongsfjorden, Svalbard
poster
UNIVERSIDADE FEDERAL FLUMINENSE. UNDERGRADUATE NURSING PROGRAMME. DEPARTMENT OF NURSING FUNDAMENTALS AND ADMINISTRATION. COURSE: NURSING MANAGEMENT I. PROFESSORS: MARITZA CONSUELO ORTIZ SANCHEZ, MIRIAM MARINHO CHRIZOSTIMO AND PEDRO RUIZ BARBOSA NASSAR. STUDENTS: ANA LUIZA FERREIRA PEREIRA, ANA BEATRIZ SANTANA RAMOS DE ALMEIDA, ANDRESSA MARTINS ALVES DE OLIVEIRA, AYRA SOUSA DE AGUIAR TEIXEIRA, CLARA PEREZ DA CRUZ ULHOA TENORIO, ELEONORA SILVEIRA SCATOLINI, ISABELLE PEREIRA DOS SANTOS, JULIANA GARCIA DE MELLO, KAREN CALLEGARIO GALVÃO, LETÍCIA D'LUCCA FELIX MONTEIRO AND MARIANA FERRAZ CABRAL. EXAMPLES: SOP - the Standard Operating Procedure. Health professionals must keep their vaccination schedule up to date. Nursing must ensure the fulfilment of its exclusive professional duties. At the Aurora de Afonso Costa School of Nursing, students must preserve the faculty's facilities.
poster
UKDS Data Product Builder The Path to Open Access for Restricted Data Machine Learning for Key Variable Identification Current machine learning (ML) focus: Machine-assisted Disclosure Risk Analysis An essential prerequisite of this is accurately identifying ‘key variables’ – socio-demographic variables which when combined could be potentially disclosive. Example: Age, Geographic region, Sex, Occupation Harnessing the power of Machine Learning (ML) makes scaling to the entire collection feasible Workflow orchestration software will allow us to have a ‘human-in-the-loop’ step to validate the ML outcome Input features: Variable characteristics: label, question text, group, categories ML Challenges: The number of tags/combinations of tags can be very large The difference between variables can be very subtle and thus hard to solve computationally for a single model Deirdre Lungley, Principal Developer dmlung@essex.ac.uk Thomas Gilders, Data Engineer tg21783@essex.ac.uk Enhanced Combination Frequency Calculations • Example of scale of computation required: • 15 key variables – 1365 4-way combinations to be calculated • Recalculate 4-way combinations for each mitigation offered • The existing tool we use is sdcMicro: • GUI would not allow load of dataset as large as QLFS • With scripting – possible but quite slow • We use C++ bitmask operations in place of the original R code • Makes real-time disclosure analysis feasible Population-level Combination Frequencies to aid Automated Disclosure Risk Analysis (DRA) • The GSS Guidance for Microdata from Social Surveys states in relation to DRA: “those with extensive knowledge of the data should take the lead..” • We’re currently exploring the feasibility of using Census aggregate data to inform automated DRA • Population-level combination frequencies can be checked when sample frequencies are below threshold • Example – female, 60-65 age group, N. 
East, Veterinarians • Dataset frequency: 1 • Census frequency: 5 • This necessitates harmonizing key variable study representation (categories) with census representation Key DDI-CDI Components for our DPB Why? Data Depositor • Transparent disclosure risk methods Data Curator • Allows increased volume of data Researcher • Reduce time to access data • Reduce workload • Bespoke
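The scale of the combination-frequency step quoted above (15 key variables give C(15,4) = 1365 four-way combinations, recomputed for each mitigation) can be illustrated in Python. This is a sketch of the idea only, not the production replacement for sdcMicro, which uses C++ bitmask operations; the function name and threshold below are illustrative.

```python
from collections import Counter
from itertools import combinations
from math import comb

def risky_combinations(records, key_vars, k=4, threshold=3):
    """
    Count sample frequencies of every k-way combination of key variables
    and return the (combination, value-tuple, count) cells below threshold.
    `records` is a list of dicts keyed by variable name.
    """
    risky = []
    for combo in combinations(key_vars, k):
        freq = Counter(tuple(r[v] for v in combo) for r in records)
        risky.extend((combo, values, n) for values, n in freq.items() if n < threshold)
    return risky

# The poster's headline figure: 15 key variables yield 1365 4-way combinations.
assert comb(15, 4) == 1365
```

A cell such as (female, 60-65, N. East, veterinarian) with a sample frequency of 1 would be flagged here, and could then be checked against population-level census frequencies before deciding on a mitigation.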
poster
NANO PATTERNS FOR INDUSTRIAL APPLICATIONS This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N°768636. APPLICATIONS FOR MEDICAL COMPONENTS Boosting the healing process of dental implants with the help of laser-induced periodic surface structures (LIPSS). Bone cells develop better on a nano-structured surface.
poster
Measuring Protoplanetary Disk Alignment in Young Binary Systems Eric Jensen1, Rachel Akeson2, and Aaron Hersch1, 1Swarthmore College, 2Caltech/IPAC, NExSci Acknowledgments We thank Geoff Blake for permission to use the DoAr 24E data in advance of publication. This work makes use of the following ALMA data: ADS/JAO.ALMA#2015.1.00637.S and ADS/JAO.ALMA#2015.1.00168.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Figure 4: First-moment maps of the primary (GK) and secondary (GI) disks in the binary system GK/GI Tau. Although this system has the largest projected binary separation in our sample at 13.2'' (1700 AU), its disks show similar position angles. Figure 5: Work in progress: we are exploring the fitting of the first-moment maps, along with constraints on the stellar mass, to determine disk inclinations in order to find the full three-dimensional orientation difference in these systems. Above, left to right: data, model, and residuals for a fit to GK Tau using eddy (Teague 2019). Observations and Methods We used ALMA to observe continuum and CO(3-2) emission from 14 young binary systems in Taurus-Auriga, Ophiuchus, and Lupus. The kinematics of the CO emission allows us to deduce the spatial orientation of the disks, even for disks that are near our resolution limit. Six systems had strong enough emission for both components to be detected in CO. We used CASA to image the sources and create velocity maps. We also included in our analysis the previously measured position angles of the disks in V2434 Ori (Williams et al. 2014) and HK Tau (Jensen & Akeson 2014).
Most of the disks are unresolved in the continuum, but the channel-to-channel movement of the CO emission allows us to determine the disk position angle. We measured the CO emission centroid in each channel and fit a straight line to the points to find the disk PA. References Bohm, K. H. & Solf, J. A sub-arcsecond-scale spectroscopic study of the complex mass outflows in the vicinity of T Tauri. Astrophys. J. 430, 277-290 (1994). Foucart, F., & Lai, D., Evolution of linear warps in accretion discs and applications to protoplanetary discs in binaries. MNRAS 445, 1731-1744 (2014). Jensen, E. L. N. & Akeson, R. Misaligned Protoplanetary Disks in a Young Binary System. Nature, 511, 576 (2014). Jensen, E. L. N., Mathieu, R. D., Donar, A. X. & Dullighan, A. Testing Protoplanetary Disk Alignment in Young Binaries. Astrophys. J. 600, 789-803 (2004). Kurtovic, N. T. et al. DSHARP IV. Characterizing Substructures and Interactions in Disks around Multiple Star Systems Astrophys. J. Lett. 869, L44 (2018). Lubow, S. & Ogilvie, G. On the Tilting of Protostellar Disks by Resonant Tidal Effects, Astrophys. J 538, 326-340 (2000). Monin, J. L., Menard, F. & Duchêne, G. Using polarimetry to check rotation alignment in PMS binary stars. Principles of the method and first results. Astron. Astrophys. 339, 113-122 (1998). Ratzka, T. et al. Spatially resolved mid-infrared observations of the triple system T Tauri. Astron. Astrophys. 502, 623-646 (2009). Roccatagliata, V. et al. Multi-wavelength observations of the young binary system Haro 6-10: The case of misaligned discs. Astron. Astrophys. 534, A33 (2011). Salyk, C. et al. ALMA Observations of the T Tauri Binary AS 205: Evidence for Molecular Winds and/or Binary Interactions. Astrophys. J. 792:68 (2014). Skemer, A. J. et al. Evidence for Misaligned Disks in the T Tauri Triple System: 10 μm Superresolution with MMTAO and Markov Chains. Astrophys. J. 676, 1082-1087 (2008). Stapelfeldt, K. R. et al. 
An Edge-on Circumstellar Disk in the Young Binary System HK Tau
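The centroid-based position-angle measurement described in the methods can be sketched as follows. This is an illustrative numpy version that fits the principal axis of the per-channel CO centroids (a total-least-squares line), not the authors' exact straight-line fitting code; offsets and the East-of-North PA convention are assumptions of the sketch.

```python
import numpy as np

def disk_position_angle(x_off, y_off):
    """
    Estimate a disk position angle from per-channel CO emission centroids.
    x_off: centroid offsets toward East (arcsec); y_off: toward North.
    Fits the principal axis of the centroid scatter and returns the PA
    in degrees East of North, folded into [0, 180).
    """
    x = np.asarray(x_off, float) - np.mean(x_off)
    y = np.asarray(y_off, float) - np.mean(y_off)
    cov = np.cov(np.vstack([x, y]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]      # direction of largest scatter
    return np.degrees(np.arctan2(vx, vy)) % 180.0  # angle from North toward East
```

Because the red- and blue-shifted channels straddle the disk centre along the major axis, the centroids trace a line whose orientation gives the PA even when the continuum is unresolved.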
poster
Towards Automated Analysis of Connectomes: The Configurable Pipeline for the Analysis of Connectomes (C-PAC) Cameron Craddock1, 2*, Sharad Sikka2, 3, Brian Cheung1, Ranjeet Khanuja1, Satrajit S. Ghosh4, Chaogan Yan2, Qingyang Li1, Daniel Lurie1, Joshua Vogelstein1, 5, Randal Burns6, Stanley Colcombe2, Maarten Mennes7, Clare Kelly3, Adriana Di Martino3, Francisco X. Castellanos3 and Michael Milham1, 2* 1 Center for the Developing Brain, Child Mind Institute, USA 2 Nathan Kline Institute for Psychiatric Research, USA 3 Phyllis Green and Randolph Cowen Institute for Pediatric Neuroscience, New York University Child Study Center, USA 4 McGovern Institute for Brain Research, Massachusetts Institute of Technology, USA 5 Department of Statistical Science, Duke University, USA 6 Department of Computer Science, Johns Hopkins University, USA 7 Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Netherlands Introduction To successfully examine the brain's functional architecture (connectome) and its behavioral associations, researchers need tools that facilitate reliable, replicable connectivity analyses. Here we introduce C-PAC, a configurable, open-source, automated processing pipeline for functional MRI data that builds upon a robust set of existing software packages. Users can rapidly orchestrate automated large-scale pre-processing and data analyses, and can easily explore the impact of processing decisions on their findings by specifying multiple analysis pipelines to be run simultaneously. C-PAC can reliably process hundreds or thousands of subjects through a variety of preprocessing strategies in a single run.
Thus C-PAC has been optimized for use on large data sets such as those made public by the International Neuroimaging Data-sharing Initiative (INDI, http://fcon_1000.projects.nitrc.org/). Methods C-PAC has been implemented in Python using the Nipype pipelining library. Nipype provides C-PAC with mechanisms to automatically detect and exploit parallelism present in a pipeline, iterate over several parameter settings, and restart a pipeline without having to recompute previously completed processing steps. C-PAC extends Nipype functionality by providing workflows specific to connectivity analyses, functional connectivity derivatives and analyses not present in other neuroimaging packages, and a simplified interface for specifying and running pipelines. The C-PAC workflows are built from AFNI and FSL tools, as well as algorithms coded in Python using SciPy, NumPy and scikit-learn. The C-PAC processing and analysis pipeline (fig. 1) is configured through a simple configuration file, which permits the inclusion and exclusion of different steps, and the setting of a variety of parameters. A variety of input data organization schemes and subject-specific acquisition parameters (slice acquisition, slice timing information, time point censoring) are easily configured through a subject configuration file. Available preprocessing options include: motion correction, anatomical/functional coregistration, spatial normalization, spatial and temporal filtering, tissue segmentation, slice-timing correction, several variations of nuisance signal removal, and volume censoring (motion "scrubbing"). C-PAC also includes a number of advanced analysis methods that facilitate detailed exploration of connectivity patterns, network structure, and brain-behavior relationships.
Individual-level measures include: Seed-based Correlation Analysis, Amplitude of Low Frequency Fluctuations (ALFF) and Fractional ALFF, Regional Homogeneity, Voxel-Mirrored Homotopic Connectivity, and Network Centrality (Degree and Eigenvector). At the group level, C-PAC features Connectome-Wide Association Studies, Bootstrap Analysis of Stable Clusters, and
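The "multiple analysis pipelines in a single run" idea can be illustrated with a toy configuration expander: any option given as a list of alternatives forks the pipeline, one concrete configuration per combination. The option names below are hypothetical, not C-PAC's real configuration schema.

```python
from itertools import product

def expand_pipelines(config):
    """
    Expand a configuration whose options may hold a list of alternative
    settings into one concrete configuration per combination, mimicking
    the 'run several preprocessing strategies at once' idea.
    """
    keys = sorted(config)
    alternatives = [config[k] if isinstance(config[k], list) else [config[k]]
                    for k in keys]
    return [dict(zip(keys, combo)) for combo in product(*alternatives)]

# Hypothetical options for illustration only:
cfg = {"smoothing_fwhm_mm": [4, 6], "scrubbing": [True, False], "template": "MNI152"}
variants = expand_pipelines(cfg)   # 2 x 2 = 4 concrete pipelines
```

In a real pipeline engine each variant would then be submitted as an independent workflow, with shared upstream steps cached rather than recomputed.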
poster
Library of semi-empirical stellar spectra with varying abundance element ratios Iveth A. Gaspar-Gorostieta1,2, Alexandre Vazdekis1,2, Carlos Allende Prieto1,2, Ariane Lançon3, Adam T. Knowles1,2 1 Instituto de Astrofísica de Canarias, 2 Universidad de La Laguna, 3 Observatoire Astronomique de Strasbourg. Contact: igaspar@iac.es SEA 2022, Tenerife, Islas Canarias, España. Why do we want a semi-empirical stellar library? The data Stellar libraries we are working on: X-shooter Spectral Library (XSL): Gonneau et al. (2020), Lançon et al. (2021); MILES: Sánchez-Blázquez et al. (2006). To build up the semi-empirical library, these choices are made: Stellar mixture: Caffau (C, N and O) and Lodders (all other elements). Stellar atmosphere models: MARCS (Gustafsson 2008), Kurucz (Meszaros et al. 2012). The adopted element partition shows a significant similarity with BaSTI stellar evolution models and isochrones. Conclusions New and old molecular line lists were tested and contrasted with new and old versions of synple (a spectral synthesis code: a Python wrapper to the Fortran code Synspec), so the best fit of each spectrum can be made. The new line lists and the new version of the code have been used to make the spectra. The parameters to compute the synthetic spectrum are adopted from Arentsen et al. (2019) for the XSL and García-Pérez et al. (2021) for MILES. A number of small but key model grids with varying stellar parameters were computed: grids with different values of the parameters (effective temperature, gravity, metallicity and the different chemical abundance ratios) were made to test the interpolated spectra produced using the code FERRE, a fitting code (Allende Prieto et al. 2006).
We successfully tested our models with 8 solar-like stellar spectra from the XSL, and the grid parameters for this subsample vary as follows: effective temperature (5250<Teff<6000), gravity (3.5<logg<4.5), metallicity (-2<[M/H]<2), α abundance (-0.25<[α/H]<0.25). Examples: semi-empirical spectra in the visual and NIR. Differential spectral responses from varying the abundance ratios in the theoretical stars will be used to correct the empirical spectra and obtain a set of semi-empirical XSL and MILES stellar libraries with varying abundances. These new libraries will be very useful to feed stellar population synthesis models predicting galaxy spectra (Vazdekis et al. 2015, 2016, Verro et al. 2022). References Allende Prieto et al., 2018, A&A, 618, A25; Knowles et al., 2021, MNRAS, 504, 2286; Gonneau et al., 2020, A&A, 634, A133; Lançon et al., 2021, A&A, 649, A97; Sánchez-Blázquez et al., 2006, MNRAS, 371, 703; Lodders et al., 2003, AJ, 591, 1220; Caffau et al., 2008, Nature, 477, 67; Gustafsson et al., 2008, A&A, 951-970; Meszaros et al., 2012, AJ, 144, 120; Vazdekis et al., 2015, MNRAS, 449, 1177; Vazdekis et al., 2016, MNRAS, 463, 3409; Verro et al., 2022, A&A, 661, A50; Arentsen et al., 2019, A&A, 627, A138; García-Pérez et al., 2021, MNRAS, 505, 4496-4514; Allende Prieto et al., 2006, ApJ, 636, 804. External galaxies show differing abundance element ratios with respect to the solar neighbourhood. Such patterns are intimately linked to the star formation history experienced by a galaxy, as dying stars release their freshly-forged elements on characteristic time scales that depend on their masses. These scales reflect on galaxy spectra. Such responses will be applied to the real stars to create semi-empirical spectral libraries, which will feed our population synthesis models to compute stellar population spectra with the corresponding abundance ratios.
Process to obtain the differential correction We aim to build up a set of semi-empirical stellar libraries from the Near-UV to the Near-IR, with varying abundance ratios of α-elements, among others. This requires computing extensive libraries of theoretical stars that will be used to derive differential spectral responses to the varying abundances (Allende Prieto et al. 2018).
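The grid-interpolation step tested with FERRE can be illustrated with a toy one-dimensional stand-in (FERRE itself interpolates in several stellar parameters at once). The node values echo the Teff range quoted above; the "spectra" here are dummies, and the function is an illustration, not FERRE's algorithm.

```python
import numpy as np

def interpolate_spectrum(grid_params, grid_spectra, target):
    """
    Linear interpolation of model spectra along one sorted parameter axis
    (e.g. Teff). grid_params: 1-D array of node values; grid_spectra:
    array of shape (n_nodes, n_wavelengths). Returns the interpolated
    spectrum at `target` (linearly extrapolated outside the grid).
    """
    grid_params = np.asarray(grid_params, float)
    grid_spectra = np.asarray(grid_spectra, float)
    i = np.clip(np.searchsorted(grid_params, target) - 1, 0, len(grid_params) - 2)
    w = (target - grid_params[i]) / (grid_params[i + 1] - grid_params[i])
    return (1.0 - w) * grid_spectra[i] + w * grid_spectra[i + 1]

teff_nodes = np.array([5250.0, 5500.0, 5750.0, 6000.0])   # range quoted above
spectra = np.stack([t * np.ones(3) for t in teff_nodes])  # dummy 3-pixel spectra
interp = interpolate_spectrum(teff_nodes, spectra, 5625.0)
```

Testing such interpolated spectra against spectra computed directly at off-node parameters is exactly the kind of check the small "key" grids above are designed for.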
poster
HYBRAIN Shaping the future of Artificial Intelligence Key Exploitable Results Main technology domains: Edge Computing, Photonics, Artificial Intelligence. Cross-domain impact. HYBRAIN aims to develop a hybrid electronic-photonic, super-fast and energy-efficient computing system inspired by the human brain to enable innovative Edge Computing and Artificial Intelligence solutions. Techniques on how to define the interfaces to AIMC cores to facilitate ultra-low latency inference; Brains-Py platform. Health: improved disease detection and more accurate and personalised medicine. Cybersecurity: improved prevention of novel cybersecurity threats, increased cyber threat detection rates. Agriculture: improved soil and crop monitoring and automated plant disease detection. Climate change: reduced carbon emissions through value chain insights and optimisation, optimised energy grid and household energy management. Automotive: efficient car assembly robots, more accurate supply chain forecasts and accident prevention. Financial: customised financial advice, improved credit scoring through cash flow predictions.
poster
• IRAF/PHOT, AstroImageJ. Short and Long-term Observations of X-ray Binaries with Neutron Star Components Tuğçe İçli1, Dolunay Koçak1 and Kadri Yakut1 1Ege University, Faculty of Science, Department of Astronomy and Space Science, 35100, İzmir, Turkey. Robotic Telescope T60: Feb. 2015 - Jan. 2021; V, R, and I filters; 5-20-60-120 s exposures; 3-month observation periods; (V-R), (R-I). T100: December 8-9, 2015; Feb 7 and April 3-4, 2016; March 6-20 and April 4, 2018; R filter; 4-month observation period. TUG (2015-2021), long-term. New Observations & Results: New long- and short-term photometric variation results for selected high-mass (HMXB) and low-mass (LMXB) X-ray binaries with neutron star components (PSR J1023+0038, BQ Cam, XTE J1946+274) are presented in this study. We obtained new observations of the systems during 2015-2021 in the V, R, and I filters at the TÜBİTAK National Observatory (TUG) with the 60 cm and 100 cm telescopes. All optical datasets of the systems have been analyzed and combined with X-ray light variations. The analysis yielded new periods for the selected binary systems, and multiple periods for some systems (Icli et al. 2020). Acknowledgements. This study is supported by the Turkish Scientific and Research Council (TÜBİTAK - 117F188, 119F077). We thank TÜBİTAK for partial support in using the T60 telescope with project numbers 15AT60-776 and 18AT160-1298. TI thanks TÜBİTAK-BİDEB for its 2211-C and 2214-A fellowships, the Max Planck Institute for Astrophysics for support during her scientific visit, and S. De Mink. 14th-18th Nov. 2022, Munich, "The Impact of Binaries on Stellar Evolution". References Collins, K. A., Kielkopf, J. F., Stassun, K. G., et al., 2017, Astron. J., 153, 77; Icli, T., Kocak, D., Yakut, K., 2020, CosKa, 50, 2, 499; Bogdanov, S., et al., 2015, ApJ, 806, 148; Reig, P., Fabregat, J., 2015, A&A, 574, A33; Müller, S., et al., 2012, A&A, 546, 125.
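Period searches like those behind the new periods reported above can be illustrated with a classic Lomb-Scargle periodogram, which handles the irregular sampling typical of multi-season photometry. This numpy-only sketch on synthetic data is an illustration of the technique, not the analysis code used in the study.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic (un-normalised) Lomb-Scargle periodogram, numpy only."""
    t = np.asarray(t, float)
    y = np.asarray(y, float) - np.mean(y)
    power = np.empty(len(freqs))
    for k, f in enumerate(np.asarray(freqs, float)):
        w = 2.0 * np.pi * f
        # Phase offset tau makes the sine/cosine terms orthogonal:
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[k] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Synthetic irregularly sampled light curve with a 5-day periodicity:
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 60.0, 300))                  # days
y = 0.1 * np.sin(2 * np.pi * t / 5.0) + 0.01 * rng.normal(size=300)
freqs = np.linspace(0.05, 1.0, 400)                       # cycles/day
best_period = 1.0 / freqs[np.argmax(lomb_scargle(t, y, freqs))]
```

Secondary peaks in the periodogram are one way "multiple periods for some systems" can show up, though aliases from the sampling window must be ruled out first.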
poster
Estimation of the Moisture Content in Solanaceae Seedlings using Hyperspectral Image Sae-rom Jun*1, Chan-seok Ryu1, Seong-heon Kim1, Jeong-gyun Kang1, Ye-seong Kang1, Won-jun Kim1, Tapash Kumar Sarkar1, Si-hyeong Jang1, Dong-hyeon Kang2, Yang-gyu Ku3, Dong-eok Kim4 1Department of Bio-Systems Engineering, College of Agriculture and Life Science, Gyeongsang National University (Institute of Agriculture and Life Science), Jinju 52828, Republic of Korea; 2Department of Agricultural Engineering, National Academy of Agricultural Science, RDA, Jeonju 54875, Republic of Korea; 3Department of Horticulture Industry, College of Life Science and Resource, Wonkwang University, Iksan 54538, Republic of Korea; 4Department of General Education, Korea National College of Agriculture and Fisheries, Jeonju 54874, Republic of Korea. Introduction This research was performed to develop a moisture prediction model for solanaceae seedlings such as chili pepper and tomato based on hyperspectral imagery. Materials and Methods The reflectance of chili pepper (n=45) and tomato (n=45) seedlings was calculated using the hyperspectral imagery, and the moisture content of all seedlings was measured. The models for estimating moisture content were then developed using PLS regression analysis with two factors. (Equipment, Sample, Schedule, Image Processing, Statistical Analysis.) Results and Discussion The chili model showed an R² of 0.68, an RMSE of 1.43% and an RE of 1.61%, which indicate accuracy and precision respectively. The tomato model showed an R² of 0.74, an RMSE of 2.77% and an RE of 3.09%. Combining all samples (n=90), the solanaceae model showed an R² of 0.67, an RMSE of 2.53% and an RE of 1.61%. Finally, the full cross-validation showed an R² of 0.59, an RMSE of 2.83% and an RE of 3.17%.
Moisture Content, Reflectance Ratio, PLS Regression Model. For the normalization, hyperspectral images of chili pepper and tomato seedlings were calibrated by the values of a reference board in the same image, and camera noise was removed using a dark-current image. For the segmentation, the images were divided into crop and background areas by GNDVI (green normalized difference vegetation index) - NDVI (normalized difference vegetation index). For the extraction, the images were processed with an image processing program (ENVI 4.7, Exelis Visual Information, USA).

Variety | Chili Pepper (n=45) | Tomato (n=45) | Solanaceae (n=90)
Moisture Content [%] (Mean±S.D.) | 91.6 ± 2.83 | 89.6 ± 5.48 | 89.2 ± 4.27
LV | 5 | 5 | 5
Cal. R² | 0.68 | 0.74 | 0.67
Cal. RMSE [%] | 1.43 | 2.77 | 2.53
Cal. RE [%] | 1.61 | 3.09 | 2.84
Val. R² | 0.45 | 0.60 | 0.59
Val. RMSE [%] | 1.88 | 3.44 | 2.83
Val. RE [%] | 2.12 | 3.84 | 3.17

Sampled moisture content [%] (Mean±S.D.): Tomato (n=45) 89.9±5.60; Chili Pepper (n=45) 88.8±2.56. Acknowledgement This work was carried out with the support of the "Cooperative Research Program for Agriculture Science & Technology Development (Project title: Development of Heating and Cooling Energy Reduction Technology using Seedling Quality Monitoring for Nursery, Project No. PJ0116932017)", Rural Development Administration, Republic of Korea. Image Processing, Statistical Analysis, Hyperspectral Camera, Estimation of Moisture Content. [Figure: measured vs. predicted moisture content [%], calibration and validation.] Fig. 1. Estimating Mechanism. Hyperspectral Camera: device name VNIR spectral camera PS; manufacturer Specim Spectral Imaging Ltd, Finland; photographed wavelength range 400-1000 nm (resolving power 2 nm). Dry Oven: device name VS-4048D; manufacturer Vision Scientific Co. Ltd., Korea; temperature range ambient +5 °C to 220 °C; capacity 48 L. Software: Spectral DAQ, ENVI 5.2, R project 3.3.3. Fig. 2. Hyperspectral Camera. Fig. 3. Dry Oven. Fig. 4.
Software - Chili Pepper Vienna (53 growth days) - Tomato Noggwang (37 growth days) Fig. 5. Hyperspectral Image of Chili Pepper Fig. 6. Hyperspectral Image of Tomato Fig. 7. Schedule of Scanning and Sampling Reflectance Ratio PLS Regression Analysis Moisture Content R Full-Cross Validat
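The PLS regression modelling described above can be sketched with a minimal PLS1 (NIPALS) implementation on synthetic stand-in data. This illustrates the technique only; the study's actual models used 5 latent variables fitted to measured reflectance spectra, and every value below is synthetic.

```python
import numpy as np

def pls1_fit(X, y, n_components=5):
    """Fit a minimal PLS1 (NIPALS) regression; returns (coef, x_mean, y_mean)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                      # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                         # scores
        tt = t @ t
        p = Xc.T @ t / tt                  # X loadings
        qk = (yc @ t) / tt                 # y loading
        Xc = Xc - np.outer(t, p)           # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.solve(P.T @ W, q)  # B = W (P'W)^-1 q
    return coef, x_mean, y_mean

def pls1_predict(model, X):
    coef, x_mean, y_mean = model
    return (np.asarray(X, float) - x_mean) @ coef + y_mean

# Synthetic stand-in for band reflectances and moisture content [%]:
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))                         # 80 samples, 10 bands
y = 90.0 + X[:, 0] - 0.5 * X[:, 3] + 0.02 * rng.normal(size=80)
model = pls1_fit(X, y, n_components=5)
```

PLS is preferred over ordinary least squares here because hyperspectral bands are strongly collinear; the latent variables (LV = 5 in the table above) compress them into a few predictive directions.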
poster
Seasonal Variation of Solar Neutrino Flux at Super-Kamiokande Susana Molina Sedgwick 𝜈PHYS 2019 - POSTER SESSION. Super-KamiokaNDE ✦ Neutrino observatory and water Cherenkov detector situated in the Kamioka area of Hida, Japan. ✦ International collaboration of ~150 people from 40 institutions in 10 countries. Specifications: tank made of stainless steel; ~50,000 tons of ultra-pure water; ~13,000 photo-multiplier tubes. Seasonal Variation: The flux 𝝋 of solar neutrinos detectable on Earth would be expected to fall proportionally to 1/r², due to spherical symmetry. Since the distance r from the Earth to the Sun changes as a yearly cycle due to the orbital trajectory, it follows that there will be a seasonal variation in the flux. Previous Method: analysed by date. ✴ SK-I: 1996-2001 ✴ SK-II: 2002-2005 ✴ SK-III: 2006-2008 ✴ SK-IV: 2008-2018. Variation consistent in some phases, out of phase in others. [Figure: Data/MC (unoscillated) vs. month for SK-I through SK-IV.] New Method (Oct 2008 - May 2018): uses the astronomical distribution rather than analysing by date (72 azimuthal bins). E range = 4.5-20 MeV; zenith angle = all; 𝝌² value = 64.03/72; fitting amplitude, and phase + amplitude. [Figure: Data/SSM vs. rebinned phi.] Conclusions ✦ The two methods now verify each other and function as a cross-check. ✦ Possible applications to future direct dark matter experiments. ✦ SK-III reanalysis in progress. REFERENCES http://www-sk.icrr.u-tokyo.ac.jp http://t2k-experiment.org http://www.hyper-k.org
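The expected size of the 1/r² seasonal modulation can be estimated from Earth's orbital eccentricity. The sketch below uses a first-order small-eccentricity orbit, r/a ≈ 1 − e·cos(M), with an assumed perihelion date of about January 3rd; it is an order-of-magnitude illustration, not the detector analysis.

```python
import math

ECC = 0.0167           # Earth's orbital eccentricity
PERIHELION_DOY = 3     # perihelion around January 3rd (assumed)

def relative_flux(day_of_year):
    """
    Solar-neutrino flux relative to its value at r = a, assuming
    flux ~ 1/r^2 and the first-order orbit r/a = 1 - e*cos(M),
    where M is the mean anomaly measured from perihelion.
    """
    M = 2.0 * math.pi * (day_of_year - PERIHELION_DOY) / 365.25
    r_over_a = 1.0 - ECC * math.cos(M)
    return 1.0 / r_over_a ** 2

# Peak-to-trough variation between early January and early July, ~7%:
variation = relative_flux(3) / relative_flux(185) - 1.0
```

This few-percent annual modulation is exactly the signal both the date-binned and the azimuthally-binned (72-bin) analyses above are fitting for.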
poster
Ecuadorian Glass Frogs (Centrolenidae): Current state of knowledge, new research trends and conservation. Diego F. Cisneros-Heredia & Roy W. McDiarmid. Smithsonian National Museum of Natural History, Division of Amphibians and Reptiles. (DFCH) Laboratory of Amphibians & Reptiles, Museo de Ciencias de la Universidad San Francisco de Quito, Ave. Interoceanica. (RWM) Patuxent Wildlife Research Center, Smithsonian Institution, PO Box 37012, National Museum of Natural History. (DFCH) E-mail: diegofrancisco.cisneros@yahoo.com, herpetologia@mail.usfq.edu.ec. Ecuador has the biggest number of amphibian species per unit of area in the world (425 species in 276,840 km²), which is almost three times the density of species of Colombia and 21 times that of Brazil (Coloma & Quiguango 2000-2002). In the last decade several sources of data suggest that a significant proportion of this biodiversity is endangered. Conservative estimates indicate that at least 26 species of Ecuadorian amphibians have declined or gone extinct (Ron et al. 2000, Coloma pers. comm. 2003). The reasons for this crisis are not clear but have been related to habitat destruction, climate change, and/or diseases such as chytridiomycosis (Young et al. 2001). NEW RESEARCH TRENDS. Centrolene prosoblepon (Boettger 1892). Centrolene geckoideum Jiménez de la Espada 1872. Glass Frogs belong to the Neotropical family Centrolenidae, which currently comprises three genera: Centrolene, Cochranella, and Hyalinobatrachium (Ruiz-Carranza & Lynch 1991). Most of them are small frogs (20-30 mm) (Lynch & Duellman 1973) and differ from other Anura by the fusion of the astragalus and calcaneum, T-shaped terminal phalanges, and a process on the third metacarpal. The first Ecuadorian centrolenid frog described was Centrolene geckoideum (Jiménez de la Espada 1872).
In 1973, Lynch & Duellman reviewed the nineteen Ecuadorian species then known from the country. Currently, there are 31 species of glass frogs reported from Ecuador, nine (9) of which are endemic. Two characters of the family Centrolenidae: a = process on the third metacarpal; b = T-shaped terminal phalanges. Some of the new species from the western lowlands of Ecuador: (a) a new Hyalinobatrachium previously confused with H. fleischmanni; (b) a new Centrolene known just from the type locality, which was destroyed by agricultural expansion and the transformation of the forest remnants into oil palm and banana plantations; the species is critically endangered; (c) a new Hyalinobatrachium previously confused with H. valerioi; (d) a new species of Centrolene very similar to another new species but differing by the yellow iris. Characters that have been used to separate glass frog species: dorsal coloration patterns; color of visceral membranes, bones and iris; skin texture; snout profiles; webbing between fingers and toes; nostrils and loreal region; tympanum; post-cephalic constriction; humeral and prepollical spines; anal ornamentation; nuptial excrescences; number, color and form of eggs; calls. However, some are useless because they are subject to preservation artifacts, intraspecific variation or sexual dimorphism. The most useful characters are: (a) Dorsal coloration patterns, with caution (some key patterns fade in alcohol, and sometimes what seems to be an ocellus is just a random aggregation of dots (false ocelli)). (b) Humeral and prepollical spines; the best way to see their form is to dissect or take an x-ray radiograph. (c) Nuptial excrescences, useful to distinguish between individual disperse glands, glandular aggregations, and granular pads. (d) Profiles of the snout, on well-preserved specimens. (e) Anal ornamentation. (f) Color of visceral membranes, bones and iris, best taken just after specimen preservation. (g) Calls. (h) Behavior.
BEHAVIOR. FALSE OCELLI. HUMERAL SPINE. BULLA, a much more widespread feature: the bulla was described by Myers
poster
“97.4% of the world's top one million websites don't offer full accessibility” - Cudd. Scope & Maturity. Nature of Violation & Disability. Nations, including Qatar, have incorporated web accessibility laws, and many governments worldwide have passed or revised legislation protecting disabled people. Mada is an assistive technology center established in Qatar in 2010. It aims at promoting digital inclusion and building a technology-based community that meets the needs of PWD, improving ICT accessibility for PWD in Qatar and worldwide. The Qatar National E-Accessibility Policy ensures equitable access to technology for PWD. Questions: What is the current state of accessibility regulations? Are these regulations widely adopted globally in response to the CRPD, particularly for the digital context? How effective are such regulations, in terms of adoption, impact and enforcement, in ensuring equity for people with disabilities? Significance: Influence of the CRPD. The UN Convention on the Rights of People with Disabilities (CRPD) underpins web accessibility laws. The research aims to understand the scope and maturity of accessibility regulations worldwide and their enforcement. Relevance to Qatar. CRPD Effectiveness. Awareness & Adoption. State of Regulations. Effectiveness of Regulations. Created six regions (European Union, Americas, Australia, Asia-Pacific, Africa) and compiled accessibility regulations for each region. From a sample of 120 accessibility lawsuits, found ~40 organizations that were Fortune 500, and studied those Fortune 500 companies. Annual reports and accessibility statements show inconsistent interest around the topic of accessibility. Used the Twitter advanced search tool. Conducted a web audit using WCAG criteria (Level A and Level AA). Scope. Maturity. Nature of Disability. Nature of Violation. Results. Web Accessibility Regulations: How are they effective for people with disabilities?
Methodology 2003 European Year of People with Disabilities 1990 ADA accessibility regulations missed the digital context 1980s Australian activism for disability rights continues Asia-Pacific: development of regulations is still in progress Africa: accessibility regulations established but not effective The ADA was the most frequently violated regulation across lawsuits Inaccessible websites have affected visually impaired users Hearing-impaired users faced trouble with some educational and healthcare websites Annual report analysis shows inconsistent interest in the topic of accessibility Accessibility statements contained accessibility-relevant wording but were not compliant with the WCAG standards Work is still needed towards effective accessibility regulations and compliance by organizations Accessibility regulations are known, but organizations aren't following them to improve user experience Conclusion Sara Al-Emadi | Advisor: Divakaran Liginlal Information Systems Program
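The WCAG web audit mentioned above lends itself to partial automation. As a minimal, illustrative sketch (not the study's actual tooling), a single Level A check, flagging images without a text alternative (WCAG Success Criterion 1.1.1), can be written with Python's standard-library HTML parser:

```python
from html.parser import HTMLParser

class AltAuditor(HTMLParser):
    """Count <img> elements that lack an alt attribute (WCAG 1.1.1,
    a Level A success criterion)."""

    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

# Hypothetical page fragment: one image has alt text, one does not.
auditor = AltAuditor()
auditor.feed('<img src="a.png"><img src="b.png" alt="logo">')
```

A full audit would combine many such checks per success criterion; dedicated engines exist for this, but the per-criterion counting idea is the same.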
poster
5GMED Connected and Automated Mobility and Future Railway Mobile Communication Use Cases in the Mediterranean Cross-Border Corridor Jad Nasreddine∗, Ricard Vilalta‡, Giovanni Rigazzi§, Philippe Veyssiere†, Raúl González§ and Francisco Vázquez-Gallego∗ ∗i2CAT Foundation, Barcelona, Spain †IRT Saint Exupéry, Toulouse, France ‡Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA), Castelldefels, Spain §Cellnex Telecom, Barcelona, Spain Abstract—The 5GMED project will demonstrate advanced Cooperative, Connected and Automated Mobility (CCAM) and Future Railway Mobile Communications System (FRMCS) use cases along the cross-border corridor between Figueres (Spain) and Perpignan (France), enabled by a multi-stakeholder compute and network infrastructure that is based on 5G, offers support for AI functions, and is deployed by MNOs, neutral hosts, and road and rail operators. Several large-scale trials will be conducted to evaluate the capabilities of 5G to meet the requirements of the use cases in the cross-border mobility scenario. In this paper, we present the 5GMED project, its use cases and the proposed 5G network architecture. Keywords—5G, use cases, CCAM, FRMCS, cross-border, trials. I. INTRODUCTION A number of critical Cooperative, Connected and Automated Mobility (CCAM) use cases imply stringent real-time connectivity requirements, leading to a scenario where best-effort connectivity provisioning does not suffice. The majority of current 5G deployments are not designed to meet those requirements, which include support for ultra-reliable low-latency communication (URLLC)-type services as well as availability in sparsely populated areas for delivering high data rates. To achieve these objectives, dense cell deployments, including the necessary optical fiber roll-out, as well as Multi-access Edge Computing (MEC), will be important [1].
In cross-border corridors, Mobile Network Operators (MNOs), neutral hosts, and road/rail operators usually own separate network and compute infrastructures, each with its own service orchestration capabilities tailored to the business of the specific stakeholder [2]. As 5G is also expected to be instrumental for railway communications, a main objective is to adopt novel network architectures that carry all rail communication services over a single network, instead of the expensive mixture of access networks used today; this leads to the design principles of the Future Railway Mobile Communication System (FRMCS) [3]. Motivated by this context, the 5GMED project aims at the deployment of a multi-stakeholder network and compute infrastructure based on 5G to demonstrate advanced CCAM and FRMCS use cases along the Mediterranean cross-border corridor between Figueres (Spain) and Perpignan (France). In 5GMED, several novel technologies are under investigation to meet the requirements of the use cases. These technologies include enhancements to accelerate 5G roaming procedures across MNOs and neutral hosts, multi-connectivity solutions supporting vehicles and high-speed trains, novel broadband access network architectures for railways, cross-operator service orchestration, and the ability to execute AI-enabled functions at the edge of the network. This paper introduces the 5GMED project, the proposed use cases, and the main features of the network architecture to be deployed along the Mediterranean cross-border corridor. II. 5GMED PROJECT OVERVIEW The 5GMED project is an innovation action funded by the European Union’s Horizon 2020 research and innovation programme under the 5G Public Private Partnership (5G-PPP). 5GMED will deploy a 5G Stand-Alone (SA) network infrastructure along 65 km of the European highway E-15 and the high-speed rail track of the Mediterranean cross-border corridor.
It will conduct large-scale trials to evaluate the capabilities of 5G to meet the requirements of CCAM and FRMCS use cases under mobility conditions across the border. Before the executio
poster
Automatic Lidar and Ceilometer Framework (ALCF) ALCs Lidar data processing Model evaluation using ALCs Available at alcf-lidar.github.io Noise removal Cloud detection Mie scattering Absolute calibration Peter Kuma¹, Adrian McDonald¹, Olaf Morgenstern² Acknowledgments Lufft CHM 15k Sigma Space MiniMPL Vaisala CL51 and CL31 ¹) University of Canterbury, Christchurch, New Zealand, ²) NIWA, Wellington, New Zealand ALCs are ground-based lidars that operate by sending pulses of laser radiation in the near-infrared or visible spectrum and measuring the received radiation. ALCs can measure cloud base, cloud layers, cloud phase, backscatter, boundary layer height and aerosol concentration. Closed-firmware processing of the raw signal hinders comparison between instruments, GCMs and NWP models. What? ALCF is an open-source lidar processing tool and lidar simulator written in Python and Fortran. Why? A large number of automatic lidars and ceilometers (ALCs) are deployed worldwide, but there is a lack of open tools for processing lidar data and comparing them with general circulation models (GCMs) and numerical weather prediction (NWP) models. How? ALCF processes data from lidars and runs a lidar simulator on model atmospheric fields to enable 1:1 comparison between observations and the model. The ALCF lidar simulator is based on a lidar simulator of the COSP project provided under a BSD license. Lufft CHM 15k is a near-infrared (1064 nm) ceilometer with a range of 15 km. Vaisala CL51 (CL31) is a near-infrared (910 nm) ceilometer with a range of 15 km (7.6 km). MiniMPL is a visible-spectrum (532 nm) dual-polarisation lidar with a range of 30 km. Model cloud fields can be compared to clouds measured by ALCs. Due to strong laser signal attenuation in thick clouds, a lidar simulator has to be used to achieve a 1:1 comparison with model fields. The COSP simulator suite has been used for the past decade with the space lidar CALIPSO, but no equivalent has been available for ground-based lidars.
ALCF extends and integrates the lidar simulator in COSP with additional processing to allow for easy 1:1 comparison with off-the-shelf ALCs. The simulator transforms model cloud liquid (clw) and ice (cli) fields into backscatter profiles. The simulator works "offline" on model output in NetCDF in a standard format such as the CMIP5 format. The same post-processing steps are applied to measured and simulated backscatter. Lidar backscatter is affected by noise which scales with the square of the range. Noise can obscure thin cloud above approx. 4 km in some instruments. ALCF removes noise by estimating the mean and standard deviation of the noise distribution. Cloud detection in ALCF is performed by applying an n-sigma (3-sigma by default) threshold to the backscatter. This way false detection is avoided and comparison with models can be unbiased. Models do not reproduce vertical gradients of backscatter well due to low resolution. An absolute-threshold-based comparison is most likely to work comparably across models and observations. Absolute calibration can be performed using the vertically integrated backscatter method in thick stratocumulus clouds, where the integrated backscatter equals a known value set by the lidar ratio. ALCF can plot this value along with backscatter profiles, which allows the instrument's absolute calibration factor to be determined. We have used ALCF for Southern Ocean cloud evaluation in the HadGEM3 GCM and the MERRA-2 reanalysis (Kuma et al., 2019). We compared the models with ship observations collected over 4 years. ALCF was used to compare vertical cloud occurrence and total cloud cover between the Lufft CHM 15k and Vaisala CL51 observations and the models. Thanks to this approach we found that the models underestimate low cloud below 500 m and fog, and that total cloud cover is underestimated by up to 18%. The laser radiation interacts with cloud droplets and ice crystals via Mie scattering, which depends on the laser wavelength and the particle size distribution.
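The noise-removal and n-sigma cloud-detection steps described above can be sketched as follows. This is an illustrative simplification: the assumption that the topmost range gates are cloud-free is ours, and ALCF's actual noise estimation and calibration handling are more involved.

```python
import numpy as np

def cloud_mask(backscatter, noise_gates=100, n_sigma=3.0):
    """Flag range gates whose backscatter exceeds the estimated noise mean
    by n_sigma standard deviations (3-sigma by default, as in ALCF).

    Assumption for this sketch: the topmost `noise_gates` gates of the
    profile are cloud-free, so they sample instrument noise only.
    """
    noise = backscatter[-noise_gates:]
    threshold = noise.mean() + n_sigma * noise.std()
    return backscatter > threshold

# Synthetic range-corrected profile: Gaussian noise plus one strong
# cloud return injected at gate 50.
rng = np.random.default_rng(0)
profile = rng.normal(0.0, 1.0, 300)
profile[50] += 100.0
mask = cloud_mask(profile)  # boolean mask, True = cloud detected
```

The n-sigma threshold keeps the false-detection rate low without requiring per-instrument tuning, which is what makes the comparison between instruments and models unbiased.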
We have modified the lidar simulator in COSP to account for different ALC l
poster
Widespread synuclein pathology in hyposmics precedes dopamine transporter deficit in PARS PARS Introduction Conclusions Support for this study is provided by the Department of Defense award number W81XWH-06-067, the Helen Graham Foundation and the Michael J Fox Foundation for Parkinson Research PARS Funding α-syn SAA is positive in 47% of PARS hyposmics K Marek, D Russell, L Concha, SH Choi, D Jennings, M Brumm, C Coffey, E Brown, M Stern, J Seibyl, C Soto, A Siderowf for the PARS Investigators The Parkinson Associated Risk Syndrome Study (PARS) is a longitudinal, observational study designed to investigate a sequential biomarker strategy using hyposmia followed by dopamine transporter (DAT) imaging to identify individuals from community populations at high risk of developing PD and related disorders. Data from PARS have shown that individuals with hyposmia are enriched for DAT deficit, and in turn those individuals with hyposmia and DAT deficit are at high risk of developing a diagnosis of clinical PD. We now report α-syn SAA data (in collaboration with Amprion) from the PARS study on the frequency of positive results with and without concurrent hyposmia. We further examine the temporal relationship between α-syn SAA status, DAT deficit and the onset of clinical PD symptoms. We propose that a biomarker signature for PD can be detected that may enable screening and therapeutic interventions earlier in the pathological process, at a point prior to the onset of classical motor or cognitive symptoms. Clinical PD PARS Cohort Ascertainment PART 1 - Community ascertainment – remote data collection PART 2 - In clinic longitudinal clinical and biomarker data collection N = 10,139 Returned screening and background questionnaire N = 4330 Normosmic CSF was acquired in 100 PARS participants, 27 normosmic and 73 hyposmic. Participants were followed for up to 10 years. Investigators were unaware of UPSIT or imaging data.
In this analysis the PARS study visit at which CSF was obtained has been designated as a reference visit, time 0. N = 4999 Completed UPSIT N = 669 Hyposmic Hyposmia <15% by UPSIT N = 100 Normosmic N = 203 Hyposmic N = 27 Normosmic N = 73 Hyposmic CSF acquired during follow-up Comparison of PARS CSF cohort clinical and biomarker characteristics by olfactory and SAA status SAA Status and DAT binding in PARS Hyposmics vs Normosmics % age-expected lowest putamen SBR UPSIT Percentile Longitudinal DAT in SAA Hyposmic Positive and Negative Time relative to CSF collection (years) Circles (o) refer to the time of clinical PD diagnosis (if the time of DAT imaging and clinical PD diagnosis overlapped). In the two remaining cases, the last imaging visit occurred 4 years prior to clinical diagnosis. • Most SAA positive PARS participants have normal DAT at baseline. • Most SAA positive PARS participants have normal DAT for at least 4 years. • Approx 30% of SAA+ PARS hyposmics, compared to 8% of SAA− PARS hyposmics, show reduction of DAT at baseline and/or follow-up. • Only PARS hyposmic SAA+ participants developed symptoms of NSD (8 of 12 with DAT deficit). None of the SAA− hyposmics or normosmics developed symptoms. lowest putamen SBR % age-expected lowest putamen SBR Proposed Model for PD Biomarker Signature prior to PD symptoms Clinical PD SAA NEG SAA POS ENRICH These data suggest: • in a community population > 60 years, ~7.5% are α-syn SAA+ • hyposmia enriches for synuclein pathology • there is a temporal progression to DAT deficit and clinical PD in some individuals • These PARS data suggest that olfactory function testing may be a simple and easily accessible tool to identify α-syn SAA positive individuals before DAT imaging abnormalities or symptoms of PD occur.
• Further, the proposed PD biomarker signature provides a new roadmap for therapeutic development to prevent or slow disease onset, and will enable further studies to elucidate molecular disease subsets and ultimately more precise therapeutic targets based on disease biology. ABSTRACT 253
poster
Community Cluster Provenance Research Formats In addition to the Temporary Working Groups (TWGs) initiated by the Community Cluster, the CC Provenance Research offers further opportunities for subject-specific exchange. Cluster meetings take place quarterly; each is devoted to a focal topic and gives members the chance to present their own projects. Workshop formats are also offered regularly, which in the near future will deal, for example, with the Gemeinsame Normdatei (GND, the German integrated authority file) and with the modelling of uncertainty and vagueness in provenance statements. TWG “Prototypische Modellierung eines Provenienzgazetteers zur FAIRen Bereitstellung in einer Wikibase” (prototypical modelling of a provenance gazetteer for FAIR publication in a Wikibase) The goal of this TWG is to prototypically link persons and corporate bodies to the objects attributed to them in a Wikibase instance. In addition, information about persons and corporate bodies that have no external identifier is to be made openly available. The results are to be queried from the Wikibase instance via an automated script and converted into a SKOS representation for integrating the provenance gazetteer into DANTE. TWG “Tools and Services for Research Collections and Provenance Research” This TWG identifies services and tools for research collections and for provenance research. The goal is to create an international online catalogue that reflects domain-specific needs, thereby surveying the status quo of the digital tools currently used in provenance research. A particular focus is placed on the object-specific management of (name) authority data, i.e. data on previous owners and sellers, which may be persons on the one hand and corporate bodies on the other. Specifically, characteristics of archaeological and ethnological objects, as well as their inherent legal aspects, are to be taken into account. Who are we?
Contacts ●Angela Berthold (Münzkabinett, Staatliche Museen zu Berlin - Stiftung Preußischer Kulturbesitz) a.berthold@smb.spk-berlin.de ●Katharina Martin (Münzkabinett, Staatliche Museen zu Berlin - Stiftung Preußischer Kulturbesitz) k.martin@smb.spk-berlin.de Chairs ●Meike Hopp (Technische Universität Berlin) ●Petra Winter (Zentralarchiv, Stiftung Preußischer Kulturbesitz) Digital Provenance Research The cluster is aimed at researchers who conduct digital provenance research in museums, archives, libraries, etc. In this CC, the individual object with its acquisition history forms the starting point of provenance research. The CC works on establishing the use of authority data in the digital publication of previous owners and sellers. These authority data, internationally recognised wherever possible, are intended to help identify persons and corporate bodies involved in acquisition processes across collections and to reconstruct networks. Through the use of shared, standardised authority data for persons and corporate bodies, and the resulting gain in information, object biographies can be traced across the databases of different institutions, starting from the individual object, and objects that once belonged to a collection or a person can be placed back into their original contexts. About the Poster NFDI4Objects Community Meeting 2024, Mainz. Stiftung Preußischer Kulturbesitz (SPK) Technische Universität Berlin Citation Berthold et al. (2024). Community Cluster Provenance Research. Zenodo. DOI: 10.5281/zenodo.13798294. Networking The CC works closely with the participants within NFDI4Objects, with other NFDI consortia, and with the community of provenance researchers.
Such broad networking provides an overview of the status quo of digital provenance research and enables a precise assessment of needs in the area of research data management for provenance research. Participating bodies within NFDI4
poster
Amanda Truitt & Patrick A. Young Arizona State University, School of Earth and Space Exploration Using TYCHO [13] we have modeled the evolutionary tracks for a total of 1232 stars, each with a different combination of mass, scaled metallicity, and specific elemental composition. TYCHO is a 1D stellar evolution code with a hydrodynamic formulation of the stellar evolution equations. It uses OPAL opacities [14, 15, 16], and a combined OPAL and Timmes equation of state [17]. TYCHO outputs data on stellar surface quantities for each time-step of the evolution, which we use to produce a predicted radius for the inner and outer edges of the HZ as a function of the star’s age. To calculate the HZ for an individual star at each point in its evolution, we use equations from [6], combined with TYCHO outputs. Our program CHAD (Calculating HAbitable Distances) then determines the inner and outer HZ boundaries, as discussed in [7]. The original grid of 376 models [18] includes “solar-type” stars, with masses ranging from 0.5–1.2 M⊙. Models are also calculated for scaled metallicity values of 0.1–1.5 Z⊙, and four models for oxygen abundance values (0.44, 0.67, 1.28, and 2.28 O/Fe⊙) calculated at Z⊙. The end-member oxygen models are also calculated at each scaled Z value. The grid is complete for the main sequence (MS) until hydrogen is exhausted in the core. Since it has been determined observationally that carbon and magnesium make the most difference (after oxygen) to the stellar evolution [19, 20, 21], we have simulated tracks to represent variations in these elements as well. The spread in values we use reflects the diversity in abundances that have been directly measured in nearby stars [12, 22, 23, 24, 25]. This more recent grid includes a total of 480 models: 240 models each for variations in carbon and magnesium (ranging from 0.58–1.72 C/Fe⊙ and 0.54–1.84 Mg/Fe⊙) [paper in prep]. 1. Catanzarite, J. & Shao, M. 2011, ApJ, 738, 151 2. Petigura, E.A., Howard, A.W., & Marcy, G.W.
2013, PNAS, 110, 19273 3. Gaidos, E. 2013, ApJ, 770, 90 4. Batalha, N.M., Rowe, J.F., Bryson, S.T., et al. 2013, ApJS, 204, 24 5. Borucki, W.J., Koch, D., Basri, G., et al. 2011, ApJ, 736, 1 6. Kopparapu, R.K., Ramírez, R.M., Kasting, J.F., et al. 2013, ApJ, 765, 131 7. Kopparapu, R.K., Ramírez, R.M., SchottelKotte, J., et al. 2014, ApJL, 787, L29 8. Kasting, J.F., Whitmire, D.P., & Reynolds, R.T. 1993, Icarus, 101, 108 9. Selsis, F., Kasting, J.F., Levrard, B., et al. 2007, A&A, 476, 1373 10. Anbar, A.D., Duan, Y., Lyons, T.W., et al. 2007, Science, 317, 1903 11. Young, P.A., Liebst, K., & Pagano, M. 2012, ApJ, 755, 31 12. Hinkel, N.R., Young, P.A., Timmes, F.X., et al. 2014, ApJ, 148, 54 13. Young, P.A. & Arnett, D. 2005, ApJ, 618, 908 14. Iglesias, C.A. & Rogers, F.J. 1996, ApJ, 464, 943 15. Alexander, D.R. & Ferguson, J.W. 1994, ApJ, 437, 879 16. Rogers, F.J. & Nayfonov, A. 2002, ApJ, 576, 1064 17. Timmes, F.X. & Arnett, D. 1999, ApJS, 125, 277 18. Truitt, A., Young, P.A., Spacek, A., et al. 2015, ApJ, 804, 145 19. Neves, V., Santos, N.C., Sousa, S.G., et al. 2009, A&A, 497, 563 20. Mishenina, T.V., Soubiran, C., Bienaymè, O., et al. 2008, A&A, 489, 923 21. Takeda, Y. 2007, PASJ, 59, 335 22. Bond, J.C., Tinney, C.G., Butler, R.P., et al. 2006, MNRAS, 370, 163 23. Bond, J.C., Lauretta, D.S., Tinney, C.G., et al. 2008, ApJ, 682, 1234 24. Ramírez, I., Allende Prieto, C., & Lambert, D.L. 2007, A&A, 465, 271 25. González Hernández, J.I., Israelian, G., Santos, N.C., et al. 2010, ApJ, 720, 1592 26. Pagano, M.D., Truitt, A., Young, P.A., Shim, D. 2015, ApJ, 803, 90 27. http://bahamut.sese.asu.edu/~payoung/AST_522/Evolutionary_Tracks_Database.html 28. Blackman, E.G. & Owen, J.E., 2016, MNRAS, 458, 2 We are working to understand how stars of different mass and composition evolve, and how stellar evolution directly influences the location of the habitable zone (HZ) around a star.
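The HZ boundary calculation from the effective-flux parameterization of [6, 7] can be sketched as below. This is an illustration, not CHAD itself; the coefficient values are transcribed from Kopparapu et al. (2014) for a 1 Earth-mass planet and should be verified against the paper.

```python
import math

# S_eff(T_eff) polynomial coefficients (S0, a, b, c, d) for a 1 Earth-mass
# planet, transcribed from Kopparapu et al. (2014) [7] -- verify against
# the paper before serious use. Valid for T_eff roughly 2600-7200 K.
COEFFS = {
    "runaway_greenhouse": (1.107, 1.332e-4, 1.580e-8, -8.308e-12, -1.931e-15),
    "maximum_greenhouse": (0.356, 6.171e-5, 1.698e-9, -3.198e-12, -5.575e-16),
}

def hz_distance(luminosity, t_eff, boundary):
    """HZ boundary distance in AU for a star of given luminosity (L_sun)
    and effective temperature (K): d = sqrt(L / S_eff)."""
    s0, a, b, c, d = COEFFS[boundary]
    t = t_eff - 5780.0  # offset from the solar effective temperature
    s_eff = s0 + a * t + b * t**2 + c * t**3 + d * t**4
    return math.sqrt(luminosity / s_eff)

# For the present-day Sun this gives roughly 0.95 AU (inner) and 1.68 AU (outer).
inner = hz_distance(1.0, 5780.0, "runaway_greenhouse")
outer = hz_distance(1.0, 5780.0, "maximum_greenhouse")
```

Feeding the luminosity and T_eff from each time-step of a stellar evolution track through such a function is what produces the HZ radius as a function of stellar age.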
It is now estimated that more than 20% of all Sun-like stars and 50% of M-dwarfs may host a planet in the H
poster
ALMA allows us to study these regions buried within thick gas layers, since mm-wave emission is almost unaffected by dust even at very high column densities. Also, ALMA allows us to resolve each nucleus, thanks to its unprecedented resolution of 0.03 arcsec. MRK463 NGC6240 ARP220 LEDA17883 ESO509 ESO253 RESULTS SUPERMASSIVE BLACK HOLE PAIRS AT SEPARATIONS <1 KPC AND THEIR ROLE IN BLACK HOLE GROWTH Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile Centro de Astrofísica y Tecnologías Afines National Radio Astronomy Observatory This investigation aims to study the growth of black holes undergoing a merging process, specifically those at separations below 1 kpc. This is not a straightforward task, since it has been found that merging-triggered AGNs are likely to become heavily obscured by gas and dust (Ricci et al., 2017, 2021). Objective Method Millimetre and X-ray emission relation X-rays are a well-established tracer of black hole growth. A relation between X-ray and mm emission has recently been found for isolated AGNs (Kawamuro et al. 2022). If this relation holds for dual AGNs, it could be used to trace black hole growth at these separations. Macarena Droguett Callejas, Ezequiel Treister, Loreto Barcos-Muñoz CONCLUSIONS MJDROGUETT@UC.CL SAMPLE AND OBSERVATIONS INTRODUCTION The sample of this study can be divided into two groups. The first one is a selection of known dual AGNs and dual nuclei (Mrk463, NGC6240, ARP220 and UGC4211); this is ALMA archival data with observations from the main array. The second group is a selection from the GOALS survey, a survey of the most luminous infrared galaxies in the local universe, which is a great place to look for dual AGNs. The observations of this group correspond to ALMA archival data from the ACA configuration. Figure 1: ALMA continuum images of two dual AGN candidates and one dual nucleus. The magenta line depicts 0.1 arcsec.
Each nucleus of the confirmed dual AGNs follows the studied relation. Isolated AGNs in the GOALS sample also follow the relation. In-band spectral indices were computed, indicating that the emission of the sources lying far from the relation comes from dust; this indicates that if an AGN were present, it would not be dominant. The correlation between mm and X-ray emission established for isolated AGNs also holds for dual systems. The relation between mm and X-ray emission can be used to identify heavily obscured AGNs in previously unidentified systems. A better understanding of the black hole merging process will be key in predicting the number of gravitational wave events. Ricci, C. et al. MNRAS, 468(2):1273–1299, June 2017. doi:10.1093/mnras/stx173 Ricci, C. et al., MNRAS, 506(4):5935–5950, October 2021. doi:10.1093/mnras/stab2052 Kawamuro, T. et al., The Astrophysical Journal, 938(1):87, October 2022. doi:10.3847/1538-4357/ac8794 Koss, M. et al., The Astrophysical Journal, 942(1):L24, January 2023. doi:10.3847/2041-8213/aca8f0 Figure 2: The blue-filled circles are the data that Kawamuro et al. (2023) used to compute the relation (red line) between mm emission and X-ray emission for isolated AGNs. Black dots are our selection from GOALS. Confirmed dual AGNs and other sources of interest are appropriately labelled. For sources without an exact value of the column density, repeated labels filled with colours indicate the change in the position of the source when corrected for column densities of 10 (light purple) or 10 (dark purple). The black arrow indicates a lower limit. REFERENCES FUTURE WORK Perform the same analysis for dual AGN candidates observed in ALMA's Cycle 9. This program aims to determine the incidence of dual AGNs in late-stage merger systems by observing 12 sources. Project Code: 2022.1.01348.S Using the prescription of Murphy et al. (2012) and IR luminosities reported by Sanders et al.
(2003), we estimated that the contribution of star formation to the mm and X-ray emission could account
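The in-band spectral index mentioned in the results can be computed from two flux measurements within the band. A minimal sketch, with hypothetical flux and frequency values rather than the study's measurements:

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """In-band spectral index alpha, defined by S_nu proportional to
    nu**alpha, from flux densities s1, s2 at frequencies nu1, nu2."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Hypothetical flux densities (Jy) at two frequencies within an ALMA band;
# both the fluxes and the 91/103 GHz frequencies are illustrative only.
alpha = spectral_index(1.0e-3, 1.6e-3, 91e9, 103e9)
# A steep positive alpha (~3-4) points to thermal dust emission, whereas a
# flat or falling spectrum is more consistent with a dominant compact AGN.
```

This is why a steeply rising in-band index for a source that sits far from the mm/X-ray relation argues that any AGN present is not the dominant emitter.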
poster
WALK IN STYLE Received by 240 Institutions UNESCO SOCIAL / HEALTH CARE PRIMARY / SECONDARY SCHOOLS Positive Vote on Its Impact 99% Songwriter / Project Lead | Mosi Dorbayani About: Artists from 25 countries, spanning 5 continents, united to record Walk in Style, a message song to raise awareness of bullying. A MESSAGE SONG TO VICTIMS OF BULLYING Project | Recorded in 10 different genres What It Entailed: The project became a subject of discussion and dialogue and received national and international media coverage, providing a thinking tool and comfort to many. With over 7.2 million streams/downloads and having been broadcast to 700,000,000 listeners, this message song is considered one of the most impactful artistic projects for humanity in the world. Hollywood Music in Media Awards | Nominated Social Impact Song: The project received two nominations, for best lyrics and best RnB style, in the category of social impact songs. To find out more about this project, consult my publication titled Message Song, or refer to my critique (Cultural Diplomacy), which is available from the University of Salford's research repository. ONE OF THE BIGGEST MUSICAL PROJECTS FOR HUMANITY IN THE WORLD AMERICA AFRICA ASIA EUROPE Broadcast to over 700,000,000 Listeners Around the Globe AUSTRALIA To Stream the Latest Version Click HERE Global Outreach: Continents by share of streams/downloads/broadcasts: America 40%, Europe 36%, Asia 11%, Africa 8%, Australia 5% Source: Publisher/Release Metadata/Nielsen Audio/Play Count Across All Digital and Social Media Platforms Over 1,000,000 Streams in Its First Month of Release from the Project's Players Such as SoundCloud / Google… Estimated 3,000,000 Streams Across Artists' Pages / Their Public Players in Its First Months of Release Over 3,200,000 Free Downloads in Its First Six Months of Release from the Project's Sites / MTV Site & Players Such as Google / WixPlayer / SoundCloud
poster
LMAS Report C I Mendes1, P Vila-Cerqueira1, Y Motro2, J Moran-Gilad2, J A Carriço1, M Ramirez1 1. Instituto de Microbiologia, Instituto de Medicina Molecular, Faculdade de Medicina, Universidade de Lisboa, Lisboa, Portugal 2. Faculty of Health Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel Email: cimendes@medicina.ulisboa.pt Twitter: @ines_cim ABPHM’21 Abstract The de novo assembly of raw sequence data is a key process when analysing data from shotgun metagenomic sequencing. It also represents one of the greatest bottlenecks when obtaining trustworthy, reproducible results. LMAS is an automated workflow enabling the benchmarking of traditional and metagenomic prokaryotic de novo assembly software using defined mock communities. Several steps were implemented to ensure the transparency and reproducibility of the results. The mock communities can be provided by the user to better reflect the samples of interest. New assemblers can be added with minimal changes to the pipeline, so that LMAS can be expanded as novel algorithms are developed. LMAS accepts as input raw short-read paired-end sequencing data and tripled reference sequences. The resulting assembled sequences are processed and the global and per reference quality assessment is performed. The results are presented in an interactive HTML report where selected global and reference specific performance metrics can be explored. LMAS is implemented in Nextflow using Docker containers to provide flexibility. The use of Docker containers for each assembler allows versions to be tracked, and the use of Nextflow, a workflow management software, allows the effortless deployment of LMAS in any UNIX-based system, from local machines to high-performance computing clusters with a container engine installation. 
Usage Implementation ZymoBIOMICS Community Standards The eight bacterial genomes and four plasmids of the ZymoBIOMICS Microbial Community Standards were used as reference, and raw sequence data of the mock communities, with an even and logarithmic distribution of species, and a simulated sample of the evenly distributed reads generated from the reference genomes were used as input for LMAS. Our results show that the choice of a de novo assembler depends greatly on the computational resources available and the species of interest, with the performance of each assembler varying greatly with the abundance in the sample. Overall, multiple k-mer De Bruijn graph assemblers outperform the alternatives but no major performance gains were obtained when using dedicated metagenomic assemblers. No single assembler emerged as an undisputed ideal choice. The resulting report is available at https://lmas-demo.herokuapp.com. LMAS is open-source and freely available at https://github.com/cimendes/LMAS. Contig size distribution for log (A), even (B) and mock (C) ZymoBIOMICS samples A B C Misassemblies for log (A), even (B) and mock (C) ZymoBIOMICS samples A B C PLS Metrics for log (A), even (B) and mock (C) ZymoBIOMICS samples A B C
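Summary statistics of the contig size distributions reported above, such as N50, can be computed directly from contig lengths. A minimal sketch of the standard N50 definition (illustrative only, not LMAS's own implementation):

```python
def n50(contig_lengths):
    """N50: the length of the shortest contig such that contigs of that
    length or longer cover at least half of the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    covered = 0
    for length in lengths:
        covered += length
        if covered >= half_total:
            return length
    return 0  # empty input

# Hypothetical contig lengths (bp) from one assembly: the longest contig
# alone covers half of the 200 bp total, so N50 is 100.
value = n50([100, 50, 30, 20])
```

Because N50 rewards a few long contigs, it is usually read together with misassembly counts, as in the report above, when ranking assemblers.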
poster
P-67 DISCUSSION / CONCLUSION The median IP of COVID-19 of 5 days is within the range of IP estimated by the World Health Organization. A higher infecting dose as well as higher virulence of the strain could possibly lead to a shorter IP. Based on the longest median incubation period in our study (the religious cluster), the medical observation or quarantine period should be a minimum of 8 days to halt the spread of disease. Therefore, our recommendation is to maintain the current practice of a 14-day quarantine, which will suffice to curb the spread of disease. NMRR-20-793-54770 The Incubation Period of Coronavirus Disease 2019 (COVID-19) in Petaling District, Malaysia Acknowledgements The authors would like to thank the Director General of Health Malaysia for supporting the publication of this poster. Our special thanks go to the staff of the Petaling District Health Office. References 1. Brookmeyer, R. (2015). Incubation Period of Infectious Diseases. Wiley StatsRef: Statistics Reference Online, 1–8 2. Ministry of Health Malaysia. (2020a). Guidelines on COVID-19 Management in Malaysia No. 5/2020. 3. Petaling District Health Office. (2020). Laporan Harian Wabak COVID-19, Daerah Petaling, 22 April 2020. INTRODUCTION A total of 219 cases were included in this study. Four main clusters were identified: corporate (n=44, 20·1%), religious (n=43, 19·6%), imported (n=74, 33·8%) and others (n=58, 26·5%). The median IP of COVID-19 among the cases was 5·0 days (interquartile range 3·0-8·0). The longest median IP was found in the religious cluster (8·0 days, IQR 4·0-11·0), while the shortest was in the corporate cluster (3·5 days, IQR 3·0-6·8). A significant difference was observed between the corporate and religious clusters (p=0·001) (Table 1).
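The per-cluster medians and IQRs above can be reproduced with a small helper; the Kruskal-Wallis comparison itself is available as scipy.stats.kruskal. A minimal sketch, using hypothetical onset intervals rather than the study's data, and one of several common IQR conventions (the study's exact convention is not stated):

```python
from statistics import median

def median_iqr(days):
    """Median and interquartile range of symptom-onset intervals (days),
    using the median-of-halves convention for Q1 and Q3."""
    xs = sorted(days)
    n = len(xs)
    q1 = median(xs[: n // 2])        # lower half, excluding the middle
    q3 = median(xs[(n + 1) // 2 :])  # upper half, excluding the middle
    return median(xs), (q1, q3)

# Hypothetical onset intervals (days) for a small cluster:
m, (q1, q3) = median_iqr([3, 3, 4, 5, 6, 8, 8])
```

Non-parametric summaries like these are appropriate here because incubation periods are typically right-skewed, which is also why the study compares clusters with the Kruskal-Wallis test rather than ANOVA.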
219 Cases, Median IP = 5 days. Clusters: Imported N = 74 (33.8%), Median IP = 6 days; Corporate N = 44 (20.1%), Median IP = 3.5 days; Religious N = 43 (19.6%), Median IP = 8 days; Others N = 58 (26.5%), Median IP = 5 days. Male N = 114 (52.1%), Median IP = 5 days; Female N = 105 (47.9%), Median IP = 6 days. 4 Clusters
Table 1: The median incubation period of COVID-19 by clusters
Cluster       n    Median   IQR          X2 statistic* (df)   p
Corporate(a)  44   3.5      3.0 – 6.8    15.2 (3)             0.002
Religious(b)  43   8.0      4.0 – 11.0
Imported(c)   74   6.0      3.8 – 8.0
Others(d)     58   5.0      3.8 – 8.0
*Kruskal-Wallis test
Post hoc tests: (ab)p = 0.001; (ac)p = 0.153; (ad)p = 0.478; (bc)p = 0.203; (bd)p = 0.100; (cd)p = 1.00
RESULTS Ridwan Sanaudi1, Ainul Nadziha Mohd Hanafiah2, Lee Soo Cheng3, Lim Kuang Kuay4, Waramlah Ramlan5, Diana Raj5, Shiehafiel Fieqri Hussin5, Roslinda Abu Sapian1, Nurul Syarbani Eliana Musa1, Mohamed Paid Yusof3 MATERIALS & METHODS A cross-sectional study was conducted between 3 February 2020 and 13 April 2020 using secondary data from the Petaling District Health Office. The IP was defined as the time elapsed between exposure to a confirmed case and the date of onset of symptoms. The Kruskal-Wallis test was performed to compare the differences in IP between the groups. Globally, over 2 million people have been affected by the COVID-19 outbreak, and over five thousand in Malaysia, with almost 100 deaths as of 22 April 2020. Knowledge of the incubation period (IP) of COVID-19 is scarce due to the novelty of the virus. The aim of this study was to determine the IP of COVID-19 infection in the district of Petaling, Selangor, Malaysia. 1 National Institutes of Health (NIH), Ministry of Health, Malaysia 2 Institute for Health Systems Research, NIH, Ministry of Health, Malaysia 3 Petaling District Health Office, Selangor, Ministry of Health, Malaysia 4 Institute for Public Health, NIH, Ministry of Health, Malaysia 5 Ministry of Health, Malaysia
poster
Investigating health disparities and AI bias in models to predict development of chronic kidney disease in patients with Type II Diabetes Introduction ●Chronic kidney disease (CKD) is estimated to affect more than 1 in 7 adults in the US and is more prevalent in non-Hispanic Black adults than in other races/ethnicities. ●One of the most common preventable risk factors for CKD is type 2 diabetes (T2DM), which has also been shown to have higher prevalence in racial and ethnic minorities and low-income populations. ●Machine learning (ML) can help to identify patients at risk of developing CKD who may benefit from early interventions. ●However, ML models have been found to demonstrate bias in their predictive performance, which has the potential to exacerbate existing health disparities or create new ones. AIMS: Investigate health disparities in the development of CKD in adult patients with T2DM and develop fair models for predicting CKD, with a focus on race groups. Methods ●Criteria: Adult patients in the OCHIN database from 2012-2021 with a diagnosis of T2DM based on ICD9 and ICD10 codes, a documented date of T2DM onset and of CKD onset where applicable, and no CKD prior to T2DM. ●Multivariable logistic regression to quantify health disparities. ●Random Forest (RF) and XGBoost (XGB) for ML predictive modeling. Table 3. XGB model fairness results Conclusion ●In a population of patients with T2DM, Black and Asian patients had a higher likelihood of developing CKD compared to White patients. ●Predictive models did not perform equally across race groups. ●High-performing models may not perform equitably for all patient populations; therefore, evaluation and reporting should extend beyond overall performance metrics to include fairness assessment and reporting of any biases in model performance across demographic groups.
Ongoing / future work: - time-to-event analysis to evaluate disparities in progression to CKD; - more comprehensive evaluation of model biases, including gender, age, and intersectional biases; - incorporation of ML explainability methods; - investigation of potential approaches for bias mitigation to develop fair models that perform equitably across groups. Mary M. Lucas1, Christopher C. Yang1, Mario Schootman2 1Drexel University; 2University of Arkansas for Medical Sciences This work was conducted with the Accelerating Data Value Across a National Community Health Center Network (ADVANCE) Clinical Research Network (CRN). ADVANCE is a CRN in PCORnet®, the National Patient-Centered Clinical Research Network. ADVANCE is led by OCHIN in partnership with Health Choice Network, Fenway Health, and Oregon Health & Science University. ADVANCE’s participation in PCORnet® is funded through the Patient-Centered Outcomes Research Institute (PCORI), contract number RI-OCHIN-01-MC. This research was, in part, funded by the National Institutes of Health (NIH) Agreement NO. 1OT2OD032581-01. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the NIH. Results Table 1. Patient cohort distribution by race and CKD Table 2. Multivariable logistic regression results for race Figure 1.
Predictive model performance

Table 1. Patient cohort distribution by race and CKD
Race                    Overall N = 343,347   No CKD N = 308,933   CKD N = 34,414
White                   199,729 (58%)         180,749 (59%)        18,980 (55%)
Asian                   20,707 (6.0%)         18,152 (5.9%)        2,555 (7.4%)
Black/African American  70,525 (21%)          62,392 (20%)         8,133 (24%)
Other or Not Reported   52,386 (15%)          47,640 (15%)         4,746 (14%)

Table 2. Multivariable logistic regression results for race
Race                    aOR    95% CI       p-value
White (reference)       —      —            —
Asian                   1.10   1.05, 1.16   <0.001
Black/African American  1.39   1.35, 1.44   <0.001
Other or Not Reported   0.96   0.92, 0.99   0.024

Table 3. XGB model fairness results
Comparison        FNR Difference        FPR Difference
White-Black/AA    0.094 (p=3.69e-05)    -0.085 (p=1.159e-09)
White-Asian       0.118 (p=7.84e-05)    -0.108 (p=1.684e-07)
Black/AA-Asian    0.024 (p=0.099)       -0.023 (p=0.001)
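The FNR/FPR gaps in Table 3 are equalized-odds style comparisons of group-wise error rates. A minimal sketch of how such rates could be computed (function and variable names are illustrative, not from the study's code):

```python
def group_rates(y_true, y_pred, groups):
    """Per-group false-negative rate (FNR) and false-positive rate (FPR).

    y_true, y_pred: 0/1 labels; groups: group label for each sample."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        fn = sum(y_true[i] == 1 and y_pred[i] == 0 for i in idx)
        tp = sum(y_true[i] == 1 and y_pred[i] == 1 for i in idx)
        fp = sum(y_true[i] == 0 and y_pred[i] == 1 for i in idx)
        tn = sum(y_true[i] == 0 and y_pred[i] == 0 for i in idx)
        rates[g] = (fn / (fn + tp), fp / (fp + tn))  # (FNR, FPR)
    return rates

def fairness_gap(rates, g1, g2):
    """Differences in FNR and FPR between two groups (g1 minus g2)."""
    return rates[g1][0] - rates[g2][0], rates[g1][1] - rates[g2][1]
```

A real evaluation would additionally guard against groups with no positives or no negatives and attach significance tests to the gaps, as Table 3 does.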
poster
Female-limited X chromosome evolution and its effect on sperm competitiveness Yesbol Manat*, Georgios K., Katrine K. H., and Jessica K. Abbott Department of Biology, Lund University, 223 62 Lund, Sweden Contact: * yesbolmanat@biol.lu.se Conclusion The increasing trend in male attractiveness may be due to increased body size, and the decreased offence ability may trade off against precopulatory success.
poster
Fig 2 - WHERE TO FIND LINKED MICROSATELLITES Fig 1 - STAR-LIKE GENEALOGIES Demographic expansions produce characteristic genealogies in which most coalescent events occur just before the expansion, when the population size was small. The mismatch distribution expanded to linked microsatellites Miguel Navascués* & Olivier Hardy Laboratoire Eco-Ethologie Évolutive, Faculté des Sciences, Université Libre de Bruxelles, Avenue F.D. Roosevelt 50, Campus du Solbosch, 1050 Brussels, Belgium * Current address: Laboratoire d'Ecologie, École Normale Supérieure, 46 rue d'Ulm 75230 Paris Cedex 05, France. E-mail: m.navascues@gmail.com P(i;θ0,θ1,τ) = Σ_{j=i}^{∞} P(j;θ0,θ1,τ) × P(i;j) The use of genetic data to make inferences about demographic dynamics is one of the common applications of population genetics. Many of the statistical methods employed for this task are based on coalescent theory, which describes the genealogical relationships among a group of genes as a function of population demography (fig 1). The theoretical background provided by the coalescent describes the distribution of mutations among a sample of genes under different demographic scenarios. However, the applicability of these predictions to empirical genetic data is hampered by mutational mechanisms that can erase or distort some of the useful information through back or parallel mutations. The inclusion of mutation models in the theoretical predictions can improve the performance of statistical methods. Li (1977) described the probability distribution of the number of mutations (j) between a pair of haplotypes evolving under a demographic expansion. Schneider & Excoffier (1999) introduced a correction to that equation that allows the distribution of the number of observed genetic differences (i) to be described under a mutation model for DNA sequence polymorphisms (see equation).
Polymorphism at linked microsatellites (repetitive DNA sequences with a short repeat motif, fig 2) is sensitive to demographic expansions (Navascués et al. 2006). However, a large number of homoplasious mutations is expected at these markers, which affects the estimation of demographic parameters. In order to improve the accuracy of these estimates, we have developed a correction similar to that of Schneider & Excoffier (1999), but using the stepwise mutation model to describe microsatellite evolution. REFERENCES ● Li (1977) Genetics 85:331–337 ● Navascués et al. (2006) Molecular Ecology 15:2691–2698 ● Schneider & Excoffier (1999) Genetics 152:1079–1089 ACKNOWLEDGMENTS I am grateful to Brent Emerson & Concetta Burgarella for their support and discussion. This study was partially funded by the European Science Foundation ConGen Exchange Grant 1142

[Figure: simulated demographic expansion, population size (0-12,000) against generations (1-46).]

Table 1. Maximum likelihood estimates of the time of expansion (τ)
          ML estimator, model with homoplasy      ML estimator, model without homoplasy
Sim       bias     MSE      mean estimate         bias     MSE      mean estimate
τ = 1     0.025    0.154    1.024                 -0.115   0.083    0.885
τ = 3     -0.144   0.493    2.856                 -0.654   0.564    2.346
τ = 5     -0.100   0.872    4.900                 -1.498   2.398    3.502
τ = 7     -0.216   2.725    6.784                 -2.639   7.328    4.361

θ0=2N0μ, θ1=2N1μ, τ=2tμ. Data were simulated for a set of linked loci evolving under a stepwise mutation model in a population going through a demographic expansion. Four different ages (τ) of the expansion were simulated, and maximum likelihood estimates of the time of expansion were obtained using a model without homoplasy [using only Li's P(j;θ0,θ1,τ)] and a model accounting for homoplasy [using P(i;θ0,θ1,τ) with the correction based on the stepwise mutation model]. Table 1 shows how bias and error decrease when using the model corrected for homoplasy, especially for older expansions.
P(i;θ0,θ1,τ): probability distribution of the number of genetic differences (i) given a demographic model (θ0,θ1,τ)
P(j;θ0,θ1,τ): probability distribution of the number of mutations (j) given a demographic model (θ0,θ1,τ)
P(i;j): probability distribution of the number of genetic differences (i) given j mutations
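The correction has the form of a convolution over the latent number of mutations. A minimal sketch, assuming a strictly symmetric single-step SMM for the conditional P(i;j) (an illustrative choice on my part; the poster's actual kernel follows its own stepwise-model derivation), taking any distribution over j as input:

```python
from math import comb

def p_obs_given_mut(i, j):
    """P(observed size difference = i | j stepwise mutations), assuming a
    strictly symmetric single-step SMM (illustrative assumption)."""
    if i > j or (j - i) % 2:
        return 0.0                        # parity/range makes |net| = i impossible
    p = comb(j, (j + i) // 2) / 2 ** j    # net displacement +i or -i
    return p if i == 0 else 2 * p

def correct_for_homoplasy(p_mut):
    """P(i) = sum_{j >= i} P(j) * P(i;j), the convolution in the equation:
    convert a distribution over mutations j into observed differences i."""
    jmax = len(p_mut) - 1
    return [sum(p_mut[j] * p_obs_given_mut(i, j) for j in range(i, jmax + 1))
            for i in range(jmax + 1)]
```

Because each kernel row sums to one, the corrected distribution conserves total probability while shifting mass toward smaller observed differences, which is exactly the homoplasy effect the estimator must absorb.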
poster
The NCORES program: HARPS follow-up of TESS discoveries near the photoevaporation gap David Armstrong, NCORES consortium University of Warwick, LAM, IA-Porto, Geneva Observatory, CAB, ESO, INAF Torino, IAP, IAFE d.j.armstrong@warwick.ac.uk The Survey Targets and Current Results TOI-849b The NCORES program uses HARPS data to measure the masses of TESS planets near the photoevaporation gap. Our goal is to investigate the mass dependence of the gap at small planet-star separations. We are coming to the end of the survey, with 26 precise (<20% error) masses published or in prep, some in collaboration with other teams such as KESPRINT and Chile-MPIA. We are a member of TFOP, and almost all of our targets will contribute to the TESS Level 1 science requirement to measure masses for planets smaller than 4 R⊕. Current targets in the mass-radius diagram. Published or submitted NCORES targets are in red, in-prep targets in purple. The background of TESS planets is in blue. Planets span the range from terrestrial to Neptune-like, and include planets with masses of 3-10 M⊕ in the ‘fully evaporated’ region below the gap. Current targets as a function of insolation and planet radius, compared to the original photoevaporation gap detection in Fulton et al (2017), AJ 154, 3. Planets in multiple systems are joined by dashed lines, some of which cross the gap. TOI-849b is a standout discovery from the program, the remnant core of a giant planet found inside the Neptunian desert. The planetary core may have been exposed through tidal disruption, planetary impacts, or extremely gas-poor formation. See Armstrong et al (2020), Nature 583, 39-42 for details. TOI-849b in the context of the Neptunian desert. The planet is near the size of Neptune but as dense as the Earth, implying an unusual formation or evolution pathway, particularly combined with its desert location.
[Figure: mass-radius diagram, with Mass (M⊕, 1-40) on the x-axis and Radius (R⊕, 1.0-5.0) on the y-axis; composition curves shown for 100% H2O, 50% H2O, Earth-like and 100% Fe. Panels a-e: TESS discovery; follow-up NGTS photometry; precise HARPS radial velocities; high resolution imaging; rare planets revealed in the desert.] Multi-instrument follow-up TFOP and data from multiple instruments like the NGTS facility are critical to the program. This figure shows the combination of data used to characterize TOI-849b.
poster
Fitting equations of state User defines a start guess for mineral properties, the parameters they wish to fit, the pressure, temperature and volume data, and optional uncertainties. Function returns fitted parameters and the associated covariance matrix. The authors are thankful for input from (alphabetically by surname): Wolfgang Bangerth, Zack Geballe, Marc Hirschmann, Yu Huang, Jessica Irving, JiaChao Liu, Valentina Magni, Bill McDonough, Motohiko Murakami, Wendy Panero, Barbara Romanowicz and Quentin Williams. We thank Lars Stixrude for providing benchmarking calculations. This project was initiated at the CIDER workshop in 2012 (www.deepearth.org). The project is funded and supported by: BurnMan: Towards a multidisciplinary toolkit for reproducible deep Earth science Sanne Cottaar1, Robert Myhill2, Timo Heister3, Ian Rose4, Cayman Unterborn5, Juliane Dannberg6 1 University of Cambridge, 2 University of Bristol, 3 Clemson University, 4 University of California, Berkeley, 5 Arizona State University, 6 Colorado State University/Texas A&M University We present BurnMan[1], an open-source mineral physics python toolbox to determine thermodynamic and elastic properties for Earth minerals and rocks. BurnMan currently includes:
- Several equations of state (including Vinet, Birch-Murnaghan, Mie-Debye-Grueneisen, Modified Tait)
- Popular solid solution models (ideal, symmetric, asymmetric, subregular)
- Published databases of minerals (Holland and Powell, 2011 [2]; Stixrude & Lithgow-Bertelloni, 2005; 2011 [3])
- Different averaging schemes for seismic velocities (Reuss, Voigt, VRH, Hashin-Shtrikman)
- Reference seismic models (1D and soon 3D)
- Reference geotherms
- A repository of work published using BurnMan
- A wide range of examples, benchmarks and unit tests
BurnMan is designed to be simple and intuitive to use.
For example, calculating and printing the properties of Mg0.9Fe0.1SiO3-bridgmanite at 30 GPa and 2000 K is just four lines of code: $ bdg = burnman.minerals.SLB_2011.mg_fe_al_bridgmanite() $ bdg.set_composition([0.9, 0.1, 0.0]) $ bdg.set_state(30.e9, 2000.) $ print(bdg.gibbs, bdg.S, bdg.K_T, bdg.heat_capacity_p) BurnMan is available at www.burnman.org and is being openly developed at https://github.com/geodynamics/burnman. Latest mineral physics additions Summary Interface with seismology Interface with planetary science Interface with geodynamics $ burnman.tools.fit_PVT_data(mineral, fit_params, PT, V, V_sigma) Hugoniot calculations (for shock experiments) User defines the reference conditions at the start of the shock, the set of pressures along which to calculate the Hugoniot, and an optional reference mineral. Function returns the volumes and temperatures along the Hugoniot. $ burnman.tools.hugoniot(mineral, P_ref, T_ref, pressures, reference_mineral) Equilibrium assemblage calculations (work in progress) User defines a bulk composition, a set of phases (endmembers or solutions), two constraints (compositional or P-T), and optional compositional guesses. Function updates the assemblage with equilibrium compositions and conditions. [right] Olivine phase diagram constructed using BurnMan. $ burnman.equilibriumassemblage.equilibrate(composition, assemblage, constraints, guesses) Example applications • High pressure experimental data analysis • Geodynamic model interrogation for seismic properties • Forward-modelling for seismic profiles using bulk/mineral composition and temperature constraints • Self-consistent planetary mass/moment of inertia/pressure-depth calculations An example of using BurnMan in an MCMC inversion of outer core elastic properties will be presented at DI43A-2660 3D reference models So far, BurnMan has provided a range of seismically derived 1D radial models of the elastic properties of the Earth.
An average 1D model however is not a true representation of the Earth. Constraints from fully 3D seismic models are needed to test which thermo-chemical models fit the observed velocity variations. Currently, we are working on provi
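What fit_PVT_data automates can be illustrated with a third-order Birch-Murnaghan equation of state and a deliberately crude one-parameter grid search. This is a sketch under stated assumptions: real use would fit V0, K0 and K0' jointly with proper uncertainties, and the bridgmanite-like numbers below are illustrative, not a BurnMan dataset.

```python
def bm3_pressure(v, v0, k0, k0p):
    """Third-order Birch-Murnaghan pressure (Pa) at volume v."""
    x = (v0 / v) ** (1.0 / 3.0)
    return 1.5 * k0 * (x ** 7 - x ** 5) * (1 + 0.75 * (k0p - 4) * (x ** 2 - 1))

def fit_k0(volumes, pressures, v0, k0p, k0_grid):
    """Pick the bulk modulus K0 minimizing the squared pressure misfit."""
    return min(k0_grid,
               key=lambda k0: sum((bm3_pressure(v, v0, k0, k0p) - p) ** 2
                                  for v, p in zip(volumes, pressures)))

# Synthetic P-V data from known parameters, then recover K0 by grid search
V0, K0_TRUE, K0P = 24.45e-6, 251e9, 4.1   # illustrative bridgmanite-like values
vols = [V0 * f for f in (0.90, 0.92, 0.94, 0.96, 0.98)]
pres = [bm3_pressure(v, V0, K0_TRUE, K0P) for v in vols]
k0_fit = fit_k0(vols, pres, V0, K0P, [200e9 + i * 1e9 for i in range(101)])
```

A least-squares fit over all EOS parameters, with the covariance matrix the toolbox returns, replaces this grid search in practice.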
poster
SST Analyses / In-situ Measurements / Error Analysis Methodology

Table. Information on the sea surface temperature (SST) analyses used in this study, where the input data were investigated based on the SST analysis information on January 1, 2014, or on information from the respective database website.

r = Σ(xi − x̄)(yi − ȳ) / √[Σ(xi − x̄)² Σ(yi − ȳ)²], where xi represents the SST analysis, yi stands for the buoy temperature, and x̄ and ȳ represent the average values of the SST analysis and buoy temperature, respectively. N represents the number of data points.

• The data used in this study covered the period 2014–2018. • During the study period, water temperatures were measured at 35 coastal wave buoys located around the Korean Peninsula: 10 of these in the EJS, 15 in the southern region, and 10 in the Yellow Sea. • The precision and temporal resolution of the water temperature measurements were 0.1 K and 1 hour, respectively.

Kyung-Ae Park1), Hye-Jin Woo1), Hee-Young Kim2) 1) Dep. of Earth Science Education, Seoul National University, Seoul, Korea, Email: kapark@snu.ac.kr 2) Dep. of Science Education, Seoul National University, Seoul, Korea.

This study presents results from the validation of seven global blended sea surface temperature (SST) analyses, using in-situ temperatures measured by coastal wave buoys, and an inter-comparison of the analyses in the seas around the Korean Peninsula from 2014 to 2018: OSTIA (Operational SST and Sea Ice Analysis), CMC (Canadian Meteorological Centre) analysis, OISST (Optimum Interpolation SST), REMSS (Remote Sensing Systems) analysis, MURSST (Multi-scale Ultra-high Resolution SST), and MGDSST (Merged Satellite and In situ Data Global Daily SST). Overall, the root-mean-square error of each analysis against the in-situ measurements was relatively high, ranging from 1.27°C (OSTIA) to 1.74°C (REMSS).
All analyses had warm biases over 0.29°C, which were most distinctive in the southwestern coastal region of the Korean Peninsula, where there is remarkable SST cooling due to strong tidal currents in summer. In the comparison of temporal variability, most analyses revealed low coherency (<0.5) at periods shorter than 10 days. The SST analyses were also compared against each other by investigating the spatial distributions of RMSE and bias errors. While most SST analyses tended to show good agreement in the open ocean, the differences tended to be amplified in coastal and frontal regions. We discussed potential factors that cause the errors of the SST analyses in coastal regions by presenting the effects of grid size, distance from the coast, energy spectra, wavelet coherence, and thermal fronts in the marginal seas of the Northwest Pacific. Inter-comparisons of Daily Sea Surface Temperature Data and In-Situ Temperatures at Korean Coastal Regions • Our results indicated that the SST analyses had positive bias errors ranging from 0.31 K to 0.77 K and RMSEs ranging from 1.27 K to 1.76 K in the coastal region of the Korean Peninsula. • Temporal scales and similarity between the in-situ temperatures and the satellite-based SST databases were examined using wavelet coherence. This coherence was high (>0.8) for long periods (>180 days) and much lower for short periods (<30 days). • This study revealed the SST differences between onshore and offshore regions and addressed the importance of using as many coastal buoy measurements as possible in the production of SST analysis databases. This is particularly important for coastal phenomena with small spatial scales, which exist over short time-scales in coastal regions. [Acknowledgement] This study was supported by the National Research Foundation of Korea (NRF), grant funded by the Korean government (MSIT) (No. 2020R1A2C2009464). Study Area Fig.
(a) Currents in the study area, where the color blue represents water depth; (b) enlarged bathymetry map for the seas around the Korean Peninsula. Blue, green, and ye
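The validation statistics used throughout the study (bias, RMSE, and the correlation whose terms the methodology section defines) can be sketched directly; the numbers in any call below would be collocated analysis/buoy series, and the toy values here are illustrative:

```python
def validation_stats(analysis, buoy):
    """Bias, RMSE and Pearson correlation between an SST analysis and
    collocated buoy temperatures (same units, e.g. degrees C or K)."""
    n = len(analysis)
    diffs = [x - y for x, y in zip(analysis, buoy)]
    bias = sum(diffs) / n
    rmse = (sum(d * d for d in diffs) / n) ** 0.5
    mx, my = sum(analysis) / n, sum(buoy) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(analysis, buoy))
    var_x = sum((x - mx) ** 2 for x in analysis)
    var_y = sum((y - my) ** 2 for y in buoy)
    r = cov / (var_x * var_y) ** 0.5   # Pearson correlation coefficient
    return bias, rmse, r
```

Note that a constant warm offset inflates bias and RMSE while leaving the correlation untouched, which is why the study reports all three.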
poster
Influence of Tree Density Upon the Spatial Distribution of Mosquito Populations Vivek Athipatla, Ankit Chandra, Shayna Juimo-Kamga, Isabella List, Emmie Shockley SEES Earth System Explorer mentors: Dr. Rusanne Low, Ms. Cassie Soeffing, Mr. Peder Nelson, Dr. Erika Podest, Andrew Clark The material contained in this poster is based upon work supported by the National Aeronautics and Space Administration (NASA) cooperative agreements NNX16AE28A to the Institute for Global Environmental Strategies (IGES) for the NASA Earth Science Education Collaborative (NESEC) and NNX16AB89A to the University of Texas Austin for the STEM Enhancement in Earth Science (SEES). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NASA. Acknowledgements These three graphs show the concentrations of different entities in the Houston area. Figure A shows the tree density throughout the area, Figure B shows the land cover percentages and urbanization, and Figure C shows the mosquito density within Houston. Figures A and B are near-inverses of each other, because where urbanization is high, tree density is usually lower. Figure C also makes clear that there are more mosquitoes in the center of the metroplex, where there are more people to serve as hosts for disease spread. Comparing these graphs with each other makes it very clear that there are more mosquitoes in places where tree density is lower and urbanization is higher. This completely contradicts our hypothesis. There are several possible explanations, but the one that makes the most sense is that mosquitoes are more abundant in places where there are more people to serve as hosts for the pathogens they vector.
And more people are located in the most urbanized areas, usually the middle of the city, where construction has prevented the growth of trees. Therefore, our conclusion is not that fewer trees lead to more mosquitoes, but that increased urbanization leads to more mosquitoes, and a side effect of that increased urbanization is a decrease in trees. Conclusions From this data, it can be determined that there is both a positive correlation between urbanization and mosquito abundance and a negative correlation between tree density and mosquito abundance. This data can be used to determine which locations are at the highest risk for mosquito-borne illnesses. Places with high levels of urbanization are more likely to host significant mosquito populations, and thus the citizens of these areas are at higher risk. In the future, this experiment could be expanded by choosing different locations with more points (e.g., six locations in the Las Vegas area). The land cover points could then be analyzed with a similar process, and the results cross-referenced with our current data to see if they are similar. This data refutes the previous hypothesis, which predicted that the higher the tree density, the higher the mosquito population. Discussion Mosquitoes are vectors for pathogens such as West Nile virus, dengue (DV), Zika (ZIKV), and malaria. Determining the abiotic factors that contribute to the spread of these diseases and the conditions under which mosquito habitats thrive is essential to mitigating mosquito populations and mosquitoes as a public health threat. This study aims to assess the effect of tree density and tree concentration on mosquito populations and the correlation between tree density and mosquito species composition.
Using photos collected from NASA GLOBE Observer, we investigated three different three-by-three-kilometer areas of interest representing various ecological settings within the surrounding Houston, Texas area. These photos were then analyzed using a GitHub program that assessed both tree number and density within each area. This data was then compared to a repres
poster
Type | Location (inside/outside burial site) | Type of burial | Dating | Characteristics of HR | MNI
I | Inside | Ditch | During use of the cemetery | Well organised; all elements of the human skeleton represented | 100
II | Inside | Ditch | During use of the cemetery | Some areas well organised, others disorganised; all elements of the human skeleton represented | 250
III | Outside | Pit | During use of the cemetery | Disorganised; all elements of the human skeleton represented | 11
IV | Outside | No discernible grave outline | After abandonment of cemetery | Disorganised; all elements of the human skeleton represented | 28
V | Inside | Construction ditch | During use of the cemetery | Arranged perpendicular to construction ditch; all elements of the human skeleton represented | 22
VI | Inside | Grave | During use of the cemetery | Arranged juxtaposed to the skull of an individual in primary burial; clavicle, humerus, lumbar vertebrae, iliac, femur & tibia recovered | 1
VII | Inside | Pit under a grave | During use of the cemetery | Disorganised; skull, upper & lower limb bones, scapula & iliac recovered | 2
VIII | Inside | On top of a coffin | During use of the cemetery | With a clear arrangement; mandible, upper & lower limb bones, cervical vertebrae & sacrum recovered | 3
IX | Inside | Construction ditch | After abandonment of cemetery | Disorganised; all elements of the human skeleton represented | 6
X | Inside | Inside a coffin | During use of the cemetery | Disorganised; skulls, upper and lower limb bones & scapulae recovered | 5

Figure 3: Plan with photogrammetry of the excavated area, with approximate indication of the limits of the cemetery.

Introduction This research explores the secondary depositions (SD) of human remains (HR) identified in the Carmo cemetery (figs. 1-3). This site operated during a period marked by war and pandemic/epidemic disease in Porto (1801-1869). Data & Methodological Approach SD are defined as clusters of HR resulting from disturbances after primary burial. SD were classified based on location, bone dispersal, and spatial/stratigraphic patterns.
“PIECING TOGETHER PAST STORIES”: COMMINGLED HUMAN REMAINS FROM THE CEMETERY OF THE THIRD ORDER OF OUR LADY OF CARMO (PORTO, PORTUGAL) Figure 1: Location of Porto on the map of Portugal. Zélia Rodrigues[1,2], Anne Malcherek[1], Jorge Fonseca[2], Graça Pereira[2], Vítor Fonseca[2], Francisca Alves Cardoso[1] 1 Laboratory of Biological Anthropology and Osteological Human Remains (LABOH) - Centre for Research in Anthropology, NOVA University of Lisbon – School of Social Sciences and Humanities branch (CRIA-NOVA FCSH) / Associate Laboratory for Research and Innovation in Heritage, Arts, Sustainability and Territory (IN2PAST); 2 Arqueologia & Património, Lda. Acknowledgments We would like to acknowledge the people whose remains we have the privilege to study and care for, without whom this work would not be possible. We would also like to thank all the members of the field and laboratory work team, in particular Rodry Mendonça for his help with processing the photogrammetry and orthophoto data. SD: A Lens for Understanding Past Populations This study shows the diversity of the SD identified. Their analysis can provide insights into site formation dynamics and sheds light on human behaviour, social interactions, and cultural practices. Funding: This research is part of the BeFRAIL project (ref. 2022.02398.PTDC; DOI: https://doi.org/10.54499/2022.02398.PTDC). It was further supported by the CRIA - Center for Research in Anthropology Strategic Development Plan (UID/04038/2020) and IN2PAST - Associate Laboratory for Research and Innovation in Heritage, Arts, Sustainability and Territory (ref. LA/P/0132/2020; DOI: https://doi.org/10.54499/LA/P/0132/2020).
Zélia Rodrigues' PhD research is funded by Fundação para a Ciência e a Tecnologia (ref. 2023.00944.BDANA). Francisca Alves Cardoso is also funded by the project Life After Death (ref. 2020.01014.CEECIND/CP1634/CT0002; DOI: https://doi.org/10.54499/2020.01014.CEECIND/CP1634/CT0002).
poster
The 22q11.2 deletion syndrome, also known as velo-cardio-facial or DiGeorge syndrome, occurs in approximately 1:1,524-1:4,000 live births. In addition to the physical conditions, 20-40% of these individuals eventually receive a diagnosis of schizophrenia. A 28-year-old woman (IQ: 62), followed by psychiatry since 2013 for classic crises of crying and agitation in response to minor stressful or frustrating stimuli. She presented to the emergency department after several days of psychomotor restlessness and behavioural disturbances at home resulting from the presence of delusional symptoms. Psychopathological examination: conscious, oriented, approachable and cooperative. Poor eye contact. Hypothymic mood. Severe ideational and somatic anxiety. Circular speech centred on persecutory delusional ideation. Experiences of being controlled, false recognitions. Otherwise unremarkable. The clinical picture was classified as an acute psychotic disorder, predominantly delusional (F23.3). Among the most accepted mechanisms for the appearance of psychotic symptoms in patients with DiGeorge syndrome are: Psychosis and 22q11.2 deletion: a case report. -Reduction of white matter tracts observed in the anterior cingulate cortex -Reduced entorhinal cortex volume -Greater activation of the parahippocampal gyrus -Functional hyperactivity in the primary auditory cortex -Abnormal maturation of the prefrontal cortex due to dopaminergic dysfunction Risperidone 2 mg/day Increasing doses of clozapine up to 50 mg/day Paroxetine 20 mg/day Understanding the vulnerability of these patients to psychotic symptoms is crucial for the development of interventions focused on their prevention, treatment and functional recovery. 1. INTRODUCTION 2. CASE DESCRIPTION 3. EXAMINATION 4. DIAGNOSIS 5. TREATMENT 6. DISCUSSION 7.
CONCLUSIONS Martín García, Zahira1; Herrero Rodríguez, María Cristina2; Pedruelo Iglesias, Marina3; Domínguez Belloso, Francisco Javier4 1,2,3 Psychiatry residents, Hospital Universitario Río Hortega, Valladolid; 4 Psychiatrist, Hospital Universitario Río Hortega, Valladolid. Bibliography
poster
○Large imaging datasets with hierarchical structure are becoming increasingly commonplace. Examples of such structure can be found in the related subjects of the Human Connectome Project[1] (n=1,200), in the repeated measures of the Adolescent Brain Cognitive Development study[2] (n=12,000), and in the UK Biobank[3] (n=100,000). ○The Linear Mixed-effects Model (LMM), a popular extension of the linear model, is used for the analysis of longitudinal, heterogeneous or unbalanced clustered data[4]. ○The LMM utilizes numerical methods for parameter estimation[5]. However, this means heavy overheads are placed on memory and computation time for neuroimaging datasets. ○In addition, variability in the patterns of missing data near the edge of the brain can become problematic as sample sizes increase[6]. This variability can cause shrinkage of the final analysis mask if only complete-data voxels are analysed. ○Here we present BLMM ("Big" Linear Mixed Models), an efficient tool for large-scale MRI Mixed Model analyses. BLMM is implemented in python and designed for use on computational clusters. Thomas Maullin-Sapey1, Thomas Nichols1, 2 1. Oxford Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, Nuffield Department of Population Health, University of Oxford, Oxford, UK. 2. Department of Statistics, University of Warwick, UK BLMM: Parallelised Computing for Big Linear Mixed Models Introduction Methods References Conclusion ○BLMM employs a Fisher Scoring algorithm for LMM parameter estimation. This allows a wide range of models to be estimated, including: - Hierarchical analyses involving thousands of input images. - Models involving multiple, unbalanced, potentially crossed, random factors. - Random intercept analyses for thousands of input images. ○BLMM accounts for missingness at a given voxel by "zeroing" out subjects whose brain mask doesn’t include the voxel.
The model at voxel v is: M_v Y_v = M_v X β_v + M_v Z b_v + M_v ε_v, where M_v is a diagonal matrix with zero diagonal entries for subjects missing data at voxel v. Both b_v and ε_v are random terms with unknown variances. ○BLMM is computationally scalable as it works by partitioning X, Y and Z row-wise and having each processor calculate a part of the product matrices: X’X, X’Y, X’Z, Y’Y, Y’Z and Z’Z. Following this, the original matrices are discarded and the product matrices are used for computation. This means computation does not scale with n, but with the size of the fixed and random effects design. ○We demonstrate BLMM on UK Biobank task fMRI data, faces-shapes effect, modelling data on 1540 subjects with up to 2 repeated measures each (a 2nd imaging visit), for a total of 2833 scans (some data lost to QC failures). The model included random subject intercepts, a cross-sectional effect of age (average age per subject), and longitudinal time (inter-visit time in years, centered by subject). ○BLMM is a freely available python-based tool for large-scale Neuroimaging Linear Mixed Models. It is designed specifically to account for subject mask variability, memory constraints and computational performance. For more information, see: https://github.com/TomMaullin/BLMM 1 - Matthew F. Glasser, et al. (2013). The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage. 80: 105-124. doi: 10.1016/j.neuroimage.2013.04.127 2 - Casey B.J., et al. (2018). The Adolescent Brain Cognitive Development (ABCD) study: Imaging acquisition across 21 sites. Developmental Cognitive Neuroscience. 32:43-54. doi: 10.1016/j.dcn.2018.03.001 3 - Allen N, et al. (2012). UK Biobank: Current status and what it means for epidemiology. Health Policy and Technology, 1(3); 123-126. doi: 10.1016/j.hlpt.2012.07.003 4 - Laird NM, Ware JH. Random-effects models for longitudinal data. Biometrics. 1982;38(4):963-974. 5 - Demidenko, Eugene. (2013). Mixed Models: Theory and Applications with R.
10.1002/0471728438. 6 - Van Horn JD, et al. (2014). Human neuroimaging
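The row-wise partitioning strategy described in the Methods can be illustrated with a small numpy sketch (illustrative only; variable names, chunk size and dimensions are not taken from the BLMM codebase): accumulating the product matrices chunk by chunk reproduces the full-data products without ever holding all n rows on one worker, so later estimation steps only touch matrices whose size depends on the design, not on n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 1000, 3, 5          # subjects, fixed effects, random effects
X = rng.normal(size=(n, p))   # fixed-effects design
Z = rng.normal(size=(n, q))   # random-effects design
Y = rng.normal(size=(n, 1))   # response (one voxel)

# Accumulate the product matrices over row-wise chunks, as if each
# chunk were handled by a separate processor.
XtX = np.zeros((p, p)); XtY = np.zeros((p, 1)); XtZ = np.zeros((p, q))
YtY = np.zeros((1, 1)); YtZ = np.zeros((1, q)); ZtZ = np.zeros((q, q))
for start in range(0, n, 100):
    Xc = X[start:start + 100]
    Zc = Z[start:start + 100]
    Yc = Y[start:start + 100]
    XtX += Xc.T @ Xc; XtY += Xc.T @ Yc; XtZ += Xc.T @ Zc
    YtY += Yc.T @ Yc; YtZ += Yc.T @ Zc; ZtZ += Zc.T @ Zc

# The accumulated products equal the full-data products; the original
# row blocks can now be discarded.
assert np.allclose(XtX, X.T @ X) and np.allclose(ZtZ, Z.T @ Z)
```

After this step the cost of parameter estimation is driven by p and q only, which is what makes the approach scale to thousands of input images.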
poster
IMMERSIVE … AND INCLUSIVE Networking Mentoring Role Models Social Capital* UNDERREPRESENTED GROUPS IN IMMERSIVE AUDIO 2% Female members of Atmos Facebook Group 8% Women presenting on spatial audio at AES conferences 1% Black members of Dolby Atmos Facebook Group 9% Women working as mixers in Hollywood 8.4% Game audio professionals in the US are women + + = There are over 70 feminist groups focused on music tech, but none specializing in immersive sound. Industry-based DEI initiatives. Access to training; skills include Dante (certification), Dolby Atmos (certification), but who will provide foundational knowledge? Industry-based DEI training initiatives. Make visible the leading women and underrepresented professionals working in immersive audio. Build a pipeline of skilled, underrepresented groups who can find work in the field SOLUTIONS Access existing networks to develop an affordable mentoring program in a safe environment which showcases role models (mentors who are women or members of other underrepresented groups), thus creating a pipeline of skilled, underrepresented groups with the necessary social capital to be able to work in immersive audio within the audio industry. * Pamela Laird, "Pull". Successful example: Women's Audio Mission (San Francisco, California, USA) trains over 3,000 girls and women per year. Classes are taught by women, in a studio built by women. Companies like Dolby and Pixar recruit from WAM. Elevating artists by changing the makeup of award bodies (Recording Academy, BAFTA) Leslie Gaston-Bird, PhD Candidate, IoSR University of Surrey http://iosr.surrey.ac.uk/projects/inclusive/ 16% Game audio professionals in the UK are women BARRIERS TO ENTRY MICROAGGRESSIONS Pras et al. 
reveal experiences of underrepresented groups in the recording studio AFFORDABILITY Immersive sound taught in HE leads to more opportunities than being "self-taught" SYSTEMIC RACISM AMPS, AES, the Recording Academy, BAFTA, and other organizations hold themselves accountable
poster
Semantic Integration of Authoritative and Volunteered Geographic Information using Ontologies. Jimena Martínez Ramos, Arnaud Vandecasteele and Rodolphe Devillers. Memorial University of Newfoundland, A1B 3X9, St. John's, NL, Canada. [jmr125@mun.ca]
abstract: VGI projects such as OpenStreetMap (OSM) have grown in the past few years, resulting in more accurate, complete and up-to-date datasets. As a result, governments are now interested in VGI as a reliable source of information. This work aims to develop a method to handle semantic heterogeneity when integrating VGI and authoritative datasets, while overcoming problems presented in other methods. Integrating geographic information requires solving semantic heterogeneity in a way that allows the identification of features in different datasets representing the same real phenomena. We present a method for identifying sets of homologous features in authoritative and VGI datasets. First, a domain ontology is used as a common knowledge among datasets. Second, direct mapping from the datasets to the domain ontology is done, resulting in new datasets using the same vocabulary and therefore, interoperable.
the problem: One of the main obstacles to a seamless integration of geographic information is semantic heterogeneity, which refers to the different ways datasets conceptualize real phenomena [1]. This issue becomes even more difficult in the context of VGI datasets, due to their flexible and dynamic data models [2] where users can freely add new tags to the map. This way, semantic heterogeneity may occur inside OSM datasets: one real phenomenon could be represented with different tags by different users, and vice versa.
the method: • The first step is building a domain ontology, as the common knowledge onto which authoritative and VGI datasets can interface. It should follow standardization guidelines (ISO/TC 211 and OGC), since standards provide the structure for sharing and integrating geographic information [3] and there is still no standard to describe geographic features. We propose the General Feature Model (GFM) [4] as the starting point to develop the domain ontology. GFM is based on two main features, GF_FeatureType and GF_PropertyType, both meta-classes representing, respectively, all feature classes and all their properties. We built a prototype for roads and railroads following this approach. • The second step is performing the mappings between the datasets and the domain ontology. We choose R2RML [5], which can be used to express customized mappings from relational databases to RDF [6]. A mapping file based on this standard specifies the mappings between each feature class and the concepts of the domain ontology. This way there is no need to build an ontology for each dataset, and mapping reuse is allowed. The R2RML mapping results in two RDF files populated with the individuals that originally were features in the source datasets. Once mappings are defined, the output files are semantically interoperable.
Same reality, different conceptualizations (cause); semantic heterogeneities, no interoperability (effect); RDF outputs: interoperable files under the same conceptualization (outcomes). Stand
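The effect of mapping both datasets onto one domain vocabulary can be sketched in a few lines of Python (a toy illustration, not the authors' R2RML pipeline; all tag, class and feature names are invented): once each source schema is mapped to shared concepts, homologous features are identified by comparing domain concepts rather than source-specific tags.

```python
# Toy mappings from source-specific vocabularies to shared domain concepts
# (all names illustrative).
osm_to_domain = {"highway=primary": "Road", "railway=rail": "Railroad"}
auth_to_domain = {"RoadSegment": "Road", "RailTrack": "Railroad"}

osm_features = [("w1", "highway=primary"), ("w2", "railway=rail")]
auth_features = [("f9", "RoadSegment"), ("f7", "RailTrack")]

# After mapping, both datasets speak the same vocabulary...
osm_mapped = {fid: osm_to_domain[tag] for fid, tag in osm_features}
auth_mapped = {fid: auth_to_domain[cls] for fid, cls in auth_features}

# ...so homologous candidates are simply features sharing a domain concept.
homologous = {c: ([f for f, v in osm_mapped.items() if v == c],
                  [f for f, v in auth_mapped.items() if v == c])
              for c in set(osm_mapped.values())}
print(homologous["Road"])  # (['w1'], ['f9'])
```

In the actual method this correspondence is expressed declaratively in R2RML and materialized as RDF individuals, but the matching logic is the same: shared concepts make features comparable.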
poster
[1] Sally Dean, "Amerta Movement and Somatic Costume," in Sarah Whatley, Natalie Garrett Brown, Kirsty Alexander, eds., Attending to Movement: Somatic Perspectives on Living in this World (Devon, UK: Triarchy Press, 2015). [2] Moshe Feldenkrais, Awareness Through Movement; Health Exercises for Personal Growth, 1st ed. (New York, USA: Harper & Row, 1972). [3] Karin Rosenkranz and John Rothwell, "The Effect of Sensory Input and Attention on the Sensorimotor Organization of the Hand Area of the Human Motor Cortex," Journal of Physiology 561, No. 1 (November 2004), pp. 307-320. [4] Richard Shusterman, Body Consciousness: A Philosophy of Mindfulness and Somaesthetics (Cambridge; New York: Cambridge Univ. Press, 2008), p. 1. [5] Frank Wildman, The Busy Person's Guide to Easier Movement: 50 Ways to Achieve a Healthy, Happy, Pain-Free and Intelligent Body, 3rd revised ed. (Berkeley, USA: Feldenkrais Movement Institute, 2006). Educational strategies used in the Feldenkrais Method for developing body awareness To increase kinesthetic and proprioceptive sensitivity, first reduce muscular activation in the extensors of the torso by lying down horizontally [2, 5]. Create a learning environment that affords safe, distraction-free somatic attentiveness. To increase sensitivity to vibrotactile stimulation, reduce muscular activation in the extensor muscles of the back by supporting the torso through clothing, furniture, or other apparatuses (Figure 1). In addition, Haplós should be used in a quiet room. Start with movements of the torso (pelvis-spine-neck), which will likely stimulate movement and sensation in the more distal parts. Apply vibrotactile stimuli on the midline of the back with emphasis on the upper thoracic spine, since it is a part of the user that they usually do not get to touch and explore themselves. Increase the student's ability to perceive subtle differences in kinesthetic and proprioceptive stimuli by moving just enough that they notice the movement. 
Facilitate self-curiosity and playful self-exploration in the student so that they can attend to their experiences in a value-free way. Repeat a movement several times in order to notice differences in sensation as well as gaps in their kinesthetic awareness. Strength and duration of vibrotactile stimulation should be just enough to draw attention to the vibrated body part. Space vibrotactile motors just at the threshold of two-point tactile discrimination on the torso (2-3 cm) (Figure 2). Present Haplós not as a therapeutic tool but as a playful device. Make the patterns interesting to attend to by programming them to be "musical"—e.g., finding the balance between repetition and variation. Take advantage of the user's ability to gradually identify and anticipate elements in a pattern by playing a pattern more than once before moving on to a new pattern, in order to train the user to uniquely distinguish the motors from each other. Ask the student to start with a simple movement, advance to more complex variations of that movement, and then revisit the initial movement in order to find the ease of movement. Ask the student to move always within the limits of comfort and ease, resting whenever they feel tired or when they feel their attention waning, promoting reorganization in the sensorimotor cortex. Take advantage of the bilateral symmetry of the body by moving only one side at a time to create differences in sensations that can be compared across the midsagittal plane (a "one-sided lesson") [2, 5]. Stimulate the user's skin with structured patterns of vibrotactile stimuli that progress from the simple to the complex, before returning to the initial simple pattern. Provide users the option to create and change patterns on the fly (Figure 3), and to silence the motors at any time in order not to overstimulate the user. Apply the vibrotactile stimuli on only one side of the torso (see Figure 4). 
Related design principles for the technical specifications for an
poster
sdu.dk Syddansk Universitetsbibliotek Mette Fentz Haastrupi, Our History Line Laursen Corydoni, Climate Future Fiction Year 2022+2023 Class (Stx) English Program 6-7 lessons Participants 9 high schools / 13 classes Outcome 252 Short stories / 154 Analysis Our History Year 2021+2022+2023 Class (Stx/Hhx) History Program 10 lessons Participants 7 high schools / 44 classes Outcome 250+ interviews / 300+ posters A Healthier Southern Denmark – The High School Panel Year 2019+2022+2023 Class (Stx/Hhx) Social Science Program 6 lessons Participants 8 high schools / 16 classes Outcome 59 posters Writes short stories (research data) Analyses short stories Feedback from researchers to students Collect interviews & process data (meta data) (research data) Analyses & interpret interviews Disseminates findings for researchers at a poster session Collect student data, incl. interviewing the researchers Collect & process data Analyse & interpret Dissemination Climate Future Fiction Model for Engaging High Schools in Citizen Science Citizen Science Projects Berit Elisabeth Alvingi, Healthier Southern Denmark Analyses & interpret data Disseminates findings for researchers at a poster session Engaging High School Students in Citizen Science I SDU Citizen Science Knowledge Center Ref. Scientist Research Citizen Science Research & Education Program Co-designing Researchers • Present the project at a start-up event • Introduction to research area Researchers • Design research content, incl. 
data management plan • Communicate aim of research project Researchers • Interpret data • Dissemination Researchers • Formulate research questions • Identifying relevant subject areas in high schools Teachers • Co-creating education program (lesson plan & learning materials) • Participate in masterclass Teachers • Teaching Students • Learning (topic & methods) • CS activities • Collect & process data • Interpret data • Present findings at a closing event Data & new ideas for research areas Principals / Teachers • Knowledge coalition & Students Participations Evaluation education program & data Research ideation
poster
Biodiversity in Knowledge Graphs Topics co-occurring with biodiversity in the scholarly literature, as visualized using SPARQL queries to Wikidata (left) and to MaRDI (right). Biodiversity in NFDI Knowledge Graphs Daniel Mietchen, Leibniz-IGB FU Berlin & Leibniz-FIZ & MaRDI & Wikidata DOI: 10.5281/zenodo.10997504 Abstract: Knowledge graphs are a mechanism to address the challenge of leveraging data from diverse sources for use cases not served by the individual sources alone. Already well established in some specific parts of the research ecosystem, they are currently experiencing a trend towards broader adoption, including in the context of the German National Research Data Infrastructure (NFDI). A number of knowledge graphs are run by or otherwise closely associated with NFDI consortia or their members. This work provides an overview of the landscape of NFDI-associated Knowledge Graphs and explores it from the perspective of biodiversity-related use cases. Knowledge Graphs used in NFDI contexts See also Shigapov et al. (2023) Dataset: an overview of knowledge graphs in NFDI 10.5281/zenodo.8124286 . Knowledge Graphs for NFDI4Biodiversity? Potential paths forward ●Leverage internal expertise, e.g. iKnow ●Consider use cases ●Leverage existing NFDI KGs ●Leverage existing biodiversity KGs ●Identify gaps in coverage/ usability ●Set up one or more NFDI4Biodiversity KGs? ●Curation by experts and/ or crowd? 
Key issues ●Ontology gaps ●Ontology mapping ●Licensing ●Consistency across use cases ●Scalability ●Sustainability of KGs and related infrastructures
Knowledge Graphs used in NFDI contexts (KG name | KG URL | Sample user):
●FactGrid | database.factgrid.de | NFDI4Objects
●NFDI4Culture KG | nfdi4culture.de/resources/knowledge-graph | NFDI4Culture
●Wikidata | wikidata.org | NFDI4DS
●MaRDI portal | portal.mardi4nfdi.de | MaRDI
●ORKG | orkg.org | NFDI4Chem
●NFDI4Earth KG | nfdi4earth-knowledgehub.geo.tu-dresden.de | NFDI4Earth
●EU Data Portal | data.europa.eu | KonsortSWD
●KGI4NFDI@BASE4NFDI | base4nfdi.de/projects/kgi4nfdi | NFDI4Microbiota
See also Babalou et al. (2023) Reproducible Domain-Specific Knowledge Graphs in the Life Sciences: a Systematic Literature Review
Knowledge Graphs covering biodiversity (KG name | Sample content):
●OpenBioDiv | publications, taxa, journals, people
●Plazi | taxonomic treatments, publications
●UniProt | taxa, proteins, protein functions
●Ozymandias | taxa, publications, people, journals
●GloBI | taxa, biotic interactions
●Gene Ontology | taxa, genes, gene functions
●Wikidata | taxa, genes, publications, expeditions
●ORKG | publications, methods, people
●DBpedia | taxa, people, processes, genes
●SPARQL Microservices | taxa, publications, images
●MeSH | diseases, taxa, genes, processes
Knowledge graphs can be queried in several ways, and some query types can be federated across KGs. Multiple queries involving the same entity types can then be combined into a profile for that entity type, as demoed here with a Wikidata-based taxon profile for Caenorhabditis elegans at https://scholia.toolforge.org/taxon/Q91703. See also Nielsen et al. (2017) Scholia and scientometrics with Wikidata. The work presented here was supported by the enKORE project under VolkswagenStiftung grant number 97 863, by the MaRDI project under DFG grant number 460135501 and in general terms by the Knowledge Graph community.
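A query of the kind behind the co-occurrence visualization might look like the following sketch (illustrative only; the actual queries used for the figure are not reproduced here, and the Wikidata Q-identifier for biodiversity is an assumption). It counts topics that co-occur with a given subject across publications, using Wikidata's "main subject" property (P921):

```python
# Build an illustrative SPARQL query for topics co-occurring with
# biodiversity in the scholarly literature, via Wikidata's P921.
SUBJECT = "wd:Q47041"  # biodiversity (Q-identifier assumed)

query = """
SELECT ?topic ?topicLabel (COUNT(?work) AS ?works) WHERE {
  ?work wdt:P921 %(s)s , ?topic .
  FILTER (?topic != %(s)s)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?topic ?topicLabel
ORDER BY DESC(?works)
LIMIT 50
""" % {"s": SUBJECT}

# The string could then be sent to https://query.wikidata.org/sparql;
# the network call is omitted to keep this sketch offline.
print("wdt:P921" in query)  # True
```

Federation across KGs works along the same lines, with SERVICE clauses pointing at other endpoints.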
poster
1. Lightkurve Collaboration et al., 2018, ascl:1812.013. 2. Hippke M., et al., 2019, AJ, 158, 143. 3. Günther M. N., Daylan T., 2021, ApJSS, 254, 13. 4. Southworth J., 2013, A&A, 557, A119. 5. Claret A., 2004, A&A, 428, 1001. 6. Zucker S., Mazeh T., 1994, ApJ, 420, 806. 7. Konacki M., et al., 2010, ApJ, 719, 1293. 8. Miller N. J., et al., 2020, MNRAS, 497, 2899. 9. Hełminiak et al., 2009, MNRAS, 400, 969. Modeling Codes: We performed two runs using the following codes for LC modeling: ●ALLESFITTER (Günther & Daylan, 2021, 2019) ●JKTEBOP (Southworth 2013) Limb-darkening Laws: Initial values estimated using JKTLD based on Claret (2004) and used the following two laws for the two runs: ●Quadratic law in ALLESFITTER ●Logarithmic law in JKTEBOP Error Estimation: Two runs used different error estimation algorithms: ●MCMC with ALLESFITTER ●MC with JKTEBOP RV Modeling: We used the TODCOR algorithm (Zucker & Mazeh 1994) to extract the radial velocities and modelled them in V2FIT (Konacki et al. 2010). Photometric observations: Photometric observations from TESS covering the continuous viewing zone, i.e. 25 sectors. The figure on the top-right shows lightcurves (LC) from sectors 1 to 62 (1, 2, 4-12, 27-32, 34-39, 61 & 62). We used the Lightkurve package (Lightkurve Collaboration et al., 2018) to extract the lightcurves with a customized aperture and removed the scattered background light. We used the Biweight method from WOTAN (Hippke et al., 2019) for detrending. Spectroscopic observations: Spectroscopic observations were taken with HARPS, FEROS, and CORALIE, under the CRÉME project and from the ESO archive, as well as with UCLES from Hełminiak et al. (2009). 
Long-term Stability of Solutions in Benchmark Stars: UX Mensae Ganesh Pawar∗1, Krzysztof Hełminiak1, Ayush Moharana1, and Tilaksingh Pawar1 1Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences, Toruń, Poland
Abstract: Eclipsing binaries (EBs) provide a direct determination of fundamental parameters, and detached eclipsing binaries (DEBs) offer highly accurate and precise stellar parameters. DEBs with accurate and precise fundamental parameters serve as ideal benchmark systems for testing stellar models and data analysis algorithms of PLATO. In this project we have utilized space-based photometry from the Transiting Exoplanet Survey Satellite (TESS), which observed the system UX Men for 25 sectors, and time-series high-resolution spectroscopic data from the CRÉME project. We are analyzing the stability of lightcurve solutions and investigating factors influencing error rates. The primary goal is to determine the conditions for a DEB to be considered a reliable benchmark in stellar parameter determination.
Parameters: Ra/a → radius of primary; Rb/a → radius of secondary; (Ra+Rb)/a → sum of radii; i → inclination of the orbit; k → ratio of radii; J → surface brightness ratio; L3 → third light ratio; e → eccentricity; ω → argument of periastron; M1 → mass of primary; M2 → mass of secondary.
Results & Future Work: ●Errors cannot be solely estimated from a single sector observation. We need to accommodate variations over sectors to obtain proper errors for a benchmark system. ●The cause of these variations is uncertain; it can be of instrumental or numerical origin and/or stellar activity. ●We plan to do a stability check on photometric estimation of effective temperatures (using Miller et al. 2020) compared to spectroscopic estimates (disentangling + spectral analysis). ●A detailed inspection will be conducted to investigate the probable cause of variations.
Parameter | Value | +1σ | -1σ
Ra/a | 0.092746 | 0.000092 | 0.00008
Rb/a | 0.087309 | 0.000583 | 0.000487
i | 89.6287 | 0.007785 | 0.00734
k | 0.9414 | 0.001047 | 0.000866
J | 0.9722 | 0.001192 | 0.001277
L3 | 0.0208 | 0.008153 | 0.008918
(Ra+Rb)/a | 0.1801 | 0.000081 | 0.000076
e cos ω | 0.0198 | 0.002137 | 0.002331
e sin ω | 0.0202 | 0.007855 | 0.008529
M1 | 1.23 | 0.00149 | 0.00149
M2 | 1.195 | 0.00167 | 0.00167
About the author: Ganesh Pawar is a 1st year PhD student at the Nicolaus Copernicus A
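The fitted combinations e cos ω and e sin ω determine the eccentricity and argument of periastron directly, and the sum of the fractional radii can be cross-checked against the individual values; a short Python sketch using the tabulated numbers:

```python
import math

# Fitted combinations from the parameter table for UX Men.
ecosw, esinw = 0.0198, 0.0202

e = math.hypot(ecosw, esinw)                     # eccentricity
omega = math.degrees(math.atan2(esinw, ecosw))   # argument of periastron (deg)
print(round(e, 4), round(omega, 1))              # 0.0283 45.6

# Consistency check: (Ra+Rb)/a should equal Ra/a + Rb/a.
assert abs((0.092746 + 0.087309) - 0.1801) < 1e-4
```

The small eccentricity recovered here is consistent with the nearly circular orbit implied by the tabulated values.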
poster
PACLITAXEL LOADED LIPID NANOEMULSIONS FOR THE TREATMENT OF BRAIN TUMOUR Mohammad Najlah1, Alisha Kadam2, Ka-Wai Wan2, Waqar Ahmed2 & Abdelbary Elhissi3 Background: Lipid nanoemulsions have been increasingly used as carriers for poorly soluble drugs owing to their biocompatibility and biodegradability. Paclitaxel (PTX) is an anticancer drug with wide activity against many types of cancer. However, its poor solubility in water is a serious limitation. Taxol is an established marketed formulation of PTX, in which the drug is dissolved in a vehicle consisting of ethanol and Cremophor EL (polyoxyethylated castor oil). Unfortunately, the toxic effects of Cremophor EL (nephrotoxicity, neurotoxicity, hypersensitivity, etc.) represent a significant drawback. In this study, we investigated commercially available Total Parenteral Nutrition (TPN) nanoemulsions, namely Intralipid 20% (Fresenius Kabi, Germany) and Clinoleic 20% (Baxter Healthcare, USA), as vehicles and solubilizers of PTX, and studied the efficacy of the formulations against glioma cell lines and normal glial cells. Methods: PTX was loaded into the nanoemulsions via vortex-mixing for 5 min followed by bath-sonication for 2 h at 40°C (drug concentrations of 0-6 mg/mL). Size analysis and zeta potential measurements were performed using dynamic light scattering and electrophoretic mobility, respectively. The entrapped fraction of PTX was calculated using UV spectroscopy, by subtracting the non-entrapped fraction from the total drug amount after forcing the emulsions through 400 nm syringe filters and quantifying the drug retained in the filter. MTT studies were conducted to investigate cytotoxicity of the formulations against U87-MG (grade 4 glioma) and SVG-P12 (normal glial) cell lines. 
Results: Size was highly dependent on nanoemulsion type, being in the range of 254-264 nm for Clinoleic and 283-295 nm for Intralipid, depending on PTX concentration, and polydispersity was generally higher for the Intralipid emulsion. Zeta potential values were negative for both emulsions, with a more intense charge for the Clinoleic formulations. Drug entrapment values were in the range of 70-80% and 44-57% for the Clinoleic and Intralipid formulations, respectively. PTX-loaded Clinoleic decreased the viability of U87-MG glioma cells to 6.4%, compared to only 21.29% using the PTX-loaded Intralipid nanoemulsion. Both nanoemulsions were less toxic to normal glial cells (SVG-P12), indicating selectivity of the emulsions against malignant cells. The higher entrapment in the Clinoleic emulsion correlated with the higher activity of its formulations against malignant cells. The difference in activity between the two emulsions is attributed to their different composition. Conclusions: Nanoemulsions are applicable vehicles for solubilizing PTX and acting selectively against malignant glioma cells. Moreover, the enhanced cancer targetability of nanoemulsions might be attributed to the nutritive value of lipids present in the nanoemulsions. 1 Faculty of Medical Science, Anglia Ruskin University, CM1 1SQ, UK; 2 Institute of Nanotechnology and Bioengineering, School of Pharmacy and Biomedical Sciences, University of Central Lancashire, PR1 2HE, UK; 3 College of Pharmacy, Qatar University, Doha, Qatar
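The indirect entrapment calculation described in the Methods (total drug minus the non-entrapped fraction passing a 400 nm filter) reduces to a one-line formula; a minimal sketch with illustrative numbers, not measured data:

```python
def entrapment_efficiency(total_mg, non_entrapped_mg):
    """Percent of drug entrapped, estimated indirectly by subtracting
    the non-entrapped (filterable) fraction from the total drug amount."""
    return 100.0 * (total_mg - non_entrapped_mg) / total_mg

# Illustrative values chosen to land inside Clinoleic's reported 70-80% range:
print(round(entrapment_efficiency(6.0, 1.5), 1))  # 75.0
```

The same formula applied to the Intralipid measurements would yield the lower 44-57% range reported above.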
poster
Ilknur Icke1,2, Raquel Sevilla1, Gain Robinson1, Corin Miller1 (1) Merck Research Laboratories (MRL), USA (2) Boston University, Bioimaging, USA Generative Models for measuring pH using CEST MRI Abstract Non-invasive measurement of pH provides multiple potential benefits for identification of disease states such as in cancer, inflammation, hypoxia, and tissue injury. In oncology, non-invasive pH measurements may help identify proper chemotherapeutics, characterize tumors that are more likely to either respond to therapy or metastasize, and may also help better assess treatment response. AcidoCEST MRI techniques have been developed over the recent years to perform tumor pH measurements by utilizing a contrast agent for which chemical exchange with tissue water depends on the pH of the local microenvironment. Quantitative analysis of the CEST MRI signals is generally done via modeling of the Bloch-McConnell equations by incorporating chemical exchange as a parameter, or by fitting Lorentzian line shapes to observed Z-spectra and then computing a log ratio of the CEST effects from multiple labile protons within the same molecule (ratiometric method). Modeling using Bloch-McConnell equations requires careful inclusion of many scan parameters to infer pH, while the ratiometric method requires contrast agents with multiple labile protons, thus making it unsuitable for molecules with a single labile proton. Furthermore, depending on the pH, sometimes it might not be possible to accurately calculate the ratio due to the low signal to noise ratio (SNR) of the CEST signal for certain labile protons. To overcome these limitations, we developed a generative machine learning algorithm to predict in vivo tumor pH where the observed Z-spectra after contrast agent infusion is modeled as a perturbation on the observed Z-spectra before the contrast agent was introduced. In this scheme, the perturbation is modulated by the contrast agent. 
A sub-network of the architecture is pre-trained on a separate phantom dataset where temperature and contrast agent concentrations were controlled prior to CEST MRI scanning, and subsequently used as prior knowledge that modulates the perturbation caused by the infusion of the contrast agent into the body. Our results on an experimental study using animal models (mice) indicate that our machine learning method provides a more general and accurate prediction of pH in comparison to the ratiometric method. Our method is also more general in the sense that it does not require explicit modeling of signal peaks that are dependent on the type of contrast agent. Finally, this framework can be extended into use-cases where new probing materials and sensing technologies can be designed and tested in order to better understand the biological entities being studied. References [1] D. Hanahan and R. A. Weinberg, Hallmarks of cancer: the next generation, Cell, 2011. [2] O. Warburg, On the origin of cancer cells, Science, 1956. [3] R. Martínez-Zaguilán et al., Acidic pH enhances the invasive behavior of human melanoma cells, Clinical and Experimental Metastasis, 1996. [4] Y. Kato et al., Acidic extracellular microenvironment and cancer, Cancer Cell International, 2013. [5] J. M. Goldenberg and M. D. Pagel, Assessments of tumor metabolism with CEST MRI, NMR in Biomedicine, vol. 32, no. 10, p. e3943, 2019. [6] L. Q. Chen, C. M. Howison, J. J. Jeffery, I. F. Robey, P. H. Kuo, and M. D. Pagel, Evaluations of extracellular pH within in vivo tumors using acidoCEST MRI, Magn. Reson. Med., 2014. Conclusions CEST MRI based tumor microenvironment characterization can benefit from generative machine learning methods that can be trained in an unbiased manner, independent of the contrast agent chemical properties. Background PhantomNET InVivoNET Methods Altered cell metabolism is one of the hallmarks of cancer [1]. 
It has been known since the 1930s that cancer cells “prefer” aerobic glycolysis to generate the energy they need (Warburg effect [2]), which increases the lactic a
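The Lorentzian fitting and ratiometric analysis that the abstract contrasts with the generative approach can be sketched in a toy numpy example (all amplitudes, widths and offsets are invented, not from the study): the Z-spectrum is modeled as a water peak plus CEST effects from two labile protons, and the ratiometric readout is the log ratio of the two fitted CEST amplitudes.

```python
import numpy as np

def lorentzian(offset, center, amplitude, width):
    """Lorentzian line shape commonly fitted to peaks in a CEST Z-spectrum."""
    hw = width / 2.0
    return amplitude * hw**2 / ((offset - center) ** 2 + hw**2)

offsets = np.linspace(-8, 8, 401)  # saturation offsets in ppm

# Toy CEST effects for two labile protons of a hypothetical agent:
cest_a = lorentzian(offsets, 4.2, 0.12, 1.0)
cest_b = lorentzian(offsets, 5.6, 0.08, 1.0)

# Simulated Z-spectrum: water peak minus the two CEST effects
# (in practice the line shapes are fitted to measured data).
z = 1.0 - lorentzian(offsets, 0.0, 0.9, 2.0) - cest_a - cest_b

# Ratiometric readout: log ratio of the two CEST effect amplitudes,
# which calibrates against pH independently of agent concentration.
ratio = np.log(cest_a.max() / cest_b.max())
print(round(ratio, 3))  # 0.405
```

The limitation noted in the abstract is visible here: if one of the two amplitudes is small (low SNR), the ratio becomes unstable, and the approach is undefined for agents with only one labile proton.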
poster
The Common Fund Data Ecosystem (CFDE) Biomarker Data partnership project, involving six DCCs, is focused on the unifying concept of biomarkers, with scientific and clinical relevance, and tasked with organizing and harmonizing biomarker knowledge from numerous diverse sources, facilitating integrative data science use cases for CFDE researchers and the scientific community at large. The Illuminating the Druggable Genome (IDG) Data Coordinating Center (DCC) team at UNM is highly focused on clinical relevance as inferred from the large and diverse de-identified EHR databases from Cerner, namely (1) Health Facts and (2) Real World Data, with a focus on LOINC-encoded chemical lab tests. Specialized natural language processing (NLP) named entity recognition (NER) software from NextMove Software is employed for genes and proteins, resolved to standard IDs from HGNC or UniProt. Introduction CFDE Biomarker Partnership References Methods and Tools Discussion Conclusions and Next Steps RESULTS Clinically relevant precision molecular biomarker illumination powered by Cerner Real World Data Jeremy J. Yang, Vincent T. Metzger, Cristian G. Bologa, and Christophe G. Lambert University of New Mexico, School of Medicine, Department of Internal Medicine, Translational Informatics Division, Albuquerque, New Mexico, USA Common Fund Data Ecosystem (CFDE) All-Hands Meeting -- Bethesda, Maryland -- March 19-20, 2024 POWERED BY The central goal of the CFDE Biomarker Partnership: "… to develop a community-based biomarker-centric data model to harvest and organize biomarker knowledge for diverse biological data types," for which ontology development provides "semantic organization and integration of biomarker data to provide FAIR data representation." Within this overall goal, the IDG team is focused on clinical relevance, meaning the illumination and development of biomarkers with evidence for use by clinicians and patients and improved health outcomes. 
Existing lab test results are understood to be useful as biomarkers, supported by research findings, but often with great uncertainty and imprecision. Through retrospective, observational study of EHR data, labs can be associated with patient profiles and diagnoses, for improved applications as precision medicine tools. In addition, study of EHR data can ensure that the labs are in use and thus readily available. IDG-DCC Sub-project For the one-year CFDE Biomarker Partnership project, the objectives of the IDG sub-project are tightly scoped and focus on molecular biomarkers and almost exclusively human proteins measured in blood plasma. From the Cerner HealthFacts and RealWorldData databases, a set of molecular biomarkers is extracted and associated with evidence for clinical relevance, for inclusion in the partnership knowledge-base, and informing ontology development.
Race | Female | F% | Male | M%
African American | 361145 | 14.42 | 240126 | 12.46
Asian | 53718 | 2.14 | 38158 | 1.98
Asian/Pacific Islander | 588 | 0.02 | 539 | 0.03
Biracial | 3284 | 0.13 | 2741 | 0.14
Caucasian | 1760688 | 70.30 | 1381007 | 71.64
Hispanic | 15329 | 0.61 | 12479 | 0.65
Mid Eastern Indian | 540 | 0.02 | 540 | 0.03
Native American | 17233 | 0.69 | 13168 | 0.68
Not Mapped | 26445 | 1.06 | 19805 | 1.03
Pacific Islander | 4848 | 0.19 | 3585 | 0.19
Other | 100728 | 4.02 | 75418 | 3.91
Unknown | 69126 | 2.76 | 60177 | 3.12
NULL | 90805 | 3.63 | 79996 | 4.15
TOTAL | 2504477 | 100.00 | 1927739 | 100.00
Figure 1: Histogram of encounter count vs. age by gender for representative year 2016. Figure 2: Histogram of encounters vs. PSA by age group for males, representative year 2016. Figure 3: Mean encounter and patient count vs. year, for LOINC encoded laboratory test results in Cerner Real World Data. Figure 4: Schematic of biomarker-centric analytics integrating Common Fund and external data, with example human protein PSA. 
diagnosis_id | diagnosis_code | diagnosis_description | n_encounter
132782 | I10 | Essential (primary) hypertension | 2360453
139829 | E78.5 | Hyperlipidemia, unspecified | 1281935
142085 | E11.9 | Type 2 diabetes mellitus without complications
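The percentage columns in the demographics table are simple shares of the column totals; a quick Python check against two of the female rows:

```python
# Counts taken from the demographics table above.
female_counts = {"African American": 361145, "Caucasian": 1760688}
female_total = 2504477

for race, n in female_counts.items():
    print(f"{race}: {100.0 * n / female_total:.2f}%")
# African American: 14.42%
# Caucasian: 70.30%
```

Both values match the F% column, confirming that the percentages are computed per gender column rather than over the combined cohort.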
poster
A workflow to observe single-cell morphogenetic features of developing mucociliary epidermis Mari Tolonen1, Varun Kapoor2, Jakub Sedzinski1 1 Novo Nordisk Center For Stem Cell Medicine (reNEW), Department of Health and Medical Sciences, University of Copenhagen, Denmark 2 Kapoorlabs, Paris, France During embryo development, cells undergo morphogenetic processes that shape tissues, involving various tissue-extrinsic and -intrinsic forces. These forces lead to cell rearrangements, but their impact on differentiation remains unclear. In the mucociliary epithelium, both collective and individual cell movements shape a functional epithelium. Cells move collectively, yet single cells in the deep layers intercalate as individuals. Neighboring cells adopt different fates while intercalating, while also signaling reciprocally to regulate the differentiation process. To study these processes over time at both individual and collective scales, we analyze single-cell behaviors in the Xenopus embryonic mucociliary epithelium. Our goal is to reveal detailed developmental dynamics and gain insights into pathological conditions of airway epithelia arising from defective epithelial development. 
University of Copenhagen, Novo Nordisk Center for Stem Cell Medicine (reNEW). Mari Tolonen, mari.tolonen@sund.ku.dk

Development of Xenopus mucociliary epithelium: developmental stages blastula, gastrula, neurula, tailbud; cell types: basal cell, goblet cell, multiciliated cell, ionocyte, progenitor cell, small secretory cell.

Live imaging setup: animal cap explant (animal pole / vegetal pole) on a fibronectin-coated coverslip; acquisition from start to end (~17 h); markers H2B-RFP, E-Cad-GFP, C-Cad-GFP.

Segmentation: from the raw signal, StarDist nuclei segmentation and Cellpose cell segmentation; 3D measurements of object voxels by ellipsoid fitting with the ImageJ plugin MorphoLibJ, yielding morphological and kinetic features.

Tracking: nuclei labels become TrackMate objects and trajectories (~5000 trajectories); TrackMate-Oneat branch correction handles dividing cells (major/minor axis) via a daughter-cell detection zone at branching points.

Post-live labelling and backtracking: cell-type specific in situ HCR FISH (goblet cell, MCC, basal cell; top/merge views); ROI intensity quantification of labelled spots assigns cell-type labels to nuclei ROIs and tracks.

Outcome: single-cell features for cell-fate predictive models.
poster
Abstract
The Alpha Magnetic Spectrometer (AMS) is a particle physics experiment installed and operating on board the International Space Station (ISS) since May 2011 and expected to last through 2024 and beyond. The AMS offline software is used for data reconstruction, Monte-Carlo simulation and physics analysis. This paper presents how we manage the offline software, including version control, building, documentation, and functional and performance testing. As raw files consist of raw events, physics analysis cannot be performed directly on them; instead, reconstruction is required to convert the original detector readouts into physics quantities.

AMS Offline Software
The AMS physics data format is based on the CERN ROOT package. Each reconstructed AMS event is represented by a ROOT tree object and contains the event header with general information about the event (around 100 bytes), an 8-byte status word, and (referenced) arrays (C++ STL vectors) of reconstructed objects such as particle(s), track(s), cluster(s), etc., with parameters typically of 8 kBytes total size. On top of that, for every run time-based information is available, such as average geomagnetic cutoff, detector live time, ISS position parameters, temperatures and voltages of AMS electronics, etc. The timing precision of such data is about 100 milliseconds. The reconstructed event files are grouped together in an event summary file, usually one file per run. Dedicated AMS offline software is designed to serve both reconstruction and simulation: Reconstruction: raw events recorded by the AMS detector on board the ISS are converted to reconstructed events, containing all necessary parameters of the particle(s) crossing the detector and ready for further physics analysis. 
Offline Software Management of the AMS experiment
V. Choutko1, A. Egorov1, A. Eline1, B. Shan2
1 Massachusetts Institute of Technology, 2 Beihang University

Simulation: special data cards are given as input to the offline software to evaluate AMS detector performance using simulated events, taking into account the cosmic ray fluxes, the detector geometry and all relevant physics processes. The simulation contains two stages: the simulating stage uses a modified Geant4 library to generate raw events, and the reconstruction stage shares exactly the same processing behavior as the flight data reconstruction.

Version Control
AMS uses the Concurrent Versions System (CVS) as the version control system for its offline software. CVS is a centrally controlled version control system, and the repository (CVSROOT) is located at a shared path which the clients can access. The CVSROOT of the AMS offline software is located on CERN AFS storage and is accessible from all LXPLUS hosts.

Contents: the AMS offline software contains not only the code to build the AMS reconstruction/simulation software, but also the scripts required for production (for example, production validation and production job requesting) and the code of various tools related to AMS offline computing, for example the "event display", the "fast ROOT reading" and the "Slow Control Database".

Access Control List (ACL): CVS has a simple access control mechanism relying on the ACL of the file system. AFS uses pts to define groups, so a pts group is defined to allow a group of users to commit changes.

Branches: the reconstruction and simulation software of AMS has several branches. The main branch "vdev" is the latest version for simulation, and the "BXXX_patches" branches contain the stable versions for flight data reconstruction, to keep unique data properties for data analysis. 
Migrating to Git
Although CVS is a stable, simple, and efficient version control system, it has quite a few limitations and issues compared to modern competitors. Git is a distributed version control system which was designed to be the opposite of CVS, which means, to take CVS as an example of what not to do
poster
THIS STUDY ADDRESSES TWO QUESTIONS: 1) How much can dynamic & adaptive informal water transfers affect overall water availability? 2) What are their limits in an increasingly stressed basin? Simple transfer mechanisms that operate within the current doctrine of prior appropriation can be used to ensure water deliveries for the users that engage in them. This study explores how such transfers affect water availability in the Upper Colorado River Basin within the state of Colorado, if applied at an increasingly larger scale. We explore this using an ensemble of scaling 'rules' that differ in when they are triggered, the number of rights they include and the degree of scaling they consider. Under an increasingly stressed future, where dry conditions are becoming longer and more frequent, we would like to understand the capacity of such adaptive behavior to modulate more extreme droughts. More specifically, we would like to identify if and under what conditions adaptive water transfers become insufficient tools for ensuring both water availability for rights holders and deliveries downstream. Preliminary results show that while adaptive scaling of individual water demands generally increases water availability in the basin, the effectiveness of this mechanism varies as a result of its interactions with the hydrologic conditions it is applied to. AUTHORS Antonia Hadjimichael1,2, Patrick M. Reed1, Chris R. Vernon3, Travis Thurber3 EXPLORING THE CONSISTENCY OF INFERRED WATER SHORTAGE VULNERABILITIES IN A MULTI-ACTOR, MULTI-SECTOR RIVER BASIN THROUGH THE USE OF DYNAMIC AND ADAPTIVE WATER RIGHT TRANSFERS RIGHTS HOLDERS IN WESTERN RIVER BASINS ARE ENGAGING IN FORMAL AND INFORMAL RIGHTS TRADING Such transfers modulate the effects of drought for downstream water users. Financial compensation from state agencies could also help augment the basin's downstream deliveries. 1 School of Civil and Environmental Engineering, Cornell University, Ithaca, NY, USA. 
Email: ah986@cornell.edu
2 Department of Geosciences, Penn State University, University Park, PA, 16802, USA
3 Pacific Northwest National Laboratory, Richland, WA, USA.
This research was supported by the U.S. Department of Energy, Office of Science, as part of research in MultiSector Dynamics, Earth and Environmental System Modeling Program.

SCOPE AND INNOVATION OF EXPERIMENT
1. Create 600 exploratory adaptive demand-scaling rules tailored to each user. Each rule is designed based on the values of three control variables:
   - Low-flow trigger: reflects risk aversion to streamflow levels that have historically caused severe shortages
   - Percent of rights: percentage of rights in the basin that scaling is applied to
   - Scaling amount: amount of scaling applied to each right
   Each rule affects each user differently depending on the seniority of their rights and the hydrologic conditions they are facing.
2. Test rules under increasingly stressed hydrologic conditions: 1000 synthetic streamflows x 600 demand-scaling rules = 600,000 model runs.
3. Perform the computational experiment on high-performance computing resources: use 20,000 core hours to perform 600,000 model executions. Looking at data for 350 users across all model runs results in 259 billion records; compress outputs to a much smaller storage volume using compressed .parquet file formats.
4. Manage and analyze the dataset with SQL and DuckDB: use queries to process and store data in tabular formats (e.g., dataframes), while allowing for interactive analysis and visualization. 
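The query-driven analysis in the last step can be sketched as follows. The stdlib sqlite3 module stands in here for DuckDB (which queries the .parquet files directly), and the table and column names are hypothetical, not the study's actual schema.

```python
import sqlite3

# In-memory database standing in for the .parquet output store.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE shortages ("
    "user_id TEXT, rule_id INTEGER, realization INTEGER, shortage REAL)"
)

# A few invented shortage records: two users, two scaling rules, one
# hydrologic realization.
con.executemany(
    "INSERT INTO shortages VALUES (?, ?, ?, ?)",
    [("A", 1, 0, 10.0), ("A", 2, 0, 4.0), ("B", 1, 0, 7.0), ("B", 2, 0, 9.0)],
)

# Mean shortage per scaling rule, aggregated over users and realizations;
# the same aggregation the text describes, just at toy scale.
rows = con.execute(
    "SELECT rule_id, AVG(shortage) FROM shortages "
    "GROUP BY rule_id ORDER BY rule_id"
).fetchall()
print(rows)
```

With DuckDB the only substantive change is that `shortages` would be read straight from the compressed .parquet files, which is what makes a 259-billion-record dataset queryable interactively.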
[Figure: INTEGRATED MULTISECTOR MULTISCALE MODELING. Water demand at the locations of water users A and B is reduced when scaling rules are applied, with effects on the streamflow leaving the basin shown under one hydrologic realization. Context panels show the historical range of flows in the basin since 1908, synthetically generated stochastic flows assuming stationary conditions, and synthetically generated stochastic flows based on CMIP5.]
poster
SI2-SSE: Development of a Software Framework for Formalizing ForceField Atom-Typing for Molecular Simulation Christopher R. Iacovella1 and Janos Sallai2 1 Department of Chemical and Biomolecular Engineering, Vanderbilt University, 2 Institute for Software Integrated Systems, Vanderbilt University Problem Domain References Acknowledgements This research is supported by the National Science Foundation through award ACI #1535150. The availability of forcefields for molecular simulation has reduced the effort researchers must devote to the difficult and costly task of determining the interactions between species, allowing them to instead focus on the motivating scientific questions. Determining which parameters in a forcefield to use is often a tedious and error-prone task. This difficulty is related to the fact that forcefields are often quite expansive and can contain tens or hundreds of different parameters for the same element, where each parameter set represents the element in a different chemical context. The chemical context can refer to: • the local bonded environment of an atom in a molecule • the local environment of neighboring atoms • the type of molecule(s) being considered • the phase of the molecule(s), etc. This is often complicated by the fact that a uniform standard for expressing chemical context does not yet exist, and thus parameter usage may be unclear. Typically, the first step in determining chemical context involves examining the local bonded environment of an atom. For example, consider the chemical context of the terminal methyl group in an alkane (this is defined as atom type opls_135 in the OPLS library4 provided with GROMACS5). This carbon atom has: • 4 total bonds • 1 bond with a carbon • 3 bonds with hydrogens The chemical context of this atom can thus be expressed as a set of logic statements that collectively describe the local environment, effectively creating a rule. 
That is, if the statements collectively evaluate to True, then we add atom type opls_135 to our Whitelist, i.e., our list of possible atom types. Consider the chemical context of the methyl group in toluene. The local bonding environment is essentially the same as the terminal methyl group in an alkane, even though different parameters must be used. Thus the rule for the methyl group in toluene requires us to be more specific. E.g., since we know that the carbon in the ring structure is OPLS atom type 145, we define: Here, atom type opls_148 is added to the Whitelist, and atom type opls_135 is added to the Blacklist, overriding the alkane rule. Logic-based Chemical Context Evaluating Rules Rules are evaluated via a fixed-point iterative process until the Whitelists and Blacklists no longer change. The final atom type is determined by identifying the difference between the Whitelist and the Blacklist for an atom, i.e., the single atom type that exists on the Whitelist but not the Blacklist. E.g., consider the methyl group in toluene of type opls_148: Whitelist: opls_135 opls_148 Blacklist: opls_135 This approach has several benefits over hierarchical approaches: • all rules will be fully evaluated • rules do not have to be evaluated in a specific order • if the Whitelist contains multiple possible atom types at the end of evaluation, it allows a user to know that the rules are too general, and which rules are conflicting • if the Whitelist and Blacklist contain the same entries, i.e., no atom type can be determined, then it tells the user the element of interest is a new atom type for which a rule does not yet exist. Verification and Testing The proposed approach is unique in that it provides two very distinct types of testing: Verification. It is possible to programmatically verify that the set of logic-based annotations of a forcefield are free from ambiguities and inconsistencies (without using specific test cases). 
We provide a verifier tool to prove that the rules of the annotated forcefield will always resolve all atoms of any input system to a single forcefield atom type.
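The whitelist/blacklist fixed-point evaluation can be sketched using the alkane/toluene methyl example from the poster. The atom representation and rule API below are simplified placeholders for illustration, not the actual framework.

```python
# Toy molecular graph for the methyl group of toluene: the methyl carbon
# is bonded to one (aromatic) ring carbon and three hydrogens. Ring bonds
# are elided for brevity.
molecule = {
    "C_me":   {"element": "C", "neighbors": ["C_ring", "H1", "H2", "H3"]},
    "C_ring": {"element": "C", "neighbors": ["C_me"]},
    "H1": {"element": "H", "neighbors": ["C_me"]},
    "H2": {"element": "H", "neighbors": ["C_me"]},
    "H3": {"element": "H", "neighbors": ["C_me"]},
}

def neighbor_elements(a):
    return sorted(molecule[n]["element"] for n in molecule[a]["neighbors"])

# Each rule maps (atom, current whitelists) to (whitelist, blacklist) additions.
def rule_alkane_methyl(a, white):
    # Carbon with 4 bonds: 1 carbon and 3 hydrogens -> candidate opls_135.
    if molecule[a]["element"] == "C" and neighbor_elements(a) == ["C", "H", "H", "H"]:
        return {"opls_135"}, set()
    return set(), set()

def rule_toluene_methyl(a, white):
    # More specific rule: a methyl carbon bonded to an aromatic carbon
    # (opls_145) whitelists opls_148 and blacklists opls_135.
    nbrs = molecule[a]["neighbors"]
    if molecule[a]["element"] == "C" and any(
            "opls_145" in white.get(n, set()) for n in nbrs):
        return {"opls_148"}, {"opls_135"}
    return set(), set()

def rule_aromatic_carbon(a, white):
    # Stand-in for the aromatic-carbon rule: types the ring carbon directly.
    if a == "C_ring":
        return {"opls_145"}, set()
    return set(), set()

def run_rules(atoms, rules):
    """Evaluate all rules to a fixed point; final type = Whitelist - Blacklist."""
    white = {a: set() for a in atoms}
    black = {a: set() for a in atoms}
    changed = True
    while changed:
        changed = False
        for a in atoms:
            for rule in rules:
                wl, bl = rule(a, white)
                if not wl <= white[a] or not bl <= black[a]:
                    white[a] |= wl
                    black[a] |= bl
                    changed = True
    return {a: white[a] - black[a] for a in atoms}

types = run_rules(["C_me", "C_ring"],
                  [rule_aromatic_carbon, rule_alkane_methyl, rule_toluene_methyl])
print(types["C_me"])
```

Because evaluation iterates to a fixed point, the toluene rule fires on a later pass, once the ring carbon's whitelist contains opls_145, regardless of rule order, which is the order-independence benefit described above.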
poster
www.avant-project.eu Alternatives to Veterinary ANTimicrobials This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 862829. @avantproject @project-avant GROUP PARTNERS CONTACTS OBJECTIVES ABOUT Coordination: University of Copenhagen, lg@sund.ku.dk Communication: RTDS Group, avant@rtds-group.com Picture credits: ©2020 Shutterstock/RTDS Responsible for layout & content: RTDS Group AVANT aims to produce a complementary set of alternatives to treat or prevent post-weaning diarrhoea and respiratory infections: • Targeted treatment of enterotoxigenic E. coli infections • Alternative feeding strategies for sows & piglets • Gut-stabilising interventions based on synbiotic (pre- & probiotic) feed additives • Faecal microbiota transplantation • Immunostimulating products (injectable & oral) • Novel veterinary medicinal products containing bacteriophages & polymers AVANT is a multi-actor, inter-sectoral project aimed at developing alternatives to antimicrobials for the management of bacterial infections in pigs, especially diarrhoea during the weaning period. Use of antimicrobials in animals adds to the public health threat of multidrug-resistant bacterial infections in humans. Alternatives for the treatment of major bacterial diseases in livestock will help to mitigate the public health risk. ECONOMIC IMPACT • INCREASED ANIMAL WELFARE • PUBLIC HEALTH • REDUCED ANTIMICROBIAL USAGE
poster
This research was supported by the Deutsche Forschungsgemeinschaft, project number: 522896622. Semantic Dependency in Preposition Omission Miriam L. Schiele miriam.schiele@uni-tuebingen.de 1. Preposition Omission ● In elliptical clauses, prepositions are either kept (pied-piping) or dropped (P-omission): Who did Peter talk to? (To) Bill. ● Merchant (2001) argues that P-omission is only licensed in languages which allow prepositions to be stranded, e.g., English: (1) Who did John go to the party with? ● In contrast, German does not allow such preposition-stranding: (2) *Wem hat sie mit gesprochen? (intended: 'Who did she speak with?') ● Merchant's generalization seems to be borne out for German, as pied-piping is preferred over P-omission (Molimpakis 2019; Lemke 2021) 4. Stimuli ● Example stimulus with semantic dependency: (3) A: Paul hat mit seinem Bruder auf etwas gewartet. 'Paul waited for something with his brother.' B: Ja, auf den Zug. (dependent, pied-piping) B′: Ja, den Zug. (dependent, P-omission) 'Yes, (for) the train.' ● Example stimulus without semantic dependency: (4) A: Paul hat mit jemandem auf den Zug gewartet. 'Paul waited for the train with someone.' B: Ja, mit seinem Bruder. (optional, pied-piping) B′: Ja, seinem Bruder. (optional, P-omission) 'Yes, (with) his brother.' 2. Semantic Dependency ● Semantic dependency refers to the relation between the verb and the PP (Hawkins 2004): − the PP is required (e.g., believe in sth./sb., look after sb.) or − the PP is optional (e.g., swim with sb., smile at sb.) ● A corpus study on English revealed that P-omission is nearly four times more likely than pied-piping in the presence of semantic dependencies (Nykiel & Hawkins 2020) → The acceptability of P-omission seems to be influenced by whether the PP is semantically required or not ● However, whether semantic dependencies between the verb and the PP interfere with P-omission has not been tested for German so far. Does German pattern with English? 3. 
Study Design ● 2 x 2 factorial within-subject design ● SEMANTIC DEPENDENCY: dependent or optional ● PREPOSITION: pied-piping or P-omission ● 12 critical items plus 24 filler items and 3 attention items ● Stimuli are presented in a Latin square design in pseudo-random order ● 32 native German speakers (29 used for analysis), recruited via Prolific ● Participants are asked to rate the naturalness of speaker B's answer on a scale from 1 'fully natural' to 7 'fully unnatural' 5. Results ● Pied-piping is significantly more acceptable than P-omission (t = 5.25, p < 0.01), but P-omission is not rated as ungrammatical (x̄ = 4.94), contra Merchant's generalization ● Significant interaction between PREPOSITION and SEMANTIC DEPENDENCY (t = 2.61, p = 0.03) → The degradation caused by P-omission is significantly lower in the presence of semantic dependencies References: Hawkins, J. A. 2004. Efficiency and Complexity in Grammars. OUP. • Lemke, R. 2021. Experimental investigations on the syntax and usage of fragments. Language Science Press. • Merchant, J. 2001. The Syntax of Silence: Sluicing, islands, and the theory of ellipsis. OUP. • Molimpakis, E. 2019. Accepting Preposition-Stranding under Sluicing Cross-linguistically: a Noisy-Channel Approach. Doctoral thesis, UCL. • Nykiel, J. & J. A. Hawkins. 2020. English fragments, minimize domains, and minimize forms. Language and Cognition 12(3), 411-443. 6. Conclusions ● Findings pattern with results from the corpus study on English ● Semantic subcategorization influences syntactic form ● Cross-linguistic research is needed to fully validate/falsify Merchant's generalization Main takeaway: The strength of the semantic dependency between the verb and the PP seems to be a more reliable predictor of P-omission than a language's ability to strand its prepositions [Figure: raw ratings by PREPOSITION (pied-piping vs. P-omission) and SEMANTIC DEPENDENCY (dependent vs. optional).]
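The reported interaction can be read as a difference of degradations across the two dependency conditions. A toy computation of that contrast (the ratings below are invented, chosen only to mirror the qualitative pattern, not the study's data):

```python
from statistics import mean

# Illustrative naturalness ratings (1 = fully natural, 7 = fully unnatural)
# for the 2 x 2 design; three made-up participants per cell.
ratings = {
    ("dependent", "pied-piping"): [2, 2, 3],
    ("dependent", "P-omission"):  [4, 4, 3],
    ("optional",  "pied-piping"): [2, 3, 2],
    ("optional",  "P-omission"):  [6, 5, 6],
}

cell = {k: mean(v) for k, v in ratings.items()}

# Degradation caused by P-omission within each dependency condition.
deg_dependent = cell[("dependent", "P-omission")] - cell[("dependent", "pied-piping")]
deg_optional = cell[("optional", "P-omission")] - cell[("optional", "pied-piping")]

# The interaction is the difference of these degradations: positive means
# P-omission hurts less when the PP is semantically required.
interaction = deg_optional - deg_dependent
print(deg_dependent, deg_optional, interaction)
```

The study's actual inference uses mixed-effects modeling over all 29 participants; this sketch only shows which cell-mean contrast the interaction term corresponds to.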
poster
SI2-SSE: GraphPack: Unified Graph Processing with Parallel Boost Graph Library, GraphBLAS and High-Level Generic Algorithm Interfaces
Andrew Lumsdaine (PI), Kevin DeWeese, University of Washington {al75,deweeskg}@uw.edu
Award 1716828

Motivation
> Graphs are a powerful abstraction that represent (arbitrary) relationships between (arbitrary) entities
> Graph theory does not depend on particular entities or relationships (abstract algorithms)
> Software does depend on particular entities and relationships (concrete data structures)
> A generic library allows domain abstractions to be used directly with library algorithms (no conversion to library types needed)

Concepts / Generic API
> "There are no graphs"
> Generic algorithms specify requirements on their input types
> Generic libraries contain useful algorithms organized around classifications of these requirements ("concepts")
> Satisfied, e.g., by standard library containers; only standard library containers are needed to use graph algorithms

Algorithms (Current List)
> Breadth First Search
> Depth First Search
> Connected Components
> Triangle Counting
> Bellman-Ford
> Page Rank
> Maximal Independent Set
> Dijkstra
> Edmonds-Karp Max Flow
> Boykov-Kolmogorov Max Flow
> K-core
> Jones-Plassmann Coloring
> Brandes Betweenness Centrality
> (More to come)

Parallelization
> Parameterized parallelism with C++ execution policies
> Sequential, shared memory threads, accelerators
> (Explicit parallelism with C++ tasks)

Performance: Triangle Counting
> Fully generic C++ library matches or exceeds performance of benchmark code (Beamer GAP benchmark)
> 32-core dual-socket Xeon

Tech Transfer
> Open Source Release April 2020: https://gitlab.com/al75/bgl17
> Proposal to ISO C++ Standards Committee SG19: P1709

Example: BFS
    adjacency<0> A { /* initializer... */ };
    std::vector<size_t> distance(A.size());
    std::vector<vertex_id_t> predecessor(A.size());
    for (auto&& [u, v] : bfs_edge_range(A)) {
      distance[u] = distance[v] + 1;
      predecessor[u] = v;
    }
> Range adapters allow traversal in specified (algorithmic) order

Architecture
> Build on significant prior work by PI (BGL, PBGL, AM++, GBTL)
> Refactor and modernize for C++17 and beyond
poster
Internet Technology and Data Science Lab (IDLab), Ghent University - imec Jelle Vanhaeverbeke, Maarten Slembrouck and Steven Verstockt jelle.vanhaeverbeke@ugent.be / http://idlab.ugent.be / http://idlab.technology HELICOPTER VIDEO GEOLOCALIZATION FOR CYCLING RACES Straight Road Transformation • Retrieve road path • For satellite imagery: from the race course GPX • For helicopter footage: via a road segmentation model (U-Net style) • For each pixel on the road: interpolate pixels perpendicular along the path Correlation Matching • Slide the helicopter representation over the satellite representation • Calculate normalized cross-correlation • Future: build ML model for better correlation matching [Figures: helicopter vs. satellite straightened-road representations with correlation maps (blue → red); best match marked correct in two examples and incorrect in one.]
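The correlation-matching step can be illustrated in one dimension: slide a short profile along a longer one and keep the offset with the highest normalized cross-correlation. The real pipeline correlates 2-D straightened-road images; the profiles below are toy data.

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_offset(helicopter, satellite):
    """Slide the helicopter profile along the satellite profile and return
    the offset with the highest correlation score."""
    n = len(helicopter)
    scores = [ncc(helicopter, satellite[i:i + n])
              for i in range(len(satellite) - n + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy 1-D intensity profiles along the straightened road; the helicopter
# profile matches the satellite profile starting at offset 2.
sat = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
heli = [1, 3, 7, 3, 1]
print(best_offset(heli, sat))  # → 2
```

Because NCC subtracts the mean and divides by the standard deviation of each window, the match is insensitive to global brightness and contrast differences between the helicopter footage and the satellite imagery, which is why it is preferred here over plain cross-correlation.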
poster
New Zealand ORCID consortium orcid@royalsociety.org.nz Our Integrations In-house integration: one of our main funders is now collecting authenticated iDs and requesting permissions for future use, with the aim of writing successful funding to recipients' ORCID records in the near future. 24 Hub users; one PURE user; at least 6 Symplectic Elements users; at least 3 other integrations. Our ORCID numbers Through the NZ ORCID Hub we have written 2154 affiliations across 24 organisations and 244 funding items for our blue-skies research fund. One organisation is writing works, and several are trialling this. One organisation is using, and one trialling, the Hub's own API. NZ ORCID Hub is our national integration: a simple user interface allowing organisations to collect authenticated ORCID iDs, gather permissions, and read from and write to all parts of the ORCID record. API 3.0 functionality coming soon... NZ National Academy encompassing science, technology and the humanities. Nationally recognised organisation. Works across the whole research sector. Seen as a 'neutral organisation'. Our Consortium Lead Our advisory committee is a crucial link between our member organisations and the lead agency. We have one representative from each sector of the research community on our committee. Our Advisory Committee Contact us We are committed to building a community of practice: quarterly consortium newsletter with regular 'success stories'; fortnightly NZ Hub demos showcasing the latest developments; a Sharespace on our website where members can share their ORCID engagement resources; Town Hall meetings for our members. The lead agency supports ORCID drives with design and printing of resources and ORCID swag. Currently, 51 member organisations in the consortium. Our Community kiwis are endangered kiwis have whiskers kiwis are nocturnal kiwis have tiny wings but can't fly kiwis' feathers are more like fur kiwis have nostrils at the end of their beak Our kiwis
poster
EEG CORRELATES OF HUMAN-RHYTHM INTERACTION DEPARTMENT OF INFORMATION TECHNOLOGY, WAVES Wannes Van Ransbeeck, Dick Botteldooren, Sarah Verhulst, Marc Leman The scene Within the metaverse there is a tendency towards seamless and intuitive interaction in the virtual world, achieved through an immersive and interactive environment. Capturing user sensation and experience during the interaction can help to build and maintain this immersion in any interaction or context, including music and rhythm. Furthermore, it may extend or improve an interaction, creating a lasting, meaningful one. Besides physical user inputs or pupillometry, the brain shines a relevant light on this interaction. Hence brain monitoring and bio-synchronization can be relevant tools for user analysis of the experience. The issue The main techniques used for neural analysis in relation to music perception and experience come from two fields: frequency tagging and phase synchronisation. The former deals with the manifestation of the stimulus frequencies, while the latter considers band activity as it synchronises to presented rhythms and its prolonged continuation. This syncing of the brain (neural entrainment, as it were) has opened up much research into evaluating rhythm and music perception, prediction and also evoked emotion. However, despite being a topic of major interest in neuroscience, correct non-invasive quantification of this entrainment seems difficult, especially when it comes to defining emotion, experience and sensation without misinterpretation [1]. A fundamental linkage between user experience, interaction and neural markers has still not been fully explored, and pilot data from a suggested approach is presented here. Plan Pilot data and experiment setup are presented in this poster. 
This aims at exploring potential brain markers in relation to the synchronicity of a performance and associated questionnaire responses, to identify bio-indicators of a good human-rhythm interaction and, through them, entrainment and user experience. Figure 1: Illustration of band activity fluctuation for 6 beats and 3 taps. Setup A finger-tapping experiment with variable beat patterns will be conducted on a cohort of 15 subjects, in which a subject synchronizes their tapping to the auditorily presented rhythm or a variation of it. The accompanying auditory rhythm can be either a binary, ternary or metronome rhythm in a 6-beat measure, controlled by a computer or a second participant. The subject will perform either the binary or the ternary rhythm. During synchronization, both spectral band activity and the low rhythm-frequency content of the EEG activity will be evaluated locally in time and across the experiment. Two spatial filters, based on localizer trials, will allow separation of sensorimotor and perceptual activity to study their responses separately. The quality of the rhythm interaction is defined on the basis of the physical performance (mean asynchrony and resultant vector) and questionnaires, and additionally builds on the theory of human embodiment within music interaction. Figure 2: Spectral activity at stimulation frequencies for 2 beats and 3 taps (polyrhythm), for motor components (MOT IC3) and auditory responses (AUD IC16) as well as individual electrodes. Figure 3: Localiser for motor (left) and auditory (right) activity. Figure 4: Illustration of location of stimulation frequency in the brain. 
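The two performance measures named above, mean asynchrony and resultant vector, come from circular statistics over tap-beat offsets. A minimal sketch (the 500 ms beat period and the tap offsets are illustrative values, not experiment data):

```python
from math import atan2, cos, pi, sin, sqrt

def tapping_stats(asynchronies_ms, period_ms):
    """Mean asynchrony and resultant vector length for tap-beat offsets.

    Each asynchrony (tap time minus beat time) is mapped to a phase angle
    on the beat cycle; the resultant vector length R is 1 for perfectly
    consistent tapping and near 0 for uniformly scattered taps.
    """
    phases = [2 * pi * a / period_ms for a in asynchronies_ms]
    x = sum(cos(p) for p in phases) / len(phases)
    y = sum(sin(p) for p in phases) / len(phases)
    r = sqrt(x * x + y * y)
    mean_async = atan2(y, x) / (2 * pi) * period_ms
    return mean_async, r

# Toy taps, each slightly ahead of a 500 ms beat (negative mean asynchrony,
# i.e. the anticipation typical of sensorimotor synchronization).
m, r = tapping_stats([-20, -25, -15, -20], 500)
print(m, r)  # mean asynchrony near -20 ms, R close to 1
```

Computing the mean on the circle rather than arithmetically keeps the measure well-defined even when taps straddle the beat (e.g. offsets of +240 ms and -240 ms on a 500 ms cycle are neighbours, not opposites).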
Results & Discussion Initial analysis was conducted and allowed extraction of spectral band activity in windows around tapping instances (Figure 1), frequency activity across the entire experiment at stimulation frequencies (Figure 2), localizer data on separate motor and auditory components (Figure 3), and identification of frequency-specific activation regions in the brain (Figure 4). The spectral influence of the sphere visualization illustrated clear effects on spectral content at stimulation frequencies and accuracy throughout the experiment prov
poster
Innovative Research for a Sustainable Future www.epa.gov/research Yue Ge l ge.yue@epa.gov l 919-541-2202 Investigating Steatosis Susceptibility in Pancreas Tissue of Mouse in Response to Vinyl Chloride Exposure aYue Ge, aMaribel Bruno, aBrian N. Chorley, bMatthew C. Cave, cJuliane I. Beier a US Environmental Protection Agency; RTP, NC 27711; b University of Louisville; c University of Pittsburgh Objective Methods Results Conclusions Alterations in physiological processes in the pancreas have been associated with various metabolic problems and fatty disease syndromes, such as inflammation, fibrosis, necrosis, abdominal obesity, and steatosis. Alterations of normal pancreatic functions can result from environmental exposures, such as chemicals and diets, which can impact susceptibility to metabolic problems and fatty diseases. It has been reported that environmental vinyl chloride (VC) exposure significantly increased steatosis in mice fed a high-fat diet (HFD) but not a low-fat diet (LFD). However, little is known about the molecular basis, such as toxicity pathways and biomarkers, and the toxic mechanisms underlying the VC-increased environmental susceptibility, steatosis and other fatty diseases induced by HFD. The present study was undertaken to examine the protein responses to VC exposure in pancreas tissues of C57BL/6J mice fed LFD or HFD, with a focus on investigating changes at the expression and/or phosphorylation levels of protein biomarkers of fat metabolism, and key regulators of steatotic pathways such as cell proliferation and differentiation, fatty acid oxidation, energy metabolism, cell adhesion and migration, oxidative stress, and inflammation. Our hypothesis is that these protein alterations may modify the environmental susceptibility of mice to the existing steatotic pathways and adverse outcomes induced by HFD after exposure to VC. 
Six-week-old C57BL/6J mice (Jackson Laboratory) fed LFD or HFD (Envigo Teklad Diets, Madison) were exposed to VC (targeted concentration: 0.1 ppm) by air inhalation in chambers for 6 hours per day, 5 days per week, for 12 weeks. The pancreas tissue samples were homogenized using a polytron for 20 seconds in a buffer containing 150 mM NaCl, 20 mM Tris pH 7.7, 1 mM EDTA, 1 mM EGTA and 1% Triton X-100. After incubation on ice for one hour, the homogenized tissue samples were centrifuged at 10,000 g for 10 minutes. The supernatants containing total tissue proteins of pancreas were analyzed for identification of changes at the expression and/or phosphorylation levels of protein biomarkers using a combination of a 200-target mouse cytokine array (QAM-CAA-4000, Raybiotech, Norcross GA) and Western blot. Pancreas tissues from four groups of mice treated with LFD, HFD, LFD+VC, or HFD+VC were used for the study. The accession numbers, relative fold changes, and IDs of the differentially altered proteins were tabulated and imported into Ingenuity Pathway Analysis (IPA) for identification of top canonical pathways, toxlists, molecular and biological functions, biomarkers, and protein interaction networks that were regulated by the identified proteins. Figure 1: Hierarchical clustering of differentially expressed and phosphorylated proteins in pancreas of mice treated with LFD and HFD with or without VC. Blue areas represent downregulation and red ones upregulation of proteins. As shown in Fig 1, the different treatment groups (LFD, LFD+VC, HFD, or HFD+VC) were distinctly separated (left side of the figure), which suggests that protein expression and phosphorylation patterns in the pancreas proteome of mice treated with LFD, LFD+VC, HFD, or HFD+VC are different, and that the HFD-induced protein change is similar to the one induced by HFD+VC, as compared to those induced by LFD and LFD+VC. 
The lower portion of the dendrogram depicted seven main clusters, each usually consisting of two proteins. These clusters included the cluster of TCK-1 and PIGF2, Eotaxin-2 and CD48, GSTµ and pAKT, cystatin C an
poster
Fidelity and Uncertainty in Climate Data Records from Earth Observation (FIDUCEO)
Christopher J. Merchant (University of Reading, UK), Jonathan Mittaz (University of Reading and National Physical Laboratory, UK), Emma Woolliams (National Physical Laboratory, UK)

Overview
Well-characterised uncertainties are crucial if Climate Data Records (CDRs) are to be properly exploited. Our understanding of CDR uncertainties is, however, currently limited. For example, how trustworthy is the uncertainty on a given measurement of a climate variable? If we compare a measurement now with one obtained a few decades earlier, how uncertain is the apparent change? To what extent do instrumental and multi-mission (in)stability limit the conclusions that can be drawn about climatic trends?

The objective of the FIDUCEO project is to develop new methods to address such questions for CDRs derived from satellite observations. The methods will be developed by adapting insights and techniques from the discipline of metrology, the "science of measurement uncertainty" and of uncertainty traceability. They will be demonstrated across microwave, infra-red and visible domains.

Example metrology-based analysis: tracing cumulating uncertainties in AVHRR thermal radiance data
An "instrument model", such as that below, captures the physical effects and data transformations that determine the calibrated radiance that is the "level 1" product from a sensor such as the Advanced Very High Resolution Radiometer.

In the example below, reference measurements from other sensors are used for cross-calibration, to improve on the nominal pre-launch characterisation using data in flight.

The magnitudes and correlation properties of error distributions for radiance data, given the new calibration and instrument behaviour, can be characterised by error modelling and propagation through all elements of the corresponding "uncertainty chain".
In FIDUCEO, new harmonised level 1 datasets in an easy-to-use FCDR format will be created. This will apply state-of-the-art calibration and include rigorous uncertainty information, using uncertainty chains as discussed above. Easy-FCDR datasets will make it much easier for uncertainty-characterised, harmonised geophysical datasets to be derived from several important series of instruments (AVHRR, HIRS, AMSU, Meteosat). In this way, FIDUCEO is intended to enable widespread rigorous exploitation of long-term satellite data for new climate variables.

[Figure: AVHRR instrument model and uncertainty chain, tracing counts (C_Earth, C_ICT, C_Space), internal calibration target radiance and temperature (R_ICT, T_ICT), detector, digitization, spectral response function and emissivity effects, and calibration-equation coefficients (a0, Δε, a1, a2) through to calibrated Earth radiance, with collocation against reference sensors such as (A)ATSR/IASI.]

Concept of the FIDUCEO project
FIDUCEO will create new methods, tools, training and datasets. The new methods will address new physics-based harmonisation techniques for series of key sensors, and corresponding estimation of all significant components of uncertainty (including pixel level and long-term stability) in a metrologically robust manner. The application of these methods within the project will lead to four new Fundamental Climate Data Records (FCDRs): Meteosat (VIS), AVHRR & HIRS (IR) and microwave humidity sounders (MW). Based on these FCDRs, within FIDUCEO we will demonstrate extension of the metrological approach to geophysical datasets (CDRs). The CDRs will address upper-tropospheric humidity, surface temperature and albedo, and aerosol optical depth.
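The uncertainty-chain idea can be illustrated with a first-order (GUM-style) propagation through a simplified calibration equation. Everything below (the quadratic form, the coefficient values, and the uncertainty magnitudes) is a made-up sketch; the real AVHRR instrument model contains many more terms and correlated effects.

```python
import math

# Simplified calibration: Earth radiance from background-subtracted counts dC
# via R = a0 + a1*dC + a2*dC^2 (illustrative coefficients, not real AVHRR values).
def radiance(a0, a1, a2, dC):
    return a0 + a1 * dC + a2 * dC ** 2

a0, a1, a2 = 1.5, 0.02, 1e-6        # assumed calibration coefficients
dC = 400.0                          # assumed Earth counts minus space counts
u = {"a0": 0.05, "a1": 5e-4, "a2": 5e-8, "dC": 2.0}   # standard uncertainties

# Sensitivity coefficients: partial derivatives of R with respect to each input.
sens = {"a0": 1.0, "a1": dC, "a2": dC ** 2, "dC": a1 + 2 * a2 * dC}

# Uncorrelated inputs combine in quadrature; correlated errors (e.g. a shared
# calibration bias) would need the full covariance treatment FIDUCEO develops.
R = radiance(a0, a1, a2, dC)
u_R = math.sqrt(sum((sens[k] * u[k]) ** 2 for k in u))
```

With these made-up numbers, the gain term a1 dominates the combined uncertainty, which illustrates why each link of the chain must be characterised separately rather than lumped into one error bar.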
poster
M. LOUVIOT1, N. SUAS-DAVID2, S. KASSI3, M. REY4, V. BOUDON1, R. GEORGES2 1 Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS/Université de Bourgogne, 9 Avenue Alain Savary, BP 47870, F-21078 Dijon Cedex, France, 2 Institut de Physique de Rennes, UMR 6251, Campus de Beaulieu, Bât 11C, Université de Rennes 1/CNRS, F-35042 Rennes Cedex, France 3 Laboratoire Interdisciplinaire de Physique, Univ. Grenoble 1/CNRS, LIPhy UMR 5588, Grenoble F-38041, France 4 Groupe de Spectrométrie Moléculaire et Atmosphérique, UMR CNRS 6089, Université de Reims, U.F.R. Sciences Exactes et Naturelles, B.P.1039, 51687 Reims Cedex 2, France

Here we highlight the presence of two rotational temperatures in the jet, since the Boltzmann plots show two distinct slopes. This demonstrates that the jet is composed of two regions: the isentropic core (the zone of silence) and the boundary layers surrounding it, which arise from residual gas in the chamber at thermodynamic equilibrium. This second contribution is therefore rotationally hotter than the core.

1. Experimental setup | 2. Aims of the setup and experimental results | Introduction | 4. Analysis of the CH4 spectrum | Conclusion and perspectives

[Figure: the polyads of methane; number of vibrational sublevels vs. wavenumber: P0 Ground State (1), P1 Dyad (2), P2 Pentad (9), P3 Octad (24), P4 Tetradecad (60), P5 Icosad (134), P6 Triacontad (280), P7 Tetracontad (538), P8 Pentacontakaipentad (996), P9 Heptacontad (1746); with markers for Huygens DISR/ULIS (altitude 30 km), Saturn at 727 nm (13755 cm-1), and the Titan CH4 window at 939 nm (10650 cm-1); simulated spectrum x 100.]

This work: analysis of hot CRD spectra of the Tetradecad region of methane.

The study of specific astronomical objects such as brown dwarfs and giant exoplanets is currently highly relevant in astrophysics.
Although the numerous spectroscopic data recorded are difficult to exploit fully, the presence of hot methane in the atmospheres of these bodies is well established. This encouraged us to use an original experimental setup developed at the Institut de Physique de Rennes to record absorption spectra of a hypersonic gas expansion, initially heated to high temperature, with the highly sensitive cavity ring-down spectrometer (CRDS) developed by the Laboratoire LIPhy in Grenoble. The analysis was performed at the Laboratoire Interdisciplinaire Carnot de Bourgogne in cooperation with the Groupe de Spectrométrie Moléculaire et Atmosphérique in Reims.

[Figure: experimental setup; argon flows through a high-pressure reservoir (~1000 Torr) with a hollow porous graphite rod, copper electrode and non-porous pierced graphite electrode (2 mm hole), an argon protecting flow, and a hypersonic gas expansion into a low-pressure chamber (~0.09 Torr) probed by a diode laser beam between high-reflectivity mirrors (>99.99%).]

Two mixtures were used: first argon and carbon monoxide, then argon and methane. The mixtures were held at high pressure in a reservoir (~1000 Torr) and heated to very high temperature (~2000 K) by the high-enthalpy source developed at IPR [1]. The High Enthalpy Source, developed in Rennes [1], was connected to a low-pressure chamber (~0.09 Torr); the gas jet was formed through a 2 mm circular hole. The CRD spectrometer, developed by the LAME group of the LIPhy laboratory in Grenoble, was placed perpendicular to the jet axis to record the absorption spectra. High-resolution spectra of CO and CH4 were thus investigated in the [5920-6030] cm-1 infrared spectral range; five laser diodes were used to cover this region (~30 cm-1 per diode). [1] J. Thiévin, R. Georges, S. Carles, A. Benidar, B. Rowe and J.-P. Champion, High-temperature emission spectroscopy of methane, J. Quant. Spectrosc. Radiat. Transfer, 109, 2027-2036 (2008).
The principle of this setup is the production of hypersonic gas expansions. This has been possible thanks to a high pressure reservoir connected to a low pressure chamb
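The two-slope Boltzmann analysis described in the conclusion can be sketched numerically: a cold core population plus a warm boundary-layer population yield a Boltzmann plot whose low- and high-energy ends have different slopes, each giving a rotational temperature. All numbers below (temperatures, weights, energy grid) are illustrative assumptions, not measured values.

```python
import numpy as np

k_cm = 0.6950356                  # Boltzmann constant in cm^-1 per K
T_core, T_layer = 30.0, 300.0     # assumed rotational temperatures (K)
E = np.linspace(0.0, 400.0, 20)   # rotational energies (cm^-1)

# Two-temperature population: the cold isentropic core dominates at low E,
# the warm boundary layer (2% weight here) dominates at high E.
pop = np.exp(-E / (k_cm * T_core)) + 0.02 * np.exp(-E / (k_cm * T_layer))
lnpop = np.log(pop)

# Fit straight lines to the two ends of the Boltzmann plot; slope = -1/(k T).
slope_lo = np.polyfit(E[:4], lnpop[:4], 1)[0]
slope_hi = np.polyfit(E[-8:], lnpop[-8:], 1)[0]
T_lo = -1.0 / (k_cm * slope_lo)   # recovers roughly the core temperature
T_hi = -1.0 / (k_cm * slope_hi)   # recovers roughly the boundary-layer temperature
```

The low-energy fit is slightly contaminated by the warm component, so T_lo comes out a little above the assumed core value; this mirrors why, in practice, the two slopes must be fitted on well-separated energy ranges.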
poster
In 2023, the Digital Humanities Research Institute at Siberian Federal University launched a working prototype of a research digital infrastructure, Siberiana.online, for the aggregation, preservation and dissemination of Siberian historical and cultural heritage, supporting historical, literary, ethnographic, art-historical and other research at the intersection of the humanities and computer science. The aim of the project is to launch a long-term initiative for the digitization, analysis and curation of the various cultural-heritage collections of Central Siberia (the so-called Angara-Yenisei macroregion). The project is designed for the research and education needs of digital humanists at Siberian Federal University and worldwide, because, judging by the current literature, Siberian artifacts and collections evoke steady interest (Kizhner et al., 2021). We took several online resources as benchmarks: Europeana (pro.europeana.eu) for metadata organization, Wikidata (wikidata.org) for data identification and linked open data, Global Digital Heritage (GlobalDigitalHeritage.org) as the reference point for presenting cultural and historical objects, and PhotoGrammar (PhotoGrammar.org) as the example we focus on for successfully combining retrieval, analysis and mapping of artifacts across collections. SIBERIANA.online SIBERIANA PROJECT SIBERIANA'S FACETED CLASSIFICATION For the Siberiana prototype, the following database fields were obligatory for faceted classification: what (object type), when (time period), where (location), where it is stored (institution), as well as copyright or open-license information. Siberiana is planned as a digital model of the historical, cultural, and natural heritage of the Angara-Yenisei macroregion. Our goal is to unlock the potential of these heritage resources in two dimensions.
For all types of users, the goal is to make such content easily accessible, visually well presented and conveniently organized; for researchers, it is to provide wide coverage of collections and valuable services for exploratory data analysis, and to offer a platform for collecting specific research data for a significant agenda (Kizhner et al., 2022). Thus, historical, cultural, and natural heritage should be understood as a resource that allows people to make life interesting, rich, valuable, and in some respects socially significant. REFERENCES • Kizhner, Inna / Terras, Melissa / Manovich, Lev / Orekhov, Boris / Kim, Igor / Rumyantsev, Maxim / Bonch-Osmolovskaya, Anastasia (2022). "The history and context of the Digital Humanities in Russia", in: Global Debates in the Digital Humanities (eds Domenico Fiormonte, Sukanta Chaudhuri). University of Minnesota Press: 55-70. • Kizhner, Inna / Terras, Melissa / Rumyantsev, Maxim / Khokhlova, Valentina / Demeshkova, Elisaveta / Rudov, Ivan / Afanasieva, Julia (2021). "Digital cultural colonialism: measuring bias in aggregated digitized content held in Google Arts and Culture", in: Digital Scholarship in the Humanities, V. 36, Is. 3: 607-640. • Kizhner, Inna / Terras, Melissa / Rumyantsev, Maxim / Sycheva, Kristina / Rudov, Ivan (2019). "Accessing Russian culture online: The scope of digitization in museums across Russia", in: Digital Scholarship in the Humanities, V. 34, Is. 2: 350-367. • "Siberiana". Aggregator of historical and cultural heritage of the Yenisei Siberia.
URL: https://siberiana.online/ [26.06.2023] CONTACT Digital Humanities Research Institute at Siberian Federal University: DHRI.ru Andrei Volodin – email: Volodin@hist.msu.ru | telegram: @andreivolodin | twitter: @avolon Polina Senotrusova (psenotrusova@sfu-kras.ru), Oleslav Antamoshkin (oantamoskin@sfu- kras.ru), Inna Kizhner (inna.kizhner@gmail.com), Maksim Rumyantzev (mrumyantsev@sfu- kras.ru), Nikita Pikov (npikov@sfu-kras.ru), Andrey Gruzdev (agruzdev@sfu-kras.ru) The main aim of Sibe
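The obligatory facets described above (what, when, where, institution, rights) can be sketched as a record type. The field names below are illustrative stand-ins, not Siberiana's actual database schema.

```python
from dataclasses import dataclass

@dataclass
class HeritageRecord:
    object_type: str   # "what"
    time_period: str   # "when"
    location: str      # "where"
    institution: str   # "where it is stored"
    rights: str        # copyright / open-license statement

    def facets(self) -> dict:
        # Faceted classification: each field is an independent filter axis.
        return {
            "what": self.object_type,
            "when": self.time_period,
            "where": self.location,
            "institution": self.institution,
            "rights": self.rights,
        }

# Hypothetical record; values are invented for illustration.
record = HeritageRecord("photograph", "1920s", "Krasnoyarsk", "SibFU collection", "CC BY 4.0")
```

Keeping the facets orthogonal like this is what lets an aggregator combine filters (e.g. all photographs from the 1920s held by one institution) without bespoke queries per collection.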
poster
COUNTRY Spain KEY WORDS HTC, biocoal, heating power, biomass DISCLAIMER This Practice Abstract reflects only the author's view and the BRANCHES project is not responsible for any use that may be made of the information it contains. 14 HTC of biomass Ingelia, a company dedicated to the commercialization of biomass and organic residues, has developed an HTC technology plant at industrial scale. The plant was built in 2010 in Valencia and is able to process organic residues in continuous operation mode, thereby demonstrating the feasibility of this technology. In 2015, a second reactor was installed. The technology concentrates the heating value of the input biomass in a solid biofuel (around 24 MJ/kg) and, in addition, produces a fertilizing water. The input material can be almost any type of wet organic residue (for instance the organic fraction of urban residues, sewage sludge, agro-forestry residues, agri-food residues, prunings, etc.). During the hydrothermal carbonization process, the wet biomass is carbonized into biocoal. The product is then refined (removing impurities such as metals, stones, glass, etc.) and dried. Finally, the powdered biocoal can undergo pelletizing or briquetting. The process also allows biochemical compounds to be extracted from some biomasses/residues. The reactors are modular, with a processing capacity between 5,000 and 10,000 t/year per reactor, and the number of reactors can be adapted to the project needs. The biocoal obtained has many advantages, starting with a competitive market price, homogeneity regardless of the biomass introduced into the process, and a heating value around 30% higher than that of conventional pellets. The produced biocoal is also hydrophobic, as well as easy to transport and store. It is therefore a renewable product that can substitute fossil coal in different applications (thermal, metallurgy, etc.)
while contributing to a decrease in GHG emissions. The liquid fraction produced can be used for irrigation purposes (parks, gardening or agriculture). DOWNLOAD www.branchesproject.eu AUTHORS Maider Gomez (Circe) mgomez@fcirce.es Daniel García (Avebiom) Pablo Rodero (Avebiom) Alicia Mira (Avebiom)
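The quoted figures can be cross-checked with a little arithmetic; the values are taken from the abstract, and the 1.3 factor encodes the stated ~30% heating-value gain over conventional pellets.

```python
# Biocoal heating value and capacity figures quoted in the abstract.
biocoal_hhv = 24.0                 # MJ/kg
pellet_hhv = biocoal_hhv / 1.3     # implied conventional-pellet value, ~18.5 MJ/kg

# Annual energy content per reactor at the quoted 5,000-10,000 t/year capacity.
tj_low = 5_000 * 1_000 * biocoal_hhv / 1e6     # tonnes -> kg, then MJ -> TJ
tj_high = 10_000 * 1_000 * biocoal_hhv / 1e6
```

So a single reactor at full capacity carries roughly 120-240 TJ of fuel energy per year, which is consistent with the abstract's positioning of biocoal as a drop-in substitute for fossil coal.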
poster
Unintended Consequences of Complex Engineered Water Systems: A Case Study in a NW Costa Rican Watershed

Introduction and Objectives
Field instrumentation network in the Palo Verde wetland. 1980s: hydroelectric and PRAT project. • Transfers 1,500 million m3/yr of water from Lake Arenal, doubling the amount of water • Hydroelectric plant: 12% of Costa Rica's electricity • 252 km of roads; 255 km of canals • 30,000+ ha of agricultural area • 155 ha of fish farms (75% of tilapia in the US) • Since the 1990s: tourist boom in coastal areas. Induced changes in land use: a complex engineered water system with classical environmental degradation and social conflicts.

Alice Alonso1, Rafael Muñoz-Carpena1, Miguel A. Campo-Bescós2, Greg A. Kiker1, Ray Huffaker1. 1 Agricultural and Biological Engineering Department, University of Florida; 2 Projects and Rural Engineering Department, Public University of Navarre, Ed. Los Olivos, Pamplona, Spain. Contact: alice.alonso@ufl.edu

Method
Data analysis techniques: dynamic factor analysis, wavelet analysis, singular spectrum analysis, phase-space reconstruction, convergent cross mapping, time travel functions. Goals: uncover emerging patterns; identify causalities; describe dynamics and mechanisms; reconstruct time series.

Data Collection and Preliminary Analysis: Upper and Mid Basin. Data Collection: Palo Verde Wetlands (surface water data).

Discussion and Conclusions
• A rich database has been constructed by combining existing long-term weather and hydrological time series with those from a new instrumentation network in Palo Verde; • Significant trends have been detected in most of the discharge time series, indicating the existence of external drivers; classical stochastic hydrological simulations therefore cannot be performed; • The downward trend detected at most gauging points, even though the water volume is doubled by the hydroelectric scheme, highlights the complexity of the system; • Advanced time series analysis tools (e.g.
dynamic factor analysis, wavelet analysis, spectral decomposition, phase-space reconstruction, convergent cross mapping) will be applied to the data to uncover emerging patterns, drivers, and the causal relationships behind the response and degradation of the system; • This will allow the reconstruction of time series and inform the building of a coupled human-ecohydrological mechanistic model to simulate and predict the impact of human activities on the hydrological signature of the watershed and, eventually, on the wetlands.

Acknowledgement: We give special thanks to the NSF and the Organization for Tropical Studies for the data provided and the great collaboration during the implementation of the instrumentation network and the ongoing data collection.

[Figures: conceptual model of the services and functions provided by the basin (sources: Daniels, 2004; Muñoz-Carpena et al., 2013); land-use maps 1975/1987/2000 around the pipe, hydroelectric plant and Nicoya Gulf; Palo Verde wetland in 1980 and 2010 during the wet season, showing the dense cattail invasion (Carolina Murcia); hydrological modeling as a links-and-nodes system based on Hromadka (1983) and Muñoz-Carpena et al. (2004), for further incorporation in the Questions and Decisions (QnD) modeling system (Kiker et al., 2006).]

References
Daniels, A.E., 2004. Protected Area Management in the Watershed Context: A Case Study of Palo Verde National Park, Costa Rica.
Muñoz-Carpena, R., 2013. WSC Category 2 Collaborative Research: Deriving resilience within a complex water-subsidized basin: drivers, cascading decisions, and management tradeoffs.
Hromadka, T.V., 1983. Computer Methods in Urban Hydrology: Rational Methods and Unit Hydrograph Methods. Lighthouse Publications.
Muñoz-Carpena, R. and J.E. Parsons, 2004. A design procedure for vegetative filter strips using VFSMOD-W. Transactions of the ASAE 47:1933-1941.
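The trend screening mentioned in the conclusions can be sketched with synthetic data: a least-squares slope and its t-statistic flag a monotonic trend in a discharge series. The project itself uses richer tools (dynamic factor analysis, wavelet analysis), and the series below is simulated, not gauge data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(240.0)                                   # e.g. 20 years of monthly flows
q = 50.0 - 0.05 * t + rng.normal(0.0, 2.0, t.size)     # synthetic declining discharge

slope, intercept = np.polyfit(t, q, 1)
resid = q - (slope * t + intercept)
sxx = np.sum((t - t.mean()) ** 2)
se = np.sqrt(np.sum(resid ** 2) / (t.size - 2) / sxx)  # standard error of the slope
t_stat = slope / se                                    # |t| >> 2 flags a significant trend
```

A significant slope is exactly what rules out classical stochastic simulation of the series as-is: the trend (an external driver) must be modelled or removed first.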
Situated in NW Costa Rica, the Palo Verde wetlands are facing severe degradation, probably as a consequence o
poster
The application of Semantic Publishing technologies in the Science of Science research domain for the Humanities field
Presenter: Ivan Heibi, Ph.D. Student, Digital Humanities Advanced Research Centre (DHARC), Department of Classical Philology and Italian Studies, University of Bologna, Bologna (Italy). Email: ivan.heibi2@unibo.it
Large-scale data analysis | Network analysis | Social science | Semantic Publishing

1. Resources
Focus on journal-article-oriented study fields. Books seem to be the document type most cited by humanities fields, yet they are less available. Literature and History are two highly reasonable study fields to take into consideration.

2. Limitations
• The lower proportion of journal articles cited by the humanities compared to scientifically oriented domains.
• The ageing rate.
• The local relevance.
• The division between publications directed toward researchers and writings directed at a public audience.

3. Methodologies
The application of Semantic Web technologies in the scholarly publishing domain [2][3]. This approach has already been adopted, and is still used, in many projects (e.g. OpenCitations), with positive impacts mostly in its application to scientific fields [4]. Semantic Publishing methods help us get data out onto the Web in linked, semantic form, which is highly beneficial, especially if open standards and open-source tools are used; this increases the collaborative nature of community development and enables rapid progress. Semantic Web technologies are a striking approach for building knowledge graphs to analyse and structure the discovered patterns.

4. Desirable Outcomes
1. What are the common formalisms and constructional patterns adopted for citations inside Humanities documents (e.g. a classification according to document sections)?
2. What are the reasons, i.e. the citation functions [5], for citing other works, and which are the most important?
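Research question 2 (citation functions) maps naturally onto the triple-based representation Semantic Publishing relies on. The sketch below stores citations as subject-predicate-object triples, with predicates echoing the CiTO ontology; the DOIs and the helper function are purely illustrative.

```python
CITO = "http://purl.org/spar/cito/"

# Each citation is a triple whose predicate names its citation function.
# The DOIs are invented placeholders, not real documents.
triples = [
    ("doi:10.1/paperA", CITO + "obtainsBackgroundFrom", "doi:10.2/paperB"),
    ("doi:10.1/paperA", CITO + "usesMethodIn",          "doi:10.3/paperC"),
    ("doi:10.4/paperD", CITO + "extends",               "doi:10.1/paperA"),
]

def citation_functions(source, triples):
    """Return the citation functions used by one citing document."""
    return [p.rsplit("/", 1)[-1] for (s, p, o) in triples if s == source]
```

Once citations carry typed predicates like this, question 2 becomes a simple aggregation over the knowledge graph rather than a manual annotation exercise.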
Science of Science (SoS): to quantify and predict scientific research and its resulting outcomes [1].

Background and Purpose; Project Workflow; The Research
• Addressing my research questions using SoS research empowered with Semantic Web technologies could have a revolutionary impact, considering the low coverage SoS has achieved over the Humanities in past years. Broadening knowledge of citations and their usage will help future researchers improve their work and effectively address their research questions.
• Developing new applications that assist the community in making functional use of the discoveries made. Hopefully the developed tools will broaden Humanities researchers' horizons through the potential of these methodologies.

Workflow: learning from previous approaches; defining the datasets, resources and tools to use; answering the research questions; building applications to highlight the discoveries made.

References
1. Fortunato, S., Bergstrom, C. T., Börner, K., Evans, J. A., Helbing, D., Milojević, S., ... & Vespignani, A. (2018). Science of science. Science, 359(6379), eaao0185. DOI: https://doi.org/10.1126/science.aao0185
2. Shotton, D. (2009). Semantic publishing: the coming revolution in scientific journal publishing. DOI: https://doi.org/10.1087/2009202
3. Shotton, D., Portwin, K., Klyne, G., & Miles, A. (2009). Adventures in semantic publishing: exemplar semantic enhancements of a research article. PLoS Computational Biology, 5(4), e1000361. DOI: https://doi.org/10.1371/journal.pcbi.1000361
4. Peroni, S., Dutton, A., Gray, T., & Shotton, D. (2015). Setting our bibliographic references free: towards open citation data. Journal of Documentation, 71(2), 253-277. DOI: https://doi.org/10.1108/JD-12-2013-0166
5. Teufel, S., Siddharthan, A., & Tidhar, D. (2006). An annotation scheme for citation function.
6. Heibi, I., Peroni, S., & Shotton, D. (2019). COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations.
arXiv preprint arXiv: 1904.06052. https://arxiv.org/abs/1904.06052 7. Heibi, I., Peroni, S., & Shotton, D. Enabling text sea
poster
Exploring the Mass-Metallicity Relation in Open Clusters inside the Milky Way
S M Rafee Adnan1,2, Ahmad Al-Imtiaz1, Sumit Roy Pronoy1,3, Anock Somaddar1, Syed Badiuzzaman Faruque1. 1 Shahjalal University of Science and Technology, Bangladesh; 2 University of Louisville, Kentucky; 3 Daffodil International University, Bangladesh

Background & Objective
Open clusters (OCs) provide a controlled setting to investigate the interplay between mass and metallicity on a more localized level. However, this relation within individual open clusters has been sparsely investigated, despite its significance. In this study, we aim to fill this research gap by conducting a comprehensive analysis of 75 star clusters, with the objective of examining the trends and probing the underlying mechanisms governing their mass-metallicity relationship.

Data Collection
The majority of the OCs reside within ~1700 pc and thus belong to the Milky Way; hence, most of the selected cluster members lie on the main sequence of the HR diagram. APSIS utilizes the extensive data collected by the Gaia spacecraft, including mean RVS spectra, mean BP/RP spectra, positional measurements, parallaxes, and photometric data for up to 470 million sources. Seabroke et al. (2022) offer a comprehensive, in-depth explanation of the RVS data and its processing methodology. APSIS derives a range of astrophysical parameters for stars, including spectroscopic and evolutionary parameters such as mass M (140 million stars) and metallicity [M/H] (6 million using RVS, 470 million using BP/RP). We mainly used the following data in our analysis. parallax, pmra, pmdec: we used the parallax and proper motions to determine the members of OCs using HDBSCAN. mass_flame: we used the star's mass, obtained from FLAME.
This value was derived by comparing the teff_gspphot and lum_flame measurements with the BASTI stellar evolution models for solar metallicity, as described by Hidalgo et al. (2018). mh_gspphot: we also employed the decimal logarithm of the iron-to-hydrogen number abundance ratio, relative to the corresponding solar abundance ratio, determined by GSP-Phot Aeneas. This determination was based on analyzing BP/RP spectra, the apparent G magnitude, and parallax, assuming the source to be a single star.

Methodology: Member Selection
HDBSCAN is a clustering method that considers all values and constructs a hierarchical tree, integrating multiple results. HDBSCAN was run on each dataset returned by the ADQL Gaia DR3 query.

[Figure 1. HR diagram of NGC 2682 (G magnitude vs. GBP - GRP, all sources and cluster members). Figure 2. Spatial distribution of NGC 2682 (RA vs. Dec in degrees, all sources and cluster members).]

In cases where HDBSCAN failed to identify a cluster, we employed an iterative approach, modifying the parameters and reattempting the clustering process.

Methodology: Curve fitting
After determining the members of an Open Cluster (OC), we plotted metallicity vs. mass and fitted a model for each OC. A non-linear curve fit was performed using MCMC with the emcee library. The model representing the relationship between the mass and metallicity variables was defined as:

y = [(x + d) / |x + d|] (x + d)^b + c    (1)

Here, x and y represent mass and metallicity, respectively, and b, c, d are constants. Multiple walkers were initialized with random parameter values and moved through the parameter space, guided by the likelihood function. Parameter estimates and uncertainties were obtained from the converged walker positions. The equation we used above is based on the findings of Leethochawalit et al. 2018.
They found that the mass-metallicity relation of galaxies follows a power-law relationship, given by:

[Fe/H] = [Fe/H]_0 log[1 + M*/M_0]^γ    (2)

where [Fe/H] is the metallicity, M* is the mass of the star, and [Fe/H]_0 is a normalization con
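The MCMC fit described in the methodology can be sketched end-to-end. The poster uses emcee's ensemble sampler on Gaia-derived cluster members; the stand-alone version below substitutes a basic Metropolis sampler and synthetic (mass, metallicity) data so it runs without external data, and every parameter value is made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sign-preserving form of the poster's model y = (x+d)/|x+d| * (x+d)^b + c,
# written with |x+d|^b so negative arguments stay real.
def model(x, b, c, d):
    s = x + d
    return np.sign(s) * np.abs(s) ** b + c

# Synthetic "cluster members": masses x (Msun) and metallicities y (dex).
b_true, c_true, d_true, sigma = 0.5, -0.6, 0.2, 0.05
x = rng.uniform(0.5, 3.0, 80)
y = model(x, b_true, c_true, d_true) + rng.normal(0.0, sigma, x.size)

def log_prob(theta):
    b, c, d = theta
    if not (0.0 < b < 2.0 and -2.0 < c < 2.0 and -0.5 < d < 0.5):
        return -np.inf                          # flat box priors
    r = y - model(x, b, c, d)
    return -0.5 * np.sum((r / sigma) ** 2)      # Gaussian log-likelihood

# Plain Metropolis random walk (emcee's ensemble sampler plays this role in the poster).
theta = np.array([0.8, -0.4, 0.0])
lp = log_prob(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.02, 3)
    lp_prop = log_prob(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # accept with probability min(1, ratio)
        theta, lp = prop, lp_prop
    chain.append(theta)

# Discard burn-in, then take posterior medians as point estimates.
b_fit, c_fit, d_fit = np.median(np.array(chain[5000:]), axis=0)
```

Posterior spreads (e.g. percentiles of the retained chain) give the parameter uncertainties the poster reports from the converged walker positions.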
poster
License notice: This work is licensed under a Creative Commons Attribution 4.0 International License. © 2022 Alexander Grossmann, Michael Reiche, Diana Tillmann. Research team: Prof. Dr. rer. nat. Alexander Grossmann (orcid.org/0000-0001-9169-5685), Prof. Dr.-Ing. Michael Reiche (orcid.org/0000-0001-8343-1997), Diana Tillmann, M. Eng. (orcid.org/0000-0001-8577-3376). Open Access Structured Communication. Research project OA-STRUKTKOMM.

BACKGROUND
Heterogeneous workflows have become established for publishing open-access works. With the publication of the results of the OA-HVerlag project, a model for describing these workflows is now available. Applying it when adapting and developing technological components allows the required data to be structured more precisely and described without redundancy. Building on the workflow model, the OA-STRUKTKOMM team investigated, among other things, how data exchange between the systems involved in the workflow can be made more robust and compatible.

Structured communication
To offer universities and service providers a standardized interface, a scientific draft of a structure was developed that is intended to make data exchange between the systems involved in the workflow more robust and compatible. • Development of a glossary and of criteria for identifying standards. • An interactive poster summarizes 102 standards, norms and specifications, organized into ten categories. • Stakeholder workshop: foundations and approaches for developing a standardized communication structure for open-access publications. • Draft of a communication structure in which all data on publication projects that need to be communicated are held and exchanged.
Out-of-the-box production system
To enable university libraries and presses without specific publishing expertise to produce open-access publications, an existing prototype publication system was further developed as an open-source product. • The production system was evaluated and extended. • Beta version 0.5 was released in August 2021 under the name "OA-Satzsystem" under the open GNU 3.0 license. • In autumn 2021, a first internal test phase was completed as a user test for Windows and macOS. • In winter 2021/22, a first external test phase was carried out with interested partners from the community network.

Open-Access-Hochschulverlag (OA-HVerlag)
• A sustainable, generally applicable, media-neutral, cost- and staff-efficient OA publication workflow was developed. • It enables university presses to model publication processes for their own research output, both digitally in open access and as a printed book. • The results were published in the handbook "Open-Access-Publikationsworkflow für akademische Bücher | Ein Handbuch für Hochschulen und Universitäten" under DOI 10.33968/9783966270175-00.

RESEARCH FOCUS AREAS / RESULTS

Contact: It is important to us to involve all stakeholders in the research project, and we would be glad if you got in touch and contributed your expertise. Please email us at oa-struktkomm@htwk-leipzig.de. Homepage: On our research website www.oa-struktkomm.htwk-leipzig.de you will find all information about the project and its research goals, as well as dates and publications. Newsletter: A newsletter on current OA topics appears monthly on the OA-HVerlag publisher page. We are also happy to send you the newsletter by email.
For this, please contact verlag.bib@htwk-leipzig.de. Funding: The research project is funded by the German Federal Ministry of Education and Research (BMBF) for a period of two years (15.02.2021 to 31.01.2023). Project sponsor: VDI/VDE Innovation + Techni
poster
Nuggets of Hope: Using coral microfragments to determine best-practice reef-rebuilding techniques
Katelyn Cambridge, Sophie Cappello, Jack Kenney, Henry Kuck, Haya Firas, Austin Anderson. Advisors: Valeria Pizarro and Lily Haines. Interns: Hannah Lochan and Eliza Thomas
Introduction | Objective | Methods | Results and Discussion | Take-home Messages | References | Acknowledgments

Results and Discussion
Staghorn and Mountainous star: the nursery was most effective, with the fastest growth rates; the reef treatment showed a decrease in healing because fragments experienced predation or were swept away by the current. Massive starlet: the wetlab treatment was most effective, with the fastest and steadiest growth rates as well as the highest average wound healing; the other two treatments fluctuated, showing decreases in wound healing at various points. We hypothesize that this is because massive starlet fragments struggled with algae overgrowth, so keeping them in the wetlab allowed us to access and clean them more easily.

Introduction
Coral reefs are coastal marine ecosystems that cover less than 1% of the ocean floor but are necessary habitat for 25% of all marine species. Many factors contribute to the degradation of coral reefs, including pollution, climate change, legal but unsustainable fishing, and the introduction of invasive species such as the lionfish. Coral restoration is the process of assisting the recovery of coral reefs (Edwards & Gomez, 2007). Microfragmentation is a new coral restoration technique that has never been done before in the Bahamas: collect 10% of a coral head, cut it into 1 cm2 fragments (which grow exponentially faster), then outplant. Coral reef health is declining throughout the Bahamas (Dahlgren et al., 2016); the reefs are deteriorating, with most impaired and in poor condition. A healthy coral reef should have coral coverage over 30%; in the Bahamas, coral cover is lower than 15%.

Objective
Which treatment setting (coral nursery, wetlab, or open-ocean reef) best promotes the survival, wound healing, and health of three coral species for a coral restoration project?

Methods
Three species (Staghorn, Mountainous star, Massive starlet) in three treatments (wetlab, nursery, reef); survival, wound healing, and algae overgrowth measured.

Take-home Messages
• Staghorn coral and Mountainous star coral showed the best health, healing, and growth in the nursery tree.
• Massive starlet coral showed the best health, healing, and growth in the wetlab.
• The starlet microfragments had the highest algae coverage, suggesting that the species might not be ideal for restoration work in the Bahamas.
• The reef is not a suitable environment for growing microfragments of any of our species, indicating that microfragments need to grow bigger, and thus less vulnerable, before being out-planted on the reef.
This is the first time microfragmentation of these three species has been done in the Bahamas. This type of restoration has the potential to make Bahamian coral reefs more resilient to human-caused stressors in the future. This knowledge will help upscale and improve restoration efforts at CEI and in the Bahamas as a whole.

[Figures: wound healing (%) and algae overgrowth (%) over six weeks for Staghorn, Mountainous star, and Massive starlet under the nursery, reef, and wetlab treatments.]

Algae overgrowth: minimal on Mountainous star and Staghorn coral; algae presence increased in all three species after week 3, with the steepest increase in Massive starlet. By the end of week 6, starlet microfragments had the most algae coverage; no scientific data has been found to explain these results.

References
Dahlgren, C., Sherman, K., Lang, J., Kramer, P. R., & Marks, K. (2016). Bahamas coral reef report card. Volume 1, 2011-2013.
Edwards, A. J., & Gomez, E. D. (2007). Reef restoration concepts and guidelines: making sensible m
poster
Preserving Biodiversity with IrisBG and ESRI Laura Knutson Murray1, Mari Rustan2 1Magnolia Maps, USA, 2IrisBG, Norway “The loss of biodiversity is permanent, with climate change one of the factors leading to this loss. We have about 12 million species on earth—not counting bacteria and other microorganisms—and we have given names to fewer than two million… Despite this gaping hole in our knowledge, we must act to preserve as many of the existing species while there is still time to do so.” - Dr. Peter H. Raven, President Emeritus of the Missouri Botanical Garden Tracking biodiversity through documenting living plant collections requires constant attention to detail, as plants change and grow and their environments change or are modified. For over 25 years, IrisBG has refined a software and database solution for all aspects of managing living and preserved botanical collections. IrisBG facilitates everyday work with living and preserved botanical collections and enables easy accessioning with images and mapping, sharing of plant material, tracking the wellbeing of plants and much more. When synchronized with ESRI’s suite of cartography products, users have an even more powerful set of applications that can be customized for their institution and used on any device, for internal use or publicly shared. Mari Rustan of IrisBG and Laura Knutson Murray of Magnolia Maps will walk attendees through a live demonstration of how the Memphis Botanic Garden is creating digital records and sharing its collections with the world.
poster
Three Dimensional Hepatotoxicity Screening using Corning® HepatoCells, the Spheroid Microplate and the SCREEN-WELL® Hepatotoxicity library Presenter: Ute Vespermann Hilary Sherman, Hannah J. Gitschier, David H. Randle Corning Incorporated, Life Sciences, Kennebunk, Maine USA Warranty/Disclaimer: Unless otherwise specified, all products are for research use only. Not intended for use in diagnostic or therapeutic procedures. Not for use in humans. Corning Life Sciences makes no claims regarding the performance of these products for clinical or diagnostic applications. Corning is a registered trademark of Corning Incorporated, One Riverfront Plaza, Corning, NY 14831-0001. All other trademarks included in the document are the property of their respective owners. © 2016 Corning Incorporated Abstract Summary/Conclusions Having the right model for drug screening is essential for predicting compounds that may cause drug-induced liver injury. Three-dimensional (3D) models offer significant improvements over traditional two-dimensional monolayer cell culture in terms of maintaining morphological and functional characteristics of tissue, and may provide a better representation of in vitro drug toxicity1. Here we demonstrate how Corning HepatoCells, an immortalized alternative to primary human hepatocytes, in conjunction with Corning Spheroid Microplates, can be utilized for a 3D drug screen to discover potential hepatotoxins. Hepatospheres formed using Corning HepatoCells were compared to spheroids formed using alternative hepatocyte-like models. Cell viability, urea and albumin were measured to assess hepatotoxicity after exposure to the SCREEN-WELL Hepatotoxicity library from Enzo Life Sciences, a library consisting of 238 compounds with a variety of structurally and mechanistically different compound classes, as well as nontoxic controls.
Selected hits identified in the 3D screen, which significantly reduced the cell viability of the hepatospheres, were then assessed in a dose-dependent manner for potency analysis. These results demonstrate that Corning HepatoCells, together with Corning Spheroid Microplates, are powerful tools that can be used for reliable and reproducible 3D hepatotoxicity screening. • Corning spheroid microplates allow for the formation of consistently sized, single spheroids in each well, available in both 96 and 384 well formats, making them a useful tool for 3D screening. • Hepatospheres formed using the Corning spheroid microplate are amenable to histological analysis. • The opaque walls and clear, round well-bottom of the spheroid microplates allow luminescent assays to be conducted in the plate without the need for a transfer step. • HepatoCells offer an ideal hepatic cell alternative to commonly used HepaRG cells, offering a larger assay window for luminescent ATP assays, which is amenable to screening. • HepatoCells displayed increased or equivalent sensitivity to the known hepatotoxins tamoxifen, troglitazone and nicardipine upon potency analysis, compared to HepG2 and HepaRG cells. Representative photomicrographs of HepatoCells, HepG2, and HepaRG spheroids after 7 days, seeded at 6,000 cells/well, 200 cells/well, and 6,000 cells/well respectively (40x). Spheroid Size Optimization HepatoCells (Corning Cat. No. 354881), HepG2 cells (ATCC® Cat. No. HB-8065), and HepaRG™ cells (Life Technologies Cat. No. HPRGC10) were seeded at various concentrations in 384 well spheroid microplates (Corning Cat. No. 3830) to optimize for the subsequent screens. HepatoCells and HepG2 cells were seeded using 50 µL Corning Culture Medium for HepatoCells (Corning Cat. No. 354882) containing 10% fetal bovine serum (Corning Cat. No. 35-010-CV). HepaRG cells were seeded using William’s medium (Life Technologies Cat. No.
12551-032) supplemented to 1x with HepaRG Thaw, Plate & General Purpose Medium Supplement (Life Technologies Cat. No. HPRG670) and 1x GlutaMAX (Life Technologies Cat. No. 35050-061). Medium was changed every other day using
poster
Hasmah Binti Mohamed Haris, Wan Shakira Rodzlan Hasani, Jane Ling Miaw Yn, Mohd Hatta Bin Abdul Mutalip, Eida Nurhadzira Binti Muhammad, Muhammad Faiz Bin Mohd Hisham, Ahzairin Bin Ahmad, Muhammad Fadhli Bin Mohd Yusoff 1 Institute for Public Health, National Institutes of Health, Ministry of Health Malaysia. NMRR-18-3085-44207 References 1. Sharifa EWP. (2018). The use of e-cigarettes among university students in Malaysia. Tobacco Induced Diseases, 16. doi:10.18332/tid/99539 2. Palipudi KM, Mbulo L, Morton J, et al. (2016). Awareness and current use of electronic cigarettes in Indonesia, Malaysia, Qatar, and Greece: findings from 2011-2013 Global Adult Tobacco Surveys. Nicotine Tob Res, 18(4), 501-507. doi:10.1093/ntr/ntv081 3. Hiong Tee G, & Low WY. (2019). Electronic cigarettes use among adults and adolescents in Malaysia: a public health concern? Asia Pacific Journal of Public Health, 31(7_suppl), 4S-5S. doi:10.1177/1010539519878329 4. Kilibarda B, Mravcik V, Martens MS. (2016). E-cigarette use among Serbian adults: prevalence and user characteristics. Int J Public Health, 61(2), 167-175. doi:10.1007/s00038-016-0787-y P-17 Electronic cigarettes (e-cigarettes) are a growing public health concern because they present threats to tobacco control and undo the success already achieved. E-cigarettes and vaping seem to be in fashion nowadays, especially among adults, regardless of the negative health impacts. This study aimed to investigate the current prevalence of e-cigarette use and its associated factors among adults aged 15 years and above in Malaysia. • There was almost equal representation of male and female respondents among the 11,111 respondents aged 15 years and older. • Prevalence was significantly highest among Malay and other ethnicities (5.8%), the younger 15-24 years age group (11.2%), private employees (7.3%) and current smokers (15.5%).
• Current e-cigarette use was associated with being male (aOR = 26.71; 95% CI = 14.79-48.23), being 18 to 24 years old (aOR = 10.80; 95% CI = 7.22-16.13), and being a current tobacco smoker (aOR = 2.51; 95% CI = 1.98-3.17). • In another study, among university students in Malaysia by Sharifa et al., 74.9% of the students smoked and 40.3% were also cigarette smokers. • In 2011, a Global Adult Tobacco Survey was conducted in Malaysia, and the prevalence of current e-cigarette use among adults at that time was 0.8%. • The prevalence of smokeless tobacco product use, including e-cigarettes, among adults in Malaysia was 10.9%, but that included other types of smokeless tobacco products. • The prevalence of current e-cigarette smoking in Serbia was reported to be 2%, which is lower than our national prevalence. • Data from the National Health & Morbidity Survey (NHMS) 2019, conducted by the Institute for Public Health, Ministry of Health, were analyzed. • It is a cross-sectional, population-based survey which employed a two-stage stratified cluster random sampling design to ensure national representativeness. • The sample consisted of 11,111 respondents aged 15 years and above who responded to the smoking module. Respondents were given structured questionnaires via face-to-face interviews. Descriptive and multivariate analyses were used. This study showed that current e-cigarette use is more frequent among smokers. Hence, e-cigarette intervention strategies and policies should target this high-prevalence group, and preventive strategies are urgently needed to protect younger adults and adolescents from this harmful exposure.
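The adjusted odds ratios reported above are typically obtained from a multivariate logistic regression. A minimal sketch on synthetic data, fitted by Newton-Raphson in NumPy (the variable names, coefficients and sample size are illustrative assumptions, not the NHMS 2019 data or analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
male   = rng.integers(0, 2, n)   # 1 = male
young  = rng.integers(0, 2, n)   # 1 = aged 18-24
smoker = rng.integers(0, 2, n)   # 1 = current tobacco smoker

# Synthetic outcome: each factor raises the odds of current e-cigarette use
lin = -4.0 + 2.5 * male + 1.8 * young + 1.0 * smoker
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Logistic regression fitted by Newton-Raphson (IRLS)
X = np.column_stack([np.ones(n), male, young, smoker])
beta = np.zeros(4)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta = beta + np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))

aor = np.exp(beta[1:])   # adjusted odds ratios for male, young, smoker
print(dict(zip(["male", "young_18_24", "smoker"], aor.round(2))))
```

Exponentiating each fitted coefficient gives the adjusted odds ratio for that factor, holding the others constant.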
The prevalence and factors associated with E-cigarette users among Malaysian

Variable: Prevalence (%)
Overall: 4.9
Gender - Male: 9.4; Female: 0.3
Ethnicity - Malay: 5.8; Chinese: 3.5; Indian: 1.0; Bumiputera Sabah & Sarawak: 5.0; Others: 5.8
Age group (years) - 15-24: 11.2; 25-34: 5.5; 35-44: 3.0; 45 and above: 0.9
Occupation - Government employee: 5.30; Private employee: 7.30; Self-employed: 5.70; Retiree: 0.9; Student: 5.70; Not working: 3.10
Smoking status - Current smoker: 15.5; Non-smoker: 2.1
Estimated P
poster
I-ImaS: Intelligent Imaging Sensor for Industry, Health & Security Problem statement: The I-ImaS project aims to develop a new generation of "intelligent" sensors for significantly improving the diagnostic quality of radiological images, with applications in health, industry and security. These new sensors bring a new dynamic to the way the system collects and processes the optical information, thanks to their inherent ability to respond to changing environmental conditions and to optimize their performance in real time. In this way, local regions with high information content are scanned at the finest possible resolution, while regions of lower interest are recorded with the lowest possible irradiation. The result is the capture of the maximum possible information at the minimum associated cost, which in the case of medical applications corresponds to irradiation with minimal dose and examination time. Proposed solution: In contrast to the automatic exposure control (AERC) technology found in modern radiographic machines, the new system will be based on line-scanning digital radiography technologies combined with intelligent adaptive control for optimizing the exposure parameters in real time. The goal of the "intelligent" scan controller is to optimize the quality of the information in the regions of interest and to minimize the dose in the remaining regions. The development of the system will take place in three successive stages, which comprise, in order, the creation of a set of quantitative image-quality metrics in the form of a "signature", the design of the sensor and of the scan control system, and finally the construction and clinical testing of the final system. In parallel, a study is under way on incorporating ever more "high-level" information into the control loop, thus
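The adaptive scan control idea described above can be illustrated with a toy sketch: each scan line's exposure is set from an information metric measured for that line, so information-rich regions receive the full exposure while low-interest regions get the minimum dose. All metric values, thresholds and exposure times below are illustrative assumptions, not the actual I-ImaS controller:

```python
# Toy adaptive line-scan exposure: lines whose information metric exceeds
# a threshold get a long exposure; the rest get the minimum-dose exposure.
def plan_exposures(line_info, low=1.0, high=4.0, threshold=0.5):
    """Return an exposure time per scan line from a normalized information metric."""
    return [high if m >= threshold else low for m in line_info]

info = [0.1, 0.2, 0.9, 0.8, 0.3]   # e.g. normalized local image variance per line
exposures = plan_exposures(info)
print(exposures)                    # high exposure only on the two information-rich lines
total_dose = sum(exposures)
```

A real controller would update the metric on the fly from the previously acquired lines; the threshold rule here simply makes the dose/information trade-off explicit.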
poster
Computational screening of functionalized zinc porphyrins for dye sensitized solar cells K. B. Ørnsø, J. M. Garcia-Lastra and K. S. Thygesen krbt@fysik.dtu.dk References: [1] O’Regan, B. & Grätzel, M., Nature, 1991, 353, 737-740 [2] Hagfeldt, A.; Boschloo, G.; Sun, L.; Kloo, L. & Pettersson, H., Chem. Rev., 2010, 110, 6595-6663 [3] Ørnsø, K. B.; Garcia-Lastra, J. M. & Thygesen, K. S., PCCP, 2013, 15, 19478-19486 Introduction An efficient dye sensitized solar cell (DSSC)[1,2] is one possible solution to meet the world’s rapidly increasing energy demands and the associated climate challenges. This requires inexpensive and stable dyes with well-positioned frontier energy levels for maximal solar absorption, efficient charge separation, and high output voltage. Here we demonstrate an extensive computational screening of 1029 zinc porphyrins systematically functionalized with electron-donating side groups and electron-accepting anchoring groups. The trends in frontier energy levels versus side groups are analyzed, and a no-loss DSSC level alignment quality is estimated. All frontier energy levels, gaps and level alignment quality values are stored in a publicly available database.[3] Dye Sensitized Solar Cells Building blocks Conclusions By systematically changing the side groups and anchor groups of zinc porphyrin dyes, we obtain a handle to control the frontier orbitals and the level alignment to match the requirements for dye sensitized solar cells, which may be used to improve the efficiency. Computational methods • Density Functional Theory with PBE in the GPAW code. • Geometry optimization of all candidates. • Fundamental gaps using total energies. • Triplet excitation energies used to compute the level alignment quality. Level alignment quality. Center for Atomic-scale Materials Design (CAMD), Department of Physics, Technical University of Denmark, 2800 Kgs.
Lyngby, Denmark www.camd.dtu.dk Fundamental gaps Further information online: Paper DOI: 10.1039/C3CP54050B; Poster DOI: 10.5281/zenodo.7584 Acknowledgment: The authors would like to thank Angel Rubio and Franz Himpsel for inspiring discussions. KBØ and KST would further like to thank the Danish Council for Independent Research DFF-Sapere Aude program (grant no. 11-1051390) for financial support. JMGL acknowledges support from the Spanish Ministry of Economy and Competitiveness under Projects FIS2009-07083, FIS2010-21282-C02-01 and FIS2012-30996 and through Ramon y Cajal grant RYC-2011-07782. Anchor groups: Side groups:
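The level-alignment requirement behind such a screening can be sketched as a simple filter: a candidate dye must be able to inject electrons into the semiconductor and be regenerated by the electrolyte. The reference energies and candidate level positions below are illustrative assumptions, not the paper's computed values or its actual figure of merit:

```python
# Simplified level-alignment screen for DSSC dye candidates.
# A dye passes if its LUMO lies above the TiO2 conduction band edge
# (electron injection) and its HOMO lies below the I-/I3- redox level
# (dye regeneration). Energies in eV vs. vacuum; all values assumed.
E_CB_TIO2 = -4.0      # TiO2 conduction band edge (assumed)
E_REDOX   = -4.8      # I-/I3- redox potential (assumed)

candidates = {
    "porphyrin_A": {"homo": -5.3, "lumo": -3.6},
    "porphyrin_B": {"homo": -4.6, "lumo": -3.5},   # HOMO too high: no regeneration
    "porphyrin_C": {"homo": -5.1, "lumo": -4.2},   # LUMO too low: no injection
}

passing = [name for name, lv in candidates.items()
           if lv["lumo"] > E_CB_TIO2 and lv["homo"] < E_REDOX]
print(passing)
```

The paper's actual quality measure additionally weights the absorbed solar spectrum; this yes/no filter only captures the minimal alignment constraints.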
poster
[Plot: ZnS (wurtzite) yield in grams per batch over six successive precipitation reactions] Co-precipitation Synthesis Microstructure investigations • Hexagonal wurtzite ZnS nanopowder • Beige-coloured nanopowder • Wide band gap (3.77 eV) • High refractive index (n = 2.75) • Nanoparticle size: 3 to 5 nm • Specific surface area: 38 m2 g-1 • Ethyl glycol & thiourea decompose at 250°C-290°C The two-step sintering (TSS) method is based on heating the sample to a high temperature T1, followed by rapid cooling down to a lower temperature T2 and then holding it at T2 for a long period. The main characteristic of this method is that the grain boundary diffusion of the sample is maintained while grain boundary migration is avoided. Therefore, the grain growth associated with the final step of the sintering process is completely suppressed. Two-Step Sintering Process Commercially available ZnS powder (a mixture of 80% cubic and 20% hexagonal phase) was used as the starting powder, without any binder or sintering aid. Discs with a diameter of 8 mm and a thickness of 4 mm were prepared by uniaxial pressing at 100 MPa, followed by cold isostatic pressing at 150 MPa. Advantages: finer microstructure; sintering temperature reduction; simple and pressureless method; large-scale production of components with complex shapes; binders not necessary; lower costs. The amount of solvent remained the same (60 ml of ethyl glycol) by re-using what remained of the solvent used in the previous reaction and topping up the quantity lost. Productivity yields increased over six successive reactions from 156 mg to 549 mg of ZnS nanopowder per batch. Self-alignment of ZnS nanopowder dispersed in ethanol, after being washed several times in acetone and ethanol. "Circular synthesis": by recycling a used solvent obtained from previous reactions and topping up the quantity lost, the productivity yield increased 3.5 times.
Method: the well-known reaction of ZnCl2, as the zinc source, with thiourea as the sulphur source, dissolved in ethyl glycol at a carefully controlled, constant molar ratio (mMZn/mMS = 1) under medium-temperature conditions (140°C-150°C). The next step is using recycled ZnCl2 to make the synthesis fully circular. Astro Furnace Thermal technology Equipment and operating conditions: graphite crucible; N2 flowing atmosphere; pressureless solid-state thermal sintering. [Sintering schedules: single-step sintering (SSS) at 1250°C for 1 h; two-step sintering (TSS) at T1 = 1150°C followed by T2 = 1100°C for 5 h; TSS of the nanopowder with a first step at 1250°C for 1 h] MCM 2019 14th Multinational Congress on Microscopy September 15-20, 2019 in Belgrade, Serbia • The SSS processing of commercial ZnS powder at 1250°C gives dense ceramic samples with only the wurtzite phase present. • The TSS processing of the commercial powder at T1 = 1150°C and T2 = 1100°C gives ceramics of similar density, but which contain a small percentage of the cubic phase (not fully transformed into the hexagonal phase). A possible solution: increase the second-step holding time. • The first step in the TSS processing of the prepared ZnS nanopowder at T1 = 1250°C did not prove successful. The next step is to improve the densification, using a higher compaction pressure to prepare the green body (200 MPa isostatic pressure) and heating it to a lower T1 (of around 1000°C). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 797951. Schematic representation of densification Green body COMMERCIAL ZnS POWDER ZnS NANOPOWDER Pyroelectric materials can harvest energy from naturally occurring ambient temperature changes, as well as artificial temperature changes due to exhaust gases or convection, and solar energy.
Pyroelectric energy harvesting is a highly promising technology for the future of autonomous and self-powered (battery-free) electronic devices and could be the right methodology to harvest enormous am
poster
The 7th International Symposium on Gas Transfer at Water Surfaces Seattle, WA USA May 18-21, 2015 Flow field and momentum balance in the Heidelberg Aeolotron, a large annular wind/wave facility M. Bopp1 and B. Jähne1,2 1 Institute of Environmental Physics (IUP), University of Heidelberg, Germany, maximilian.bopp@iup.uni-heidelberg.de 2 Heidelberg Collaboratory of Image Processing (HCI), University of Heidelberg, Germany, bernd.jaehne@iwr.uni-heidelberg.de The Heidelberg Aeolotron is an annular wind/wave facility with a diameter of 10 m. One of its main advantages over a linear facility is the nearly infinite fetch. The conditions at the water surface, and hence the gas transfer, are more similar to conditions in the open ocean. The wind field in the Aeolotron was investigated with a pitot tube mounted on a translation stage. The radial wind speed was measured over a complete cross section at ten positions. To measure near the water surface, surfactants were added to dampen the wave field. As expected, the wind generators, two axial fans embedded in the ceiling, and the annular geometry of the facility produce a flow pattern shaped like a half helix between the fans. This results in variations of the wind speed near the surface of about 30% at the measured wind condition. Figure 1: The friction velocity determined by the momentum balance method for clean water (crosses) and two concentrations of the surfactant Triton X-100 (circles and stars) as a function of the reference wind speed. The surfactants have a reducing effect, especially at lower wind speeds. abstract, doi: 10.5281/zenodo.17669
poster
Calibration System of the JUNO Experiment Yue Meng, on behalf of the JUNO collaboration Shanghai Jiao Tong University mengyue@sjtu.edu.cn A comprehensive multiple-source, multiple-position calibration program has been developed for the JUNO central detector (CD) to achieve a better than 1% linear energy scale and an effective energy resolution better than 3% between 1 MeV and 8 MeV, in order to determine the neutrino mass hierarchy, measure solar neutrinos, detect supernova neutrinos, etc. • The bias in the reconstructed energy is less than 0.6% over the entire range of the inverse beta decay energy, which is significantly better than the 1% requirement. • The effective energy resolution is less than 3.0% between 1 MeV and 8 MeV. [Schematic labels: Control Room, spools, Automatic Calibration Unit, Calibration house, Central cable, Side cable, Bridge, Remotely Operated under-LS Vehicles, ROV guide rail, Guide Tube Calibration System, Source Cable Loop System, source storage, AURORA] Calibration house • Source storage, motors to control the CLS system, ROV rail, etc. Automatic Calibration Unit (ACU) • 1D central-axis scan with gamma sources, neutron sources and a laser, performed automatically. ACU prototype. Cable Loop System (CLS) • 2D plane source scan with cable loops assembled on both semisphere sides. CLS layout. CLS motor prototype. Remotely Operated Vehicle (ROV) • 3D source scan with a self-driven vehicle unit. ROV water test. Ultrasonic sensor system (USS) and CCD • Positioning of the ROV and CLS. USS prototype. CCD prototype. AURORA (A Unit for Researching Online the LSc tRAnsparency) • Monitors and determines the LS attenuation length, scattering length and absorption length with a laser system. Guide Tube Calibration System (GTCS) • Calibrates the boundary area and provides boundary conditions for the CD. 1:1 GTCS prototype. Calibration sources. Energy resolution • The a term is the statistical term, about 2.7%. • b is a constant term independent of the energy, in the case of JUNO dominated by position
non-uniformity. • The c term represents the contribution of a background noise term; c is estimated to be 1.0%. Positron energy resolution vs. Evis after the ideal non-uniformity correction: a = 2.57%, b = 0.73%, c = 1.25%. Effective energy resolution. Physics nonlinearity • Multiple gamma sources and the cosmogenic background are used to correct the physics nonlinearity, which is caused by scintillation and Cherenkov photons in the liquid scintillator (LS). Instrumental nonlinearity • Dual calorimetry is combined, using the LPMTs and the reference SPMTs, whose response is linear in charge by design, together with a tunable laser source, to calibrate the channel nonlinearity. Systematic uncertainties • Shadowing effect: < 0.15% • Energy loss effect: < 0.1% • High energy gamma uncertainty: 0.4% • Instrumental nonlinearity: < 0.3% • Position dependent effect: 0.3% • Statistics: 0.01% Combined systematic uncertainty: 0.6%. Refer to https://doi.org/10.1007/s41605-017-0022-2
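The a, b and c terms described above correspond to the usual three-term resolution parametrization, sigma/E = sqrt((a/sqrt(E))^2 + b^2 + (c/E)^2); the poster does not write the formula explicitly, so this functional form is stated here as an assumption inferred from the term descriptions. A quick numerical check with the quoted parameters:

```python
from math import sqrt

def resolution(e_vis, a=2.57, b=0.73, c=1.25):
    """Energy resolution sigma/E in %, for visible energy e_vis in MeV,
    using the assumed parametrization sqrt((a/sqrt(E))**2 + b**2 + (c/E)**2)."""
    return sqrt((a / sqrt(e_vis)) ** 2 + b ** 2 + (c / e_vis) ** 2)

for e in (1.0, 2.0, 8.0):
    print(f"{e:.0f} MeV: {resolution(e):.2f}%")
```

With a = 2.57%, b = 0.73% and c = 1.25%, the effective resolution at 1 MeV comes out just under 3%, consistent with the stated sub-3.0% performance between 1 MeV and 8 MeV.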
poster
NEWBORN WITH ESOPHAGEAL ATRESIA AND SUCCESSFUL RESOLUTION AT THE HOSPITAL GINECO-OBSTÉTRICO ISIDRO AYORA Antonio Andrés Ceron Caicedo¹, Ximena Aracely Orozco Quinga², Kattherine Jennyfer Salazar Muñoz³; ¹Resident Physician, HGOIA; ²Pediatrician-Neonatologist, HGOIA; ³Pediatrician, HGOIA INTRODUCTION: Esophageal atresia stands out as one of the most prevalent congenital malformations in the neonatal population. It usually presents together with a tracheoesophageal fistula and is classified into five categories according to the presence or absence of such a fistula. The most frequent variant is type C esophageal atresia, i.e., atresia with a distal tracheoesophageal fistula, which accounts for 85% of reported cases; in second place is type A esophageal atresia, which corresponds to pure esophageal atresia without a fistula, with only 10% prevalence, far behind the more common type C mentioned above. The diagnosis can be made prenatally through evaluation of the obstetric history and ultrasound findings, as well as postnatally through the observation of symptoms such as feeding intolerance and excessive production of oral secretions. All of this must, of course, be confirmed by a chest radiograph showing the impossibility of passing a nasogastric tube through the esophagus, together with the presence or absence of air in the gastric chamber, which helps discern whether a fistula is present. The main treatment is, of course, surgical. The prognosis is generally favorable; complications relate to the postoperative period and the long term. Figures 1 and 2: Plain chest CT, highlighting esophageal atresia with a fistula.
OBJECTIVE: To determine the risk factors associated with esophageal atresia and its surgical resolution in a newborn hospitalized at the Hospital Gineco-Obstétrico Isidro Ayora, Quito, Ecuador, during the period from 12/09/2023 to 26/10/2023. METHODOLOGY: Clinical case study of a newborn hospitalized at the Hospital Gineco-Obstétrico Isidro Ayora during the described period, during which type C esophageal atresia was diagnosed; surgical resolution, postoperative follow-up and definitive medical discharge were carried out. INFORMED CONSENT: Informed consent was obtained from the patient's parents. RESULTS: The patient was a neonate delivered preterm at the Hospital Básico de Cayambe; anthropometry: weight 2120 g, length 45 cm, head circumference 32 cm, gestational age 38.2 weeks by Capurro. The neonate was immediately transferred to a third-level unit, the Hospital Gineco-Obstétrico Isidro Ayora, for respiratory distress. On arrival, esophageal atresia was suspected clinically and on chest radiography, and the diagnosis was confirmed by plain chest CT, which classified it as type C, with a fistula present. After stabilizing the patient, surgical management proceeded, with a successful intervention via thoracotomy. The patient was then admitted to the neonatal intensive care unit for comprehensive management and was finally discharged definitively, with continued follow-up at the institution.
Newborns hospitalized in the Neonatology service of the Hospital Gineco-Obstétrico Isidro Ayora (January 2009 to December 2022): Newborns: 26236; Esophageal atresia: 18; Prevalence: 0.06%. Source: authors' own elaboration. DISCUSSION: Rapid, effective and appropriate diagnosis is essential for conditions like this one; although they do not require emergency management, their prompt diagnosis, and hence specialized management, determines a series of short-, medium- and long-term complications in the lives of pediatric patients. In our specific case, the early diagnostic suspicion allowed a comprehensive approach to the patient, pudi
poster
I-RIM Conference 2019 October 18-20, Rome, Italy ISBN: 9788894580501 Robotic task optimization in food industry 1st Giulio Rosati Department of Industrial Engineering University of Padova Padova, Italy giulio.rosati@unipd.it 2nd Silvio Cocuzza Department of Industrial Engineering University of Padova Padova, Italy silvio.cocuzza@unipd.it 3rd Matteo Bottin Department of Management and Engineering University of Padova Padova, Italy matteo.bottin.1@phd.unipd.it 4th Nicola Comand Department of Management and Engineering University of Padova Padova, Italy nicola.comand@phd.unipd.it Abstract—The market demand for product customization is increasing. In particular, the customization of food products, such as 3D cakes, is performed by skilled operators, who are hard to find in peak production periods. Fortunately, text and drawing decorations can be translated into trajectories that can be followed by industrial robots, making it possible to fulfill the same task. However, the redundancy of the task and the synchronization with external equipment, such as conveyors, make trajectory planning particularly demanding. In this paper, an optimization technique that considers all these parameters is presented, which makes it possible to reduce cycle time and therefore energy consumption. Index Terms—Robot, redundancy, path planning, food industry I. INTRODUCTION The market demand for 3D cakes and large chocolate structures is rapidly increasing [2]. Unfortunately, the operation of writing customized text/drawing decorations on cakes using melted chocolate or icing is currently performed by skilled staff, who are hard to find in peak production periods. Several 3D food printers have been developed [1], [3], [6], [8], and conventional printers with edible ink have also been used [3]. However, the usual application of food customization relies on small adjustments of the final product: small drawings, different text and the like.
This is particularly true in the cake industry, where a cake can be personalized by adding a person's or a company's name. These small adjustments do not justify the high investment cost of a dedicated machine; they require fast equipment that is easy to reconfigure for each single product. Industrial robots seem to be perfect for this purpose [4], [5]. Text and drawing decorations can be easily transferred to the robot by defining specific viapoints to be interpolated. These viapoints form a trajectory that reproduces the personalized element to be printed on the surface of the cake. Each viapoint is described by n parameters (required to define its position and orientation in space), but the task itself is redundant: each position can be reached by rotating around the surface normal, thus there are infinitely many robot configurations [7] that can fulfill the task. (This work is partially supported by EPSON Robotics within the Win-a-robot Contest of 2018.) Even if this redundancy can be exploited by the last joint rotation, the equipment is usually too big to be installed aligned with the last joint axis, so its installation is required to be off-axis. As a result, the aforementioned redundancy affects all joint values. In this paper, an optimization technique that accounts for this redundancy is developed. To further improve the work, a linear conveyor is included in the workcell: this aspect shows how our solution could be ready for a real industrial scenario with high throughput. II. OPTIMIZATION TECHNIQUE Consider a trajectory f(x(t)) that lies on a surface in 3D space. This trajectory can be discretized into multiple via points to be used in the robot path planning. If the robot is redundant with respect to the task, i.e., the task requires fewer parameters than the degrees of freedom of the structure, inverse kinematics results in infinitely many possible solutions.
That said, for each point the joint values are a function of the Cartesian position x and of the redundant parameter values: q(t) = g(x(
poster
Exploring the Nature of the Hard X-ray/Soft Gamma-ray Emission of Cen A. James Rodi1, Elisabeth Jourdain2,3, Jean-Pierre Roques2,3. 1INAF-IAPS, Rome, Italy; 2Université de Toulouse, Toulouse; 3CNRS, Toulouse. Above: Cen A 20-40 keV and 100-200 keV light curves on revolution timescale, SPI (black) and ISGRI (red). Sides: Photon index vs. 50 keV flux for observations close in time, with SPI (left) and ISGRI (right); the best fit to a constant photon index is shown as a red dashed line. Introduction: As one of the brightest radio-loud active galactic nuclei (AGNs) in the hard X-ray sky (Beckmann et al. 2009), Centaurus A (Cen A) has been observed by numerous missions since its first detection in 1969 (Bowyer et al. 1970). Results have shown that the spectral shape does not vary with flux (Jourdain et al. 1993; Rothschild et al. 2006; Beckmann et al. 2011; Rothschild et al. 2011; Burke et al. 2014). Consequently, one would expect reported spectral parameters for Cen A to be similar across time and instruments. Yet that has not been the case: the reported values have lacked consistency, which has made interpreting the origin of the hard X-ray/soft gamma-ray emission difficult. Over the past ~30 years, the main disagreement has been over the presence of spectral curvature at high energies. In general, X-ray analyses favor models without spectral curvature below ~1 MeV (Rothschild et al. 2006; Rothschild et al. 2011; Fürst et al. 2016). However, analyses extending to soft gamma-rays show evidence of curvature, though the location of the curvature varies (Jourdain et al. 1993; Steinle et al. 1998; Beckmann et al. 2011; Burke et al. 2014). The existence and location of curvature are critical to understanding the emission process at work in Cen A. Additionally, recent Swift/BAT results show spectral variability after 2013, with the source spectrum steeper when the source is brighter and harder when the flux is lower (Rani et al. 2022).
Data Analyses: We analyzed SPI and ISGRI archival data and observations from the current AO, spanning the full INTEGRAL mission (revolutions 48-2539) in the ~20 keV - 2 MeV energy range, using SPIDAI and the most recent version of OSA, respectively, for a total exposure time of ~5 Ms each. During this time, Cen A underwent a range of fluxes that enables searches for spectral-flux variability and an investigation of the presence of high-energy spectral curvature. ABSTRACT: Despite decades of observations, the nature of the hard X-ray/soft gamma-ray emission of Cen A, the brightest radio-loud AGN in the hard X-ray sky, is still unclear. The two main competing models are thermal Comptonization and synchrotron self-Compton emission. They predict a spectral cutoff of roughly 300 keV or > 1 MeV, respectively, and require spectral coverage to the MeV region to differentiate. Using archival data and observations from the current INTEGRAL AO, we investigate Cen A's spectrum using both SPI and IBIS/ISGRI to better understand the soft gamma-ray emission and the process at work in that energy range. We also search for spectral variability in light of recent results from Swift/BAT. We find that the INTEGRAL spectra can be described either by a cutoff power-law model with E_cut ~ 550 keV or by a log-parabola model. Results & Future Work. Results: To start, we looked for spectral variability by fitting observations close in time with power-law models in the 25-70 keV energy range for SPI and 30-70 keV for ISGRI. The results for each instrument are shown in the plots above. Each set of photon indices was fit to a constant value to test for spectral variability; the best-fit constant for each instrument is plotted as a red dashed line. The best-fit photon index is 1.70 ± 0.03 for SPI and 1.83 ± 0.01 for ISGRI, with reduced χ² values less than 1 for each and all data points within 3σ of the average value; thus the data are consistent with a constant photon index.
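The constant-fit test described above amounts to an inverse-variance weighted mean plus a reduced chi-square. A minimal sketch (illustrative, not the SPIDAI/OSA pipeline; the function name is an assumption):

```python
import math

def constant_fit(values, errors):
    # Inverse-variance weighted mean of measurements with 1-sigma errors.
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = math.sqrt(1.0 / sum(weights))
    # Chi-square of the constant model, reduced by the degrees of freedom.
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    dof = len(values) - 1
    return mean, err, chi2 / dof
```

A reduced chi-square near or below 1 indicates the measurements are consistent with a single constant value, as found here for both instruments.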
However, the best-fit indices for the two instruments are significantly different. T
poster
A participatory approach to invent the future of mobility. CCM: An open, purpose-driven ecosystem with an action-driven approach. Eveline Wandl-Vogt, Ars Electronica Research Institute "knowledge for humanity", Austrian Academy of Sciences. In spring 2019, various people from the mobility sector started to build a free and open mobility community. It represents the multistakeholder core of an emerging mobility ecosystem, an open system consisting of more than 80 organizations from industry, start-ups, science, civil society and many other committed mobility thinkers from the DACH countries. A so-called orchestrator group directs the processes, but it does not determine the rules and processes in the system. The members of the group come from stakeholder organizations. Nevertheless, a company or organization - as in our ecosystem www.mobility.community - has to give the go-ahead for the ecosystem and the formation of an orchestrator group: in our case it was the Austrian Federal Railways with its Open Innovation Team. The initiating organisation bears a greater effort, for example for coordination by its own employees. In the same way, the initiator also has increased responsibility due to the greater scope for influencing the further development of the ecosystem. That is why we have set up the orchestrator group broadly, with mobility people from start-ups, EPUs (one-person enterprises), universities and other companies. And the state? Getting politicians and administrators involved is essential! We did this using the example of the car-free city center of Vienna. Now we have to focus on mobility customers, so that we don't become an introverted bubble of ideas, formerly an "ivory tower".
Chiara Eccher, thinkvisual.at; Anna Gerhardus, Institute for Advanced Studies Vienna; Afamia Jaddah, University of Vienna. Mobility as a lateral agenda for the SDGs. Orchestration against the background of COVID-19: hybrid events and virtuality. We, as a mobility community, are made up of players of the mobility ecosystem and understand mobility as a social mandate, in the sense of maintaining mobility as a commons for the general public. This rationale results in five crucial areas: 1. Physical space & infrastructure 2. Focus on (societal) needs & inclusive innovation 3. Cost & need for mobility 4. Sustainability & future protection 5. The mobility ecosystem itself. The 'Mobility Manifesto' as a backbone. 3 Learning Journeys during the COVID-19 lockdown in Austria. 1) Organization: a) Onboarding: new orchestrators, new community members b) Virtualization: new tools: Menti, Miro, Zoom; breakout groups mentored by two orchestrators each and one or two technical facilitators for the tools. 2) Content: a) Detecting trends and associations b) Panel discussion with key actors c) Inspiring and triggering future scenarios d) Starting new collaborations and building networks. CC-By 4.0 Chiara Eccher. Four goals are more or less directly linked to New Mobility: SDG 9 "Industry, innovation, infrastructure"; SDG 11 "Make cities inclusive, safe, resilient and sustainable"; SDG 13 "Climate action"; SDG 7 "Ensure access to affordable, reliable, sustainable and modern energy for all"; as well as SDG 17 "Partnerships for the goals", since in a complex world with non-linear dynamics, action-driven knowledge partnerships are relevant for success. This is why we are building on the ecosystem approach against the background of Open Innovation.
Best Practice Example for Open Innovation of the Austrian Government. CC-By 4.0 Chiara Eccher. Presented at Knowledge for Change: A decade for Citizen Science (2020-2030) in support of the SDGs, 14-15 October 2020, Berlin (DE) / virtual.
poster
How does stellar flicker noise affect the characterization of planetary transits? S. Sulis1, M. Lendl2,3, S. Hofmeister3,4, A. Veronig3,4, L. Fossati3, P. Cubillos3, and V. Van Grootel5. ABSTRACT Stellar activity is known to limit exoplanet detection and characterization. Among these activity signals, stellar convection ("flicker") evolves over typical transit timescales (~hours) and affects the inferred transit parameters. We generated realistic simulations of transiting exoplanets based on solar HMI data. These simulations include planets from 1 to 10 Earth radii with different transit geometries. The simulations, comprising hundreds of light curves, are available to the community: https://doi.org/10.5281/zenodo.3686871 We analyzed the data using standard MCMC methods, assuming the noise is either white and Gaussian (WGN) or described by a Gaussian process (GP). We show that, in both cases, the resulting planet parameters can be affected by biases, leading to biased planetary radius measurements. This demonstrates the need to develop robust stellar noise modeling to achieve PLATO's goal of characterizing exoplanets transiting solar-like stars. The next steps of this study will be to investigate i) how other noise sources (e.g., flares, spots and faculae) affect the inferred exoplanet parameters, and ii) which noise modeling allows deriving the most accurate transit parameters. The flicker index: a new indicator to track the flicker noise. We developed a new indicator to track the granulation noise properties in photometric light curves: the flicker index. It is defined as the slope of the periodogram measured in the frequency range where granulation dominates (e.g., from 8 to 30 min for the Sun). How flicker impacts the inferred planet radius. Predictions for PLATO. Based on the observed correlations, we predicted for which stars this noise source is expected to be significant enough to impact the characterization of Earth-twin transits observed with PLATO (24 cameras).
For example, we expect the flicker noise to significantly impact the light curves of all solar-type stars with apparent magnitude < 13 (dashed contour lines in Fig. 4). This indicator informs on the noise amplitude, characteristic frequency and correlation type. Based on hundreds of Kepler light curves, we found it is correlated with the stellar parameters (Fig. 3). Stellar granulation (and oscillation) signals can reach several tens of ppm. When single transits are observed, this nuisance signal biases the transit parameters. We simulated synthetic exoplanet transits on quiet solar data (Fig. 1) to quantify this bias. Based on a classical MCMC characterization algorithm (assuming the noise is a WGN), we found biases in the inferred planet radius that can reach 12% (blue histograms, Fig. 2). Simulating similar light curves, but with true WGN, led to errors << 3% (see red histogram). Based on more complex flicker noise modeling (GP), the error bars on the transit parameters are larger, but the unpredictable bias remains. 1 Univ. Aix-Marseille, LAM, France; 2 Obs. Genève, Switzerland; 3 IWF/ÖAW, Austria; 4 Univ. Graz, Austria; 5 Univ. Liège, Belgium. Ref paper: Sulis et al., 2020, A&A, 636, A70. We need good indicators to characterize the statistical properties of this noise source.
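The flicker index defined above, the log-log slope of the periodogram over the band where granulation dominates, can be sketched as follows (an illustrative implementation, not the authors' code; the band limits are the solar values quoted on the poster):

```python
import numpy as np

def flicker_index(flux, dt, p_min=8 * 60.0, p_max=30 * 60.0):
    """Slope of the periodogram, in log-log space, over the band
    where granulation dominates (periods p_min..p_max, in seconds)."""
    flux = np.asarray(flux, dtype=float)
    freqs = np.fft.rfftfreq(flux.size, d=dt)
    # Periodogram of the mean-subtracted light curve.
    power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
    band = (freqs >= 1.0 / p_max) & (freqs <= 1.0 / p_min)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(power[band]), 1)
    return slope
```

On a signal whose power spectrum falls as a power law of the frequency, the returned slope recovers the power-law exponent.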
poster
The impact of climate change on the psammophilous Squamata of the South American Dry Diagonal. In response to future climate change, reptiles may modify their spatial distribution, seeking new habitats and changing their species distribution ranges. INTRO • Species Distribution Modeling is an important tool to mitigate species distribution gaps and assess the impacts of climate change on the distribution of biodiversity; • Our goal is to calculate the potential distribution of ten Squamata reptiles associated with sandy soils, endemic to the South American Dry Diagonal. METHODS 1. We created a database containing 3,011 occurrences; 2. 20 environmental variables for current (1970-2000) and future (years 2040 and 2060) periods; 3. SSP2-4.5 (optimistic) and SSP5-8.5 (pessimistic) scenarios; 4. Models: biomod2 package - R; 5. General maps of suitability: QGIS. 1Silva Oliveira, Júlia*; 2Santana, Diego José; 3Lima Pantoja, Davi; 4,5Guedes, Thaís. 1Universidade Estadual do Maranhão, Brazil; 2Universidade Federal do Mato Grosso do Sul, Brazil; 3Universidade Federal do Piauí, Brazil; 4Universidade Estadual de Campinas, Brazil; 5University of Gothenburg, Sweden. Potential distribution in the current scenario. Potential distribution in the SSP5-8.5 2040 scenario. RESULTS • The seven most important variables in the models: BIO2, BIO13, BIO15, BIO18, BIO19, BIO3 and BIO8; • TSS values: 0.888-0.992; • AUC values: 0.983-0.997; • There is extinction risk in all future scenarios. juliasoliveiraa@gmail.com www.tbguedes.com guedes_lab DISCUSSION • Our results highlight the vulnerability of psammophilous squamates to climate change and provide crucial information on the need for dynamic public policies for the conservation of these species.
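The model-skill values quoted above (TSS and AUC) come from evaluating the presence/absence predictions; the True Skill Statistic, for instance, is simply sensitivity + specificity − 1 computed from a confusion matrix. A minimal sketch with illustrative counts (not the study's data; the function name is an assumption):

```python
def tss(tp, fp, fn, tn):
    """True Skill Statistic from a binary confusion matrix:
    tp/fp/fn/tn = true/false positives and negatives."""
    sensitivity = tp / (tp + fn)   # fraction of presences recovered
    specificity = tn / (tn + fp)   # fraction of absences recovered
    return sensitivity + specificity - 1.0
```

TSS ranges from −1 to 1, with values near 1 (as the 0.888-0.992 reported here) indicating near-perfect discrimination.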
poster
Finding Open Data in Text Publications: A Systematic Comparison of Classification Algorithms, Using Publications from TU Dresden and the Dresden University Hospital as an Example. katharina.zinke@slub-dresden.de References: Bobrov, E., Riedel, N. & Kip, M. (2023, February 2). Operationalizing Open Data – Formulating verifiable criteria for the openness of datasets mentioned in biomedical research articles. https://doi.org/10.31222/osf.io/ajhs4 ; Iarkaeva, A., Bobrov, E., Taubitz, J., Carlisle, B. G. & Riedel, N. (2022). Semi-automated extraction of information on open datasets mentioned in articles. protocols.io. https://doi.org/10.17504/protocols.io.q26g74p39gwz/v1 ; Public Library of Science. (2022). PLOS Open Science Indicators (Version 2) [Data set]. Figshare. https://doi.org/10.6084/m9.figshare.21687686.v2 ; Riedel, N., Kip, M., & Bobrov, E. (2020). ODDPub – a Text-Mining Algorithm to Detect Data Sharing in Biomedical Publications. Data Science Journal, 19(1), 42. https://doi.org/10.5334/dsj-2020-042 Images: undraw.co Katharina Zinke | Open Science Lab, Sächsische Landesbibliothek – Staats- und Universitätsbibliothek Dresden (SLUB). Depending on the objective, two different text-mining algorithms are differently well suited for automatically searching scientific text publications for published research data. Evaluation metrics of the ODDPub and DataSeer classifications for Open Data and Open Code. [Chart: publications classified as Open Data (54%, 59%, 45%, 66%) and Open Code (17%, 26%, 33%, 31%) in the sample across the years 2019 (n = 41), 2020 (n = 34), 2021 (n = 33) and 2022 (n = 29)]. Confusion matrix of the manual coding and the ODDPub and DataSeer classifications for Open Data and Open Code (n = 137). BACKGROUND Open Data (i.e., published research data) are a cornerstone of Open Science.
Finding these publications and monitoring Open Data practices is becoming increasingly important (for institutions, among others) • to follow trends and developments • to check compliance with policies • as a basis for incentivizing Open Data practices (e.g., Charité Berlin). RESEARCH QUESTION Is a text-mining-based approach suitable for identifying Open Data associated with the scientific text publications of a research institution? APPROACH As an exemplary sample, we used 137 publications by authors of the Technische Universität Dresden and the Dresden University Hospital in PLOS journals from 2019-2022 (generated from a Scopus search and a match against the PLOS OSI dataset). Identification of Open Data (and Open Code) in the sample of publications: • The text-mining algorithm ODDPub (Riedel et al., 2020) of the Charité Dashboard on Responsible Research was reused and applied to our own sample • The coding of the DataSeer natural language processing model was extracted for the publications from the PLOS Open Science Indicators dataset (PLOS, 2022) • Manual coding of the publications followed the criteria of the screening carried out for the Charité Dashboard (Bobrov et al., 2023; Iarkaeva et al., 2022). RESULTS Comparison of the ODDPub and DataSeer classifications: • DataSeer identifies about 20% more publications as Open Data than ODDPub (76% vs. 57%). • The classifications agree in 80% of cases. • Evaluating the classification results with metrics gives a more differentiated picture: comparable F1 scores; precision ODDPub > DataSeer; recall ODDPub < DataSeer. That is, DataSeer finds almost all actual research data publications but falsely identifies some as Open Data, whereas ODDPub misclassifies fewer publications but misses some that actually contain Open Data.
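The precision/recall trade-off described above can be made concrete: from a confusion matrix against the manual coding, the three quoted metrics follow directly. A sketch with illustrative counts, not the study's numbers (the function name is an assumption):

```python
def metrics(tp, fp, fn):
    """Precision, recall and F1 from true positives (tp),
    false positives (fp) and false negatives (fn)."""
    precision = tp / (tp + fp)   # high when few false alarms (ODDPub-like)
    recall = tp / (tp + fn)      # high when few misses (DataSeer-like)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Two classifiers can thus have comparable F1 scores while sitting at opposite ends of the precision/recall trade-off, exactly the pattern reported for ODDPub and DataSeer.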
Deviations from the manual coding occurred for both classification algorithms mainly • for data reuse • for f
poster
Inference on Individual Differences in Functional Imaging: A test for correlations among tasks. Sumitra Purkayastha, Tor D. Wager and Thomas E. Nichols. Bayesian and Interdisciplinary Research Unit, Indian Statistical Institute, Kolkata 700108, INDIA; Department of Psychology, Columbia University, New York, NY 10027, USA; Department of Biostatistics, University of Michigan, Ann Arbor, MI 48109, USA. Objective Complex experimental designs often yield multiple measurements of interest on each subject. For example, a factorial design will yield several contrasts for each subject. When analyzing all of these measurements together at the second level, the problem of inter-task dependence or "non-sphericity" must be accounted for. SPM ([1]) treats the inter-task correlations as nuisance parameters, estimating an inter-task covariance and adjusting the inferences accordingly ([2]). However, correlations among tasks are of great theoretical interest in psychological studies, because they provide information about how tasks are related to one another. Hence the goal of this work is to analyze inter-task correlations as parameters of interest, not simply as nuisance parameters. Specifically, we wish to detect pairs of effects where a high response in one effect predicts a high (or low) response in the other effect. This kind of information in human performance is the basis of arguments for general factors underlying mental abilities (e.g., Spearman's G) but has seldom been applied to brain imaging data. Working with a two-level model for fMRI data, we estimate the matrix of correlations between task effects at each voxel and create statistic maps that detect the presence of non-zero correlations across different tasks. Then, using scatterplots and the individual correlations, we assess and interpret specific patterns of dependence. We demonstrate our method with an fMRI study of attention switching.
Methods We work with the following two-level model for fMRI data for n subjects: Y_i ~ N(X beta_i, sigma_i^2 V), beta_i ~ N(gamma, Sigma), i = 1, ..., n, with beta_1, ..., beta_n independent. (1) In (1), Y_i is of length T, the number of scans. The first level models the fMRI data for the i-th individual and the second level models the population mean gamma (a length-p vector) while allowing for a nondiagonal covariance Sigma. We wish to test whether the covariance matrix of a set of contrasts of the vectors of activation coefficients is diagonal. We assume that the within-subject variability sigma_i^2 is known, which is reasonable since T is generally large and consequently an accurate estimate of sigma_i^2 can be obtained. Allowing some possible loss of information, we work with the following modified formulation of this problem. To begin with, we assume that the design matrix X is the same for all i. Denote by c the k x p matrix whose rows give the contrast vectors. We pre-multiply both sides of the first equation of (1) by c (X^T X)^(-1) X^T and both sides of the second equation by c, and obtain a transformed two-level model. The observations in the first level are given by W_i = c (X^T X)^(-1) X^T Y_i, and the regression parameters corresponding to the two levels are given by eta_i = c beta_i and eta = c gamma. The vectors W_i, eta_i, eta are all of length k. The covariance matrix in the second level is given by Lambda = c Sigma c^T. It is possible now to formulate our question in terms of Lambda. We want to test H_0: Lambda is diagonal against H_1: Lambda is non-diagonal. (2) In this initial work, we will assume that we have an orthogonal design. That is, we assume that the k contrast estimators are independent, specifically that the first-level covariance of the contrast estimates, proportional to c (X^T X)^(-1) X^T V X (X^T X)^(-1) c^T, is diagonal. With this assumption, any correlation observed in W_i at the second level is attributable to population-level inter-task correlation. In this case we can use Bartlett's modification ([3]) of the relevant Likelihood Ratio Test (LRT) ([4], pp. 137-138) as an ad-hoc solution.
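The Bartlett-corrected LRT for diagonality of a covariance matrix reduces, under the null, to a test that the correlation matrix R is the identity. A sketch of the assumed form (illustrative, not the poster's exact statistic): the statistic −(n − 1 − (2k + 5)/6) ln det(R) is approximately chi-square with k(k−1)/2 degrees of freedom under H0.

```python
import math

def bartlett_sphericity(R, n):
    """Bartlett-type test that the k x k correlation matrix R is the
    identity (covariance diagonal), given n observations.
    Returns (statistic, degrees of freedom)."""
    k = len(R)
    # Determinant via Gaussian elimination with partial pivoting
    # (kept dependency-free on purpose).
    a = [row[:] for row in R]
    det = 1.0
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(a[r][i]))
        if a[p][i] == 0.0:
            # Singular R: det -> 0, statistic diverges.
            return float("inf"), k * (k - 1) // 2
        if p != i:
            a[i], a[p] = a[p], a[i]
            det = -det
        det *= a[i][i]
        for r in range(i + 1, k):
            f = a[r][i] / a[i][i]
            for c in range(i, k):
                a[r][c] -= f * a[i][c]
    stat = -(n - 1 - (2 * k + 5) / 6.0) * math.log(det)
    return stat, k * (k - 1) // 2
```

Applied voxel-wise to the second-level contrast estimates, large values of the statistic flag voxels with non-zero inter-task correlation.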
However, when the d
poster
Translational control of fatty acid synthesis controls nuclear division during the cell cycle. Nairita Maitra, Heidi Blank, Clara Kjerfve, Vytas Bankaitis, Michael Polymenis. Texas A&M University, Department of Biochemistry and Biophysics, College Station, TX 77843. Background • Lipid synthesis during mitosis is essential for both human cells and yeasts. Our recent mass spectrometry study in budding yeast found that, among all measured metabolites, lipid abundance is highest late in the cell cycle. However, how lipid homeostasis is maintained during the cell cycle is not understood. • Given the involvement of lipids in cancers and other diseases such as cardiovascular disease, obesity and diabetes, it is important to study the regulation of lipids during the cell cycle. Moreover, higher levels of lipogenic enzymes are indicative of poor prognosis in different cancers. • Fatty acids are the building blocks of lipids. A genome-wide study performed in budding yeast in our lab showed that the translational efficiency of mRNAs encoding two major fatty acid synthesis enzymes, acetyl-CoA carboxylase (ACC1) and fatty acid synthase (FAS1), peaks late in the cell cycle. Here, we show that the higher translational efficiency of ACC1 and FAS1 not only alters the abundance of different lipid species but also significantly promotes nuclear division during mitosis. • Taken together, our data link core metabolic functions of the lipogenic enzymes Acc1p and Fas1p to the morphological changes of the nucleus during cell division. Major lipids peak late in the cell cycle. Figure 1: Abundance of lipids is highest late in the cell cycle (Blank et al., MBoC, 2020). A. Heatmap of major lipid species during the cell cycle (log2 expression ratios across G1, S and G2/M) in a synchronous cell culture of wild-type, diploid cells (BY4743 background). The data were hierarchically clustered and displayed with the pheatmap R package. B.
Fatty acids are the building blocks of most lipids. Acc1p catalyzes the first committed step of the fatty acid biosynthesis pathway; further reactions in the pathway to produce fatty acids are catalyzed by the Fas1,2p complex. Lipogenic enzymes are under translational control in the cell cycle. Figure 2: ACC1 and FAS1 are under periodic translational control during the cell cycle. A. The major lipogenic enzymes (ACC1 and FAS1) are under periodic translational control in synchronous elutriated cultures of wild-type, diploid cells (BY4743 background). The x-axis shows the cell sizes (fl) at different time points in the cell cycle. The data were hierarchically clustered and displayed with the pheatmap R package. B. Fas1p abundance during the cell cycle was assayed by immunoblotting, starting at the indicated cell sizes (fl) and budding index (% B). The graphs at the bottom quantify the band intensities for each independent experiment (indicated with different open symbols), plotted on the y-axis as the log2 values of their expression ratios against the corresponding cell size (x-axis). Experiment-matched loading controls (filled symbols) were also quantified. [Schematic: uORF positions in the 5' leaders of FAS1 (ChrXI 100119-100671) and ACC1 (ChrXIV 662454-661374)]. De-repressing translation of ACC1 and FAS1 alters the abundance of lipids. Figure 3: uORFs mediate translational control of ACC1 and FAS1. A. Representation of the uORFs present in the 5' leaders of ACC1 and FAS1. B. Strip plot of the protein levels of FAS1-TAP in asynchronous cultures of wild-type cells and cells lacking the uORFs, quantified from four independent experiments. The protein levels were normalized for loading against Pgk1p levels, and the log2-transformed values are plotted on the y-axis for the indicated strains. C. Metabolites that changed in abundance in wild type vs.
uORFm-ACC1,FAS1 cells were identified based on the magnitude of the difference (x-axis; log2 fold change) and statistical significance (y-axis), indicated by the red
poster
References [1] S. Cruchley et al., Mater. High Temp., 2016, vol. 33, pp. 465-475. [2] S. Cruchley et al., Corros. Sci., 2015, vol. 100, pp. 242-252. [3] B. R. Barnard et al., Materials Science and Engineering: A, 2010, vol. 527, pp. 3813-3821. [4] L. Zheng et al., Applied Surface Science, 2010, vol. 256, pp. 7510-7515. [5] J. H. Chen et al., Oxid. Met., 1997, vol. 47, pp. 381-410. [6] H. E. Evans et al., Oxid. Met., 1978, vol. 12, pp. 473-485. [7] S. Cruchley et al., Materials Science and Technology, 2014, vol. 30, pp. 1884-1889. Experimental Method The alloy used in this project was RR1000, with composition (wt.%) given below. Samples approximately 10 mm x 10 mm x 2 mm were extracted from both coarse-grained (CG) and fine-grained (FG) variants of the alloy. Oxidation exposures were performed at 650 °C for 4000 h in laboratory air on the samples listed below. The surfaces of the samples were examined under SEM before being mounted and polished; the cross-sections were then examined under SEM. Factors Affecting Oxide Formation on a Ni-Based Superalloy: The Effect of Surface Condition & Grain Size. Acknowledgements The authors thank the Centre for Electron Microscopy at the University of Birmingham for their technical expertise. This work was supported by the Rolls-Royce/EPSRC Strategic Partnership under EP/H022309/1. Their financial support, as well as the provision of materials by Rolls-Royce, is most appreciated. T. D. Reynolds¹, M. P. Taylor¹, D. J. Child², P. M. Mignanelli², M. C. Hardy², H. E. Evans¹. ¹School of Metallurgy and Materials, University of Birmingham, Birmingham, B15 2TT, UK. ²Rolls-Royce plc, PO Box 31, Derby, DE24 8BJ, UK. Introduction The oxidation resistance of polycrystalline nickel-based superalloys is afforded by the chromium content of the alloy, which results in the formation of an external, protective, thermally grown chromia (Cr2O3) scale.
The oxidation kinetics of nickel-based chromia-forming alloys have been well documented in the literature [1-7]. Differences of several orders of magnitude have been observed in oxidation rates between different chromia-forming alloys. These differences have been attributed to differences in surface finish, the amount of cold work in the alloy, grain size, and oxidation environment. The coarse-grained variant of the RR1000 alloy has been extensively studied in polished and shot-peened surface conditions [1, 2, 7]; in the coarse-grained variant, shot-peening was reported to be detrimental to the oxidation properties [2]. This work extends the study of this alloy to include the fine-grained variant, and to a lower temperature (650 °C) at which no recrystallisation was expected. Figure 2: Comparison of external oxide thickness and internal oxide penetration depth for FG samples with different surface finishes. Figure 3: Comparison of external oxide thickness and internal oxide penetration depth between FG & CG samples in the polished and shot-peened conditions. Literature values are included for coarse-grained samples [2, 7]. Figure 4: SEM micrographs of CG samples. (a)(b) Polished sample: (a) planar view showing regions of delamination; (b) cross-sectional view highlighting a layer of aluminium-rich oxide. (c)(d) Shot-peened sample: (c) planar view showing buckled regions; (d) cross-sectional view. Figure 1: SEM cross-sectional micrographs of FG samples. (a) Polished sample. (b) Shot-peened sample highlighting a buckled region. (c) Planar view of turned sample. (d) Cross-sectional image of turned sample.
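As general background on the kinetics mentioned above (a textbook parabolic rate law for thermally grown chromia scales, not a fit to the data shown here), scale thickness x is commonly modelled as x² = k_p·t, so thickness doubles when exposure time quadruples. A minimal sketch with an illustrative rate constant:

```python
import math

def scale_thickness(kp_um2_per_h, hours):
    """Oxide scale thickness (um) under parabolic kinetics x^2 = kp * t.
    kp_um2_per_h is a hypothetical parabolic rate constant in um^2/h."""
    return math.sqrt(kp_um2_per_h * hours)
```

This is the functional form against which orders-of-magnitude differences in rate constants between alloys and surface conditions are usually compared.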
Alloy composition (wt.%): Ni 52.3, Co 18.5, Cr 15.0, Mo 5.0, Ti 3.6, Al 3.0, Ta 2.0, Hf 0.5, Zr 0.06, C 0.027, B 0.015. Surface conditions studied: Polished - fine-grained (3-5 µm) ✓, coarse-grained (40-60 µm) ✓; unstressed; Ra = 0.01 µm. Shot-peened - fine-grained ✓, coarse-grained ✓; compressive stress state; Ra = 1.42 µm. Turned - fine-grained ✓, coarse-grained ✗; tensile stress state; Ra = 2.33 µm. [Plot: external and intragranular oxide penetration depth / thickness (µm) for fine- and coarse-grained samples, polished vs. shot-peened surface finish] Cru
poster
NGC 2264 contains ~400 brown dwarfs in the mass range 0.02-0.08 M☉. Samuel Pearson, Aleks Scholz, Paula Teixeira, Koraljka Mužić, Víctor Almendros-Abad. sp246@st-Andrews.ac.uk. Results: We spectroscopically confirm 13 brown dwarfs, the first in NGC 2264. We find that requiring brown dwarf candidates to display additional criteria for youth significantly increases the success rate of follow-up spectroscopy. We estimate that NGC 2264 contains 200-600 brown dwarfs in the mass range of 0.02-0.08 M☉ and determine the slope of the sub-stellar mass function to be α = 0.43. These values are consistent with those measured for other young clusters. This points to a uniform sub-stellar mass function across all star-forming environments. Introduction: The origin of brown dwarfs and the substellar mass function have long been subjects of debate. The prerequisite for making progress on these issues is to have large and well-characterised samples of young brown dwarfs in diverse star-forming regions. NGC 2264 is an ideal region to achieve this due to its large population of ~1500 stellar members, relative proximity of 719 ± 16 pc, and low foreground extinction. NGC 2264 provides a perfect laboratory to study star and brown dwarf formation. Methodology: We have used deep multi-wavelength, multi-epoch data to build a catalogue of brown dwarf candidates in NGC 2264. Our brown dwarf candidates were initially identified by their near-infrared colours but were also required to display additional indications of youth, including variability, the presence of a disc, Gaia kinematics, rotation period and extinction. We used KMOS on the VLT to obtain follow-up spectroscopy for a small sub-sample of our candidates in order to determine their spectral type, extinction and the confirmation rate of our candidates. The first spectroscopically confirmed brown dwarfs in NGC 2264. arXiv:2108.07633. The slope of the substellar mass function is α = 0.43 for a power law of the form dN/dm ∝ m^(-α). The majority of our confirmed brown dwarfs are very young (<0.5 Myr). Using indicators of youth significantly improves spectroscopic follow-up efficiency. The substellar mass function for the three KMOS fields in NGC 2264 (see figure below), represented with three equally sized log-mass bins in the range of 0.02-0.08 M☉. For more details see Sect. 5.2, arXiv:2108.07633. Absolute J-band magnitude vs. effective temperature for the 32 low-mass stars and brown dwarfs (M2-M8) identified in this study (red) as well as the Venuti et al. (2018) sample of spectroscopically confirmed NGC 2264 cluster members (blue). For more details see Sect. 4, arXiv:2108.07633. Above: A deep I-band image of NGC 2264. The three fields that were observed with KMOS/VLT are overplotted in white. Confirmed brown dwarfs are marked in red. Left: The KMOS spectra for the 13 sources identified as brown dwarfs in NGC 2264. For details on the spectral fitting see Sect. 3, arXiv:2108.07633. The slope of the Spitzer/IRAC SED (αIRAC) can be used to identify disc-bearing sources (Lada et al. 2006). A disc is a strong indication of youth. Confirmed brown dwarfs (≥M6) are shown in red, the early-mid M stars (M2-M5.5) in blue, the contaminants in yellow, and the full Pearson 2020 catalogue in grey.
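A slope like the one quoted for dN/dm ∝ m^(−α) can be estimated from binned counts by a straight-line fit in log-log space. A toy sketch with illustrative numbers (not the KMOS sample; the function name and bin values are assumptions):

```python
import math

def mass_function_slope(bin_centers, counts, widths):
    """Least-squares estimate of alpha for dN/dm ∝ m^(-alpha),
    from counts per mass bin (centers and widths in solar masses)."""
    xs = [math.log10(m) for m in bin_centers]
    ys = [math.log10(n / w) for n, w in zip(counts, widths)]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope  # dN/dm ∝ m^(-alpha), so alpha is minus the log-log slope
```

With only three bins, as on the poster, the uncertainty on the recovered slope is dominated by Poisson scatter in the counts rather than by the fit itself.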
poster
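The mass-function slope quoted in the brown-dwarf poster above comes from counts in three equal log-mass bins spanning 0.02-0.08 M☉ (Sect. 5.2). A minimal sketch of how such a slope can be estimated, with invented toy counts rather than the poster's data, assuming a power law dN/dm ∝ m^(−α):

```python
# Hedged sketch (toy counts, not the poster's data): estimate alpha for
# dN/dm ∝ m^(-alpha) from binned counts in three equal log-mass bins.
import numpy as np

edges = np.logspace(np.log10(0.02), np.log10(0.08), 4)  # 3 equal log-mass bins
counts = np.array([6.0, 4.0, 3.0])                      # invented bin counts

centers = np.sqrt(edges[:-1] * edges[1:])               # geometric bin centers
dndm = counts / np.diff(edges)                          # counts per unit mass

# Straight-line fit in log-log space: the slope is -alpha.
slope, intercept = np.polyfit(np.log10(centers), np.log10(dndm), 1)
alpha = -slope
```

With only three bins this is a crude estimator; the paper's asymmetric uncertainties would come from a proper likelihood over the Poisson bin counts.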
Ultracool Dwarfs in the Gaia Catalogue of Nearby Stars [1] Luminosity function of the GCNS with Poisson uncertainties & expectation value. Stellar/substellar boundary at L3, in broad agreement with recent literature (e.g. [2]). Simulated completeness of Gaia EDR3 from M7-L8 using absolute magnitude distributions of known objects [3]. L3 M_G > 3 G−J + 25 2879 new UCD candidates, using criteria from [4] & WD probability < 20%. [1] Gaia Collaboration, et al., 2020, arXiv, arXiv:2012.02061 [2] Bardalez Gagliuffi, D. C., et al., 2019, ApJ, 883, 205 [3] Smart, R. L., et al., 2019, MNRAS, 485, 4423 [4] Reylé, C., 2018, A&A, 619, L8 GCNS Paper Here GCNS Website Here GCNS Data Here W. J. Cooper, R. L. Smart, C. Reylé, L. M. Sarro, J. Rybizki, H. R. A. Jones, GCNS Project Team
poster
‘Optimization of low glycemic index rice-based biscuits with potential benefits in diabetes control’ Cristiana Pereira1, Regina Menezes2,3, Vanda Lourenço4, Carla Brites1* INTRODUCTION: Mechanisms associated with controlling blood glucose: • Bioaccessibility: digestion process simulation and glycemic index • Inhibition of α-amylase and amyloglucosidase • Bioactivity: accumulation of IAPP and inhibition of glucose transport • Cytotoxicity: interference in the gut barrier and alterations in membrane permeability • Anti-inflammatory effect. Consumer's view. OBJECTIVE: Valorisation of rice and its by-products through the identification of functional natural bioactive compounds and their incorporation in rice-based biscuits. Rice by-products: bran, rice bran oil, rice protein. COMPOUNDS WITH ANTIDIABETIC PROPERTIES: γ-oryzanol, tocopherols, tocotrienols, phytic acid, albumin, aminobutyric acid. METHODOLOGY: 1. Characterization of different varieties and fractions of rice, at the chemical and biochemical level. 3. Evaluation of bioactivity of ingredients and digestibility. 2.
Biscuit trials: development of an experimental design for processing the biscuits and evaluation of the glycemic index of the biscuits. Ferulic acid. FINALLY, IT WILL BE POSSIBLE TO ANSWER THE FOLLOWING QUESTIONS: 1. Do the different nutritional and bioactive compounds of rice and its by-products influence the lowering of the glycemic index of a product, thus adding value to the use of these by-products as ingredients in functional foods? 2. Do the bioactive compounds present in rice and its by-products, which are incorporated in the biscuits, have inhibitory effects on the digestion and absorption of carbohydrates, and if so, what is the real bioaccessibility and bioactivity of these compounds in human digestion? 3. Can different commercial rice varieties have specific bioactive composition, glycemic index and impact on diabetes mitigation? 4. Do the bioactive compounds from rice and bran have an anti-diabetic effect? What is their relationship with starch enzymes and glucose transporters, and what is their effect on reducing IAPP aggregation, which leads to improved cellular viability in eukaryotic cell models? • Rice is a staple food, one of the most consumed in the world. The rice industry generates large quantities of by-products, without great valorisation. • Diabetes Mellitus: the high incidence of diabetes and the dietary limitations it entails urge the mobilization of R&I for the development of targeted food products. 5. Characterization of rice varieties with different genetic backgrounds that can be incorporated in a future low glycemic index product. This PhD plan will be integrated in the activities of 2 projects: TRACE-RICE (Tracing rice and valorizing side streams along Mediterranean blockchain); EIT-Food LAB Consumer Engagement Lab, Portuguese consortium for the production of a new type of "Wellness" biscuits by VIEIRA DE CASTRO S.A. and SONAE MC. 4.
Testing anti-diabetic potential of ingredients 1INIAV/UTI- Instituto Nacional de Investigação Agrária e Veterinária, Quinta do Marquês, 2784-505 Oeiras, P
poster
Terrestrial Frame from the Combined JASON-2 Orbit (GPS+SLR+DORIS) and GPS Constellation 26th European VLBI Group for Geodesy and Astronomy Working Meeting, 11–15 June 2023, Bad Kötzting, Germany VLBI with GNSS in the L-Band 8.5 GHz Lock-in Amplifier • Frequency range: DC – 8.5 GHz • Directly connected to an antenna - no intermediate frequency, no front-end • 2 independent lock-in units with signal generators • Option: measure up to 8 arbitrary frequencies in parallel within the measurement window • A/D conversion: 14 bits, 4 GSa/s • Size: 45×46×14.5 cm 1.8 GHz Lock-in Amplifier • Frequency range: DC – 1.8 GHz • Directly connected to an antenna - no intermediate frequency, no front-end • 2 independent lock-in units with signal generators • A/D conversion: 14 bits, 4 GSa/s • Size: 42×45×10 cm VLBI with GNSS is the only way to improve orbits of GNSS satellites in the future Result: combined terrestrial & celestial frame We propose observing GNSS satellites with VLBI operationally in the celestial frame. Basically, the terrestrial frame is fully determined only by the GNSS satellites, while the celestial frame is based on VLBI with AGNs. To combine these two fully independent frames, the only direct solution is to observe AGNs with VLBI in the L-band (the L1 and L2 GPS frequencies). All other approaches would introduce local ties and additional effects. This GNSS frequency band is internationally a metrology frequency band, outside the interference from huge LEO constellations of CubeSats operating in the S-band and the X-band. With the STE-QUEST mission proposal (Svehla et al. 2014), pre-selected (2009-2014) in the ESA Cosmic Vision Programme, as well as in the GRASP, eGRASP and GENESIS proposals and activities, there is only an indirect link, via LEO or a similar orbit, between GNSS satellites and AGNs in the celestial frame. Such an indirect link does not secure orbits of GNSS satellites directly connected with AGNs in the celestial frame.
The only way to improve the accuracy of GNSS satellite orbits, and in this way the terrestrial and celestial frame, is to observe GNSS satellites with VLBI relative to the AGNs. A LEO orbit is not sufficiently well suited to significantly improving the orbit accuracy of GNSS satellites (gravity, albedo, SRP and thermal re-radiation effects over a very short LEO orbit revolution), nor to being easily observed by all VLBI antennae on the ground. At higher orbit altitudes there are no active satellites placed in the inner Van Allen radiation belts <12000 km that would enable operating such a satellite actively over a longer period of time (e.g. passive LAGEOS satellites, or active GPS satellites placed at much higher orbit altitudes). In terms of ICRF frame realization, positions or definitions of AGNs are "bigger" in the L-band than in the X-band, but there are lock-in amplifiers and existing broadband VLBI receivers (VGOS) that can sample the VLBI signal up to the X-band and could also cover the L-band simultaneously with the S-band and the X-band. Therefore, the overall cost of such a modification at a VLBI station has been significantly reduced over the years. It is expected that there will be an extension of the celestial frame to higher frequencies such as the Q-band and V-band, mainly driven by NASA and ESA for the Delta-DOR technique, but the majority of VLBI stations could focus on the lower frequency bands that can be observed simultaneously, the L-band, S-band and X-band, or even up to the Ka-band, and used for the realization of reference frames. All GNSS frequencies in the L-band will remain internationally a metrology frequency band in the future. References: Svehla D, Rothacher M, Hugentobler U, Nothnagel A, Willis P, Biancale R, Ziebart M, Appleby G, Schuh H, Adam J, Iess L and Cacciapuoti L (2014) Terrestrial and Celestial Reference Frame Realization with Highly Elliptical Orbit - The ESA STE-QUEST Mission. Geophysical Research Abstracts, Vol.
16, EGU2014-7934-2, P
poster
CONFERENCE 2019 INTERNAL INSULATION SYSTEMS IN HISTORIC MONUMENTAL ARCHITECTURE: CRITICAL ISSUES, SOLUTIONS AND THE ONSITE LAB TOUR OF THE HeLLo PROJECT Scientific committee and organization With the support of The event is organized by This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 796712 16 December 2019, 14.00-18.00 Palazzo Tassoni Estense, Aula I5, via della Ghiara 36, Ferrara SCIENTIFIC COMMITTEE AND ORGANIZATION: Prof. Pietromaria Davoli, Arch. Luisa Dias Pereira, Arch. Giuseppe Camillo Santangelo, Centro Ricerche Architettura>Energia, Department of Architecture, University of Ferrara; Arch. Marta Calzolari, Department of Engineering and Architecture, University of Parma REGISTRATION AND PROFESSIONAL TRAINING CREDITS: Members of the Order of Architects, Planners, Landscape Architects and Conservators will be awarded 4 training credits, subject to mandatory registration on the website: https://imateria.awn.it PROGRAMME 13.30 REGISTRATION 14.00 INTRODUCTION Prof. Pietromaria Davoli, director of the Centro Ricerche Architettura>Energia, Department of Architecture, University of Ferrara 14.20 THE RETROFIT OF PROTECTED BUILDINGS BETWEEN CONSERVATION AND PERFORMANCE REQUIREMENTS Arch. Keoma Ambrogio, official responsible for the city of Ferrara and part of the municipalities of the province 14.40 THERMAL SIMULATIONS OF HISTORIC BUILDINGS Arch. PhD Marta Calzolari, researcher, Department of Engineering and Architecture, University of Parma 15.00 HYGROMETRIC SIMULATIONS OF HISTORIC BUILDINGS Arch. Dario Bottino-Leone, PhD student, Eurac Research, Bolzano 15.20 THE HeLLo PROJECT - HERITAGE ENERGY LIVING LAB ONSITE Arch.
PhD Luisa Dias Pereira, MSCA IF-EF fellow, Centro Ricerche Architettura>Energia, Department of Architecture, University of Ferrara 15.40 COFFEE BREAK 16.10 PROJECTS AND APPLICATIONS OF INTERNAL INSULATION IN NATURAL BLOND CORK – EXTERNAL-STYLE AND DRY-ASSEMBLY SOLUTIONS Geom. Emilio Capra, technician specialized in thermal and acoustic insulation, Coverd S.r.l. 16.30 INTERNAL INSULATION IN ROCK WOOL: PROJECTS AND APPLICATIONS Ing. Margherita Mor, Technical Specialist, ROCKWOOL® Italia S.p.A 16.50 INTERNAL INSULATION AND ANTI-MOULD/ANTI-SALT SOLUTIONS IN HYDRATED CALCIUM SILICATE Ing. David Matricardi, Field Engineer, Xella Italia S.R.L. (Ytong) 17.10 THE EXPERIENCE OF THE OPEN LABS FOR THE ENGAGEMENT OF NEW STAKEHOLDERS Arch. PhD Giuseppe Camillo Santangelo, head of the "Cantiere in aula" (construction site in the classroom), Department of Architecture, University of Ferrara 17.30 ONSITE LAB TOUR: VISIT TO THE EXPERIMENTAL LABORATORY OF THE HeLLo PROJECT Arch. PhD Marta Calzolari, Arch. PhD Luisa Dias Pereira 18.00 APERITIF
poster
Supporting the implementation of maritime spatial planning in the North Atlantic PARTNERS AND NATIONAL AUTHORITIES As SIMNORAT is being developed in parallel with the implementation of the MSP Directive in the countries concerned, the involvement of their competent authorities at steering-committee level ensures links and synergies between the two actions. For France, maritime policies are integrated in one ministry, MTES, and the management of maritime policies is organized by "sea basin". Shom (responsible for the safety of navigation and for describing the marine physical environment), CEREMA (Centre for studies and expertise on risks, the environment, mobility and planning) and AFB (French Agency for Biodiversity) are the main organizations involved in the MSP process, together with MTES and SGMer (General Secretariat for the Sea), the Prime Minister's office in charge of coordinating maritime policies. At each sea-basin level, several maritime and land prefects implement the MSP Directive, supported by the DIRMs (Inter-regional Directorates for the Sea). In Spain, two organizations already involved in the Marine Strategy Framework Directive process, CEDEX and IEO, have been delegated by the Ministry for Ecological Transition, MITECO, which is in charge of the MSP process. The organization of MSP is ongoing. In Portugal, the Directorate-General for Natural Resources, Safety and Maritime Services (DGRM) has delegated the University of Aveiro (UA), which participated in the TPEA project. In addition, the regions are responsible for many competences directly linked to the MSP process.
As it was clearly impossible to involve all the regions in the project, the CRPM, an association of 160 maritime regions of Europe active in the fields of integrated coastal zone management and civil protection, will therefore ensure the link with this important level of governance. SIMNORAT brings together a number of partners - research organizations, maritime planning authorities and marine management bodies. They address two key objectives through a variety of approaches, including: desk research; analysis of future trends; collaborative scenario development; stakeholder interviews; development of case studies; MSP analysis and testing tools; stakeholder engagement mechanisms. SIMNORAT's deliverables are aimed at planners and seek to identify and share best practices on technical aspects (such as data management), scientific aspects (such as ecosystem-based management) and social aspects (such as stakeholder engagement processes) in order to address the challenges of implementing the MSP Directive and to establish effective cooperation on cross-border MSP collaboration.
The project activities include: - Initial assessment of marine conservation and maritime sectors - Analysis of the MSP process - Analysis of the methodology for implementing the MSP Directive, with a particular focus on relations with OSPAR, land-sea interactions and integrated coastal management, and geographical scales - Identification of spatial demands and future trends in the marine conservation and maritime sectors - Support for access to and use of MSP data - Analysis of stakeholder engagement mechanisms and preparation of stakeholder workshops (local, national, transnational) - Case studies in the Bay of Biscay and the marine waters of the north-west Iberian coast JANUARY 2017 – JANUARY 2019 Budget: 1.8 million euros, co-financed by the EC – DG Maritime Affairs and Fisheries http://simnorat.eu/ The project's area of interest includes the marine waters of Portugal, Spain and France of the Os
poster
• TRAPPIST-1 shows Na I absorption that is not prominent in low-gravity dwarfs @EGonzales788 This research was supported by NSF under Grant No. NSF-1747996. Is TRAPPIST-1 a Unique M-dwarf Host Star? • K I and Na I alkali line depths are slightly deeper than in the field dwarfs Eileen Gonzales 1,2,3, Jacqueline Faherty 3, Jonathan Gagné 4, Johanna Teske 4,5, Andy McWilliam 5, Kelle Cruz 2,3 • SEDs are constructed from spectra and photometric points using the technique of Filippazzo+ 2015 and a parallax from Gaia DR2. • Using the SEDs, we integrate and solve for Lbol and Teff, assuming a radius as described in Filippazzo+ 2015 that relies on age estimates and evolutionary models. CONSTRUCTING SPECTRAL ENERGY DISTRIBUTIONS Optical: Cruz+ 2007, NIR: Gonzales+ in prep. TRAPPIST-1 is not as old as the subdwarfs. It most resembles a field object with signatures of youth. • The shape of the H band is most similar to the field dwarfs, but slightly more triangular. TRAPPIST-1 is an M7.5 dwarf that hosts seven rocky Earth-size planets, three of which are in its habitable zone. Given the abundance of M dwarfs throughout the Galaxy, as well as the ease with which rocky planets might be uncovered around low-mass stars with future surveys like TESS, an inquiry into the uniqueness of the TRAPPIST-1 system is particularly relevant today. TRAPPIST-1 is classified as a field dwarf with kinematics that suggest it is an "old disk" star. However, the near-infrared spectrum of TRAPPIST-1 exhibits a subtle peculiarity that causes it to be classified as an intermediate-gravity (INT-G) object using spectral indices. To understand this subtle peculiarity, as well as to place TRAPPIST-1 in context with other nearby M dwarfs, we have created a distance-calibrated spectral energy distribution (SED). Combining the most recent parallax measurement with optical and infrared spectra and all available photometry, we re-evaluate its bolometric luminosity and effective temperature.
We compare the resultant SED to a sample of old, young, and field-age objects of similar properties. Using a FIRE echelle spectrum, we also investigate the near-infrared Y, J, H, and K bands to compare observables linked to gravity, atmosphere, metallicity and age effects. Field Subdwarfs BROWN DWARFS OF THE SAME Teff: NIR Band-by-Band Comparison • TRAPPIST-1 matches the overall shape of the field dwarfs. • A temperature-dependent spread is seen from 1.28–1.8 μm. • TRAPPIST-1 and the low-gravity dwarfs overlap in the optical. • Low-gravity dwarfs are brighter than TRAPPIST-1 in the NIR and beyond. • GJ 660.1 B and TRAPPIST-1 overlap in the optical. • Both subdwarfs differ from TRAPPIST-1 in the NIR. (Y band) (H band) (J band) (K band) • The slope from 0.95–0.99 μm is similar to the field sources and J0953-1014. • Slight indication of VO absorption. Subdwarfs J Band Comparison • Depth of the Na I doublet looks similar for all sources. • Some subdwarfs have unequal depths for the individual lines of the Na I doublet. • The K I doublets of TRAPPIST-1 are of similar depth, unlike the subdwarfs, which exhibit differing depths of the K I doublets. Low gravity COMPARING OVERALL SEDs OF BROWN DWARFS OF THE SAME Teff
poster
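The SED bookkeeping described in the TRAPPIST-1 poster above (integrate the distance-calibrated SED to a bolometric flux, scale by the parallax distance to Lbol, then invert Stefan-Boltzmann with an assumed radius for Teff, in the spirit of Filippazzo+ 2015) can be sketched as follows. This is our own illustration with a toy SED and placeholder numbers, not the authors' code or TRAPPIST-1 data:

```python
# Hedged sketch of an SED -> (Lbol, Teff) pipeline; all inputs are invented.
import numpy as np

SIGMA_SB = 5.670374419e-5   # Stefan-Boltzmann constant, erg s^-1 cm^-2 K^-4
L_SUN = 3.828e33            # erg s^-1
R_SUN = 6.957e10            # cm
PC = 3.0857e18              # cm

def lbol_and_teff(wavelength_cm, flux_lambda, distance_pc, radius_rsun):
    # Bolometric flux: trapezoidal integral of F_lambda over wavelength.
    fbol = np.sum(0.5 * (flux_lambda[1:] + flux_lambda[:-1])
                  * np.diff(wavelength_cm))              # erg s^-1 cm^-2
    lbol = 4.0 * np.pi * (distance_pc * PC) ** 2 * fbol  # erg s^-1
    # L = 4 pi R^2 sigma T^4  ->  T = (L / (4 pi R^2 sigma))^(1/4)
    r = radius_rsun * R_SUN
    teff = (lbol / (4.0 * np.pi * r ** 2 * SIGMA_SB)) ** 0.25
    return lbol / L_SUN, teff

# Toy SED (a Gaussian bump around 1.1 micron) just to exercise the pipeline.
wl = np.linspace(0.5e-4, 5e-4, 2000)               # 0.5-5 micron, in cm
f = 1e-8 * np.exp(-((wl - 1.1e-4) / 0.6e-4) ** 2)  # erg s^-1 cm^-2 cm^-1
lbol_lsun, teff = lbol_and_teff(wl, f, distance_pc=12.4, radius_rsun=0.12)
```

In the real analysis the SED is stitched from calibrated spectra plus photometry and extrapolated beyond the observed range; the radius (and its uncertainty) comes from evolutionary models, which is where the age dependence enters.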
A study of 3+N neutrino scenarios through singular values Kamil Porwit, Wojciech Flieger, Janusz Gluza Division of Field Theory and Elementary Particle Physics, University of Silesia, Katowice, Poland Aim of studies ▶ Investigation of the experimentally determined 3 × 3 neutrino mixing matrix from the point of view of matrix theory, ▶ confrontation of 3 + N scenarios with experimental data, ▶ determination of the number of sterile neutrinos by analyzing singular values of three-dimensional mixing matrices, ▶ construction of the complete unitary mixing matrix via the procedure of matrix dilation. Common parametrizations of non-unitary mixing matrices [1] ▶ QR decomposition: V = (I − α)U = TU, where α is a lower triangular matrix and U is unitary. ▶ Polar decomposition: V′ = (I − η)U′, where η is a Hermitian matrix and U′ is unitary. Present status [1] ▶ Current upper bounds on the α matrix and the corresponding T matrix for three different neutrino mass scenarios: m > 246 GeV: αee ≤ 1.3·10⁻³, αµµ ≤ 2.2·10⁻⁴, αττ ≤ 2.8·10⁻³, |αµe| ≤ 6.8·10⁻⁴, |ατe| ≤ 2.7·10⁻³, |ατµ| ≤ 1.2·10⁻³, i.e. α ≤ [[0.00130, 0, 0], [|0.00068|, 0.00022, 0], [|0.00270|, |0.00120|, 0.0028]]. ∆m² ≳ 100 eV²: αee ≤ 2.4·10⁻², αµµ ≤ 2.2·10⁻², αττ ≤ 1.0·10⁻¹, |αµe| ≤ 2.5·10⁻², |ατe| ≤ 6.9·10⁻², |ατµ| ≤ 1.2·10⁻², i.e. α ≤ [[0.024, 0, 0], [|0.025|, 0.022, 0], [|0.069|, |0.012|, 0.100]]. ∆m² ∼ 0.1−1 eV²: αee ≤ 1.0·10⁻², αµµ ≤ 1.4·10⁻², αττ ≤ 1.0·10⁻¹, |αµe| ≤ 1.7·10⁻², |ατe| ≤ 4.5·10⁻², |ατµ| ≤ 5.3·10⁻², i.e. α ≤ [[0.010, 0, 0], [|0.017|, 0.014, 0], [|0.045|, |0.053|, 0.100]]. Matrix theory: singular values [2] Singular values σi of a given matrix A are the positive square roots of the eigenvalues λi of the matrix AA†: σi(A) = √(λi(AA†)), i = 1, 2, 3, ...
Properties: ▶ operator norm (spectral norm): ‖A‖ := sup_{‖x‖=1} ‖Ax‖ = σmax(A), ▶ contractions: ‖A‖ ≤ 1. General beyond the Standard Model (BSM) scenario [3] Extensions from the three-neutrino setting: V → W = [[V, Vlh], [Vhl, Vhh]] ▶ the matrix W is unitary, ▶ the operator norm is bounded by unity: ‖V‖ ≤ 1. Any deviation of a singular value of V from unity → BSM scenario. Matrix dilation: the minimal number of additional sterile neutrinos is determined by the number of singular values strictly less than one. Strategy: 3 + N scenario studies ▶ We study scenarios with additional neutrinos by constructing T matrices with prescribed singular values. ▶ The number of singular values strictly less than one completely fixes the minimal number n of additional neutrinos: σ(T) = {σ1, σ2, σ3}, n = number of σi < 1. ▶ Conditions given by constraints on the α matrix are met through random generation of T entries within the region given by the fixed singular values. ▶ To find non-diagonal cases we use the idea of an inverse eigenvalue problem, i.e., we construct a matrix with prescribed singular values [4].
Results: 3 + 1 ▶ m > 246 GeV: σ(T) = {1.000, 1.000, 0.992}, T = [[0.9987, 0, 0], [0.0007, 0.9988, 0], [0.0027, 0.0012, 0.9972]] ▶ ∆m² ≳ 100 eV²: σ(T) = {1.000, 1.000, 0.899}, T = [[1.000, 0, 0], [0.001, 1.000, 0], [0.012, 0.012, 0.900]] ▶ ∆m² ∼ 0.1−1 eV²: σ(T) = {1.000, 1.000, 0.888}, T = [[0.994, 0, 0], [0.013, 0.993, 0], [0.045, 0.053, 0.900]] Results: 3 + 2 ▶ m > 246 GeV: σ(T) = {1.000, 0.999, 0.999}, T = [[0.998702, 0, 0], [0.000476 + 0.000471i, 0.999880, 0], [0.000252 + 0.000139i, 0.001013 − 0.000198i, 0.998797]], |T| = [[0.99870, 0, 0], [0.00067, 0.9998, 0], [0.00028, 0.0010, 0.9988]] ▶ ∆m² ≳ 100 eV²: σ(T) = {1.000, 0.970, 0.969}, T = [[0.976753, 0, 0], [0.024451 + 0.000184i, 0.991755, 0], [0.005957 − 0.000192i, 0.010959 − 0.000386i, 0.970301]], |T| = [[0.977, 0, 0], [0.025, 0.992, 0], [0.006, 0.011, 0.970]] ▶ ∆m² ∼ 0.1−1 eV²: σ(T) = {1.000, 0.980, 0.979}, T = [[0.990114, 0, 0], [0.016262 + 0.000113i, 0.986144, 0], [0.011641 − 0.000160i, 0.010193 − 0.000068i, 0.982614]], |T| = [[0.990, 0, 0], [0.016, 0.986, 0], [0.012, 0.010, 0.983]] Conclusions ▶ Based on Weyl inequalities, the errors of the T entries are estimated at 0.003 [3], ▶ considering this error, all mass scenarios allow the addition of 1 or 2 additional neutrinos; however, in the case of 3 + 2 and m > 246 GeV the result is not decisive, ▶ singular values and the inverse eigenvalue problem are
poster
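The counting rule in the neutrino-mixing poster above (the minimal number of additional sterile neutrinos equals the number of singular values of the 3×3 block strictly below one) is easy to check numerically. A minimal sketch with NumPy, using the T matrix quoted for the 3+1, ∆m² ≳ 100 eV² scenario; the tolerance is our own choice, motivated by the quoted ~0.003 entry error:

```python
import numpy as np

# T matrix quoted in the poster (3+1 scenario, Delta m^2 >~ 100 eV^2).
T = np.array([
    [1.000, 0.000, 0.000],
    [0.001, 1.000, 0.000],
    [0.012, 0.012, 0.900],
])

# Singular values: positive square roots of the eigenvalues of T T^dagger.
sigma = np.linalg.svd(T, compute_uv=False)   # returned in descending order

# Dilation argument: minimal number of additional (sterile) neutrinos
# = number of singular values strictly less than one.
tol = 5e-3   # our choice: treat values within tol of unity as 1
n_extra = int(np.sum(sigma < 1.0 - tol))
```

Here two singular values sit at unity within the tolerance and one is near 0.9, so a single additional neutrino suffices to dilate T to a unitary matrix, matching the poster's 3+1 assignment for this scenario.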
Sodicity is a major abiotic stress reducing the yield of a wide variety of crops all over the world. To investigate mithipagal genotypes for sodicity tolerance, a study was taken up with 30 mithipagal genotypes. MCM-19 (Naduvakurichi local) maintained higher levels of chlorophyll and proline contents, as well as catalase and peroxidase activities, ascorbic acid content, number of female flowers and fruit yield under sodic soil conditions, followed by MCM-22 (Puthiyamputhur local) and MCM-21 (Arupukottai local). Abstract Genetic diversity and utilization of cultivated bitter gourd (Momordica charantia var. muricata) germplasm for varietal development S. Priyadharshini, K. Kumanan and A. Nithya Devi Vegetable Research Station, TNAU, Palur, Tamil Nadu – 607 102 Objectives • To estimate the extent of variability among the mithipagal genotypes for yield and quality under sodic soil conditions. • To investigate the biochemical basis of tolerance in Momordica sp. Introduction Materials and Methods Result and Discussion Conclusion Acknowledgment • Mithipagal (Momordica charantia var. muricata) • Small (6-10 cm length) and round-shaped fruit • Traditional medicine for the treatment of diabetes • Sodic soils - presence of excess Na+ and a high concentration of carbonate or bicarbonate anions; high pH (> 8.5–10.8), high SAR, and poor soil structure. • Genetic diversity in Momordica species - key to a successful breeding programme for obtaining varieties with abiotic stress tolerance.
• Location: HC&RI (W), Trichy • Genotypes: 30 nos. • Spacing: 2 × 1.5 m • Design: RBD • Replications: 3 • The field is sodic, sandy loam in texture, with a pH of 9.00, EC 0.94 dS m⁻¹ and ESP 21.76 per cent • Growth, yield, quality and biochemical characters were recorded. • Increased antioxidant enzyme (catalase and peroxidase) activity, chlorophyll content and proline content in mithipagal genotypes could be some of the physiological and biochemical mechanisms underlying sodicity tolerance in mithipagal genotypes. • On the basis of biochemical assays, MCM-19 (Naduvakurichi local), MCM-22 (Puthiyamputhur local) and MCM-21 (Arupukottai local) showed higher tolerance to sodicity. These genotypes may be utilized as sources for hybridization programmes for crop improvement. HGR P57 • MCM-19 exhibited superior expression in vine length (3.44 m), sex ratio (10.95:1), number of fruits per vine (78.33), fruit weight (9.08 g) and yield per vine (0.717 kg), followed by MCM-22 and MCM-21. • MCM-19 (Naduvakurichi local) maintained higher levels of chlorophyll (2.893 mg/g) and proline (650.42 µg g⁻¹) contents, as well as catalase (8.61 µg of H₂O₂ g⁻¹ min⁻¹) and peroxidase (0.082 ΔOD min⁻¹ g⁻¹) activities under sodic soil conditions. Fig 1. Variability in fruit character of mithipagal genotypes (MCM-19, MCM-22 & MCM-21) Field view of experimental field Variability in flower character
poster
A NOVEL APPROACH FOR DATA PROCESSING AND MANAGEMENT IN EDGE COMPUTING Armend Duzha, Dimosthenis Kyriazis Department of Digital Systems, University of Piraeus, Greece Edge nodes are close to the users, and as a result can potentially receive large amounts of privacy-sensitive data. AIAI 2022 18th International Conference on Artificial Intelligence Applications and Innovations 19 June 2022 – Crete, Greece Further information: aduzha@unipi.gr This work has received funding from the European Union's Horizon 2020 Innovative Training Networks programme under Grant Agreement No. 956562 Figure 2: High-Level Conceptual Architecture Challenges Introduction Approach References Develop an effective data governance solution in order to exploit the huge data volumes and allow interoperability, sharing and re-use of high-quality and high-value data, contributing towards the expansion of common EU Data Spaces. Build secure and trustworthy data processing pipelines, exploiting in-situ processing techniques on raw data locally at the edge to extract useful information and avoid excessive data transfer and redundant storage. Only the more important (aggregated) data are transferred to a central location. Produce actionable insights from distributed data by exploiting federated learning algorithms, aiming at building local models that describe the surrounding data more accurately, while incrementally pushing model updates to a centralized location that maintains the complete model, but not the complete data.
Provide approaches for resource optimization addressing both the cloud/edge space and the supported data operations, allocation of resources, and deployment patterns, driving towards the "green cloud computing" paradigm. Figure 1: Smart Mobility Use Case Edge nodes have limited resources, so they cannot support complex security mechanisms. Due to the high mobility of devices and users, the edge computing environment is always changing. With the development of the Internet of Things, more and more powerful devices are mediating our daily lives. This also results in massive data generated by billions of sensors and devices. Meanwhile, the increasing demand for advanced services and applications, such as virtual reality, augmented reality and smart cities [see Figure 1], has created huge challenges for cloud computing. Edge computing has been proposed as a new computing paradigm in which resources such as computation and storage are placed closer to the data source. It enables a new class of latency- and bandwidth-sensitive applications, as data can be processed at the nearest edge. By not sending all data to the cloud, edge computing can also enhance security and privacy.
poster
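The federated-learning idea in the edge-computing poster above (local models trained at the edge, with only model updates, never raw data, pushed to a central location) can be illustrated with a toy FedAvg-style sketch. This is our own minimal illustration, not the authors' implementation; the linear model, node sizes and random data are invented:

```python
# Hedged FedAvg-style sketch: edge nodes fit local models on private data;
# the coordinator averages the weights, weighted by local sample counts.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground-truth weights for the toy problem

def local_fit(n):
    """One edge node: fit a linear model on n private samples."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)   # slightly noisy labels
    w, *_ = np.linalg.lstsq(X, y, rcond=None)    # local least-squares model
    return w, n

# Three edge nodes holding different amounts of local data.
updates = [local_fit(n) for n in (50, 80, 120)]

# Coordinator: sample-weighted average of the local weights; only the
# weight vectors (not X or y) ever leave the nodes.
total = sum(n for _, n in updates)
global_w = sum(w * (n / total) for w, n in updates)
```

Real deployments iterate this exchange over many rounds and add compression, secure aggregation and differential privacy on top, but the data-minimizing shape is the same: raw data stays at the edge.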
Vascularization Features for Polyp Localization in Capsule Endoscopy V. B. Surya Prasath1, Hiroharu Kawanaka2 1Computational Imaging and Visualization Analysis (CiVA) Lab, Department of Computer Science, University of Missouri-Columbia, USA 2Graduate School of Engineering, Mie University, Japan prasaths@missouri.edu Contribution: Capsule Endoscopy Polyp Localization • Polyps in the gastrointestinal (GI) tract can be precursors to cancer, and detecting them early is important in determining their malignancy. • We consider polyp localization from capsule endoscopy imagery using vascularization features [1]. • Using features computed from principal curvatures of the image surface, and multiscale directional vesselness stamping, we obtain localization of polyps. Vascularization Features • Principal curvatures: let the principal curvatures of the image surface (x, y, I(x, y)) be κ1, κ2; the curvature-based texture feature is V_κ(x) = √(κ1²(x) + κ2²(x)). (1) • Multiscale directional vesselness stamping (MDVS): (a) Directional stamping [2]: V_d(x) = G(x) · I(x), where G(x) = (1 / (2π √|L|)) exp(−(1/2) xᵀ L⁻¹ x), L = R(δ) diag(λ1, λ2) R(δ)ᵀ, with R(δ) = [[cos δ, −sin δ], [sin δ, cos δ]] the rotation matrix, (λ1, λ2) the eigenvalues, and the angle of rotation δ = atan2(v2y, v2x), with v2 = (v2x, v2y) the smallest eigenvector of the structure tensor. (b) Frangi's filter [3] is based on multiscale Hessian eigen-analysis; let its output be V_f. We combine the directional stamping V_d(·) with the multiscale vesselness V_f(·) to obtain the multiscale directional vesselness stamping (MDVS): V_m(x) = V_d(x) · V_f(x). (2) We use the maximum response of both vascularization features, V_κ in Eqn. (1) and V_m in Eqn. (2), which can provide a good indicator of vascularization in polyps. (a) Polyp (b) Eigenvalues (λ1, λ2) (c) Angle δ (d) V_d(x) (e) V_f(x) IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Washington, DC, USA, 2015.
Localization Results Example polyp localization using vascularization features for different polyps imaged using Pillcam® capsules. Our approach obtains good localization and clearly reveals feature differences in adenoma and hyperplasia polyps. (a) Malignant (b) Benign Vascularization features can be used to localize colorectal polyps (circled here) from VCE frames. (a) Malignant polyps obtained maximum responses for the principal-curvature-based texture features, whereas (b) benign polyps have smaller values. Currently, we are evaluating the method with expert segmentations and developing a classification scheme based on the detected vessel features, similar to [4]. References [1] V. B. S. Prasath. Polyp detection and segmentation from video capsule endoscopy: A review. Journal of Imaging, 2015. Special issue on Image and Video Processing in Medicine (Eds.: G. P. Martinsanz, P. Morrow, K. Suzuki). [2] J. Bredno et al. Algorithmic solutions for live device-to-vessel match. Proc. SPIE Med. Imag., Vol. 5370, pp. 1486–1497, 2004. [3] A. Frangi et al. Multiscale vessel enhancement filtering. MICCAI, Springer LNCS Vol. 1496, pp. 130–137, 1998. [4] V. B. S. Prasath, R. Delhibabu. Automatic image segmentation for video capsule endoscopy. CIHD, Springer Briefs in Forensic and Medical Bioinformatics (Eds.: N. B. Muppalaneni, V. K. Gunjan), pp. 73–80, 2015. Website and More Information • Download the poster @ figshare.com: http://dx.doi.org/10.6084/m9.figshare.1585847 • Project page: http://goo.gl/ZtlrY1
DÜRER DIGITIZATION as part of the DFG transfer project Evidenz ausstellen

In a cooperative effort between university and museum, the team of the DFG-funded transfer project Evidenz ausstellen. Praxis und Theorie der musealen Vermittlung von Evidenzerzeugung (2015–2018) investigates how the contexts of meaning and the insights conveyed by images are generated and communicated in public forms of presentation. To this end, the experimental exhibition Double Vision. Albrecht Dürer & William Kentridge (Kulturforum Berlin, 20.11.2015–6.3.2016; Staatliche Kunsthalle Karlsruhe, 10.9.2016–8.1.2017) was developed. It serves as the visual laboratory for further project-related research, which explores the possibilities and limits of displaying works on paper as well as the specific aesthetics of black and white.

An in-depth study of the visual spaces of experience created by Albrecht Dürer's drawings and prints requires close analysis of the originals. To satisfy both the research interest and the conservation requirements for the lasting protection of the artworks, the Dürer holdings of roughly 1,600 works in the Kupferstichkabinett of the Staatliche Museen zu Berlin are currently being digitized as part of this cooperation. As a unique initiative, this partial cataloguing contributes to the long-term Open Access strategy of the Stiftung Preußischer Kulturbesitz, which aims to make its extensive collections accessible worldwide to all interested users under simplified conditions.
To provide digital access to the significant Dürer holdings in the Kupferstichkabinett of the Staatliche Museen zu Berlin (including unique drawings such as the portrait of his mother, 1514, and the wire-drawing mill, 1494, as well as an exceptionally large number of impression variants of a single printing plate or woodblock on different papers and/or printed by later hands), a team of art historians, a conservator, a photographer, and storage staff designed a workflow that ensures gentle handling of the artworks. Data entry proceeding in parallel will allow the data to go online in spring 2017.

Direction, FU Berlin: Prof. Dr. Klaus Krüger, Dr. Elke Anna Werner
Direction, Kupferstichkabinett SMB: Dr. Michael Roth
Conservation supervision: Dipl.-Rest. Georg Josef Dietz
Concept & coordination: Dr. Nadine Rottau
Collaboration: Franziska May, Charlotte Wagner
Photographer: Dietmar Katz

The Dürer research data collected are recorded in the museum documentation system of the Staatliche Museen zu Berlin and in the digital image library (Diathek) of the Kunsthistorisches Institut of the Freie Universität Berlin. Beyond these institutional resources, the data are fed into the online database SMB-digital and the Deutsche Digitale Bibliothek, and are made available under the Creative Commons license CC BY NC SA 3.0 Deutschland.

Poster text & layout: Nadine Rottau | Licensed under CC BY 4.0

Albrecht Dürer's works Rhinocerus, 1515 (woodcut), and Barbara Dürer, the artist's mother, 1514 (charcoal drawing), from the collection of the Kupferstichkabinett of the Staatliche Museen zu Berlin (above), and the record for the engraving Herkules am Scheidewege (c. 1498) in the Deutsche Digitale Bibliothek (below).

bildevidenz.de │ smb.museum │ doublevision-berlin.de