http://openstudy.com/updates/5095f338e4b0d0275a3cca49
## hartnn: An n×n matrix A has all diagonal elements equal to 0 and all off-diagonal elements equal to 1. Find the eigenvalues of A.
1. Dido525: Hartnn asking a question? That's a first :P Joking ;)
2. hartnn: I know $$|A-\lambda I|=0$$; solving for $$\lambda$$ gives the eigenvalues.
3. hartnn: For 2×2, it's $$\pm 1$$.
4. hartnn: For n×n, there are n eigenvalues.
5. mahmit2012: [drawing]

6. mahmit2012: [drawing]
7. hartnn: [drawing]
8. hartnn: yeah.
9. mahmit2012: [drawing]
10. mahmit2012: So A has n−1 eigenvalues equal to −1 and one eigenvalue equal to n−1.
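The answer can be derived in one line: A = J − I, where J is the all-ones matrix. J has eigenvalue n (on the all-ones eigenvector) and eigenvalue 0 with multiplicity n−1, so A has one eigenvalue n−1 and n−1 eigenvalues equal to −1. A quick numerical sanity check (my own, not from the thread):

```python
import numpy as np

n = 5
# A = J - I: ones everywhere except zeros on the diagonal
A = np.ones((n, n)) - np.eye(n)

# eigvalsh returns the eigenvalues of a symmetric matrix in ascending order
eigvals = np.linalg.eigvalsh(A)
print(eigvals)  # n-1 copies of -1, then a single n-1 (here 4)
```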
http://nuit-blanche.blogspot.nl/2010/09/ | ## Thursday, September 30, 2010
### CS: A Q&A with Felix Herrmann, SMALLbox
I took the opportunity of Felix Herrmann sending me the job opportunities for graduate studentships and postdocs to ask a question intended for people on the applied side of things:
Felix,
The question applies specifically to the applied folks. In most systems that are being solved, we somehow hope that the RIP or some such property holds for our system so we can get on with solving actual problems. It's all great and fun to pretend that, but in the end nobody is really convinced of the matter.
So here are my questions: in order to be more convincing, have you tried to show that your systems follow the Donoho-Tanner phase transition? If not, is it because of time constraints or something else?
Felix kindly responded with:
Hi Igor,
....
I have always been a fan of the phase diagrams by Donoho/Tanner because they capture the recovery conditions for typical sparse vectors and acquisition matrices. This means they are less pessimistic and reflect practical situations better.
However, in my field we are confronted with problem sizes that make it impossible to compute complete phase diagrams. In addition, we typically rely on redundant sparsifying transformations for which the theory is relatively underdeveloped.
Having said this, I looked in a recent paper at the "oversampling ratio" between the percentage of coefficients required to get a particular nonlinear approximation error and the degree of subsampling required to get the same error. I borrowed this idea from Mallat's book and adapted it to the seismic problem. In principle, this sort of thing is somewhat similar to phase diagrams and it gives some assurances, but I am afraid maybe not the type of assurances acquisition practitioners are looking for.
While useful, the method I presented in the paper is expensive to compute and does not really lead to a workflow that would provide practical principles to design acquisition scenarios based on compressive sampling. For instance, I am not aware of practical (read computationally feasible and doable in the field) ways to do QC etc. So in summary, while CS provides beautiful insights I think we still have a long way to go to bring this technology into the main stream.
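For readers who want to probe a single point of a Donoho-Tanner phase diagram on a small problem, here is a minimal sketch (my own illustration, not Felix's workflow): draw a Gaussian matrix, take compressive measurements of a sparse vector, and solve basis pursuit as a linear program. Sweeping the sparsity and undersampling ratios and recording the empirical success rate traces out the phase transition.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3              # ambient dimension, measurements, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit  min ||x||_1  s.t.  Ax = b,  via the split x = p - q, p, q >= 0:
# minimize 1'(p + q) subject to A(p - q) = b
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_rec - x_true)))  # recovery error at this (k/m, m/n) point
```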
Many seismic exploration techniques rely on the collection of massive data volumes that are subsequently mined for information during processing. While this approach has been extremely successful in the past, current efforts toward higher resolution images in increasingly complicated regions of the Earth continue to reveal fundamental shortcomings in our workflows. Chiefly amongst these is the so-called “curse of dimensionality” exemplified by Nyquist’s sampling criterion, which disproportionately strains current acquisition and processing systems as the size and desired resolution of our survey areas continues to increase. In this paper, we offer an alternative sampling method leveraging recent insights from compressive sensing towards seismic acquisition and processing for data that are traditionally considered to be undersampled. The main outcome of this approach is a new technology where acquisition and processing related costs are no longer determined by overly stringent sampling criteria, such as Nyquist. At the heart of our approach lies randomized incoherent sampling that breaks subsampling related interferences by turning them into harmless noise, which we subsequently remove by promoting transform-domain sparsity. Now, costs no longer grow significantly with resolution and dimensionality of the survey area, but instead depend on transform-domain sparsity only. Our contribution is twofold. First, we demonstrate by means of carefully designed numerical experiments that compressive sensing can successfully be adapted to seismic exploration. Second, we show that accurate recovery can be accomplished for compressively sampled data volumes sizes that exceed the size of conventional transform-domain data volumes by only a small factor. Because compressive sensing combines transformation and encoding by a single linear encoding step, this technology is directly applicable to acquisition and to dimensionality reduction during processing. 
In either case, sampling, storage, and processing costs scale with transform-domain sparsity. We illustrate this principle by means of number of case studies.
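The "subsampling interferences become harmless noise" idea is easy to see in one dimension. In this toy sketch (mine, not the paper's experiment), regular decimation of a single spectral line produces coherent aliases as strong as the line itself, while random subsampling with the same sample budget spreads that energy into a low, noise-like floor that sparsity promotion can then remove:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
x = np.cos(2 * np.pi * 50 * t / n)       # one spectral line (plus its mirror)

# Regular 4x decimation: keep every 4th sample, zero the rest
mask_reg = np.zeros(n)
mask_reg[::4] = 1
# Random 4x subsampling: same number of samples, random locations
mask_rnd = np.zeros(n)
mask_rnd[rng.choice(n, n // 4, replace=False)] = 1

spec_reg = np.abs(np.fft.fft(x * mask_reg))
spec_rnd = np.abs(np.fft.fft(x * mask_rnd))

# Compare the true peak (bin 50) to the strongest remaining bin
# (excluding the mirror at bin n-50): regular sampling yields aliases
# exactly as strong as the peak; random sampling yields a weak floor.
for name, s in [("regular", spec_reg), ("random", spec_rnd)]:
    others = np.delete(s, [50, n - 50])
    print(name, s[50] / others.max())
```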
On a different note, the SMALLbox toolbox was announced at the LVA conference. From the twitter feed:
SMALLbox provides a common API for various sparsity toolboxes like sparco, sparselab, etc.
From the website:
Sparse Models, Algorithms, and Learning for Large-scale data (SMALL)
SMALL will develop a new foundational framework for processing signals, using adaptive sparse structured representations.
A key discriminating feature of sparse representations, which opened up the horizons to new ways of thinking in signal processing including compressed sensing, has been the focus on developing reliable algorithms with provable performance and bounded complexity.
Yet, such approaches are simply inapplicable in many scenarios for which no suitable sparse model is known. Moreover, the success of sparse models heavily depends on the choice of a "dictionary" to reflect the natural structures of a class of data, but choosing a dictionary is currently something of an "art", using expert knowledge rather than automatically applicable principles. Inferring a dictionary from training data is key to the extension of sparse models for new exotic types of data.
SMALL will explore new generations of provably good methods to obtain inherently data-driven sparse models, able to cope with large-scale and complicated data much beyond state-of-the-art sparse signal modelling. The project will develop a radically new foundational theoretical framework for the dictionary-learning problem, and scalable algorithms for the training of structured dictionaries. SMALL algorithms will be evaluated against state-of-the art alternatives and we will demonstrate our approach on a range of showcase applications. We will organise two open workshops to disseminate our results and get feedback from the research community.
The framework will deeply impact the research landscape since the new models, approaches and algorithms will be generically applicable to a wide variety of signal processing problems, including acquisition, enhancement, manipulation, interpretation and coding. This new line of attack will lead to many new theoretical and practical challenges, with a potential to reshape both the signal processing research community and the burgeoning compressed sensing industry.
Credit: SpaceX, Test firing of a Merlin 1C regeneratively cooled rocket engine on a single engine vertical test stand at the SpaceX test facility in McGregor, Texas. The next Falcon flight will occur no earlier than November 8th.
## Wednesday, September 29, 2010
### CS: Hackaday and La Recherche
My submission to Hackaday on the EPFL ECG system caught on only because of the iPhone aspect of it. It would not be so bad if somehow this explanation had not been inserted in the summary:
....Oh, and why the iPhone? The device that displays the data makes little difference. In this case they’re transmitting via Bluetooth for a real-time display (seen in the video after the break). This could be used for a wide variety of devices, or monitored remotely via the Internet....
Not only that, but since the explanation about compressed sensing was somehow removed from the description of the hardware, many commenters seem to think that the prototype was built only to appeal to the latest shiny new iPhone story. As Pierre confirmed last night on Twitter, the iPhone performs a full l_1 minimization with wavelet frames in real time! Wow! As far as I could understand from the presentation, the encoding follows that of Piotr and Radu. The comments in the Hackaday entry are generally very instructive on how to communicate with a technical crowd that has not been exposed to compressive sensing before. The trolls are part of that game.
For our French readers, the October edition of La Recherche has a piece on compressive sensing on pages 20-21 entitled "Des signaux comprimés à l'enregistrement", with an interview of Emmanuel. The MRI photo is from Pierre's lab. The article points to the Big Picture as a reference document, woohoo!
## Tuesday, September 28, 2010
### CS: LVA 2010 on Twitter, More ECG, CS on Manifolds, Bayesian PCA, Job Postings at SLIM
Prasad Sudhakar just sent me the following:
Dear Igor,
I am writing to inform you that the 9th international conference on latent variable analysis and signal separation is kicking off today at St. Malo, France.
Here is the conference website: http://lva2010.inria.fr/
There are two sessions on sparsity: Wednesday morning and afternoon.
Also, I will try to tweet the highlights of the proceedings (mostly the sparsity sessions and plenary talks) at http://twitter.com/lva2010
Best regards,
Thanks Prasad, I am now following @LVA2010 on Twitter. @LVA2010 is also now on the compressed-sensing list on Twitter (let me know if you want to be added to it).
Following up on The EPFL Real-Time Compressed Sensing-Based Personal Electrocardiogram Monitoring System, here is: Implementation of Compressed Sensing in Telecardiology Sensor Networks by Eduardo Correia Pinheiro, Octavian Adrian Postolache, and Pedro Silva Girão. The abstract reads:
Mobile solutions for patient cardiac monitoring are viewed with growing interest, and improvements on current implementations are frequently reported, with wireless, and in particular, wearable devices promising to achieve ubiquity. However, due to unavoidable power consumption limitations, the amount of data acquired, processed, and transmitted needs to be diminished, which is counterproductive, regarding the quality of the information produced.
Compressed sensing implementation in wireless sensor networks (WSNs) promises to bring gains not only in power savings to the devices, but also with minor impact in signal quality. Several cardiac signals have a sparse representation in some wavelet transformations. The compressed sensing paradigm states that signals can be recovered from a few projections into another basis, incoherent with the first. This paper evaluates the compressed sensing paradigm impact in a cardiac monitoring WSN, discussing the implications in data reliability, energy management, and the improvements accomplished by in-network processing.
Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for data $$x \in \mathbb{R}^N$$ that are of high dimension N but are constrained to reside in a low-dimensional subregion of $$\mathbb{R}^N$$. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive sensing (CS) projection measurements. The statistical CS inversion is performed analytically. We derive the required number of CS random measurements needed for successful reconstruction, based on easily computed quantities, drawing on block-sparsity properties. The proposed methodology is validated on several synthetic and real datasets.
Bayesian Robust Principal Component Analysis by Xinghao Ding, Lihan He, and Lawrence Carin. The abstract reads:
A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
The code implementing this Bayesian PCA is here.
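For contrast with the Bayesian model, the optimization-based robust PCA baseline mentioned in the abstract can be sketched in a few lines of NumPy. This is a simplified inexact augmented Lagrangian version of principal component pursuit (my own sketch, not the authors' code): alternate soft-thresholding for the sparse part and singular value thresholding for the low-rank part.

```python
import numpy as np

def rpca_ialm(M, lam=None, max_iter=500, tol=1e-7):
    """Principal component pursuit via inexact ALM: M ~ L (low rank) + S (sparse)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    spec = np.linalg.norm(M, 2)
    Y = M / max(spec, np.abs(M).max() / lam)   # dual variable initialization
    mu, rho = 1.25 / spec, 1.5
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Sparse part: entrywise soft-thresholding of the residual
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Low-rank part: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        Z = M - L - S
        if np.linalg.norm(Z) < tol * np.linalg.norm(M):
            break
        Y = Y + mu * Z
        mu *= rho
    return L, S

# Toy check: rank-2 component plus 5% large sparse outliers
rng = np.random.default_rng(2)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.zeros((60, 60))
outliers = rng.random((60, 60)) < 0.05
S0[outliers] = 10.0 * rng.standard_normal(outliers.sum())
L_hat, S_hat = rpca_ialm(L0 + S0)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))  # relative error on L
```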
I usually don't post about graduate studentships but the sheer size of this announcement makes it worthwhile for a post. Felix Herrmann just sent me this announcement for three postdocs and ten graduate studentships at UBC. The jobs are posted in the compressive sensing jobs page. They are also listed below:
• September 27th, 2010, Three postdoctoral positions at the Seismic Laboratory for Imaging and Modeling (SLIM) of the University of British Columbia, Vancouver, BC, Canada
Project:
DNOISE is a 5-year NSERC and industry-funded project for research in seismic data acquisition, processing, and imaging. Our interdisciplinary approach builds on recent developments in compressive sensing, large-scale optimization, and full-waveform inversion from severely sub-sampled data. The project includes 10 graduate students, 3 postdocs, and a research associate. The postdoctoral positions, under the supervision of Felix J. Herrmann (Earth and Ocean Sciences), Ozgur Yilmaz (Mathematics), and Michael P. Friedlander (Computer Science), are available immediately.
Description:
The aim of the DNOISE project is to design the next generation of seismic imaging technology to address fundamental issues related to the quality and cost of seismic data acquisition, the ability to invert exceedingly large data volumes, and the capacity to mitigate non-uniqueness of full-waveform inversion.
You will be part of a dynamic interdisciplinary research group and will present your research at international conferences and to industry. You will also be involved in industry collaborations that include internships and projects on real field data. You will have extensive contact with graduate students, fellow postdocs, and faculty. We seek excellence in any of a wide variety of areas, spanning from theory, algorithm design, to concrete software implementations to be applied to field data. SLIM has state-of-the-art resources, including a 288 CPU cluster, Parallel Matlab, and seismic data-processing software.
Successful candidates will have a PhD degree obtained in 2008 or later in geophysics, mathematics, computer science, electrical engineering, or a related field, with a strong achievement record in at least one of the following areas: seismic imaging, inverse problems, PDE-constrained optimization, signal processing, sparse approximation and compressed sensing, convex optimization, and stochastic optimization. Earlier PhDs will be considered where the research career has been interrupted by circumstances such as parental responsibilities or illness. UBC hires on the basis of merit, and is committed to employment equity. Positions are open to individuals of any nationality.
Compressive seismic-data acquisition. Development of practical seismic acquisition scenarios, sigma-delta quantization, and experimental design for seismic inversion.
Full-waveform inversion. Development of PDE-constrained optimization algorithms that deal with large data volumes and that remedy the non-uniqueness problem.
Large-scale optimization. Development of optimization algorithms and software for sparse approximation and problems with PDE-constraints.
The University of British Columbia, established in 1908, educates a student population of 50,000 and holds an international reputation for excellence in advanced research and learning. Our campus is 30 minutes from the heart of downtown Vancouver, a spectacular campus that is a 'must-see' for any visitor to the city -- where snow-capped mountains meet ocean, and breathtaking vistas greet you around every corner.
How to apply: Applicants should submit a CV, a list of all publications, and a statement of research, and arrange for three or more letters of recommendation to be sent to Manjit Dosanjh (MDOSANJH@eos.ubc.ca). All qualified candidates are encouraged to apply; however, Canadians and Permanent Residents will be given priority. For more information, see: http://slim.eos.ubc.ca
• September 27th, 2010, Ten graduate students at the Seismic Laboratory for Imaging and Modeling (SLIM) of the University of British Columbia
Project:
DNOISE is a 5-year NSERC and industry-funded project for research in seismic data acquisition, processing, and imaging. Our interdisciplinary approach builds on recent developments in compressive sensing, large-scale optimization, and full-waveform inversion from severely sub-sampled data. The project includes 10 graduate students, 3 postdocs, and a research associate. Subject to admission by the faculty of graduate studies, prospective students can start as early as January 1st, 2011 under the supervision of Felix J. Herrmann (Earth and Ocean Sciences), Ozgur Yilmaz (Mathematics), or Michael P. Friedlander (Computer Science).
Description:
The aim of the DNOISE project is to design the next generation of seismic imaging technology to address fundamental issues related to the quality and cost of seismic data acquisition, the ability to invert exceedingly large data volumes, and the capacity to mitigate non-uniqueness of full-waveform inversion.
You will be part of a dynamic interdisciplinary research group and will present your research at international conferences and to industry. You will also be involved in industry collaborations that include internships and projects on real field data.
You will have extensive contact with fellow graduate students, postdocs, and faculty. We seek excellence in any of a wide variety of areas, spanning theoretical, algorithmic, and software development. A good background in mathematics is important.
The graduate funding is for two years for MSc students and four years for PhD students and includes generous funding for travel. PhD student funding includes a tuition waiver.
Successful candidates will join the PhD or MSc programs in one of the following departments at UBC: Earth and Ocean Sciences, Mathematics, or Computer Science. At SLIM you will have state-of-the-art resources available, including a 288 CPU cluster, Parallel Matlab, and seismic data-processing software.
The University of British Columbia, established in 1908, educates a student population of 50,000 and holds an international reputation for excellence in advanced research and learning. Our campus is 30 minutes from the heart of downtown Vancouver, a spectacular campus that is a 'must-see' for any visitor to the city -- where snow-capped mountains meet ocean, and breathtaking vistas greet you around every corner.
Liked this entry? Subscribe to the Nuit Blanche feed; there's more where that came from.
## Monday, September 27, 2010
### CS: The EPFL Real-Time Compressed Sensing-Based Personal Electrocardiogram Monitoring System
I previously mentioned the presentation of Pierre Vandergheynst on the EPFL compressive sensing ECG system (check also the Q&A on the matter). Pierre sent me more in the form of a webpage featuring the project A Real-Time Compressed Sensing-Based Personal Electrocardiogram Monitoring System by Hossein Mamaghanian, Karim Kanoun, and Nadia Khaled.
From the website:
#### OUR APPROACH
Capitalizing on the sparsity of ECG signals, we propose to apply the emerging compressed sensing (CS) approach for a low-complexity, real-time and energy-aware ECG signal compression on WBSN motes. This is motivated by the observation that this new signal acquisition/compression paradigm is particularly well suited for low-power implementations because it dramatically reduces the need for resource- (both processing and storage) intensive DSP operations on the encoder side. The price to pay for these advantages however, is a more complex decoder. In general, reconstruction algorithms for CS recovery are computationally expensive, because they involve large and dense matrix operation, which prevents their real-time implementation on embedded platforms.
This project has developed a real-time, energy-aware wireless ECG monitoring system. The main contributions of this work are: (1) a novel CS approach that precludes large and dense matrix operations both at compression and recovery, and (2) several platform-dependent optimizations and parallelization techniques to achieve real-time CS recovery. To the best of our knowledge, this work is the first to demonstrate CS as an advantageous real-time and energy-efficient ECG compression technique, with a computationally light and energy-efficient ECG encoder on the state-of-the-art Shimmer™ wearable sensor node and a real-time decoder running on an iPhone as a representative of compact embedded (CE) devices (acting as a WBSN coordinator).
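One common way to preclude large and dense matrix operations at the encoder is a sparse binary sensing matrix, where each input sample is added into only a few measurements, so compression costs only additions. The sketch below illustrates that idea under this assumption; it is not the EPFL implementation, and all dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, d = 256, 64, 8   # samples per ECG window, measurements, ones per column

# Sparse binary Phi: each column has d ones at random rows, so the encoder
# performs only d additions per input sample -- no multiplications,
# and no dense m-by-n matrix needs to be stored on the sensor node.
rows = np.array([rng.choice(m, d, replace=False) for _ in range(n)])  # (n, d)

def encode(x):
    y = np.zeros(m)
    for j, xj in enumerate(x):
        y[rows[j]] += xj           # d additions per sample
    return y

x = rng.standard_normal(n)         # stand-in for one ECG window
y = encode(x)

# Sanity check: identical to the dense product Phi @ x
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rows[j], j] = 1.0
print(np.allclose(y, Phi @ x))  # True
```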
This information has been added to the Compressive Sensing Hardware page.
## Saturday, September 25, 2010
### Orphan Technologies redux ...
Alejandro pointed out to me the recent book from David MacKay on sustainable energy. I haven't read the whole book but all the things he says on sustainability and nuclear power are right on. Since David is not an insider (read: he does not work and has not worked for a nuclear utility or a government entity dedicated to the use of nuclear materials), I'd say he is a pretty good source to rely on. What's disconcerting is that nobody in our political landscape can relate to enabling a power source beyond a five-year schedule. The book is here. It is also available from the Amazon - Nuit Blanche Bookstore (Kindle version). Maybe David's new position as Chief Scientific Advisor to the Department of Energy and Climate Change will change some of that (at least in the U.K.).
## Friday, September 24, 2010
From the ever great PetaPixel blog, here is the first plenoptic camera on the market, from Raytrix. This was unearthed from Photorumors, which featured a presentation by Adobe on using GPUs to reconstruct images from plenoptic cameras:
David Salesin and Todor Georgiev are the presenters. Looking at Todor's page, I noticed among many interesting tidbits:
Adobe Tech Report, April 2007 Lightfield Capture by Frequency Multiplexing. Careful theoretical derivation of what MERL authors call "Dappled Photography" and "Heterodyning". Proving that frequency multiplexing works for both mask-based and microlens-based cameras. This is a new result. It shows that the approach is truly universal: It's not a special "heterodyning" type of camera, but a "heterodyning" method of processing the data, applicable to any camera! Also, proposing a new "mosquito net" camera. And much more (wave artifact removal, F/number analysis). Examples.
Of related interest is this page: 100 Years Light-Field, featuring the work of Lippmann (I did not know the connection between Marie Curie and Gabriel Lippmann). Sure, this is about computational photography, an area which includes compressive sensing, and it shows that at least some hardware is considering having a two-step process. The GPU aspect is something that we are all convinced about, since our solvers probably need this type of computational power. The main question for compressive sensing is whether it can bring another dimension to the data being reconstructed. In other words, could a plenoptic camera be hacked into doing something entirely new? In other news, David Brady let us know that Hasselblad is releasing a 200 MP camera. Also, T-Ray Science, Inc. files a US non-provisional patent. From MarketWire:
VANCOUVER, BRITISH COLUMBIA--(Marketwire - Sept. 21, 2010) - T-Ray Science, Inc. (TSX VENTURE:THZ) announces it has filed a non provisional patent with the U.S. Patent and Trade Mark Office on a Unified Sensing Matrix for a Single Pixel Detector Based on Compressive Sensing (USPTO Serial No. 61/243,559).
T-Ray's Patent covers a particular process of acquiring a 2D image that allows for higher quality and faster scanning from a Terahertz beam. The Company is anticipating using the patented process in the development of its portable Skin Cancer Imager.
"T-Ray recognizes the importance of strengthening our patent portfolio," says T-Ray President and CEO Thomas Braun. "The signal process covered by this patent complements our existing intellectual property recently licensed from the BC Cancer Agency."
I don't have much time today as I have to fill out my invention disclosure forms...
If you think this blog provides a service, please support it by ordering through the Amazon - Nuit Blanche Reference Store
## Thursday, September 23, 2010
### Orphan Technologies ...
There is an entry in this blog that keeps on getting some attention; it features two technologies that, because they are complex, have become orphans (at least in the U.S.). Some would call them rudderless at the policy level:
While reading the interview of James Bagian, I was reminded that both of these technologies use Probabilistic Risk Assessments (PRA); maybe it's a sign that when you begin to use Monte Carlo techniques, you are doomed. Both sets of technologies are in that state because most political stakeholders have no interest in developing them. Take for instance the case of reprocessing nuclear materials after they have gone through one core use: the Left, which has for years built its political credentials on fighting earlier projects, cannot but be extremely negative about any development on some ideological level, while the Right cannot think of a market where building one plant results in putting billions of dollars in some government escrow account for years during the plant construction while the stock market has 2 to 15% returns. On a longer time scale, reprocessing is the only way we can buy some time, but the political landscape does not allow for this type of vision. It just boggles the mind that we ask people to waste energy in creating and recycling glass [see note in the comment section] but throw away perfectly usable plutonium.

On the other end of the spectrum timewise, maybe we should not care so much about nonproliferation as much as dealing with ETFs. Maybe a way to make ETFs less of a problem is getting people to use Monte Carlo techniques there... oh wait, they do; we're screwed. But the interview also pointed out issues in health care, where even getting people to follow a checklist is a matter that seems to involve lawyers. As it turns out, John Langford just had a posting on Regretting the dead about making tiny changes in our way of thinking to better map the expectations in the health care sector. It would seem to me that we ought to have a similar undertaking from the theoretical computer science community as well.
For instance, the Stable Marriage Problem could be extended a little further so that we could eventually get to know why certain traits, conditions or diseases stay endemic at a low level in the population. I certainly don't hold my breath on having algorithms changing some landscape as exemplified by the inability for industry to include ART or better algorithms in current CT technology even with the stick of the radiation burden [1]. In a somewhat different direction, Jason Moore has a post on Machine Learning Prediction of Cancer Susceptibility. From his grant renewal, I note:
This complex genetic architecture has important implications for the use of genome-wide association studies for identifying susceptibility genes. The assumption of a simple architecture supports a strategy of testing each single-nucleotide polymorphism (SNP) individually using traditional univariate statistics followed by a correction for multiple tests. However, a complex genetic architecture that is characteristic of most types of cancer requires analytical methods that specifically model combinations of SNPs and environmental exposures. While new and novel methods are available for modeling interactions, exhaustive testing of all combinations of SNPs is not feasible on a genome-wide scale because the number of comparisons is effectively infinite.
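To put a number on "effectively infinite": with a genome-wide panel of a million SNPs, even low-order interaction scans explode combinatorially (a back-of-the-envelope sketch, using a made-up panel size):

```python
from math import comb

n_snps = 1_000_000
for k in (1, 2, 3):
    print(k, comb(n_snps, k))
# k=1: 1e6 univariate tests; k=2: ~5e11 pairs; k=3: ~1.7e17 triples --
# exhaustive k-way scans are infeasible on a genome-wide scale.
```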
I am sure that some of you already see the connection between this problem and some of the literature featured here. As Seth Godin stated recently:
You can add value in two ways:
* You can know the answers.
* You can offer the questions.
Relentlessly asking the right questions is a long term career, mostly because no one ever knows the right answer on a regular basis.
I certainly don't know the answers but if you think this blog offers the right questions please support it by ordering through the Amazon - Nuit Blanche Reference Store
Reference:
Credit: NASA
## Wednesday, September 22, 2010
### CS: Around the blogs in 80 hours, Sudocodes, HTP code, Volumetric OCT and more...
Here are some of the blog entries that caught my eyes recently:
Simon Foucart, now at Drexel, released the Hard Thresholding Pursuit (HTP) code that was described back in August in Hard thresholding pursuit: an algorithm for Compressive Sensing. The code is here. I have added it to the reconstruction solver section of the Big Picture.
Dear all,
I've started my Master's thesis, and it's about CS for electron microscopy data. We have the common problem of the "missing cone": the specimen whose 3D model we want to capture casts a shadow when we rotate it. This shadow in the Fourier domain is a cone, and the data there is not useful. In the pixel domain we obtain an elongated image, so neither of the two representations is sparse.
So the question is: do you think it's possible to find any basis where the data is sparse? Or can we modify the data to obtain a sparse signal, apply CS and then revert the modifications? Or can we force the EM to produce a sparse signal?
Any suggestion will be welcome.
Thank you everybody.
Now let us see the publication and preprints that just showed up on my radar screen:
Acquiring three dimensional image volumes with techniques such as Optical Coherence Tomography (OCT) relies on reconstructing the tissue layers based on reflection of light from tissue interfaces. One B-mode scan in an image is acquired by scanning and concatenating several A-mode scans, and several contiguous slices are acquired to assemble a full 3D image volume. In this work, we demonstrate how Compressive Sampling (CS) can be used to accurately reconstruct 3D OCT images with minimal quality degradation from a subset of the original image. The full 3D image is reconstructed from sparsely sampled data by exploiting the sparsity of the image in a carefully chosen transform domain. We use several sub-sampling schemes, recover the full 3D image using CS, and show that there is negligible effect on clinically relevant morphometric measurements of the optic nerve head in the recovered image. The potential outcome of this work is a significant reduction in OCT image acquisition time, with possible extensions to speeding up acquisition in other imaging modalities such as ultrasound and MRI.
In this paper we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted $\ell_1$ minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with entries of each set having a specific probability of being nonzero. We propose a weighted $\ell_1$ minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters-the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero- so that when i.i.d. random Gaussian measurement matrices are used, the weighted $\ell_1$ minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the optimal weighted $\ell_1$ minimization outperforms the regular $\ell_1$ minimization substantially. We also generalize the results to an arbitrary number of classes.
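For readers who want to play with the idea, here is a minimal numerical sketch of weighted $\ell_1$ recovery via ISTA with a per-entry weighted soft-threshold. This is my own toy setup with made-up sizes and weights, not the paper's Grassmann-angle analysis or its optimal weights; the point is only that entries believed more likely to be nonzero get a smaller weight, i.e. less shrinkage:

```python
import numpy as np

# Solve  min ||A x - b||^2 / 2 + sum_i w_i |x_i|  by ISTA with a
# per-coefficient weighted soft-threshold (toy illustration).
rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:5] = [1.5, -2.0, 1.0, 2.5, -1.5]   # support inside the "likely" class
b = A @ x_true

w = np.ones(n)
w[:20] = 0.1                                # first 20 entries: likely nonzero
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the quadratic part
x = np.zeros(n)
for _ in range(500):
    g = x - step * (A.T @ (A @ x - b))                       # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * w, 0.0)   # weighted shrink

print(np.linalg.norm(x - x_true))           # small: support recovered
```

With uniform weights the same loop is plain $\ell_1$/lasso ISTA; the lower weight on the first class reduces the shrinkage bias exactly where nonzeros are expected.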
A Lower Bound on the Estimator Variance for the Sparse Linear Model by Sebastian Schmutzhard, Alexander Jung, Franz Hlawatsch, Zvika Ben-Haim, Yonina C. Eldar. The abstract reads:
We study the performance of estimators of a sparse nonrandom vector based on an observation which is linearly transformed and corrupted by additive white Gaussian noise. Using the reproducing kernel Hilbert space framework, we derive a new lower bound on the estimator variance for a given differentiable bias function (including the unbiased case) and an almost arbitrary transformation matrix (including the underdetermined case considered in compressed sensing theory). For the special case of a sparse vector corrupted by white Gaussian noise-i.e., without a linear transformation-and unbiased estimation, our lower bound improves on previously proposed bounds.
We propose a shrinkage procedure for simultaneous variable selection and estimation in generalized linear models (GLMs) with an explicit predictive motivation. The procedure estimates the coefficients by minimizing the Kullback-Leibler divergence of a set of predictive distributions to the corresponding predictive distributions for the full model, subject to an $l_1$ constraint on the coefficient vector. This results in selection of a parsimonious model with similar predictive performance to the full model. Thanks to its similar form to the original lasso problem for GLMs, our procedure can benefit from available $l_1$-regularization path algorithms. Simulation studies and real-data examples confirm the efficiency of our method in terms of predictive performance on future observations.
We propose the Bayesian adaptive Lasso (BaLasso) for variable selection and coefficient estimation in linear regression. The BaLasso is adaptive to the signal level by adopting different shrinkage for different coefficients. Furthermore, we provide a model selection machinery for the BaLasso by assessing the posterior conditional mode estimates, motivated by the hierarchical Bayesian interpretation of the Lasso. Our formulation also permits prediction using a model averaging strategy. We discuss other variants of this new approach and provide a unified framework for variable selection using flexible penalties. Empirical evidence of the attractiveness of the method is demonstrated via extensive simulation studies and data analysis.
If you think this blog provides a service, please support it by ordering through the Amazon - Nuit Blanche Reference Store
## Tuesday, September 21, 2010
### CS: The long post of the week (Remarks, Noisy Group Testing, Streaming Compressive Sensing and more)
Echoing an entry on why we should not care about P vs NP when discussing compressed sensing issues, and rather focus on the Donoho-Tanner phase transition, I was reminded (after re-reading one of Suresh's entries: "...Remember that exponential might be better than n^100 for many values of n.") that Dick Lipton pointed to this issue as well in Is P=NP an Ill Posed Problem?, albeit more eloquently. If last week's scammer (P = NP, "It's all about you, isn't it ?") had even bothered to answer the question half seriously, I think I would have sent some money, really... So maybe it's not such a good idea to use the same subject to fight the next scammer; they might get good at answering P vs NP questions :-)
Dick Lipton mentioned on his blog that he has new book out. Giuseppe also mentioned his intention on buying Understanding Complex Datasets: Data Mining with Matrix Decompositions. Both of them are in the Nuit Blanche bookstore.
Talking about the bookstore, some commenter summarized some of my thoughts in the entry about the store's introduction.
The Nikon camera having a projector built-in makes it possible to project on the object various pseudo-random matrix masks, for 3d reconstruction, dual-photography or various CS uses. Does the software allow to use the projector at the same time as the main camera module ? Some hacking might be needed to open these cool possibilities...
You bet! I might add that a similar stance should be taken for this FujiFilm Finpix camera, the 3D AIPTEK camcorder or the Minoru 3Dwebcam albeit more along the lines of MIT's random lens imager.
In a different direction, Kerkil Choi has a post on his website where he is trying to clear up some statement on his recent paper:
### Compressive holography and the restricted isometry property
posted Sep 8, 2010 4:34 PM by Kerkil Choi [ updated Sep 13, 2010 11:21 AM ]
I have received a surge of emails asking why our compressive holography necessarily works in the context of compressive sensing.
One intuitional explanation is that the hologram (after removing the signal autocorrelation and the twin image terms) may be interpreted as a slice of the 3D Fourier transform according to the Fourier diffraction slice theorem developed in diffraction tomography. A Fourier transform measurement matrix is called an incoherent measurement matrix in the compressive sensing literature. Our measurement is quite structured, but it may be thought of as an instance or realization of such random sets of samples.
A more rigorous argument relies on the work of Justin Romberg at Georgia Tech. Justin proved that a random convolution matrix built by multiplying an inverse Fourier transform matrix, a diagonal with random phase entries with unit magnitudes, and a Fourier transform matrix satisfies the restricted isometry property (RIP), which is an essential (but sufficient) condition for compressive sensing to work. Amusingly, such a matrix can be interpreted as a free space propagation matrix in the context of physics, which is often used in holography and interferometry - a free space propagation matrix has an identical form to that proven by Romberg. Again, the matrix is structured, but it may be viewed as a realization of such a class of random convolution matrices.
Here is Justin's paper:
Compressive Sensing by Random Convolution, Justin Romberg, SIAM J. Imaging Sci. 2, 1098 (2009), DOI:10.1137/08072975X
He proved that a random convolution matrix constructed as he suggests satisfies a concentration inequality similar to what is shown in the Johnson-Lindenstrauss Lemma (JL lemma). This fascinating link, which now shows in almost every CS literature, between the JL lemma and the RIP has been beautifully elucidated by Baraniuk et al - here is the paper:
R. G. Baraniuk, M. A. Davenport, R. A. DeVore and M. B. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, 2007.
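The structure is easy to poke at numerically. Below is a sketch following the description above (my own construction, not Choi's or Romberg's code): H = F^H D F with unit-modulus random phases on the diagonal of D preserves energy exactly, since it is unitary, and keeping a random subset of its rows yields the compressive sensing matrix of the random-convolution construction:

```python
import numpy as np

# Random convolution operator: diagonal random phases in the Fourier
# domain, i.e. H = F^H D F. Unitary F and unit-modulus D make H unitary,
# so free-space-propagation-style measurements preserve signal energy.
rng = np.random.default_rng(1)
n = 256
F = np.fft.fft(np.eye(n), norm="ortho")          # unitary DFT matrix
D = np.diag(np.exp(2j * np.pi * rng.random(n)))  # random unit-modulus phases
H = F.conj().T @ D @ F                           # random convolution operator

x = rng.standard_normal(n)
print(np.linalg.norm(H @ x) - np.linalg.norm(x))  # ~0: exact isometry

rows = rng.choice(n, size=64, replace=False)      # keep 64 of 256 rows
Phi = H[rows]                                     # the actual sensing matrix
```

The RIP statement is about Phi acting on sparse vectors, which this check does not prove; it only shows the energy-preserving physical structure the argument starts from.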
If this response is not enough to convince y'all that, if you have a physical system, your main job is to focus on getting a Donoho-Tanner phase transition diagram, then I don't know what can sway you. I mean, making a statement on RIP and then not broaching the subject afterwards in the paper, when it comes to the actual realization of this property by the hardware, is simply asking the reviewers to beat you up with a large and mean stick!
Of interest this week on the long post are the following papers:
Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool.
In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in the test outcome of that pool to be positive. However, this may not be always a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result in a pool despite containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design non-adaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items. In particular, for a population of size N and at most K defective items with activation probability p, our results show that M = O(K^2 log(N/K)/p^2) tests is sufficient if the sampling procedure should work for all possible sets of defective items, while M = O(K log(N)/p^2) tests is enough to be successful for any single set of defective items. Moreover, we show that the defective members can be recovered using a simple reconstruction algorithm with complexity of O(MN).
Extending Group Testing to include noisy measurements, I like it a lot!
The ability of Compressive Sensing (CS) to recover sparse signals from limited measurements has been recently exploited in computational imaging to acquire high-speed periodic and near periodic videos using only a low-speed camera with coded exposure and intensive off-line processing. Each low-speed frame integrates a coded sequence of high-speed frames during its exposure time. The high-speed video can be reconstructed from the low-speed coded frames using a sparse recovery algorithm. This paper presents a new streaming CS algorithm specifically tailored to this application. Our streaming approach allows causal on-line acquisition and reconstruction of the video, with a small, controllable, and guaranteed buffer delay and low computational cost. The algorithm adapts to changes in the signal structure and, thus, outperforms the off-line algorithm in realistic signals.
The attendant videos are here. I note from the introduction:
In this paper we significantly enhance this work in several ways:
• We develop a streaming reconstruction algorithm, the Streaming Greedy Pursuit (SGP) [9], which enables on-line reconstruction of the high-speed video. The CoSaMP-based SGP is specifically designed for streaming CS scenarios, with explicit guarantees on the computational cost per sample and on the input-output delay.
• We formulate a signal model to incorporate the similarities in the sparsity structure of nearby pixels in the reconstruction algorithm using the principles of joint sparsity and model-based CS [10, 11].
• We combat matrix-conditioning problems that arise due to the non-negative nature of imaging measurements by revisiting the minimum variance (or Capon) beamformer from the array processing literature [12] and re-introducing it in the context of CS. Our work significantly improves the reconstruction performance of coded strobing video and, more importantly, enables on-line reconstruction of the acquired video.
This paper develops an optimal decentralized algorithm for sparse signal recovery and demonstrates its application in monitoring localized phenomena using energy-constrained large-scale wireless sensor networks. Capitalizing on the spatial sparsity of localized phenomena, compressive data collection is enforced by turning off a fraction of sensors using a simple random node sleeping strategy, which conserves sensing energy and prolongs network lifetime. In the absence of a fusion center, sparse signal recovery via decentralized in-network processing is developed, based on a consensus optimization formulation and the alternating direction method of multipliers. In the proposed algorithm, each active sensor monitors and recovers its local region only, collaborates with its neighboring active sensors through low-power one-hop communication, and iteratively improves the local estimates until reaching the global optimum. Because each sensor monitors the local region rather than the entire large field, the iterative algorithm converges fast, in addition to being scalable in terms of transmission and computation costs. Further, through collaboration, the sensing performance is globally optimal and attains a high spatial resolution commensurate with the node density of the original network containing both active and inactive sensors. Simulations demonstrate the performance of the proposed approach.
We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.
Also in a somewhat different direction there is the presentation: Scalable Tensor Factorizations with Incomplete Data by Tamara G. Kolda, Daniel M. Dunlavy, Evrim Acar , Morten Mørup.
As stated earlier (CS: Compressive Sensing on Cleve's corner), Cleve Moler, a co-founder of MathWorks (the makers of Matlab), has an article on Compressed Sensing. An interesting tidbit shows up at the very end:
Compressed Sensing
Compressed sensing promises, in theory, to reconstruct a signal or image from surprisingly few samples. Discovered just five years ago by Candès and Tao and by Donoho, the subject is a very active research area. Practical devices that implement the theory are just now being developed. It is important to realize that compressed sensing can be done only by a compressing sensor, and that it requires new recording technology and file formats. The MP3 and JPEG files used by today's audio systems and digital cameras are already compressed in such a way that exact reconstruction of the original signals and images is impossible. Some of the Web postings and magazine articles about compressed sensing fail to acknowledge this fact.
Looks like somebody has been, at the very least, reading the Wikipedia page on the subject. Good!
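For the curious, the whole pipeline Moler describes fits in a few lines: the compression happens at the (here, simulated) sensor as random projections, and a greedy solver recovers the sparse signal. This is a generic textbook sketch using orthogonal matching pursuit, not Moler's code:

```python
import numpy as np

# Compressed sensing in miniature: m = 100 random projections of an
# n = 400 sample, k = 5 sparse signal; orthogonal matching pursuit (OMP)
# greedily rebuilds the support, then least-squares fits the amplitudes.
rng = np.random.default_rng(2)
n, m, k = 400, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)
y = A @ x                                    # all the "sensor" ever records

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))   # best-matching atom
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                      # update residual
x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x))   # essentially zero on successful recovery
```

Note Moler's caveat applies: this only works because the random projection y = Ax was the acquisition itself, not something computed from already lossy-compressed data.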
Finally, we have behind a paywall:
A compressive sensing approach to perceptual image coding by Mark R. Pickering, Junyong You, Touradj Ebrahimi. The abstract reads:
There exist limitations in the human visual system (HVS) which allow images and video to be reconstructed using fewer bits for the same perceived image quality. In this paper we will review the basis of spatial masking at edges and show a new method for generating a just-noticeable distortion (JND) threshold. This JND threshold is then used in a spatial noise shaping algorithm using a compressive sensing technique to provide a perceptual coding approach for JPEG2000 coding of images. Results of subjective tests show that the new spatial noise shaping framework can provide significant savings in bit-rate compared to the standard approach. The algorithm also allows much more precise control of distortion than existing spatial domain techniques and is fully compliant with part 1 of the JPEG2000 standard.
Credit: NASA/JPL/Space Science Institute,
W00065463.jpg was taken on September 18, 2010 and received on Earth September 20, 2010. The camera was pointing toward SATURN at approximately 1,963,182 kilometers away, and the image was taken using the MT3 and IRP90 filters.
## Monday, September 20, 2010
### CS: 1-bit Fest
I don't know if this was a local expression or just something known in the language of students, but back when I was one, sometimes we would unplug and do crazy things for a little while. We would generally call that period of time '-name-of-person-involved-in-the-crazy-partying-'-Fest. Today we have something that might not be that wild but is amazing nonetheless, so I am going to call it 1-Bit Fest; it comes in the shape of three papers. Maybe I should have a static page on the subject after all. Enjoy.
This paper considers the problem of identifying the support set of a high-dimensional sparse vector, from noise corrupted 1-bit measurements. We present passive and adaptive algorithms for this problem, both requiring no more than O(d log(D)) measurements to recover the unknown support. The adaptive algorithm has the additional benefit of robustness to the dynamic range of the unknown signal.
The recently emerged compressive sensing (CS) framework aims to acquire signals at reduced sample rates compared to the classical Shannon-Nyquist rate. To date, the CS theory has assumed primarily real-valued measurements; it has recently been demonstrated that accurate and stable signal acquisition is still possible even when each measurement is quantized to just a single bit. This property enables the design of simplified CS acquisition hardware based around a simple sign comparator rather than a more complex analog-to-digital converter; moreover, it ensures robustness to gross non-linearities applied to the measurements. In this paper we introduce a new algorithm — restricted-step shrinkage (RSS) — to recover sparse signals from 1-bit CS measurements. In contrast to previous algorithms for 1-bit CS, RSS has provable convergence guarantees, is about an order of magnitude faster, and achieves higher average recovery signal-to-noise ratio. RSS is similar in spirit to trust-region methods for non-convex optimization on the unit sphere, which are relatively unexplored in signal processing and hence of independent interest.
I have seen this "trust but verify" term used before; all y'all are reading too much Nuit Blanche and something sticks...
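To make the setting of these papers concrete, here is what keeping only one bit per measurement looks like. The back-projection estimator below is a deliberately crude stand-in of my own, not BIHT or RSS:

```python
import numpy as np

# One-bit measurements keep only sign(<a_i, x>): amplitude information is
# gone (x and 7x give identical bits), so 1-bit CS algorithms can at best
# recover the signal's direction on the unit sphere.
rng = np.random.default_rng(3)
m, n = 500, 50
A = rng.standard_normal((m, n))
x = np.zeros(n)
x[:3] = [1.0, -2.0, 0.5]

y = np.sign(A @ x)                       # the 1-bit comparator output
assert np.array_equal(y, np.sign(A @ (7.0 * x)))   # scale is invisible

est = A.T @ y                            # crude back-projection estimate
est /= np.linalg.norm(est)
print(est @ (x / np.linalg.norm(x)))     # correlation with true direction
```

Even this naive estimator correlates well with the true direction for Gaussian measurements; the papers above get much further with far fewer bits by exploiting sparsity and consistency.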
Scalar quantization is the most practical and straightforward approach to signal quantization. However, it has been shown that scalar quantization of oversampled or Compressively Sensed signals can be inefficient in terms of the rate-distortion trade-off, especially as the oversampling rate or the sparsity of the signal increases. In this paper, we modify the scalar quantizer to have discontinuous quantization regions. We demonstrate that with this modification it is possible to achieve exponential decay of the quantization error as a function of the oversampling rate instead of the quadratic decay exhibited by current approaches. We further demonstrate that it is possible to further reduce the quantization error by incorporating side information on the acquired signal, such as sparse signal models or signal similarity with known signals. Our approach is universal in the sense that prior knowledge of the signal model is not necessary in the quantizer design.
From the conclusion I note the following interesting tidbit:
Last, we should note that this quantization approach has very tight connections with locality-sensitive hashing (LSH) and ℓ2 embeddings under the Hamming distance (e.g., see [46] and references within). Specifically, our quantization approach effectively constructs such an embedding, some of the properties of which are examined in [47], although not in the same language. A significant difference is in the objective. Our goal is to enable reconstruction, whereas the goal of LSH and randomized embeddings is to approximately preserve distances with very high probability. A rigorous treatment of the connections between quantization and LSH is quite interesting and deserves a publication of its own. A preliminary attempt to view LSH as a quantization problem is performed in [48].
## Friday, September 17, 2010
### P = NP, "It's all about you, isn't it ?"
[Check the update at the end of this entry]
While the P=NP proof is taking some time to be digested by the community, I wondered if the time spent understanding its implications would weed out the wrong people. Case in point: here is another person who doesn't care about Deolalikar's proof, namely the scammer who took over Hadi Zayyani's Yahoo email account.
The first message came to me yesterday:
I Hope you get this on time,sorry I didn't inform you about my trip in Spain for a program, I'm presently in Madrid and am having some difficulties here because i misplaced my wallet on my way to the hotel where my money and other valuable things were kept.I want you to assist me with a loan of 2500Euros to sort-out my hotel bills and to get myself back home.I have spoken to the embassy here but they are not responding to the matter effectively,I will appreciate whatever you can afford to assist me with,I'll Refund the money back to you as soon as i return, let me know if you can be of any help. I don't have a phone where i can be reached. Please let me know immediately.
With best regards
At that point, I knew this was a scam, since you just need to take the whole text, feed it to the Google and see the results. It is however perplexing, because the e-mail header really shows it has been sent from Hadi's account. I then responded with the following, with the intent of figuring out whether this was a live scam or just one of those spam e-mails:
M:
I need a phone number to let you know where I can wire the money.
The scammer is interested, it seems. S:
Thanks for your response and also for your concern towards my present situation. Kindly send the money to me Using Western Union Money Transfer.I will be able to get the money in minutes after you transfer fund with ease. Here is the information you need.
Please scan and forward the receipt of the transfer to me or write out the details on the receipt and send to me,it will enable me to pick up the money from here.i will be waiting for your positive response.
Thanks
Ok, so now we are getting more personal, and so I have to make something up, because I really don't like people using other people's e-mail accounts for scamming:
You never responded to my query as to whether you think the recent P = NP proof will be important with regards to your reconstruction solver ?
I look forward to hearing from you on this matter.
PS: I have sent the money
S:
Kindly forward me the transfer details for picking up the money at the Western union outlet.
M:
You will get the details of the transfer only if you tell me if you think the recent P = NP proof will be important with regards to your reconstruction solver ? Thanks.
S:
No there's no need for P =NP prof concerning my present situation down here.I will refund the money back to you as soon as i get back home.
M:
It may be the case for your situation, but what about the recent P = NP proof as it pertains specifically to your reconstruction solver ? Thanks.
S:
I want you to know that all this you were asking me is not so important for me right now.Please i want you to understand me and do this for me so that i can get out of trouble here in Spain.
M:
It's all about you, isn't it ?
I haven't heard from that person since.
[Update: To give some background to our new readers: in compressive sensing, we solve what was thought to be an NP-hard problem; however, a simple additional constraint (sparsity) has put our problem in P territory. In the past few years, reconstruction solvers (in the P class) have steadily improved numerically (as opposed to just asymptotically). Our main concern these days is figuring out the Donoho-Tanner border.]
### CS: I don't really care what you call it, Samsung is planning on doing compressed sensing!
Image Sensor World, a very good blog on what is happening in the imaging sensors area, just featured the following tidbits:
### Samsung Plans 3D Cameras with Single Sensor
Korea Times quoted Park Sang-jin, president of Samsung’s digital imaging division, saying "As the issue with 3D televisions is providing a glass-free viewer experience, 3D cameras has a similar challenge for achieving a one-lens, one-imaging sensor approach. The two-lens, two-sensor 3D camera released by Fuji is still too expensive and inconvenient for users."
Park also said that the company may produce a camera capable of taking three-dimensional (3D) images sometime next year, but admitted that it will be a digital guinea pig, saying that the "real" 3D cameras that are suited for conventional use won’t probably be available until after 2012.
In the comment section, I guessed that the process involved in this technology would include some of Ramesh Raskar et al's work/ideas. The blog owner also mentioned a Kodak patent. Either way, light from different views is superimposed on the same chip, i.e., several views are multiplexed together. Irrespective of the reconstruction method that might take advantage of the particulars of the hardware, this is compressed sensing in that information is being multiplexed (and compressed) to eventually be produced back to the user (in 3D format). Now the question of interest is really whether the demultiplexing operation will use a sparsity argument. Some people might take the view that if this is not the case, then it is not what we all call compressed sensing. However, one should always be circumspect on the matter: demultiplexing your information through some known engineering does not mean that you are not implicitly using a sparsity assumption.
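To see why multiplexing by itself does not require a sparsity argument, here is a toy demultiplexing example, entirely my own model and nothing to do with Samsung's actual design: two views coded by known per-pixel masks can be separated by a per-pixel 2x2 solve, with no prior on the scene at all:

```python
import numpy as np

# Two views superimposed on the same sensor through known random masks
# over two exposures; with the masks known, demultiplexing is just a
# per-pixel 2x2 linear solve ("known engineering", no sparsity needed).
rng = np.random.default_rng(4)
npix = 1000
v1, v2 = rng.random(npix), rng.random(npix)     # the two views
m1a, m2a = rng.random(npix), rng.random(npix)   # masks for exposure A
m1b, m2b = rng.random(npix), rng.random(npix)   # masks for exposure B
sa = m1a * v1 + m2a * v2                        # sensor reading, exposure A
sb = m1b * v1 + m2b * v2                        # sensor reading, exposure B

det = m1a * m2b - m2a * m1b                     # per-pixel 2x2 determinant
r1 = (m2b * sa - m2a * sb) / det                # Cramer's rule, view 1
r2 = (m1a * sb - m1b * sa) / det                # Cramer's rule, view 2
print(np.max(np.abs(r1 - v1)), np.max(np.abs(r2 - v2)))   # ~0
```

The interesting regime for compressed sensing is when there are more unknowns than exposures, which is exactly when a sparsity (or other prior) argument becomes the demultiplexer.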
Of note, one of the commenters is none other than Eric Fossum, the inventor of the CMOS Active Pixel technology.
By the way, Ramesh let me know that the Camera Culture Lab now has a page on Facebook. Another Facebook page of interest is Yes, CSI Image Processing is Science ( Fiction! )
## Thursday, September 16, 2010
### CS: In this post I continue to not care about any proofs regarding P and NP... and more
I realize that I am commenting on the matter with some lateness. However, irrespective of the expected outcome (see Dick Lipton's latest entry on the subject), one could already make the following statement from day 1: it is very likely we should not care too much about this result in compressive sensing, as shown in the argument developed here four months ago: In Compressive Sensing: P = NP + Additional Constraints. Let's not even add to that argument that being in P also means an algorithm that scales as O(n^501) is OK, as opposed to an NP algorithm that scales as O(exp(0.000000001n)). At our level, the scales and their attendant constants are enormously important; asymptotics are not enough, at least when it comes to the reconstruction solvers.
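Putting numbers on that parenthetical, using the constants above (n^501 versus exp(10^-9 n)):

```python
from math import log

# exp(c*n) beats n^501 exactly when c*n > 501*ln(n). With c = 1e-9 the
# crossover only happens somewhere between n = 10^13 and n = 10^14, which
# is the whole point: asymptotics say nothing about practical sizes.
c = 1e-9
for n in (10**6, 10**13, 10**14):
    print(n, c * n > 501 * log(n))
```

For every problem size anyone will ever solve, the "bad" exponential algorithm wins here, which is why constants, not complexity classes, drive solver choices in practice.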
In unrelated news:
Credit: NOAA, NASA
https://www.physicsforums.com/threads/x-r-cos-u-i-sin-u-and-y-t-cos-v-i-sin-v.153086/
X=r(cos u+i sin u) and y=t(cos v + i sin v)
• #1
I need help getting this one started... PLEASE...
Given x = r(cos u + i sin u) and y = t(cos v + i sin v):
Prove that the modulus of (xy) is the product of their moduli and that the amplitude of (xy) is the sum of their amplitudes.
• #2
I don't really know the answer to this question outright, though it seems that they are taking x, y in the complex field to a polar-coordinate mapping. When you multiply two elements in this way, the lengths multiply and the angles add, which can be verified by straight multiplication of the variables xy.
Hence: xy = rt(cos(u+v) + i sin(u+v))
Graphically this looks like the lengths of x and y multiplied together with their angles added.
That should give you a good insight into what the modulus/length and the amplitude/direction should be.
Edit:
Also: when proving the xy = rt(cos(u+v) + i sin(u+v)) portion I posted, it may be helpful to look up the trig identities for cos(x+y) and sin(x+y), as they will allow you to make the needed substitutions :P.
• #3
Maybe use Euler's formula?
• #4
The modulus of a complex number is given by multiplying the number by its complex conjugate and taking the square root. So:
$$|x|^2 = x\overline{x}$$
$$|y|^2 = y\overline{y}$$
$$|xy|^2 = xy\overline{xy}=x\overline{x}y\overline{y}=|x|^2|y|^2$$
This works because complex multiplication is commutative (order doesn't matter). Now just take the square root of both sides and you're done.
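Both identities are easy to check numerically; here is a quick sketch using Python's cmath module (my addition, not part of the original thread):

```python
import cmath

# x = r(cos u + i sin u), y = t(cos v + i sin v) built via cmath.rect
r, u = 2.0, 0.7
t, v = 3.0, 1.1
x = cmath.rect(r, u)
y = cmath.rect(t, v)

# modulus of the product = product of the moduli
assert abs(abs(x * y) - r * t) < 1e-12

# amplitude (argument) of the product = sum of the amplitudes
# (u + v = 1.8 < pi here, so no 2*pi wrap-around is needed)
assert abs(cmath.phase(x * y) - (u + v)) < 1e-12
```

The same check with u + v above pi would need the phase compared modulo 2π, since `cmath.phase` returns values in (−π, π].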
http://math.stackexchange.com/questions/246496/the-mode-of-the-poisson-distribution | The mode of the Poisson Distribution
Lately, I am doing an investigation on Stirling's formula and its applications, so I thought I could use it to prove that the mode of the Poisson model is approximately equal to the mean. You do that by considering the curve formed by connecting the points of the probabilities of occurrence over the different values of the discrete random variable. Then you differentiate the p.d.f., where for $x!$ you use Stirling's formula $x!\approx \sqrt{2\pi x}~x^xe^{-x}$. The result is $\ln\lambda - 1/(2x) - \ln x$, whose roots cannot be found analytically, but by iterative methods we find that as $\lambda$ grows, the mode approaches the mean.
Problem is, I found the following paper online, which seems to be the solution from a Harvard's undergraduate problem set.
http://www.physics.harvard.edu/academics/undergrad/probweek/sol84.pdf
It reads "You can also show this by taking the derivative of eq. (2), with Stirling’s expression in place of the $x!$. Furthermore, you can show that $x = a - 1/2$ (where $a = \lambda$ in my case) leads to a maximum $P(x)$ value of $P_\max\approx1/\sqrt{2\pi a}$."
Does this puzzle you as much as it puzzles me? My main concern is over the "=" sign: how does this hold? The derivative=0 equation cannot have such an exact solution. Furthermore at such x, how does $P(X=a-1/2)$ give $1/\sqrt{2\pi a}$?
Am I (and my professor) missing something rather obvious or is the solution wrong?
Discuss!
PS: This sort of question might have been asked before, but still, I am really curious that somebody reads the paper in the link above, so that I can figure out what's going on.
The link does not work for me. But I note that it contains the word "physics" so it is possible that the solution would strike a mathematician as less than rigorous. (Or as complete nonsense.) – Johan Nov 28 '12 at 14:18
You are right, I had copied and pasted it wrong. It should work now: physics.harvard.edu/academics/undergrad/probweek/sol84.pdf – Ryuky Nov 28 '12 at 14:20
3 Answers
The weight $w(n)$ of the Poisson distribution with positive parameter $\lambda$ at the integer $n\geqslant0$ is $w(n)=\mathrm e^{-\lambda}\lambda^n/n!$ hence $w(n+1)/w(n)=\lambda/(n+1)$. One sees that $w(n)\gt w(n-1)$ for every $n\lt\lambda$ while $w(n+1)\lt w(n)$ for every $n\gt\lambda-1$. Thus, the mode of the Poisson distribution with parameter $\lambda$ is the highest integer $n_\lambda$ such that $n_\lambda\lt\lambda$.
The estimation of $w_\lambda=\max\limits_nw(n)$ when $\lambda\to\infty$ is direct through Stirling's equivalent since $\lambda-1\lt n_\lambda\lt \lambda$, and indeed yields $\lim\limits_{\lambda\to\infty}\sqrt{2\pi\lambda}\cdot w_\lambda=1$.
Edit: Extend the sequence $(w(n))_{n\geqslant0}$ to a function $W$ defined on $\mathbb R^+$ through the formula $W(x)=\mathrm e^{-\lambda}\lambda^x/\Gamma(x+1)$. Thus, $W(n)=w(n)$ for every nonnegative integer $n$, and the function $W$ is maximal at $x_\lambda$ such that $\log\lambda=\psi(x_\lambda+1)$, where $\psi$ denotes the digamma function. The asymptotic expansion of $\psi$ at infinity is $\psi(z)=\log(z)-\frac1{2z}+o(\frac1z)$ hence $\mathrm e^{\psi(z)}=z-\frac12+o(1)$ and $\lambda=x_\lambda+\frac12+o(1)$, that is, $\lim\limits_{\lambda\to\infty}x_\lambda-\lambda=-\frac12$.
To rely on Stirling's approximation to compute the mode would be, as somebody put it on this page, complete nonsense. – Did Nov 28 '12 at 14:27
I understand your reasoning about the mode. However, did you read the paper from the link? The part I pointed out is false, right? – Ryuky Nov 28 '12 at 14:32
Can you please elaborate on the last bit? – Ryuky Nov 28 '12 at 14:47
See Edit. – Did Nov 28 '12 at 14:59
To find the mode of the Poisson distribution, for $k > 0$, consider the ratio $$\frac{P\{X = k\}}{P\{X = k-1\}} = \frac{e^{-\lambda}\frac{\lambda^k}{k!}}{e^{-\lambda}\frac{\lambda^{k-1}}{(k-1)!}} = \frac{\lambda}{k}$$ which is larger than $1$ for $k < \lambda$ and smaller than $1$ for $k > \lambda$.
• If $\lambda < 1$, then $P\{X = 0\} > P\{X = 1\} > P\{X = 2\} > \cdots$ and so the mode is $0$.
• If $\lambda > 1$ is not an integer, then the mode is $\lfloor\lambda\rfloor$ since $P\{X = \lceil\lambda\rceil\} < P\{X = \lfloor\lambda\rfloor\}$.
• If $\lambda$ is an integer $m$, then $P\{X = m\} = P\{X = m-1\}$ and so either $m$ or $m-1$ can be taken to be the mode.
In all cases, the mode and the mean differ by less than $1$. You do not need to use Stirling's approximation at all.
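The case analysis above is easy to confirm numerically; a quick brute-force check in Python (my addition, not part of the original answer):

```python
from math import exp, factorial

def poisson_pmf(lam, k):
    return exp(-lam) * lam**k / factorial(k)

def mode(lam, kmax=100):
    # brute-force argmax of the pmf over a range safely containing the mode
    return max(range(kmax), key=lambda k: poisson_pmf(lam, k))

assert mode(0.4) == 0   # lambda < 1: the mode is 0
assert mode(6.7) == 6   # non-integer lambda > 1: the mode is floor(lambda)
assert mode(3.2) == 3
# integer lambda = m: both m and m - 1 attain the maximum
assert poisson_pmf(2.0, 1) == poisson_pmf(2.0, 2)
```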
If all you're trying to prove is that the mode of the Poisson distribution is approximately equal to the mean, then bringing in Stirling's formula is swatting a fly with a pile driver. You have $$P(X=x) = \frac{\lambda^x e^{-\lambda}}{x!}.$$ The mean is $\lambda$. Now let us seek the mode.
Observe that $$\frac{P(X=x+1)}{P(X=x)} = \frac{\lambda^{x+1}/(x+1)!}{\lambda^x/x!} = \frac{\lambda}{x+1}.$$ This is $\ge 1$ if $x\le\lambda-1$ and $\le1$ if $x\ge \lambda-1$. Thus $$P(X=x+1)\quad \left.\begin{cases} \ge \\ = \\ \le \end{cases}\right\}\quad P(X=x)$$ according as $$x\quad \left.\begin{cases} \le \\ = \\ \ge \end{cases}\right\}\quad \lambda-1.$$ The mode is therefore the integer part of $\lambda-1$. Except when $\lambda$ is an integer, in which case two consecutive integers are both modes.
The linked item at Harvard was trying to do much more than just find the mode.
Later addendum: Although the mode of the distribution must be within the set that is the support of the distribution, which is $\{0,1,2,3,\ldots\}$, the linked paper seeks the value of $x$ that maximizes $\lambda^x e^{-\lambda}/x!$ when non-integer values of $x$ are allowed. We could define $x!=\Gamma(x+1)$. However, the author uses Stirling's approximation, and that at least allows us to do something in closed form. Consider (at this point I'll call it $a$ instead of $\lambda$) $$\frac{a^x e^{-a}}{x!} \approx \frac{a^x e^{-a}}{x^x e^{-x}\sqrt{2\pi x}}.$$ Since $e^{-a}$ and $\sqrt{2\pi}$ don't depend on $x$ we can lose those and look at $$a^x e^x x^{-x} x^{-1/2} = \exp\left((x\log a) + x - x\log x - \frac12 \log x\right).$$ Since $\exp$ is an increasing function, we can seek the value of $x$ that maximizes the expression inside it, and that will be the value of $x$ that makes the derivative of that expression $0$: $$\frac{d}{dx} \left( (x\log a) + x - x\log x - \frac12 \log x \right) = \log a - \log x -\frac{1}{2x} = \log\left(\frac a x\right) - \frac{1}{2a}\left(\frac a x\right) = 0.$$ I don't see a way to get this in closed form, unless you want to bring in Lambert's $W$. That being the case, I don't know why he didn't just use $x!=\Gamma(x+1)$. But I entered this command into Wolfram Alpha:
f(x) = log(6) - log(x) - 1/(2x); x from 4 to 6
Lo and behold: it crosses the $x$-axis very close to $5.5$.
So the paper is not very explicit, to say the least, about how this conclusion was arrived at.
Still later addendum: Now I've entered this command into Wolfram Alpha:
f(x) = 6^x*e^(-6) / Gamma(x+1); from 5.49 to 5.51
It looks as if the maximum is near $5.494$.
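The Wolfram Alpha step can also be reproduced with a few lines of bisection; my sketch below solves $\log a - \log x - 1/(2x) = 0$ for $a = 6$, as in the answer:

```python
from math import log

A = 6.0  # the parameter a in the answer

def f(x):
    # derivative of the Stirling-approximated log-pmf: log(a) - log(x) - 1/(2x)
    return log(A) - log(x) - 1.0 / (2.0 * x)

def bisect(lo, hi, tol=1e-10):
    # f changes sign on [lo, hi]; halve the bracket until it is tiny
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

root = bisect(4.0, 6.0)
print(root)  # about 5.48, close to a - 1/2 = 5.5 as the paper claims
```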
Okay, I agree with all that, but if I understand correctly, in the part that I have quoted, he is saying that the maximum value of $P$ occurs at $x=a-1/2$. How is this so? – Ryuky Nov 28 '12 at 14:41
Yes. But except for that, how did he come up with this value of $x$. He is saying that he is taking the derivative, which after setting equal to $0$ gives a rather complicated equation that cannot be solved exactly. So, please tell me, what is he talking about? And if you plug this value of $x$ into the Poisson how can you get $1/\sqrt{2\pi a}$ There might be something that I am missing since I am still at high school, or what he is doing does not make sense. – Ryuky Nov 28 '12 at 15:00
@Ryuky : I've added some material on how $a-1/2$ was arrived at. The answer could be: numerically. But that's only for particular values of $a$. – Michael Hardy Nov 28 '12 at 17:26
The answer could be: numerically... No, this is a consequence of the expansion of the polygamma function $\psi$, see my answer. – Did Nov 28 '12 at 18:12
A minor correction: if $\lambda$ is not an integer, the mode is the integer part of $\lambda$ (not of $\lambda-1$ as you have it), cf. Didier's answer (and mine). – Dilip Sarwate Nov 28 '12 at 20:28
https://eprints.iisc.ac.in/16045/ | # Catechol Oxidase Activity of a Series of New Dinuclear Copper(II) Complexes with 3,5-DTBC and TCC as Substrates: Syntheses, X-ray Crystal Structures, Spectroscopic Characterization of the Adducts and Kinetic Studies
Banu, Kazi Sabnam and Chattopadhyay, Tanmay and Banerjee, Arpita and Bhattacharya, Santanu and Suresh, Eringathodi and Nethaji, Munirathinam and Zangrando, Ennio and Das, Debasis (2008) Catechol Oxidase Activity of a Series of New Dinuclear Copper(II) Complexes with 3,5-DTBC and TCC as Substrates: Syntheses, X-ray Crystal Structures, Spectroscopic Characterization of the Adducts and Kinetic Studies. In: Inorganic Chemistry, 47 (16). pp. 7083-7093.
## Abstract
A series of dinuclear copper(II) complexes has been synthesized with the aim to investigate their applicability as potential structure and function models for the active site of the catechol oxidase enzyme. They have been characterized by routine physicochemical techniques as well as by X-ray single-crystal structure analysis: $[Cu_2(H_2L2^2)(OH)(H_2O)(NO_3)](NO_3)_3\cdot2H_2O$ (1), $[Cu(HL1^4)(H_2O)(NO_3)]_2(NO_3)_2\cdot2H_2O$ (2), $[Cu(L1^1)(H_2O)(NO_3)]_2$ (3), $[Cu_2(L2^3)(OH)(H_2O)_2](NO_3)_2$ (4) and $[Cu_2(L2^1)(N_3)_3]$ (5) [L1 = 2-formyl-4-methyl-6R-iminomethyl-phenolato and L2 = 2,6-bis(R-iminomethyl)-4-methyl-phenolato; for $L1^1$ and $L2^1$, R = N-propylmorpholine; for $L2^2$, R = N-ethylpiperazine; for $L2^3$, R = N-ethylpyrrolidine; and for $L1^4$, R = N-ethylmorpholine]. Dinuclear 1 and 4 possess two "end-off" compartmental ligands with exogenous $\mu$-hydroxido and endogenous $\mu$-phenoxido groups, leading to intermetallic distances of 2.9794(15) and $2.9435(9)\ \AA$, respectively; 2 and 3 are formed by two tridentate compartmental ligands where the copper centers are connected by endogenous phenoxido bridges, with Cu-Cu separations of 3.0213(13) and $3.0152(15)\ \AA$, respectively; 5 is built from an "end-off" compartmental ligand having exogenous $\mu$-azido and endogenous $\mu$-phenoxido groups, with a Cu-Cu distance of $3.133(2)\ \AA$ (mean of two independent molecules). The catecholase activity of all of the complexes has been investigated in acetonitrile and methanol medium by UV-vis spectrophotometric study using 3,5-di-tert-butylcatechol (3,5-DTBC) and tetrachlorocatechol (TCC) as substrates. In acetonitrile medium, the conversion of 3,5-DTBC to 3,5-di-tert-butylbenzoquinone (3,5-DTBQ) catalyzed by 1-5 is observed to proceed via the formation of two enzyme-substrate adducts, ES1 and ES2, detected spectroscopically for the first time.
In methanol medium, no such enzyme-substrate adduct has been detected, and the 3,5-DTBC to 3,5-DTBQ conversion is observed to be catalyzed by 1-5 very efficiently. The substrate TCC forms an adduct with 2-5 without undergoing further oxidation to TCQ, due to the high reduction potential of TCC (in comparison with 3,5-DTBC). Most interestingly, however, 1 is observed to be effective even in TCC oxidation, a process never reported earlier. Kinetic experiments have been performed to determine the initial rates of reaction (3,5-DTBC as substrate, in methanol medium), and the activity sequence is 1 > 5 > 2 > 4 > 3. A treatment on the basis of the Michaelis-Menten model has been applied for the kinetic study, suggesting that all five complexes exhibit very high turnover numbers, especially 1, which exhibits a turnover number $K_{cat}$ of $3.24 \times 10^4\ h^{-1}$, which is $\sim 3.5$ times higher than that of the most efficient catalyst reported to date for catecholase activity in methanol medium.
Item Type: Journal Article
Journal: Inorganic Chemistry
Publisher: American Chemical Society
Additional Information: Copyright of this article belongs to American Chemical Society.
Department/Centre: Division of Chemical Sciences > Inorganic & Physical Chemistry
Date Deposited: 17 Oct 2008 05:47
Last Modified: 04 Jan 2013 07:19
URI: http://eprints.iisc.ac.in/id/eprint/16045
https://arxiv.org/abs/1407.3184 | Full-text links:
hep-ph
(what is this?)
# Title: Resummed Higgs $p_T$ distribution at NNLL + NNLO in bottom-quark annihilation
Abstract: The resummed transverse-momentum distribution for Higgs bosons produced via bottom-quark annihilation at the LHC is presented. Our results are obtained in the five-flavor scheme to NNLO+NNLL accuracy. We present a theoretical prediction which consistently matches the cross section at small and large transverse momenta. Theoretical uncertainties are derived from a variation of the unphysical scales entering the calculation. Their size is significantly reduced with respect to lower orders.
Comments: 8 pages LaTeX, 4 figures; Proceedings of Loops and Legs in Quantum Field Theory, Weimar, April 2014
Subjects: High Energy Physics - Phenomenology (hep-ph)
Cite as: arXiv:1407.3184 [hep-ph] (or arXiv:1407.3184v1 [hep-ph] for this version)
## Submission history
From: Anurag Tripathi [view email]
[v1] Fri, 11 Jul 2014 14:57:42 GMT (197kb,D) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8651137351989746, "perplexity": 8659.744652068788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00288-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://groups.oist.jp/ja/obu/fy2012-annual-report | # FY2012 Annual Report
## Open Biology Unit
### Abstract
Our unit aims to develop a novel software platform for systems biology and systems drug discovery that may fundamentally transform the way we study these fields.
Modern biology and medicine are fundamentally data- and evidence-driven, and researchers are fighting against the vastness and complexity of the data and the scattered nature of our knowledge. In order to obtain in-depth insights that may lead to new biological discoveries and medical implications, one must be able to properly access this data and knowledge and then integrate, analyse, and link them to practical solutions. This requires a new approach in science in the sense that it has to be open-ended, evolvable, community-based, and computationally supported.
Our unit is developing a methodology and software platform for open biology that entails the above features. This includes the Payao community-based pathway curation system, PhysioDesigner physiological modeling tool, drug-molecular network interaction prediction software, and a series of software packages that will be interoperable to The Garuda Platform. The Garuda Platform is a global alliance that we initiated to create a consistent, integrated user experience - a one-stop service style software platform. Such a platform will enable researchers to explore system-level characteristics of biological systems more effectively than before. At the same time, we are exploring a novel drug discovery approach based on computational systems biology, theory of biological robustness, and a concept of long-tail drugs.
### 1. Members
• Dr. Hiroaki Kitano, Professor (adjunct)
• Dr. Yoshiyuki Asai, Group Leader
• Dr. Kun-Yi Hsin, Researcher
• Takeshi Abe, Technical Staff
• Kyota Kamioshi, Technical Staff
• Ken Kuwae, Technical Staff
### 2. Collaborations
• Theme: Software platform for systems biology
• Type of collaboration: Research Collaboration
• Researchers:
• Dr. Samik Ghosh, Systems Biology Institute
• Ms. Yukiko Matsuoka, Systems Biology Institute
• Theme: Integration of Manchester text mining system to Payao
• Type of collaboration: Research Collaboration
• Researchers:
• Professor Sophia Ananiadou, Manchester University
• Theme: Open platform for multi-level modeling and simulation
• Type of collaboration: Joint research
• Researchers:
• Professor Taishin Nomura, Osaka University
• Professor Hideki Oka, RIKEN
• Professor Yoshihisa Kurachi, Osaka University
• Professor Kun-ichi Hagihara, Osaka University
• Dr. Masao Okita, Osaka University
• Professor Akira Amano, Ritsumeikan University
• Dr. Fumiyoshi Yamashita, Kyoto University
• Theme: Modeling and database with dynamic brain platform
• Type of collaboration: Joint research
• Researchers:
• Professor Hiroaki Wagatsuma, Kyushu Institute of Technology
• Professor Yoko Yamaguchi, RIKEN
• Theme: Neurosignal analysis
• Type of collaboration: Joint research
• Researcher:
• Professor Alessandro E. P. Villa, University of Lausanne
• Theme: 4D Visualization of simulation results
• Type of collaboration: Joint research
• Researchers:
• Dr. Ryo Haraguchi, National Cerebral and Cardiovascular Center
• Dr. Takashi Ashihara, Shiga University
• Theme: Cloud supporting simulation service; Flint K3
• Type of collaboration: Joint research
• Researchers:
• Dr. Nobukazu Yoshioka, National Institute of Informatics
• Shigetsohi Yokoyama, National Institute of Informatics
• Masaru Nagaku, National Institute of Informatics
### 3. Activities and Findings
#### 3.1 Garuda Platform
One of the major focuses of the unit is to launch and lead the Garuda Alliance, which is designed to remedy various interoperability issues among systems biology and biomedical software tools and data resources. It aims at providing a coherent and comprehensive software, data, and knowledge platform for systems biology and biomedical research, to maximize the efficacy of software- and computation-oriented research while significantly benefiting end-users (typically bench biologists, bioinformatics staff at the user site, and those in related industrial sectors).
One of the Garuda Alliance's intentions is to provide a novel open-innovation platform that may dramatically improve the R&D productivity of basic biology research and the pharmaceutical industry. We strive to accomplish these objectives by providing mechanisms for global distribution of computational, software, and knowledge-base resources as a one-stop service portal, a unified application program interface (the Garuda Core API), consistent user experiences, and solid user support.
In FY2012, the community beta version of the Garuda core was released. New systems biology tools, such as Cytoscape, joined the Garuda community and added support for the Garuda client component so that they can seamlessly connect to other applications through open, standardized interfaces. Applications that had supported Garuda previously were updated to the latest Garuda at the Garuda Nine workshop.
Garuda workshops held in FY2012:
• Garuda Nine Workshop @ OIST Feb 19-22, 2013
• Garuda Eight Workshop @ ICSB Toronto Aug 18, 2012
#### 3.2 Development of software for multilevel modeling: PhysioDesigner
PhysioDesigner is software that supports creating computable models of physiological systems spanning multiple spatiotemporal levels. Models built in PhysioDesigner are written in the PHML format, an XML-based specification that succeeds ISML; we rebranded ISML as PHML in December 2011. In FY2012, we released three versions of PhysioDesigner, i.e., 1.0 beta1 (April 2012), 1.0 beta2 (September 2012), and 1.0 beta3 (January 2013), at physiodesigner.org (Fig. 2).
Figure 2: A snapshot of the official website for PhysioDesigner.
Figure 3: Snapshots of PhysioDesigner and Flint.
The major achievements in PhysioDesigner development in FY2012 were functions (1) to support the Garuda platform (http://www.garuda-alliance.org/), (2) to semi-automatically link edges among instances for large-scale modeling, (3) to create a computational model as an aggregate of modules based on a 3D volume object such as a heart, and (4) to visualize several medical image formats, such as NIfTI, Analyze, and so on.
We often find physiological structures composed of many similar sub-structures, such as muscle tissue as an aggregate of many muscle cells, or a neural network consisting of many similar neurons. To model such physiological systems, PhysioDesigner provides a template/instance framework to deal with these repetitive structures systematically. Multiple instances can be created according to a template. Instances are not simple "copies" of the template: all properties of the instances are inherited from the template, but each instance itself does not carry concrete definitions of physical quantities such as states and parameters. Since all instances follow the configuration of their template, functional edges connected to the template are considered to be connected to all instances as well. This is convenient when all instances need to receive some information from other modules.
The template/instance framework is a powerful method for creating, for example, a whole-organ model, because organs are usually formed from a large number of cells of the same type, which can be modeled by instances. For such modeling, we also need to consider the morphology. PhysioDesigner can combine morphometric data in voxel-based volume representation with templates/instances to create such a model. A voxel-based volume model is an aggregate of volume elements representing values on a regular grid in three-dimensional space. By replacing each voxel with an instance and defining how an instance links to its adjacent instances (in 6 or 26 directions), a computable model is created from a voxel-based volume object (Fig. 4).
Fig. 4 Schema to create a model using template/instance framework based on a volume object.
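To make the voxel-to-instance scheme concrete, here is a minimal sketch in Python (an illustration of the idea only, not PhysioDesigner code): each non-empty voxel becomes an instance, and edges are generated between instances in the 6 face-adjacent directions.

```python
def build_instance_network(volume):
    """volume: dict mapping voxel coordinates (i, j, k) to a tissue value;
    empty voxels are simply absent.  Returns one instance id per voxel and
    the set of undirected edges between 6-face-adjacent instances."""
    instances = {pos: "instance_%d" % n for n, pos in enumerate(sorted(volume))}
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    edges = set()
    for (i, j, k) in volume:
        for di, dj, dk in offsets:
            nb = (i + di, j + dj, k + dk)
            if nb in volume:
                # store each undirected edge only once
                edges.add(tuple(sorted((instances[(i, j, k)], instances[nb]))))
    return instances, edges

# A 2x1x2 slab of "tissue": four voxels with four face adjacencies
volume = {(0, 0, 0): 1, (1, 0, 0): 1, (0, 0, 1): 1, (1, 0, 1): 1}
instances, edges = build_instance_network(volume)
print(len(instances), len(edges))  # 4 4
```

The 26-neighbour variant would simply enumerate all offsets with coordinates in {−1, 0, 1} other than the origin.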
#### 3.3 Development of software for visualization of simulation results: PhysioVisualizer
Using PhysioDesigner, we can create multilevel, large-scale computable models of physiological systems, and Flint can run simulations of those models. What was missing was a visualizer of the simulation results. In particular, since models can now be created from morphological data with the template/instance framework as mentioned above, a visualizer that can map the simulated values onto the morphological object was needed. PhysioVisualizer handles the morphological object and the simulation data with the concept of layers; namely, it can overlay several layers in one visualization (Fig. 5).
Although the basic functions are already implemented, PhysioVisualizer is still under development. It will be able to output a movie file as the result of visualizing Flint simulation data together with volume object data.
Fig. 5 A snapshot of PhysioVisualizer.
#### 3.4 Development of software for simulations of multilevel physiological models: Flint
Flint is a simulator that aims to be capable of simulating multilevel physiological models described in PHML/SBML. At the end of FY2012, we released Flint 1.0 beta 3 for those interested in running simulations of such models on their desktops. It now comes with better performance and support for the latest PHML features, as well as useful functions such as rendering logarithmic-scale graphs. We also enhanced its interoperability with existing desktop and web applications for biology by supporting the Garuda API, in compliance with the Garuda Alliance (http://www.garuda-alliance.org/).
Fig. 6 Flint configuration and simulation windows.
#### 3.5 Development of Flint K3 for cloud and HPC computing
Since models are getting larger and larger, Flint needs to be able to run on high-performance computers. We have been developing a Flint server, called Flint K3, that works on compute clouds, so that users of PhysioDesigner can immediately send simulation jobs to a high-performance computing environment even if they do not have access to high-performance computers themselves.
Fig. 7 Schema of Flint K3 architecture.
We started K3 on "edubaseCloud" (http://edubase.jp/cloud), an open-source-based compute cloud for education in cloud engineering developed at the National Institute of Informatics (NII). For development and preliminary test runs of K3, 64 cores on the cloud are assigned.
K3 is composed of two kinds of servers, as shown in Fig. 7. One is an interface server (IFS), which receives job requests from users and manages the jobs. Simulation jobs are sent from the IFS to simulation servers (SS) in the cloud. The SSs invoke a computation program (CP) on every node assigned to the simulation. The CPs perform numerical computation of the model in parallel, communicating among themselves.
There are three ways for users to submit simulation jobs to Flint K3. The first is to visit the K3 IFS in a web browser; users can upload models and configure simulation parameters to submit simulation jobs at the site. The second is to use the linkage between K3 and model databases in the open domain; there is a PHML model database at Physiome.jp (http://www.physiome.jp/modeldb/) and SBML model databases at several sites such as BioModels at EMBL-EBI (http://www.ebi.ac.uk/biomodels-main/). Users can provide a model ID defined in each database to the K3 IFS, which then accesses the database and downloads the model directly. Once such linkages between databases and the simulation server are built, users can easily check the dynamics of a model held in a database. The third way is to use the REST APIs implemented on the IFS, so that applications such as PhysioDesigner can access K3 directly, for example, to submit a simulation job and get progress reports on simulations.
Fig. 8 A snapshot of FlintK3
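As an illustration of the third route, a client could compose a job request to the IFS along the following lines. This is a hypothetical sketch: the base URL, the `/jobs` endpoint, and all field names below are invented for illustration and are not the documented K3 REST API.

```python
import json
from urllib import request

class FlintK3Client:
    """Minimal sketch of a client for the K3 interface server (IFS).

    NOTE: the endpoint path and payload schema are hypothetical."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def build_job_request(self, model_id, database="physiome.jp", duration=10.0):
        # Refer to a model by the ID it has in a public model database,
        # instead of uploading the model file itself.
        payload = {
            "model_id": model_id,
            "database": database,
            "simulation": {"duration": duration},
        }
        url = self.base_url + "/jobs"  # hypothetical endpoint
        return url, json.dumps(payload).encode("utf-8")

    def submit(self, model_id, **kwargs):
        url, body = self.build_job_request(model_id, **kwargs)
        req = request.Request(url, data=body,
                              headers={"Content-Type": "application/json"})
        return request.urlopen(req)  # would return the job descriptor

client = FlintK3Client("http://k3.example.org/api")
url, body = client.build_job_request("PHML-0001")
print(url)  # http://k3.example.org/api/jobs
```

A desktop tool such as PhysioDesigner would call such an endpoint to submit a job and then poll a companion status resource for the progress reports mentioned above.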
#### 3.6 Porting Flint to The K computer
We have ported Flint to the K computer (http://www.riken.go.jp/en/research/environment/kcomputer/) in order to take advantage of its computational power for simulating large-scale models. Combining inter-node parallel computation with better scheduling, Flint now solves both ODEs and DDEs (delay differential equations) faster, and it has become possible to finish a simulation of a large model, such as an aggregate of heart cells, within a feasible time span. The technique developed in this effort is based on MPI, so it is portable to other recent HPC architectures; i.e., it can be applied when porting Flint to any other HPC facility.
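To illustrate why DDEs are the harder case, the toy sketch below integrates the delay equation x'(t) = -x(t - τ) with explicit Euler: the solver must retain the past trajectory to evaluate the delayed term, and it is exactly this history that must be scheduled and exchanged between nodes in a parallel run. This is an illustrative toy of mine, unrelated to Flint's actual implementation.

```python
def integrate_dde(tau, t_end, dt, history=1.0):
    """Explicit Euler for x'(t) = -x(t - tau), with x(t) = history for t <= 0.

    The delayed term is looked up in the stored trajectory, which is why a
    DDE solver must keep (and, in a parallel run, exchange) past states."""
    n_delay = int(round(tau / dt))
    xs = [history]
    for step in range(int(round(t_end / dt))):
        # x(t - tau); fall back to the constant initial history for t - tau <= 0
        delayed = xs[step - n_delay] if step >= n_delay else history
        xs.append(xs[-1] - dt * delayed)
    return xs

xs = integrate_dde(tau=1.0, t_end=5.0, dt=0.001)
# On the first two unit intervals the exact solution gives x(1) = 0 and
# x(2) = -0.5; the Euler trajectory reproduces both to within O(dt).
print(xs[1000], xs[2000])
```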
#### 3.7 Development of a web and community based system for pathway publisher: iPathways+
The Open Biology Unit and SBI have jointly developed Payao, which aims to enable a community to work on the same models of cellular signal transduction simultaneously, insert tags into specific parts of a model, exchange comments, record the discussions, and eventually update the models accurately and concurrently.
Focusing on the feature of publishing models, we have reengineered Payao based on JavaScript and started a sister service called iPathways+ at http://ipathways.org/plus/. iPathways+ does not provide interactive editing functions; instead, as a new feature, we added a function that enables users to embed a map in other websites, much like Google Maps. This feature has great potential to expand collaborations with other database sites. Currently, the iPathways+ alpha version has been released. In addition, the linkage between iPathways+ and existing social network services such as Facebook, Twitter, and Google+ was enhanced, so that users can share models in iPathways+ across communities formed on those SNSs.
The development of iPathways+ has been promoted jointly by OIST and SBI.
Fig. 9 Snapshots of iPathways+
#### 3.8 High-precision in silico prediction approach for network pharmacology
Increased availability of bioinformatics resources is creating opportunities for the application of systems pharmacology to predict drug effects and toxicity arising from multi-target interactions. Together with specialized bio- and chemo-informatics data, technologies including molecular interaction network description, structure-based drug design, high-throughput screening methods and statistical analysis help investigate the polypharmacology of a given drug or candidate. To achieve a comprehensive assessment of pharmacological effects in the early stages of drug discovery, we have been developing a high-precision in silico network-based screening approach for rapidly predicting the binding interactions between a given compound (e.g. a lead or drug) and proteins involved in a complex molecular network (e.g. signaling networks). The novelty of the approach lies in the following points: 1) network-based screening, 2) incorporation of multiple molecular docking tools and SAR (structure–activity relationships), 3) application of machine learning systems and 4) open-type innovation.
The machine learning systems that we have built show great potential for improving docking simulation accuracy. Results from a series of validations and a case study show that our method achieves competitive performance compared with the state of the art (Fig. 10), as well as an adequate capability of identifying either primary or off-targets of numerous kinase inhibitors (Fig. 11). This in silico prediction approach has been implemented and deployed on the OIST HPC, together with a graphical interface for users who are not only bioinformaticians but also drug developers who may not be familiar with complex software operation.
Fig. 10. Comparison of the prediction accuracy using different docking approaches.
Values are the correlations between the calculated scores and the corresponding experimental binding affinities. Black bars are the results using the default scoring functions equipped with the docking tools. Gray bars are those rescored by external scoring functions (i.e. X-Score and RF-Score) after docking. Red bars are the performances of the machine learning systems A + B we developed using the PDBbind version 2007 and 2012 datasets, respectively.
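The rescoring idea, greatly simplified, amounts to fitting a model that maps docking-derived features to experimental affinities and checking whether the correlation improves. The sketch below uses ordinary least squares on synthetic data purely for illustration; the actual machine learning systems A + B are not described in detail in this report.

```python
import numpy as np

# Synthetic stand-in data: "experimental" affinities plus two noisy
# docking-derived features. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 200
affinity = rng.uniform(3.0, 11.0, n)                  # e.g. pKd values
raw_score = 0.4 * affinity + rng.normal(0.0, 1.0, n)  # noisy docking score
feature2 = 0.5 * affinity + rng.normal(0.0, 1.0, n)   # e.g. a contact descriptor

# Train a linear rescorer mapping the features to the affinities.
X = np.column_stack([raw_score, feature2, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, affinity, rcond=None)
rescored = X @ coef

# Compare correlations with the experimental affinities (cf. Fig. 10).
r_raw = np.corrcoef(raw_score, affinity)[0, 1]
r_rescored = np.corrcoef(rescored, affinity)[0, 1]
```

In-sample, the fitted combination can only improve on the raw score's correlation; the interesting question in practice is whether the gain survives on held-out complexes, which is what the PDBbind validations in Fig. 10 assess.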
Fig. 11 Selectivity scores of the 33 kinase inhibitors against 139 different kinases.
A comparison is conducted between the screening approach we developed (blue bars) and the referenced bioassay results [30] (red bars). The predicted selectivity score is calculated as "S = (number of kinases with docking score > 5.52) / (number of kinases tested)", whereas the experimental selectivity score is "S = (number of kinases with binding affinity Kd < 3 µM) / (number of kinases tested)". A compound with a lower selectivity score actively interacts with only a small number of target proteins, implying a lower potential for off-target effects.
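The selectivity score in the caption can be computed directly from its definition; the sketch below implements the experimental variant (the Kd values are invented for illustration):

```python
def selectivity_score(kd_values_uM, threshold_uM=3.0):
    """Experimental selectivity score from the caption:
    S = (number of kinases with Kd < threshold) / (number of kinases tested).
    """
    hits = sum(1 for kd in kd_values_uM if kd < threshold_uM)
    return hits / len(kd_values_uM)

# A promiscuous inhibitor binds many kinases tightly -> high S;
# a selective one binds few -> low S. (Made-up Kd values in micromolar.)
promiscuous = [0.1, 0.5, 1.2, 2.0, 0.3, 8.0]
selective = [0.1, 40.0, 100.0, 55.0, 70.0, 90.0]
```

The predicted variant is identical in form, with `kd < threshold` replaced by `docking_score > 5.52`.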
### 4. Publications
#### 4.1 Journals
1. Carl-Fredrik Tiger, Falko Krause, Gunnar Cedersund, Robert Palmér, Edda Klipp, Stefan Hohmann, Hiroaki Kitano and Marcus Krantz*. A framework for mapping, visualisation and automatic model creation of signal-transduction networks. Molecular Systems Biology. 8: 578, April 24, 2012.
2. Martijn P. van Iersel*, Alice C. Villeger, Tobias Czauderna, Sarah E. Boyd, Frank T. Bergmann, Augustin Luna, Emek Demir, Anatoly Sorokin, Ugur Dogrusoz, Yukiko Matsuoka, Akira Funahashi, Mirit I. Aladjem, Huaiyu Mi, Stuart L. Moodie, Hiroaki Kitano, Nicolas Le Novere, and Falk Schreiber. Software support for SBGN maps: SBGN-ML and LibSBGN. Bioinformatics. 28 (15): 2016-21, 2012. doi: 10.1093/bioinformatics/bts270, published online May 10, 2012.
3. Shoemaker, J. E*., Lopes, T. J., Ghosh, S., Matsuoka, Y., Kawaoka, Y., Kitano, H. CTen: a web-based platform for identifying enriched cell types from heterogeneous microarray data. BMC Genomics, 13 (1): 460, 2012.
4. Shoemaker, J.; Fukuyama, S.; Eisfeld, A. J.; Muramoto, Y.; Watanabe, S.; Watanabe, T.; Matsuoka, Y.; Kitano, H.*; Kawaoka, Y. Integrated network analysis reveals a novel role for the cell cycle in 2009 pandemic influenza virus-induced inflammation in macaque lungs. BMC Systems Biology, 6:117, 2012.
5. Martin H. Schaefer; Tiago J. S. Lopes, Nancy Mah, Jason E. Shoemaker, Yukiko Matsuoka, Jean-Fred Fontaine, Caroline Louis-Jeune, Amie J. Eisfeld, Gabriele Neumann, Carol Perez-Iratxeta, Yoshihiro Kawaoka, Hiroaki Kitano, Miguel A. Andrade-Navarro. Adding Protein Context to the Human Protein-Protein Interaction Network to Reveal Meaningful Interactions. PLOS Computational Biology. 9, 1, 2013.
6. Koji Makanae; Reiko Kintaka; Takashi Makino; Hiroaki Kitano; and Hisao Moriya. Identification of dosage-sensitive genes in Saccharomyces cerevisiae using the genetic tug-of-war method. Genome Research. 23, 300-311, 2013.
#### 4.2 Books and other one-time publications
1. Kitano, H*. Cancer Systems Biology: A Robustness-Based Approach. HANDBOOK OF SYSTEMS BIOLOGY CONCEPTS AND INSIGHTS, Marian Walhout, et al.(ed.), 469-479, Academic Press, Dec.4, 2012.
2. Samik Ghosh*, Yukiko Matsuoka, Yoshiyuki Asai, Hiroaki Kitano, Anshu Bhardwaj*, Vinod Scaria, Rohit Vashisht, Anup Shah, Anupam Kumar Mondal, Priti Vishnoi, Kumari Sonal, Akanksha Jain, Priyanka Priyadarshini, Kausik Bhattacharyya, Vikas Kumar, Anurag Passi, Pratibha Sharma, Samir Brahmachari. Software Platform for Metabolic Network Reconstruction of Mycobacterium tuberculosis. Systems Biology of Tuberculosis, Johnjoe McFadden, et al.(ed.), pp. 21-35, Springer, Dec. 6, 2012. (ISBN 978-1-4614-4965-2)
3. Kitano, H*. Mathematics of cancer diversity. Jikken Igaku (Experimental Medicine), January 2013 issue, 31, 1, 45-52, Jan. 1, 2013. (in Japanese)
#### 4.3 Oral and Poster Presentations
1. Yoshiyuki Asai, Takeshi Abe, Tatsuhide Okamoto, Hideki Oka, Taishin Nomura, Yoshihisa Kurachi, Hiroaki Kitano (2012) Multilevel modeling on PhysioDesigner. The 51st Annual Conference of the Japanese Society for Medical and Biological Engineering, May 10-12, 2012 (Fukuoka)
2. T. Yoshikawa, Tomohiro Okuyama, Masao Okita, Yoshiyuki Asai, Takeshi Abe, Taishin Nomura, Tetsuya Yagi, Ken-ichi Hagihara. A study of parallel simulation using OpenMP exploiting the similarity of equations in biophysical models. Proceedings of the 10th Symposium on Advanced Computing Systems and Infrastructures (SACSIS 2012), May 16, 2012. (invited talk, in Japanese)
3. Kitano, H. Will engineering play the lead role in drug discovery in 2030? BioMelbourne breakfast, Cinema 1, Melbourne, Australia, June 5, 2012. (invited)
4. Kitano, H. Software Platform for Systems Drug Discovery. Bio-IT World Asia Conference 2012, Marina Bay Sands, Singapore, June 8, 2012. (invited)
5. Yoshiyuki Asai, Hiroaki Kitano (2012) Multilevel modeling of physiological systems and nervous systems using PhysioDesigner. IEICE Tech. Rep., vol. 112, no. 108, NC2012-12, pp. 93-95, Okinawa, Jun 2012. (invited)
6. Kitano, H. Systems Drug Design and Garuda Software Platform. Network Biology SIG: On the Analysis and Visualization of Network in Biology (NetBio SIG), ISMB 2012, Long Beach Convention Center, USA, July 13, 2012. (invited)
7. Kitano, H. HD-Physiology, Garuda, and Computational drug side-effects prediction. Talk at FDA, FDA, Silver Spring, USA, July 17, 2012. (invited)
8. Yoshiyuki Asai. PhysioDesigner: Current Status. FY2012 plenary meeting of the MEXT KAKENHI Innovative Areas HD-Physiology project, July 22-24, 2012 (Fukuoka)
9. Yoshiyuki Asai, Takeshi Abe, Masao Okita, Tomohiro Okuyama, Nobukazu Yoshioka, Shigetoshi Yokoyama, Masaru Nagaku, Ken-ichi Hagihara, Hiroaki Kitano. (2012) Multilevel modeling of Physiological Systems and Simulation Platform: PhysioDesigner, Flint and Flint K3 service. Conf. Proc. 12th IEEE/IPSJ International Symposium on Applications and the Internet (SAINT 2012), July 27, 2012. pp. 215-219
10. Kitano, H. Biological Robustness. Seminar at University of Toronto, University of Toronto, Canada, Aug. 20, 2012. (invited)
11. Kitano, H. Systems Biology powered by Artificial Intelligence. PRICAI-2012: 12th Pacific Rim International Conference on Artificial Intelligence (via skype), Pullman Hotel, Kuching, Malaysia, Sep. 7, 2012. (invited)
12. Yoshiyuki Asai, Takeshi Abe, Hideki Oka, Masao Okita, Ken-ichi Hagihara, Yukiko Matsuoka, Samik Ghosh, Yoshihisa Kurachi, Hiroaki Kitano (2012) A platform for multilevel modeling of physiological systems: Template and instance framework for large scale models. Biomedical Engineering Symposium 2012, September 7, 2012 (Osaka)
13. Hideki Oka, Ken-ichiro Iwasaki, Yoshiyuki Asai, Taishin Nomura, Yoko Yamaguchi. EEG Analysis in PhysioDesigner. NeuroInformatics 2012. September 10-12, 2012, Munich, Germany. p. 128.
14. Yoshiyuki Asai, Alessandro E.P. Villa. Transmission of distributed temporal information on diverging/converging neural network and its implementation on a multilevel modeling platform. International conference on artificial neural networks. September 11-14, 2012, Lausanne, Switzerland.
15. Kitano, H. VPH in industrial research. VPH 2012, Savoy Place, London, UK, Sep. 20, 2012. (invited)
16. Kitano, H. Overview of systems drug discovery and the Garuda platform. BioJapan 2012, Pacifico Yokohama, Oct. 11, 2012.
17. Kitano, H. Systems biomedicine and their computational platforms. FOSBE 2012, Institute for Advanced Biosciences, Keio University, Oct. 21, 2012. (invited keynote)
18. Yoshiyuki Asai, Hideki Oka, Alessandro E.P. Villa, Hiroaki Kitano. Multilevel modeling platform and its application for modeling in neuroscience. 2012 International Symposium on Nonlinear Theory and its Applications (NOLTA). October 22-26, 2012, Palma, Majorca, Spain.
19. Yoshiyuki Asai, Takeshi Abe, Hideki Oka, Yoshiyuki Kido, Li Li, Taishin Nomura, Yoshihisa Kurachi, Hiroaki Kitano. Multilevel Modeling and Simulation of Physiological Systems on PhysioDesigner. INCF Japan Node International Symposium. ADVANCES IN NEUROINFORMATICS 2012. October 30, 2012
20. Kitano, H. Introduction to systems biology. The 6th KAST Systems Biology Lecture Course, Kanagawa Science Park, Nov. 2, 2012. (invited)
21. Kitano, H. Systems Toxicology. An invited talk at DSTO, Defence Science and Technology Organisation (DSTO), Department of Defence, Australian Government, Melbourne, Australia, Dec. 4, 2012. (invited)
22. Kitano, H. Data-Driven Network-Based Biomarker Discovery. Biotechnology Seminar at the 35th Annual Meeting of the Molecular Biology Society of Japan, Fukuoka International Congress Center, Dec. 12, 2012. (invited)
23. Yoshiyuki Asai. PhysioDesigner 1.0 beta3. FY2012 plenary meeting of the MEXT KAKENHI Innovative Areas HD-Physiology project, January 16-18, 2013 (Fukuoka)
24. Yoshiyuki Asai, Alessandro E.P. Villa. A study of neural activity transmission in feedforward neural networks based on spike patterns. IEICE Technical Report, CAS2012-86, pp. 115-119 (in Japanese)
25. Kitano, H. Expectations for the K computer in systems biology. The 4th Industry-Academia Collaboration Seminar on the K Supercomputer and Drug Discovery/Medicine - HPCI Program for Computational Life Science -, Fukuracia Tokyo Station, Jan. 25, 2013. (invited)
26. Yoshiyuki Asai. Demonstration of PhysioDesigner 1.0 beta3 and Flint with Garuda. Garuda Nine Workshop. Okinawa, Japan. Feb. 19-22, 2013.
27. Yoshiyuki Asai, Hiroaki Wagatsuma. J-Node PFs + Garuda, PhysioDesigner and Flint. Dynamic Brain Platform Meeting. February 27, 2013 (Kitakyushu)
28. Kitano, H. Systems Drug Design and Software Platform for drug efficacy and adverse effects prediction. FDA Workshop: Systems Pharmacology for the Prediction of Tyrosine Kinase Inhibitor non-QT Cardiotoxicity, FDA White Oak Campus, USA, Feb. 28, 2013. (invited)
29. Yoshiyuki Asai, Hiroaki Wagatsuma. Garuda, PhysioDesigner and Flint with J-Node PFs. NIJC meeting. March 6, 2013 (Wako)
30. Tomohiro Okuyama, Masao Okita, Takeshi Abe, Yoshiyuki Asai, Taishin Nomura, Hiroaki Kitano, and Kenichi Hagihara. Accelerating General and Heterogeneous Biophysical Simulations Using the GPU. Poster at the 4th GPU Technology Conference (GTC 2013), San Jose, CA, USA, March 2013.
### 5. Intellectual Property Rights and Other Specific Achievements
1. Patent: Japanese Patent Application No. 2012-134261
2. Patent: Japanese Patent Application No. 2012-245326
3. US Provisional Application No. 61671049
### 6. Meetings and Events
#### 6.1 HD-Physiology 6th plenary meeting and young researchers workshop (externally organized)
• Date: January 16-17, 2012
• Venue: OIST Campus Seminar Room B250/C209
• Co-organizers: Osaka University and OIST (sponsored by the KAKENHI MEXT Innovative Areas HD-Physiology Project)
#### 6.2 Garuda Nine Workshop
• Date: February 19-22, 2013
• Venue: OIST Seaside House
• Co-organizers: OIST and SBI | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2701415419578552, "perplexity": 16725.472117502133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202418.22/warc/CC-MAIN-20200929154729-20200929184729-00748.warc.gz"} |
http://worldwidescience.org/topicpages/m/magnetoencephalography+multipolar+modeling.html | #### Sample records for magnetoencephalography multipolar modeling
1. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)
Energy Technology Data Exchange (ETDEWEB)
Mosher, J. C. (John C.); Baillet, S. (Sylvain); Jerbi, K. (Karim); Leahy, R. M. (Richard M.)
2001-01-01
We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.
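The core operation of a signal-subspace scan such as RAP-MUSIC is the subspace correlation between a candidate source's gain matrix and the data's signal subspace. A minimal numerical sketch of that metric, using toy topographies rather than a real MEG forward model:

```python
import numpy as np

def subspace_correlation(gain, signal_subspace):
    """Maximum subspace correlation between the column span of a source
    gain matrix and the measured signal subspace -- the quantity scanned
    in (RAP-)MUSIC. Both inputs have one row per sensor.
    """
    Ug, _ = np.linalg.qr(gain)                      # orthonormalize the gain
    Us, _ = np.linalg.qr(signal_subspace)
    s = np.linalg.svd(Us.T @ Ug, compute_uv=False)  # cosines of principal angles
    return s[0]

# Toy check: a rank-2 signal subspace built from two fixed "topographies".
rng = np.random.default_rng(1)
a1, a2 = rng.normal(size=(2, 32, 1))
data = a1 @ rng.normal(size=(1, 100)) + a2 @ rng.normal(size=(1, 100))
U, _, _ = np.linalg.svd(data, full_matrices=False)
Us = U[:, :2]                                       # estimated signal subspace

corr_in = subspace_correlation(a1, Us)              # source inside the subspace
corr_out = subspace_correlation(rng.normal(size=(32, 1)), Us)
```

A topography that belongs to the signal subspace correlates near 1, while an unrelated one does not; the recursive "applied and projected" step then removes each found source's contribution before rescanning.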
2. Magnetoencephalography
Energy Technology Data Exchange (ETDEWEB)
Schwartz, Erin Simon [Children's Hospital of Philadelphia, Lurie Family Foundations MEG Imaging Center, Department of Radiology, Philadelphia, PA (United States); Children's Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States); Edgar, J.C.; Gaetz, William C.; Roberts, Timothy P.L. [Children's Hospital of Philadelphia, Lurie Family Foundations MEG Imaging Center, Department of Radiology, Philadelphia, PA (United States)
2010-01-15
Although magnetoencephalography (MEG) may not be familiar to many pediatric radiologists, it is an increasingly available neuroimaging technique both for evaluating normal and abnormal intracranial neural activity and for functional mapping. By providing spatial, temporal, and time-frequency spectral information, MEG affords patients with epilepsy, intracranial neoplasia, and vascular malformations an opportunity for a sensitive and accurate non-invasive preoperative evaluation. This technique can optimize selection of surgical candidates as well as increase confidence in preoperative counseling and prognosis. Research applications that appear promising for near-future clinical translation include the evaluation of children with autism spectrum disorder, traumatic brain injury, and schizophrenia. (orig.)
3. Magnetoencephalography
International Nuclear Information System (INIS)
Although magnetoencephalography (MEG) may not be familiar to many pediatric radiologists, it is an increasingly available neuroimaging technique both for evaluating normal and abnormal intracranial neural activity and for functional mapping. By providing spatial, temporal, and time-frequency spectral information, MEG affords patients with epilepsy, intracranial neoplasia, and vascular malformations an opportunity for a sensitive and accurate non-invasive preoperative evaluation. This technique can optimize selection of surgical candidates as well as increase confidence in preoperative counseling and prognosis. Research applications that appear promising for near-future clinical translation include the evaluation of children with autism spectrum disorder, traumatic brain injury, and schizophrenia. (orig.)
4. On MEG forward modelling using multipolar expansions
International Nuclear Information System (INIS)
Magnetoencephalography (MEG) is a non-invasive functional imaging modality based on the measurement of the external magnetic field produced by neural current sources within the brain. The reconstruction of the underlying sources is a severely ill-posed inverse problem typically tackled using either low-dimensional parametric source models, such as an equivalent current dipole (ECD), or high-dimensional minimum-norm imaging techniques. The inability of the ECD to properly represent non-focal sources and the over-smoothed solutions obtained by minimum-norm methods underline the need for an alternative approach. Multipole expansion methods have the advantages of the parametric approach while at the same time adequately describing sources with significant spatial extent and arbitrary activation patterns. In this paper we first present a comparative review of spherical harmonic and Cartesian multipole expansion methods that can be used in MEG. The equations are given for the general case of arbitrary conductors and realistic sensor configurations and also for the special cases of spherically symmetric conductors and radially oriented sensors. We then report the results of computer simulations used to investigate the ability of a first-order multipole model (dipole and quadrupole) to represent spatially extended sources, which are simulated by 2D and 3D clusters of elemental dipoles. The overall field of a cluster is analysed using singular value decomposition and compared to the unit fields of a multipole, centred in the middle of the cluster, using subspace correlation metrics. Our results demonstrate the superior utility of the multipolar source model over ECD models in providing source representations of extended regions of activity. (author)
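The cluster analysis described above can be mimicked numerically: build the forward field of each elemental dipole in a cluster as one column of a matrix, take its SVD, and check how much energy a single component captures. The geometry and magnitudes below are illustrative only, and a free-space magnetic point dipole stands in for a proper volume-conductor forward model:

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def dipole_field_z(r_sensor, r_source, m):
    """z-component of the field of a point magnetic dipole m at r_source."""
    d = r_sensor - r_source
    dist = np.linalg.norm(d)
    return MU0_4PI * (3.0 * d[2] * np.dot(m, d) / dist**5 - m[2] / dist**3)

# Illustrative sensor grid 12 cm above a tight 3D cluster of elemental dipoles.
xs = np.linspace(-0.1, 0.1, 6)
sensors = np.array([[x, y, 0.12] for x in xs for y in xs])
rng = np.random.default_rng(2)
cluster = 0.005 * rng.normal(size=(20, 3))   # ~5 mm spread around the origin
m = np.array([0.0, 0.0, 1e-8])               # common orientation and moment

# One column per elemental dipole: its field at every sensor.
fields = np.column_stack([
    [dipole_field_z(s, c, m) for s in sensors] for c in cluster
])
s_vals = np.linalg.svd(fields, compute_uv=False)
energy_rank1 = s_vals[0]**2 / np.sum(s_vals**2)  # energy in the first component
```

For a tight cluster the first singular component dominates, which is why a dipole plus low-order corrections can summarize an extended patch; as the cluster grows, the remaining components (captured by quadrupole and higher terms) become significant.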
5. Bremsstrahlung during $\alpha$-decay: quantum multipolar model
OpenAIRE
Maydanyuk, Sergei P.
2008-01-01
In this paper an improved multipolar model of bremsstrahlung accompanying $\alpha$-decay is presented. The angular formalism for calculating the matrix elements, a rather complicated component of the model, is stated in detail. A new definition of the angular (differential) probability of photon emission in $\alpha$-decay is proposed, where the direction of motion of the $\alpha$-particle outside the barrier (with its tunneling inside the barrier) is defined on the basis of a...
6. A wind-shell interaction model for multipolar planetary nebulae
CERN Document Server
Steffen, W; Esquivel, A; Garcia-Segura, G; Garcia-Diaz, Ma T; Lopez, J A; Magnor, M
2013-01-01
We explore the formation of multipolar structures in planetary and pre-planetary nebulae from the interaction of a fast post-AGB wind with a highly inhomogeneous and filamentary shell structure assumed to form during the final phase of the high density wind. The simulations were performed with a new hydrodynamics code integrated in the interactive framework of the astrophysical modeling package SHAPE. In contrast to conventional astrophysical hydrodynamics software, the new code does not require any programming intervention by the user for setting up or controlling the code. Visualization and analysis of the simulation data has been done in SHAPE without external software. The key conclusion from the simulations is that secondary lobes in planetary nebulae, such as Hubble 5 and K3-17, can be formed through the interaction of a fast low-density wind with a complex high density environment, such as a filamentary circumstellar shell. The more complicated alternative explanation of intermittent collimated outflow...
7. Error bounds in MEG (Magnetoencephalography) multipole localization
Energy Technology Data Exchange (ETDEWEB)
Jerbi, K. (Karim); Mosher, J. C. (John C.); Baillet, S. (Sylvain); Leahy, R. M. (Richard M.)
2001-01-01
Magnetoencephalography (MEG) is a non-invasive method that enables the measurement of the magnetic field produced by neural current sources within the human brain. Unfortunately, MEG source estimation is a severely ill-posed inverse problem. The two major approaches used to tackle this problem are 'imaging' and 'model-based' methods. The first class of methods relies on a tessellation of the cortex, assigning an elemental current source to each area element and solving the linear inverse problem. Accurate tessellations lead to a highly underdetermined problem, and regularized linear methods lead to very smooth current distributions. An alternative approach widely used is a parametric representation of the neural source. Such model-based methods include the classic equivalent current dipole (ECD) and its multiple current dipole extension [1]. The definition of such models has been based on the assumption that the underlying sources are focal and small in number. An alternative approach reviewed in [4], [5] is to extend the parametric source representations within the model-based framework to allow for distributed sources. The multipolar expansion of the magnetic field about the centroid of a distributed source readily offers an elegant parametric model, which collapses to a dipole model in the limiting case and includes higher order terms in the case of a spatially extended source. While multipolar expansions have been applied to magnetocardiography (MCG) source modeling [2], their use in MEG has been restricted to simplified models [7]. The physiological interpretation of these higher-order components is non-intuitive, therefore limiting their application in this community (cf. [8]). In this study we investigate both the applicability of dipolar and multipolar models to cortical patches, and the accuracy with which we can locate these sources. We use a combination of Monte Carlo analyses and Cramer-Rao lower bounds (CRLBs), paralleling the work in [3] for the ECD.
Results are presented for both point sources and cortical patches.
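For a Gaussian noise model, the Cramer-Rao lower bound used in such studies follows from the Fisher information F = JᵀJ/σ², where J is the Jacobian of the forward model with respect to the source parameters. A minimal numerical sketch for the position of a point magnetic dipole (illustrative free-space geometry, not the paper's conductor model):

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

def forward_field(pos, sensors, m=np.array([0.0, 0.0, 1e-8])):
    """z-component of the field of a point magnetic dipole m at pos."""
    out = np.empty(len(sensors))
    for i, s in enumerate(sensors):
        d = s - pos
        dist = np.linalg.norm(d)
        out[i] = MU0_4PI * (3.0 * d[2] * np.dot(m, d) / dist**5 - m[2] / dist**3)
    return out

def position_crlb(pos, sensors, sigma, h=1e-4):
    """CRLB on the dipole position: sqrt(diag(F^-1)) with F = J^T J / sigma^2."""
    J = np.empty((len(sensors), 3))
    for k in range(3):  # central finite-difference Jacobian w.r.t. position
        dp = np.zeros(3)
        dp[k] = h
        J[:, k] = (forward_field(pos + dp, sensors)
                   - forward_field(pos - dp, sensors)) / (2.0 * h)
    F = J.T @ J / sigma**2
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Illustrative planar sensor grid 12 cm above a slightly off-centre dipole.
xs = np.linspace(-0.1, 0.1, 6)
sensors = np.array([[x, y, 0.12] for x in xs for y in xs])
pos = np.array([0.01, 0.0, 0.0])
bound1 = position_crlb(pos, sensors, sigma=1e-13)
bound2 = position_crlb(pos, sensors, sigma=2e-13)  # bound scales with the noise
```

No unbiased estimator can localize the source more precisely than these bounds, which is why CRLBs complement Monte Carlo runs as a best-case yardstick.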
8. Force-balance model of suppression of multipolar division in cancer cells with extra centrosomes
Science.gov (United States)
Zhu, Jie
2013-03-01
Cancer cells often possess extra centrosomes which have the potential to cause cell death due to catastrophic multipolar division. Many cancer cells, however, are able to escape multipolar mitosis by clustering the extra centrosomes to form bipolar spindles. The mechanism of centrosome clustering is therefore of great interest to the development of anti-cancer drugs because the de-clustering of extra centrosomes provides an appealing way to eliminate cancer cells while keeping healthy cells intact. We present a physical model assuming 1) dynamic centrosomal microtubules interact with chromosomes by both pushing on chromosome arms and pulling along kinetochores; 2) these microtubules interact with force generators associated with actin/adhesion structures at the cell boundary; and 3) motors act on anti-parallel microtubules from different centrosomes. We find via computer simulations that chromosomes tend to aggregate near the cell center while centrosomes can be either clustered to form bipolar spindles or scattered to form multipolar spindles, depending on the strengths of relative forces, cell shape and adhesion geometry. The model predictions agree with data from cells plated on adhesive micropatterns and from biochemically or genetically perturbed cells. Furthermore, our model is able to explain various microtubule distributions in interphase cells on patterned substrates.
9. Multipolar electrostatics.
Science.gov (United States)
Cardamone, Salvatore; Hughes, Timothy J; Popelier, Paul L A
2014-06-14
Atomistic simulation of chemical systems is currently limited by the elementary description of electrostatics that atomic point-charges offer. Unfortunately, a model of one point-charge for each atom fails to capture the anisotropic nature of electronic features such as lone pairs or π-systems. Higher order electrostatic terms, such as those offered by a multipole moment expansion, naturally recover these important electronic features. The question remains as to why such a description has not yet been widely adopted by popular molecular mechanics force fields. There are two widely-held misconceptions about the more rigorous formalism of multipolar electrostatics: (1) Accuracy: the implementation of multipole moments, compared to point-charges, offers little to no advantage in terms of an accurate representation of a system's energetics, structure and dynamics. (2) Efficiency: atomistic simulation using multipole moments is computationally prohibitive compared to simulation using point-charges. Whilst the second of these may have found some basis when computational power was a limiting factor, the first has no theoretical grounding. In the current work, we disprove the two statements above and systematically demonstrate that multipole moments are not discredited by either. We hope that this perspective will help in catalysing the transition to more realistic electrostatic modelling, to be adopted by popular molecular simulation software. PMID:24741671
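The value of higher-order terms is easiest to see in the classic two-charge example: for a neutral charge pair the monopole (single point-charge) term vanishes, and the dipole term alone reproduces the far field. A small sketch in arbitrary units (Coulomb constant k = 1):

```python
import numpy as np

# A physical dipole: charges +q and -q separated by d along z.
q, d = 1.0, 0.01
r_plus = np.array([0.0, 0.0, d / 2.0])
r_minus = np.array([0.0, 0.0, -d / 2.0])
p = q * d                              # dipole moment p_z = sum(q_i * z_i)

def phi_exact(r):
    """Exact potential: sum of the two point-charge terms."""
    return q / np.linalg.norm(r - r_plus) - q / np.linalg.norm(r - r_minus)

def phi_dipole(r):
    """First non-vanishing multipole term: phi = p cos(theta) / r^2."""
    rn = np.linalg.norm(r)
    return p * (r[2] / rn) / rn**2

far = np.array([0.3, 0.0, 0.4])        # |r| >> d: the expansion is accurate
near = np.array([0.0, 0.0, 0.02])      # |r| ~ d: higher orders matter
err_far = abs(phi_exact(far) - phi_dipole(far)) / abs(phi_exact(far))
err_near = abs(phi_exact(near) - phi_dipole(near)) / abs(phi_exact(near))
```

A single point charge per site would miss this anisotropy entirely, which is the paper's argument for multipolar force fields.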
10. Hybrid MEG (Magnetoencephalography) source characterization by cortical remapping and imaging of parametric source models
Energy Technology Data Exchange (ETDEWEB)
Baillet, S. (Sylvain); Mosher, J. C. (John C.); Jerbi, K. (Karim); Leahy, R. M. (Richard M.)
2001-01-01
Reliable estimation of the local spatial extent of neural activity is a key to the quantitative analysis of MEG sources across subjects and conditions. In association with an understanding of the temporal dynamics among multiple areas, this would represent a major advance in electrophysiological source imaging. Parametric current dipole approaches to MEG (and EEG) source localization can rapidly generate a physical model of neural current generators using a limited number of parameters. However, physiological interpretation of these models is often difficult, especially in terms of the spatial extent of the true cortical activity. In new approaches using multipolar source models [3, 5], similar problems remain in the analysis of the higher-order source moments as parameters of cortical extent. Image-based approaches to the inverse problem provide a direct estimate of cortical current generators, but computationally expensive nonlinear methods are required to produce focal sources [1,4]. Recent efforts describe how a cortical patch can be grown until a best fit to the data is reached in the least-squares sense [6], but computational considerations necessitate that the growth be seeded in predefined regions of interest. In a previous study [2], a source obtained using a parametric model was remapped onto the cortex by growing a patch of cortical dipoles in the vicinity of the parametric source until the forward MEG or EEG fields of the parametric and cortical sources matched. The source models were dipoles and first-order multipoles. We propose to combine the parametric and imaging methods for MEG source characterization to take advantage of (i) the parsimonious and computationally efficient nature of parametric source localization methods and (ii) the anatomical and physiological consistency of imaging techniques that use relevant a priori information. 
By performing the cortical remapping imaging step by matching the multipole expansions of the original parametric source and the equivalent cortical patch, rather than their forward fields, we achieve significant reductions in computational complexity.
11. Magnetoencephalography recording and analysis.
Science.gov (United States)
Velmurugan, Jayabal; Sinha, Sanjib; Satishchandra, Parthasarathy
2014-03-01
Magnetoencephalography (MEG) non-invasively measures the magnetic field generated due to the excitatory postsynaptic electrical activity of the apical dendritic pyramidal cells. Such a tiny magnetic field is measured with the help of the biomagnetometer sensors coupled with the Super Conducting Quantum Interference Device (SQUID) inside the magnetically shielded room (MSR). The subjects are usually screened for the presence of ferromagnetic materials, and then the head position indicator coils, electroencephalography (EEG) electrodes (if measured simultaneously), and fiducials are digitized using a 3D digitizer, which aids in movement correction and also in transferring the MEG data from the head coordinates to the device and voxel coordinates, thereby enabling more accurate co-registration and localization. MEG data pre-processing involves filtering the data for environmental and subject interferences, artefact identification, and rejection. Magnetic resonance Imaging (MRI) is processed for correction and identifying fiducials. After choosing and computing for the appropriate head models (spherical or realistic; boundary/finite element model), the interictal/ictal epileptiform discharges are selected and modeled by an appropriate source modeling technique (clinically and commonly used - single equivalent current dipole - ECD model). The equivalent current dipole (ECD) source localization of the modeled interictal epileptiform discharge (IED) is considered physiologically valid or acceptable based on waveform morphology, isofield pattern, and dipole parameters (localization, dipole moment, confidence volume, goodness of fit). Thus, MEG source localization can aid clinicians in sublobar localization, lateralization, and grid placement, by evoking the irritative/seizure onset zone. It also accurately localizes the eloquent cortex-like visual, language areas. 
MEG also aids in diagnosing and delineating multiple novel findings in other neuropsychiatric disorders, including Alzheimer's disease, Parkinsonism, traumatic brain injury, autistic disorders, and so on. PMID:24791077
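Of the dipole-acceptance parameters listed above, goodness of fit has the simplest definition and can be sketched directly (the field topographies below are synthetic, not real MEG data):

```python
import numpy as np

def goodness_of_fit(measured, modeled):
    """Goodness of fit used to judge an ECD solution:
    GOF = 1 - ||b_measured - b_model||^2 / ||b_measured||^2.
    """
    resid = measured - modeled
    return 1.0 - np.sum(resid**2) / np.sum(measured**2)

# A model explaining most of the measured topography scores near 1;
# an unrelated topography scores near (or below) 0.
rng = np.random.default_rng(3)
true_field = rng.normal(size=64)                    # "measured" topography
good_model = true_field + 0.05 * rng.normal(size=64)
bad_model = rng.normal(size=64)
```

In clinical practice the GOF threshold is combined with the other criteria mentioned (confidence volume, dipole moment, isofield pattern) before a localization is accepted.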
12. Libration driven multipolar instabilities
CERN Document Server
Cébron, David; Herreman, Wietze
2014-01-01
We consider rotating flows in non-axisymmetric enclosures that are driven by libration, i.e. by a small periodic modulation of the rotation rate. Thanks to its simplicity, this model is relevant to various contexts, from industrial containers (with small oscillations of the rotation rate) to fluid layers of terrestrial planets (with length-of-day variations). Assuming a multipolar $n$-fold boundary deformation, we first obtain the two-dimensional basic flow. We then perform a short-wavelength local stability analysis of the basic flow, showing that an instability may occur in three dimensions. We christen it the Libration Driven Multipolar Instability (LDMI). The growth rates of the LDMI are computed by a Floquet analysis in a systematic way, and compared to analytical expressions obtained by perturbation methods. We then focus on the simplest geometry allowing the LDMI, a librating deformed cylinder. To take into account viscous and confinement effects, we perform a global stability analysis, which shows that...
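A Floquet analysis of the kind used to compute such growth rates can be illustrated on a scalar toy problem, the Mathieu equation: integrate the fundamental solution matrix over one forcing period and read the growth rate off the eigenvalues of the monodromy matrix. This is a generic sketch, not the paper's stability calculation:

```python
import numpy as np

def monodromy_matrix(a, eps, n_steps=2000):
    """Monodromy matrix of x'' + (a + eps*cos t) x = 0 over one period T = 2*pi.

    Each basis vector of the state y = [x, x'] is propagated with RK4;
    the columns of the result form Phi(T) with Phi(0) = I.
    """
    T = 2.0 * np.pi
    h = T / n_steps

    def rhs(t, y):
        return np.array([y[1], -(a + eps * np.cos(t)) * y[0]])

    cols = []
    for col in range(2):
        y, t = np.eye(2)[:, col].copy(), 0.0
        for _ in range(n_steps):
            k1 = rhs(t, y)
            k2 = rhs(t + h / 2, y + h / 2 * k1)
            k3 = rhs(t + h / 2, y + h / 2 * k2)
            k4 = rhs(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        cols.append(y)
    return np.column_stack(cols)

def growth_rate(a, eps):
    """Floquet growth rate: log(max |multiplier|) / T."""
    mults = np.linalg.eigvals(monodromy_matrix(a, eps))
    return np.log(np.max(np.abs(mults))) / (2.0 * np.pi)

stable = growth_rate(1.5, 0.1)     # away from resonance tongues: ~0
unstable = growth_rate(0.25, 0.2)  # centre of the a = 1/4 tongue: > 0
```

The same machinery, applied to the linearized perturbation equations of the librating basic flow, yields the LDMI growth rates in a systematic way.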
13. Fully Complex Magnetoencephalography
OpenAIRE
2005-01-01
Complex numbers appear naturally in biology whenever a system can be analyzed in the frequency domain, such as physiological data from magnetoencephalography (MEG). For example, the MEG steady state response to a modulated auditory stimulus generates a complex magnetic field for each MEG channel, equal to the Fourier transform at the stimulus modulation frequency. The complex nature of these data sets, often not taken advantage of, is fully exploited here with new methods. W...
14. Magnetoencephalography in pediatric epilepsy
OpenAIRE
Hunmin Kim; Chun Kee Chung; Hee Hwang
2013-01-01
Magnetoencephalography (MEG) records the magnetic field generated by the electrical activity of cortical neurons. The signal is not distorted or attenuated, and it is a contactless recording that can be performed comfortably even for longer than an hour. It has excellent temporal resolution and decent spatial resolution, especially when it is combined with the patient’s own brain magnetic resonance imaging (magnetic source imaging). Data of MEG and electroencephalography are not mutually exclusive and it is recorde...
15. A Study on Decoding Models for the Reconstruction of Hand Trajectories from the Human Magnetoencephalography
OpenAIRE
Hong Gi Yeom; Wonjun Hong; Da-Yoon Kang; Chun Kee Chung; June Sic Kim; Sung-Phil Kim
2014-01-01
Decoding neural signals into control outputs has been a key to the development of brain-computer interfaces (BCIs). While many studies have identified neural correlates of kinematics or applied advanced machine learning algorithms to improve decoding performance, relatively less attention has been paid to optimal design of decoding models. For generating continuous movements from neural activity, design of decoding models should address how to incorporate movement dynamics into models and how...
16. Searching for the best model: ambiguity of inverse solutions and application to fetal magnetoencephalography
International Nuclear Information System (INIS)
Fetal brain signals produce weak magnetic fields at the maternal abdominal surface. In the presence of much stronger interference these weak fetal fields are often nearly indistinguishable from noise. Our initial objective was to validate these weak fetal brain fields by demonstrating that they agree with the electromagnetic model of the fetal brain. The fetal brain model is often not known and we have attempted to fit the data to not only the brain source position, orientation and magnitude, but also to the brain model position. Simulation tests of this extended model search on fetal MEG recordings using dipole fit and beamformers revealed a region of ambiguity. The region of ambiguity consists of a family of models which are not distinguishable in the presence of noise, and which exhibit large and comparable SNR when beamformers are used. Unlike the uncertainty of a dipole fit with known model plus noise, this extended ambiguity region yields nearly identical forward solutions, and is only weakly dependent on noise. The ambiguity region is located in a plane defined by the source position, orientation, and the true model centre, and will have a diameter approximately 0.67 of the modelled fetal head diameter. Existence of the ambiguity region allows us to only state that the fetal brain fields do not contradict the electromagnetic model; we can associate them with a family of models belonging to the ambiguity region, but not with any specific model. In addition to providing a level of confidence in the fetal brain signals, the ambiguity region knowledge in combination with beamformers allows detection of undistorted temporal waveforms with improved signal-to-noise ratio, even though the source position cannot be uniquely determined.
17. Searching for the best model: ambiguity of inverse solutions and application to fetal magnetoencephalography
Energy Technology Data Exchange (ETDEWEB)
Vrba, J [VSM MedTech Ltd, Coquitlam, BC, V3K 7B2 (Canada); Robinson, S E [VSM MedTech Ltd, Coquitlam, BC, V3K 7B2 (Canada); McCubbin, J [Department of Obstetrics and Gynecology, University of Arkansas for Medical Sciences, Little Rock, AR 72205 (United States); Lowery, C L [Department of Obstetrics and Gynecology, University of Arkansas for Medical Sciences, Little Rock, AR 72205 (United States); Eswaran, H [Department of Obstetrics and Gynecology, University of Arkansas for Medical Sciences, Little Rock, AR 72205 (United States); Murphy, P [Department of Obstetrics and Gynecology, University of Arkansas for Medical Sciences, Little Rock, AR 72205 (United States); Preissl, H [Department of Obstetrics and Gynecology, University of Arkansas for Medical Sciences, Little Rock, AR 72205 (United States)
2007-02-07
Fetal brain signals produce weak magnetic fields at the maternal abdominal surface. In the presence of much stronger interference these weak fetal fields are often nearly indistinguishable from noise. Our initial objective was to validate these weak fetal brain fields by demonstrating that they agree with the electromagnetic model of the fetal brain. The fetal brain model is often not known and we have attempted to fit the data to not only the brain source position, orientation and magnitude, but also to the brain model position. Simulation tests of this extended model search on fetal MEG recordings using dipole fit and beamformers revealed a region of ambiguity. The region of ambiguity consists of a family of models which are not distinguishable in the presence of noise, and which exhibit large and comparable SNR when beamformers are used. Unlike the uncertainty of a dipole fit with known model plus noise, this extended ambiguity region yields nearly identical forward solutions, and is only weakly dependent on noise. The ambiguity region is located in a plane defined by the source position, orientation, and the true model centre, and will have a diameter approximately 0.67 of the modelled fetal head diameter. Existence of the ambiguity region allows us to only state that the fetal brain fields do not contradict the electromagnetic model; we can associate them with a family of models belonging to the ambiguity region, but not with any specific model. In addition to providing a level of confidence in the fetal brain signals, the ambiguity region knowledge in combination with beamformers allows detection of undistorted temporal waveforms with improved signal-to-noise ratio, even though the source position cannot be uniquely determined.
18. Accuracy and tractability of a kriging model of intramolecular polarizable multipolar electrostatics and its application to histidine.
Science.gov (United States)
Kandathil, Shaun M; Fletcher, Timothy L; Yuan, Yongna; Knowles, Joshua; Popelier, Paul L A
2013-08-01
We propose a generic method to model polarization in the context of high-rank multipolar electrostatics. This method involves the machine learning technique kriging, here used to capture the response of an atomic multipole moment of a given atom to a change in the positions of the atoms surrounding this atom. The atoms are malleable boxes with sharp boundaries, they do not overlap and exhaust space. The method is applied to histidine where it is able to predict atomic multipole moments (up to hexadecapole) for unseen configurations, after training on 600 geometries distorted using normal modes of each of its 24 local energy minima at B3LYP/apc-1 level. The quality of the predictions is assessed by calculating the Coulomb energy between an atom for which the moments have been predicted and the surrounding atoms (having exact moments). Only interactions between atoms separated by three or more bonds ("1, 4 and higher" interactions) are included in this energy error. This energy is compared with that of a central atom with exact multipole moments interacting with the same environment. The resulting energy discrepancies are summed for 328 atom-atom interactions, for each of the 29 atoms of histidine being a central atom in turn. For 80% of the 539 test configurations (outside the training set), this summed energy deviates by less than 1 kcal mol(-1). PMID:23720381
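As a toy illustration of the kriging machinery (not the authors' actual descriptors or histidine training data), the sketch below interpolates a smooth scalar response - standing in for one multipole-moment component as a function of a single geometric coordinate - with a squared-exponential correlation model:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential correlation model between 1-D sample sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def krige(x_train, y_train, x_new, length=1.0, nugget=1e-8):
    """Simple kriging / GP regression: predict the response at x_new
    from training samples; the small nugget stabilizes the solve."""
    K = rbf(x_train, x_train, length) + nugget * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)
    return rbf(x_new, x_train, length) @ weights
```

Training on 30 samples of sin(x) over [0, 2*pi], for instance, predicts unseen points to well within 1e-3; the paper's models are instead trained on 600 distorted geometries and predict moments up to hexadecapole.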
19. The scalar magnetic potential in magnetoencephalography
Energy Technology Data Exchange (ETDEWEB)
Dassios, G [Department of Applied Mathematics and Theoretical Physics University of Cambridge, Cambridge (United Kingdom)], E-mail: G.Dassios@damtp.cam.ac.uk
2008-07-15
Two results on Magnetoencephalography (MEG) are reported in this presentation. First, we present an integral formula connecting the scalar magnetic potential with the values of the electric potential on the boundary of a conductive region. This formula provides the magnetic potential analogue of the well-known Geselowitz formula. Second, we construct the scalar magnetic potential for the realistic ellipsoidal model of the brain, as an eigenfunction expansion in terms of surface ellipsoidal harmonics.
20. Synchronous dynamic brain networks revealed by magnetoencephalography
OpenAIRE
Langheim, Frederick J. P.; Leuthold, Arthur C.; Georgopoulos, Apostolos P.
2005-01-01
We visualized synchronous dynamic brain networks by using prewhitened (stationary) magnetoencephalography signals. Data were acquired from 248 axial gradiometers while 10 subjects fixated on a spot of light for 45 s. After fitting an autoregressive integrative moving average model and taking the residuals, all pairwise, zero-lag, partial cross-correlations...
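The pipeline in this abstract - prewhiten each channel, then take all pairwise zero-lag partial correlations of the residuals - can be sketched as follows. An AR(1) fit stands in for the full ARIMA model used by the authors; the channel count and data below are illustrative.

```python
import numpy as np

def prewhiten_ar1(x):
    """Fit an AR(1) model by least squares and return its residuals
    (a stand-in for the ARIMA prewhitening in the study)."""
    phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])
    return x[1:] - phi * x[:-1]

def zero_lag_partial_corr(data):
    """Zero-lag partial correlations between channels (rows = channels),
    read off the inverse correlation matrix of the prewhitened residuals.
    Only the off-diagonal entries are meaningful."""
    resid = np.array([prewhiten_ar1(ch) for ch in data])
    precision = np.linalg.inv(np.corrcoef(resid))
    d = np.sqrt(np.diag(precision))
    return -precision / np.outer(d, d)
```

Thresholding such a matrix yields the network graph; e.g. with two independent channels plus a third equal to their sum, conditioning on the sum induces the expected partial coupling between the first two.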
1. A Textural–Contextual Model for Unsupervised Segmentation of Multipolarization Synthetic Aperture Radar Images
OpenAIRE
Akbari, Vahid; Doulgeris, Anthony Paul; Gabriele, Moser; Eltoft, Torbjørn; Sebastiano, B. Serpico; Anfinsen, Stian Normann
2013-01-01
This paper proposes a novel unsupervised, non-Gaussian, and contextual segmentation method that combines an advanced statistical distribution with spatial contextual information for multilook polarimetric synthetic aperture radar (PolSAR) data. This extends previous studies that have shown the added value of both non-Gaussian modeling and contextual smoothing, individually or for intensity channels only. The method is based on a Markov random field (MRF) model that integrates a K-Wishart d...
2. Detecting forest structure and biomass with C-band multipolarization radar - Physical model and field tests
Science.gov (United States)
Westman, Walter E.; Paris, Jack F.
1987-01-01
The ability of C-band radar (4.75 GHz) to discriminate features of forest structure, including biomass, is tested using a truck-mounted scatterometer for field tests on a 1.5-3.0 m pygmy forest of cypress (Cupressus pygmaea) and pine (Pinus contorta ssp. bolanderi) near Mendocino, CA. In all, 31 structural variables of the forest are quantified at seven sites. Also measured was the backscatter from a life-sized physical model of the pygmy forest, composed of nine wooden trees with 'leafy branches' of sponge-wrapped dowels. This model enabled independent testing of the effects of stem, branch, and leafy branch biomass, branch angle, and moisture content on radar backscatter. Field results suggested that surface area of leaves played a greater role in leaf scattering properties than leaf biomass per se. Tree leaf area index was strongly correlated with vertically polarized power backscatter (r = 0.94; P less than 0.01). Field results suggested that the scattering role of leaf water is enhanced as leaf surface area per unit leaf mass increases; i.e., as the moist scattering surfaces become more dispersed. Fog condensate caused a measurable rise in forest backscatter, both from surface and internal rises in water content. Tree branch mass per unit area was highly correlated with cross-polarized backscatter in the field (r = 0.93; P less than 0.01), a result also seen in the physical model.
3. Multipolar Planetary Nebulae: Not as Geometrically Diversified as Thought
CERN Document Server
Chong, Sze-Ning; Imai, Hiroshi; Tafoya, Daniel; Chibueze, James (doi:10.1088/0004-637X/760/2/115)
2012-01-01
Planetary nebulae (PNe) have diverse morphological shapes, including point-symmetric and multipolar structures. Many PNe also have complicated internal structures such as tori, lobes, knots, and ansae. A complete accounting of all the morphological structures through physical models is difficult. A first step toward such an understanding is to derive the true three-dimensional structure of the nebulae. In this paper, we show that a multipolar nebula with three pairs of lobes can explain many such features, if orientation and sensitivity effects are taken into account. Using only six parameters - the inclination and position angles of each pair - we are able to simulate the observed images of 20 PNe with complex structures. We suggest that the multipolar structure is an intrinsic structure of PNe and that the statistics of multipolar PNe have been severely underestimated in the past.
4. MULTIPOLAR PLANETARY NEBULAE: NOT AS GEOMETRICALLY DIVERSIFIED AS THOUGHT
Energy Technology Data Exchange (ETDEWEB)
Chong, S.-N.; Imai, H.; Chibueze, J. [Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima 890-0065 (Japan); Kwok, Sun [Department of Physics, University of Hong Kong, Pokfulam Road (Hong Kong); Tafoya, D., E-mail: chongsnco@gmail.com, E-mail: sunkwok@hku.hk [Onsala Space Observatory, SE-439 92 Onsala (Sweden)
2012-12-01
Planetary nebulae (PNe) have diverse morphological shapes, including point-symmetric and multipolar structures. Many PNe also have complicated internal structures such as tori, lobes, knots, and ansae. A complete accounting of all the morphological structures through physical models is difficult. A first step toward such an understanding is to derive the true three-dimensional structure of the nebulae. In this paper, we show that a multipolar nebula with three pairs of lobes can explain many such features, if orientation and sensitivity effects are taken into account. Using only six parameters - the inclination and position angles of each pair - we are able to simulate the observed images of 20 PNe with complex structures. We suggest that multipolar structure is an intrinsic structure of PNe and the statistics of multipolar PNe have been severely underestimated in the past.
5. Direct reconstruction algorithm of current dipoles for vector magnetoencephalography and electroencephalography
International Nuclear Information System (INIS)
This paper presents a novel algorithm to reconstruct parameters of a sufficient number of current dipoles that describe data (equivalent current dipoles, ECDs, hereafter) from radial/vector magnetoencephalography (MEG) with and without electroencephalography (EEG). We assume a three-compartment head model and arbitrary surfaces on which the MEG sensors and EEG electrodes are placed. Via the multipole expansion of the magnetic field, we obtain algebraic equations relating the dipole parameters to the vector MEG/EEG data. By solving them directly, without providing initial parameter guesses and computing forward solutions iteratively, the dipole positions and moments projected onto the xy-plane (equatorial plane) are reconstructed from a single time shot of the data. In addition, when the head layers and the sensor surfaces are spherically symmetric, we show that the required data reduce to radial MEG only. This clarifies the advantage of vector MEG/EEG measurements and algorithms for a generally-shaped head and sensor surfaces. In the numerical simulations, the centroids of the patch sources are well localized using vector/radial MEG measured on the upper hemisphere. By assuming the model order to be larger than the actual dipole number, the resultant spurious dipole is shown to have a much smaller strength magnetic moment (about 0.05 times smaller when the SNR = 16 dB), so that the number of ECDs is reasonably estimated. We consider that our direct method with greatly reduced computational cost can also be used to provide a good initial guess for conventional dipolar/multipolar fitting algorithms.
6. Direct reconstruction algorithm of current dipoles for vector magnetoencephalography and electroencephalography
Energy Technology Data Exchange (ETDEWEB)
Nara, Takaaki [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan); Oohama, Junji [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan); Hashimoto, Masaru [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan); Takeda, Tsunehiro [Graduate School of Frontier Science, University of Tokyo, 5-1-5 Kashiwa-no-ha, Kashiwa, Chiba 277-8561 (Japan); Ando, Shigeru [Graduate School of Information Science and Technology, University of Tokyo, 7-3-1, Hongo, Bunkyo, Tokyo 113-8656 (Japan)
2007-07-07
This paper presents a novel algorithm to reconstruct parameters of a sufficient number of current dipoles that describe data (equivalent current dipoles, ECDs, hereafter) from radial/vector magnetoencephalography (MEG) with and without electroencephalography (EEG). We assume a three-compartment head model and arbitrary surfaces on which the MEG sensors and EEG electrodes are placed. Via the multipole expansion of the magnetic field, we obtain algebraic equations relating the dipole parameters to the vector MEG/EEG data. By solving them directly, without providing initial parameter guesses and computing forward solutions iteratively, the dipole positions and moments projected onto the xy-plane (equatorial plane) are reconstructed from a single time shot of the data. In addition, when the head layers and the sensor surfaces are spherically symmetric, we show that the required data reduce to radial MEG only. This clarifies the advantage of vector MEG/EEG measurements and algorithms for a generally-shaped head and sensor surfaces. In the numerical simulations, the centroids of the patch sources are well localized using vector/radial MEG measured on the upper hemisphere. By assuming the model order to be larger than the actual dipole number, the resultant spurious dipole is shown to have a much smaller strength magnetic moment (about 0.05 times smaller when the SNR = 16 dB), so that the number of ECDs is reasonably estimated. We consider that our direct method with greatly reduced computational cost can also be used to provide a good initial guess for conventional dipolar/multipolar fitting algorithms.
7. Direct reconstruction algorithm of current dipoles for vector magnetoencephalography and electroencephalography
Science.gov (United States)
Nara, Takaaki; Oohama, Junji; Hashimoto, Masaru; Takeda, Tsunehiro; Ando, Shigeru
2007-07-01
This paper presents a novel algorithm to reconstruct parameters of a sufficient number of current dipoles that describe data (equivalent current dipoles, ECDs, hereafter) from radial/vector magnetoencephalography (MEG) with and without electroencephalography (EEG). We assume a three-compartment head model and arbitrary surfaces on which the MEG sensors and EEG electrodes are placed. Via the multipole expansion of the magnetic field, we obtain algebraic equations relating the dipole parameters to the vector MEG/EEG data. By solving them directly, without providing initial parameter guesses and computing forward solutions iteratively, the dipole positions and moments projected onto the xy-plane (equatorial plane) are reconstructed from a single time shot of the data. In addition, when the head layers and the sensor surfaces are spherically symmetric, we show that the required data reduce to radial MEG only. This clarifies the advantage of vector MEG/EEG measurements and algorithms for a generally-shaped head and sensor surfaces. In the numerical simulations, the centroids of the patch sources are well localized using vector/radial MEG measured on the upper hemisphere. By assuming the model order to be larger than the actual dipole number, the resultant spurious dipole is shown to have a much smaller strength magnetic moment (about 0.05 times smaller when the SNR = 16 dB), so that the number of ECDs is reasonably estimated. We consider that our direct method with greatly reduced computational cost can also be used to provide a good initial guess for conventional dipolar/multipolar fitting algorithms.
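The key idea - dipole parameters obtained algebraically from multipole coefficients, with no iterative forward fits - has the flavor of Prony's method. Below is a hedged 2-D toy (complex positions z_j with strengths c_j), not the paper's full vector MEG/EEG algorithm: given the moments m_k = sum_j c_j z_j^k, the positions come out as polynomial roots from a Hankel linear system.

```python
import numpy as np

def recover_positions(moments, n):
    """Prony-style direct recovery: given m_k = sum_j c_j z_j^k for
    k = 0..2n-1, the n locations z_j are roots of the monic polynomial
    whose coefficients solve a Hankel linear system - no initial guess,
    no iterative forward modeling."""
    H = np.array([[moments[i + j] for j in range(n)] for i in range(n)])
    rhs = -np.array([moments[n + i] for i in range(n)])
    a = np.linalg.solve(H, rhs)              # a[j] multiplies z^j
    return np.roots(np.concatenate(([1.0], a[::-1])))
```

Once the z_j are known, the strengths c_j follow from one further linear solve (a Vandermonde system), mirroring how the paper recovers moments after positions.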
8. Evaluation of the solid state dipole moment and pyroelectric coefficient of phosphangulene by multipolar modeling of X-ray structure factors
DEFF Research Database (Denmark)
2000-01-01
The electron density distribution of the molecular pyroelectric material phosphangulene has been studied by multipolar modeling of X-ray diffraction data. The "in-crystal" molecular dipole moment has been evaluated to be 4.7 D, corresponding to a 42% dipole moment enhancement compared with the dipole moment measured in a chloroform solution. It is substantiated that the estimated standard deviation of the dipole moment is about 0.8 D; the standard uncertainty (s.u.) of the derived dipole moment was obtained by splitting the dataset into three independent datasets. A novel method for obtaining pyroelectric coefficients has been introduced by combining the derived dipole moment with temperature-dependent measurements of the unit cell volume. The derived pyroelectric coefficient of 3.8(7) x 10^-6 C m^-2 K^-1 is in very good agreement with the measured pyroelectric coefficient of p = (3 +/- 1) x 10^-6 C m^-2 K^-1. This method for obtaining the pyroelectric coefficient uses information from the X-ray diffraction experiment alone and can be applied to much smaller crystals than traditional methods.
9. Energetics and Dynamics of Bipolar and Multipolar CME Source Regions
Science.gov (United States)
Lynch, B. J.; Antiochos, S. K.; DeVore, C. R.; Luhmann, J. G.
2006-12-01
We present results of a numerical experiment which tests the Aly-Sturrock limit in a fully 3-dimensional, spherical geometry. We compare two common magnetic configurations corresponding to bipolar and multipolar "active region" arcades with identical photospheric normal field distributions and applied shearing flows. The bipolar response is a smooth expansion of the stressed fields, void of any explosive behavior, whereas the multipolar configuration results in the rapid expulsion of the low-lying sheared field via the magnetic breakout mechanism for CME initiation. The critical nature of the oppositely-directed overlying field and its topological consequences is discussed in the context of the breakout model.
10. Second Language Research Using Magnetoencephalography: A Review
Science.gov (United States)
Schmidt, Gwen L.; Roberts, Timothy P. L.
2009-01-01
In this review we show how magnetoencephalography (MEG) is a constructive tool for language research and review MEG findings in second language (L2) research. MEG is the magnetic analog of electroencephalography (EEG), and its primary advantage over other cross-sectional (e.g. magnetic resonance imaging, or positron emission tomography) functional…
11. Multipolar surface plasmon peaks on gold nanotriangles
Science.gov (United States)
Félidj, N.; Grand, J.; Laurent, G.; Aubard, J.; Lévi, G.; Hohenau, A.; Galler, N.; Aussenegg, F. R.; Krenn, J. R.
2008-03-01
In this paper, we report on the observation of multipolar surface plasmon excitation in lithographically designed gold nanotriangles, investigated by means of far-field extinction microspectroscopy in the wavelength range of 400-1000 nm. Several bands are observed in the visible and near infrared regions when increasing the side length of the triangles. The assignment of these peaks to successive in-plane multipolar plasmon modes is supported by calculations using the discrete dipole approximation method. We show that the lowest three multipolar excitations are clearly resolved in the visible and near infrared range. These new spectral features could be very promising in nano-optics or for chemosensing and biosensing applications.
12. Functional Neuroimaging of Language Using Magnetoencephalography
OpenAIRE
Frye, Richard E.; Rezaie, Roozbeh; Papanicolaou, Andrew C.
2009-01-01
Magnetoencephalography (MEG) is a novel functional brain mapping technique capable of non-invasively measuring neurophysiological activity based on direct measures of the magnetic flux at the head surface associated with the synchronized electrical activity of neuronal populations. Among the most actively sought applications of MEG has been localization of language-specific cortex. This is in part due to its practical application for pre-surgical evaluation of patients with epilepsy or brain ...
13. Multipolar consensus for phylogenetic trees.
Science.gov (United States)
Bonnard, Cécile; Berry, Vincent; Lartillot, Nicolas
2006-10-01
Collections of phylogenetic trees are usually summarized using consensus methods. These methods build a single tree, supposed to be representative of the collection. However, in the case of heterogeneous collections of trees, the resulting consensus may be poorly resolved (strict consensus, majority-rule consensus, ...), or may perform arbitrary choices among mutually incompatible clades, or splits (greedy consensus). Here, we propose an alternative method, which we call the multipolar consensus (MPC). Its aim is to display all the splits having a support above a predefined threshold, in a minimum number of consensus trees, or poles. We show that the problem is equivalent to a graph-coloring problem, and propose an implementation of the method. Finally, we apply the MPC to real data sets. Our results indicate that, typically, all the splits down to a weight of 10% can be displayed in no more than 4 trees. In addition, in some cases, biologically relevant secondary signals, which would not have been present in any of the classical consensus trees, are indeed captured by our method, indicating that the MPC provides a convenient exploratory method for phylogenetic analysis. The method was implemented in a package freely available at http://www.lirmm.fr/~cbonnard/MPC.html PMID:17060203
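A minimal sketch of the idea behind the MPC (greedy first-fit coloring of pairwise-incompatible clades into "poles"; the published method solves the underlying graph-coloring problem more carefully and handles unrooted splits):

```python
def compatible(a, b):
    """Two clades (taxon sets) are compatible iff disjoint or nested."""
    return a.isdisjoint(b) or a <= b or b <= a

def multipolar_consensus(weighted_clades):
    """Greedy first-fit coloring: place each clade (highest support first)
    into the first pole whose clades it is compatible with; open a new
    pole (consensus tree) when none fits."""
    poles = []
    for clade, support in sorted(weighted_clades, key=lambda t: -t[1]):
        for pole in poles:
            if all(compatible(clade, other) for other in pole):
                pole.append(clade)
                break
        else:
            poles.append([clade])
    return poles
```

For example, with clades {A,B} (support 0.9), {C,D} (0.8), and the mutually incompatible {B,C} (0.6), the first two share a pole and {B,C} opens a second one - so all splits above the threshold are displayed rather than discarded.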
14. Methodes entropiques appliquees au probleme inverse en magnetoencephalographie
Science.gov (United States)
Lapalme, Ervig
2005-07-01
This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required to take into account anatomical and functional information about the solution. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. The thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained, as well as how it is applied to our problem. The second part is the original work of this thesis, presenting three articles: one of them already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, but still demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic modeling of the cerebral activation. The main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data coming from a somatotopy experiment, and the results agree with previous physiological knowledge about this kind of cognitive process.
15. Magnetoencephalography from signals to dynamic cortical networks
CERN Document Server
Aine, Cheryl
2014-01-01
"Magnetoencephalography (MEG) provides a time-accurate view into human brain function. The concerted action of neurons generates minute magnetic fields that can be detected---totally noninvasively---by sensitive multichannel magnetometers. The obtained millisecond accuracy complements information obtained by other modern brain-imaging tools. Accurate timing is quintessential in normal brain function and is often distorted in brain disorders. The noninvasiveness and time-sensitivity of MEG are great assets to developmental studies as well. This multiauthored book covers an ambitiously wide range of MEG research, from introductory to advanced level, from sensors to signals, and from focal sources to the dynamics of cortical networks. Written by active practitioners of this multidisciplinary field, the book contains tutorials for newcomers and chapters on new challenging methods and emerging technologies for advanced MEG users. The reader will obtain a firm grasp of the possibilities of MEG in the study of audition, vision...
16. SQUID-based multichannel system for Magnetoencephalography
CERN Document Server
Rombetto, S; Vettoliere, A; Trebeschi, A; Rossi, R; Russo, M
2013-01-01
Here we present a multichannel system based on superconducting quantum interference devices (SQUIDs) for magnetoencephalography (MEG) measurements, developed and installed at Istituto di Cibernetica (ICIB) in Naples. This MEG system consists of 163 fully integrated SQUID magnetometers (154 channels and 9 references) and has been designed to meet specifications concerning noise, dynamic range, slew rate and linearity through optimized design. The control electronics is located at room temperature and all the operations are performed inside a Magnetically Shielded Room (MSR). The system exhibits a magnetic white noise level of approximately 5 fT/Hz^(1/2). This MEG system will be employed for both clinical and routine use. PACS numbers: 74.81.Fa, 85.25.Hv, 07.20.Mc, 85.25.Dq, 87.19.le, 87.85.Ng
17. An Optical-Infrared Study of the Young Multipolar Planetary Nebula NGC 6644
CERN Document Server
Hsia, Chih Hao; Zhang, Yong; Koning, Nico; Volk, Kevin
2010-01-01
High-resolution HST imaging of the compact planetary nebula NGC 6644 has revealed two pairs of bipolar lobes and a central ring lying close to the plane of the sky. From mid-infrared imaging obtained with the Gemini Telescope, we have found a dust torus which is oriented nearly perpendicular to one pair of the lobes. We suggest that NGC 6644 is a multipolar nebula and have constructed a 3-D model which allows the visualization of the object from different lines of sight. These results suggest that NGC 6644 may have similar intrinsic structures as other multipolar nebulae and the phenomenon of multipolar nebulosity may be more common than previously believed.
18. AN OPTICAL-INFRARED STUDY OF THE YOUNG MULTIPOLAR PLANETARY NEBULA NGC 6644
International Nuclear Information System (INIS)
High-resolution Hubble Space Telescope imaging of the compact planetary nebula NGC 6644 has revealed two pairs of bipolar lobes and a central ring lying close to the plane of the sky. From mid-infrared imaging obtained with the Gemini Telescope, we have found a dust torus which is oriented nearly perpendicular to one pair of the lobes. We suggest that NGC 6644 is a multipolar nebula and construct a three-dimensional model that allows the visualization of the object from different lines of sight. These results suggest that NGC 6644 may have similar intrinsic structures as other multipolar nebulae and the phenomenon of multipolar nebulosity may be more common than previously believed.
19. Monte Carlo analysis of localization errors in magnetoencephalography
Energy Technology Data Exchange (ETDEWEB)
Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.
1989-01-01
In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
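The procedure described - perturb the measurements according to the noise model, refit the source each time, and summarize the scatter of the fitted parameters - can be sketched with a toy 1-D forward model. The forward function, grid-based fit, and all numbers below are illustrative assumptions; the paper fits current dipoles with iterative least squares (Levenberg-Marquardt).

```python
import numpy as np

def field(p, x, depth=1.0):
    """Toy forward model: field measured at sensor x from a source at p."""
    return 1.0 / ((x - p) ** 2 + depth ** 2)

def fit(meas, x, grid):
    """Least-squares (chi-square) fit of the source position over a grid."""
    chi2 = [np.sum((meas - field(p, x)) ** 2) for p in grid]
    return grid[int(np.argmin(chi2))]

def monte_carlo_spread(true_p, x, sigma, n_trials=500, seed=0):
    """Perturb the measurements with the assumed noise model, refit each
    trial, and summarize the scatter of the fitted position."""
    rng = np.random.default_rng(seed)
    clean = field(true_p, x)
    grid = np.linspace(-2, 2, 401)
    fits = [fit(clean + rng.normal(0, sigma, x.shape), x, grid)
            for _ in range(n_trials)]
    return np.mean(fits), np.std(fits)
```

The resulting mean and standard deviation (or, with more parameters, the full covariance and confidence regions) quantify how the given measurement errors propagate into localization error for that specific setup.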
20. Functional neuroimaging of language using magnetoencephalography
Science.gov (United States)
Frye, Richard E.; Rezaie, Roozbeh; Papanicolaou, Andrew C.
2009-03-01
Magnetoencephalography (MEG) is a novel functional brain mapping technique capable of non-invasively measuring neurophysiological activity based on direct measures of the magnetic flux at the head surface associated with the synchronized electrical activity of neuronal populations. Among the most actively sought applications of MEG has been localization of language-specific cortex. This is in part due to its practical application for pre-surgical evaluation of patients with epilepsy or brain tumors. Until recently, comprehensive language mapping during surgical planning has relied on the application of invasive diagnostic methods, namely the Wada procedure and direct electrocortical stimulation mapping, often considered as the “gold standard” techniques for identifying language-specific cortex. In this review, we evaluate the utility of MEG as a tool for functional mapping of language in both clinical and normal populations. In particular, we provide a general description of MEG, with emphasis on facets of the technique related to language mapping. Additionally, we discuss the application of appropriate MEG language-mapping protocols developed to reliably generate spatiotemporal profiles of language activity, and address the validity of the technique against the “gold standards” of the Wada and electrocortical mapping procedures.
1. Magnetoencephalography in the diagnosis of concussion.
Science.gov (United States)
Lee, Roland R; Huang, Mingxiong
2014-01-01
Magnetoencephalography (MEG) is a biomedical technique which measures the magnetic fields emitted by the brain, generated by neuronal activity. Commercial whole-head MEG units have been available for about 15 years, but currently there are only about 20 such units operating in the USA. Here, we review the basic concepts of MEG and list some of the usual clinical indications: noninvasive localization of epileptic spikes and presurgical mapping of eloquent cortex. We then discuss using MEG to diagnose mild traumatic brain injury (mTBI; concussions). Injured brain tissues in TBI patients generate abnormal low-frequency magnetic activity (delta-waves: 1-4 Hz) that can be measured and localized by MEG. These abnormal delta-waves originate from neurons that experience deafferentation from axonal injury to the associated white matter fiber tracts, also manifested on diffusion tensor imaging as reduced fractional anisotropy. Magnetoencephalographic evaluation of abnormal delta-waves (1-4 Hz) is probably the most sensitive objective test to diagnose concussions. An automated MEG low-frequency (slow wave) source imaging method, frequency-domain vector-based spatiotemporal analysis using an L1-minimum norm (VESTAL), achieved a positive finding rate of 87% for diagnosing concussions (blast-induced plus nonblast), 100% for moderate TBI, and no false-positive diagnoses in normal controls. There were also significant correlations between the number of cortical regions generating abnormal slow waves and the total postconcussive symptom scores in TBI patients. PMID:24923396
Science.gov (United States)
Firsching, R; Bondar, I; Heinze, H J; Hinrichs, H; Hagner, T; Heinrich, J; Belau, A
2002-03-01
Magnetoencephalography (MEG) is a noninvasive option for localizing electroneurophysiological activity on the human cortex. The purpose of this study was to evaluate the practicability and reliability of MEG imaging integrated into a neuronavigation system to identify the sensorimotor cortex intraoperatively in patients with brain tumors in or near the central motor strip. MEG was performed prior to surgery in 30 patients with space-occupying lesions in or around the central region to localize the primary somatosensory cortex. These functional brain maps were superimposed on MR images obtained prior to surgery and transferred to the operating room for intraoperative functional neuronavigation. During surgery, the phase reversal technique identified a generator which coincided with the somatosensory cortex as displayed by the MEG-based functional neuronavigation system. Following surgery, the motor deficit improved in seven patients, was unchanged in five, and showed a slight transient deterioration in five. One patient suffered a deterioration of motor function with incomplete recovery. The MEG-based functional neuronavigation was found to be practicable and useful in finding a safe approach to tumors in or adjacent to the central region. The accuracy of MEG was concluded to be reliable as verified by the phase reversal technique. PMID:11954769
3. SQUID sensor array configurations for magnetoencephalography applications
Energy Technology Data Exchange (ETDEWEB)
Vrba, J.; Robinson, S.E. [CTF Systems Inc., A subsidiary of VSM MedTech Ltd, Port Coquitlam, BC (Canada)
2002-09-01
Electrophysiological activity in the human brain generates a small magnetic field from the spatial superposition of individual neuronal source currents. At a distance of about 15 mm from the scalp, the observed field is of the order of 10⁻¹³ to 10⁻¹² T peak-to-peak. This measurement process is termed magnetoencephalography (MEG). In order to minimize instrumental noise, the MEG is usually detected using superconducting flux transformers, coupled to SQUID (superconducting quantum interference device) sensors. Since MEG signals are also measured in the presence of significant environmental magnetic noise, flux transformers must be designed to strongly attenuate environmental noise, maintain low instrumental noise and maximize signals from the brain. Furthermore, the flux transformers must adequately sample spatial field variations if the brain activity is to be imaged. The flux transformer optimization for maximum brain signal-to-noise ratio (SNR) requires analysis of the spatial and temporal properties of brain activity, the environmental noise and how these signals are coupled to the flux transformer. Flux transformers that maximize SNR can detect the smallest brain signals and have the best ability to spatially separate dipolar sources. An optimal flux transformer design is a synthetic higher-order gradiometer based on relatively short-baseline first-order radial gradiometer primary sensors. (author)
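The reasoning behind gradiometer-based noise rejection can be illustrated with a deliberately crude numerical example: a nearby neuronal source produces a field that falls off steeply with distance (here ~1/r³), while environmental noise is nearly uniform over the short gradiometer baseline, so differencing two coils cancels the noise but keeps most of the signal. The geometry and field values below are illustrative assumptions only, not the design figures from the abstract.

```python
# Toy illustration of first-order gradiometer noise rejection.

def dipole_field(r):
    """Steep falloff of a nearby source (dipole-like ~1/r^3), arbitrary units."""
    return 1.0 / r ** 3

r_pickup = 0.05   # assumed distance from source to pickup coil (m)
baseline = 0.05   # assumed gradiometer baseline (m)
ambient = 1.0     # spatially uniform environmental noise field (a.u.)

signal_pickup = dipole_field(r_pickup)
signal_comp = dipole_field(r_pickup + baseline)

magnetometer = signal_pickup + ambient                              # noise passes through
gradiometer = (signal_pickup + ambient) - (signal_comp + ambient)   # uniform noise cancels

print(f"gradiometer keeps {gradiometer / signal_pickup:.0%} of the signal "
      f"and none of the uniform noise")
```

The trade-off the abstract optimizes is visible even here: a shorter baseline rejects more realistic (non-uniform) noise but also subtracts more of the brain signal.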
5. Nonshielded multipolar vortices at high Reynolds number.
Science.gov (United States)
Barba, L A
2006-06-01
Vortex multipoles--consisting of a core of vorticity closely surrounded by several smaller vorticity concentrations of opposite sign--are obtained from the evolution of vorticity in two-dimensional simulations. Using a meshless vortex method, we obtained triangular and square vortices, surrounded by three and four satellites, respectively. Such structures had previously only been observed to emerge from zero-circulation initial conditions. We also observed a pentagon vortex. Here, we obtain compound vortices of nonzero total circulation, and suggest a gamut of multipolar asymptotic solutions to the Navier-Stokes equations. PMID:16906900
6. High-resolution EEG (HR-EEG) and magnetoencephalography (MEG).
Science.gov (United States)
Gavaret, M; Maillard, L; Jung, J
2015-03-01
High-resolution EEG (HR-EEG) and magnetoencephalography (MEG) allow the recording of spontaneous or evoked electromagnetic brain activity with excellent temporal resolution. Data must be recorded with high temporal resolution (sampling rate) and high spatial resolution (number of channels). Data analyses are based on several steps with selection of electromagnetic signals, elaboration of a head model and use of algorithms in order to solve the inverse problem. Due to considerable technical advances in spatial resolution, these tools now represent real methods of ElectroMagnetic Source Imaging. HR-EEG and MEG constitute non-invasive and complementary examinations, characterized by distinct sensitivities according to the location and orientation of intracerebral generators. In the presurgical assessment of drug-resistant partial epilepsies, HR-EEG and MEG can characterize and localize interictal activities and thus the irritative zone. HR-EEG and MEG often yield significant additional data that are complementary to other presurgical investigations and particularly relevant in MRI-negative cases. Currently, the determination of the epileptogenic zone and functional brain mapping remain rather less well-validated indications. In France, in 2014, HR-EEG is now part of standard clinical investigation of epilepsy, while MEG remains a research technique. PMID:25648821
7. Magnetoencephalography in stroke: a 1-year follow-up study.
Science.gov (United States)
Gallien, P; Aghulon, C; Durufle, A; Petrilli, S; de Crouy, A C; Carsin, M; Toulouse, P
2003-07-01
Recovery after stroke is closely linked to cerebral plasticity. Magnetoencephalography (MEG) is a non-invasive technique which allows localization of cerebral cell activity. In the present work, a cohort of patients has been studied with MEG. Twelve patients with a recent ischemic or hemorrhagic stroke were included as soon as possible after onset of stroke. Neurologic assessment, including standard neurologic examination, functional independence measure (FIM) and Orgogozo's scale, was performed for 1 year in addition to a study of the somatosensory evoked field (SEF) using a 37-channel Biomagnetometer system. No response could be recorded in five patients at the first SEF exploration. In three cases, no response was ever recorded during the study. All these patients had a bad recovery. The location of the SEF sources was always in the normal non-infarcted cortex of the postcentral gyrus. Sensory recovery seemed to be linked to the reorganization of the persistent functional cortex, which was a limiting factor for recovery. These observations confirm the experimental results obtained in animal models. After stroke it can be assumed that in the case of an incomplete lesion, an intensive sensory peripheral stimulation could maximize the use of residual sensory function and then contribute to improve the sensory deficit. In the case of total sensory loss, other techniques have to be used, such as visual monitoring of hand activity in order to improve hand function. PMID:12823488
8. Magnetoencephalography with a two-color pump probe atomic magnetometer.
Energy Technology Data Exchange (ETDEWEB)
Johnson, Cort N.
2010-07-01
The authors have detected magnetic fields from the human brain with a compact, fiber-coupled rubidium spin-exchange-relaxation-free magnetometer. Optical pumping is performed on the D1 transition and Faraday rotation is measured on the D2 transition. The beams share an optical axis, with dichroic optics preparing beam polarizations appropriately. A sensitivity of <5 fT/√Hz is achieved. Evoked responses resulting from median nerve and auditory stimulation were recorded with the atomic magnetometer. Recordings were validated by comparison with those taken by a commercial magnetoencephalography system. The design is amenable to arraying sensors around the head, providing a framework for noncryogenic, whole-head magnetoencephalography.
10. THICK DISKS WITH NEWTONIAN MULTIPOLAR MOMENTS / DISCOS GRUESOS CON MOMENTOS MULTIPOLARES NEWTONIANOS
Scientific Electronic Library Online (English)
Framsol, López-Suspes; Guillermo A., González.
2013-09-01
11. Neural Signatures of Phonetic Learning in Adulthood: A Magnetoencephalography Study
OpenAIRE
Zhang, Yang; Kuhl, Patricia K.; Imada, Toshiaki; Iverson, Paul; Pruitt, John; Stevens, Erica B.; Kawakatsu, Masaki; Tohkura, Yoh Ichi; Nemoto, Iku
2009-01-01
The present study used magnetoencephalography (MEG) to examine perceptual learning of American English /r/ and /l/ categories by Japanese adults who had limited English exposure. A training software program was developed based on the principles of infant phonetic learning, featuring systematic acoustic exaggeration, multi-talker variability, visible articulation, and adaptive listening. The program was designed to help Japanese listeners utilize an acoustic dimension relevant for phonemic cat...
12. How to detect amygdala activity with magnetoencephalography using source imaging.
Science.gov (United States)
Balderston, Nicholas L; Schultz, Douglas H; Baillet, Sylvain; Helmstetter, Fred J
2013-01-01
In trace fear conditioning a conditional stimulus (CS) predicts the occurrence of the unconditional stimulus (UCS), which is presented after a brief stimulus free period (trace interval)(1). Because the CS and UCS do not co-occur temporally, the subject must maintain a representation of that CS during the trace interval. In humans, this type of learning requires awareness of the stimulus contingencies in order to bridge the trace interval(2-4). However when a face is used as a CS, subjects can implicitly learn to fear the face even in the absence of explicit awareness*. This suggests that there may be additional neural mechanisms capable of maintaining certain types of "biologically-relevant" stimuli during a brief trace interval. Given that the amygdala is involved in trace conditioning, and is sensitive to faces, it is possible that this structure can maintain a representation of a face CS during a brief trace interval. It is challenging to understand how the brain can associate an unperceived face with an aversive outcome, even though the two stimuli are separated in time. Furthermore, investigations of this phenomenon are made difficult by two specific challenges. First, it is difficult to manipulate the subject's awareness of the visual stimuli. One common way to manipulate visual awareness is to use backward masking. In backward masking, a target stimulus is briefly presented and immediately followed by a masking stimulus, rendering the target invisible(6-8). Second, masking requires very rapid and precise timing, making it difficult to investigate neural responses evoked by masked stimuli using many common approaches. Blood-oxygenation level dependent (BOLD) responses resolve at a timescale too slow for this type of methodology, and real time recording techniques like electroencephalography (EEG) and magnetoencephalography (MEG) have difficulties recovering signal from deep sources. However, there have been recent advances in the methods used to localize the neural sources of the MEG signal(9-11).
By collecting high-resolution MRI images of the subject's brain, it is possible to create a source model based on individual neural anatomy. Using this model to "image" the sources of the MEG signal, it is possible to recover signal from deep subcortical structures, like the amygdala and the hippocampus*. PMID:23770774
13. Source cancellation profiles of electroencephalography and magnetoencephalography.
Science.gov (United States)
Irimia, Andrei; Van Horn, John Darrell; Halgren, Eric
2012-02-01
Recorded electric potentials and magnetic fields due to cortical electrical activity have spatial spread even if their underlying brain sources are focal. Consequently, as a result of source cancellation, loss in signal amplitude and reduction in the effective signal-to-noise ratio can be expected when distributed sources are active simultaneously. Here we investigate the cancellation effects of EEG and MEG through the use of an anatomically correct forward model based on structural MRI acquired from 7 healthy adults. A boundary element model (BEM) with four compartments (brain, cerebrospinal fluid, skull and scalp) and highly accurate cortical meshes (~300,000 vertices) were generated. Distributed source activations were simulated using contiguous patches of active dipoles. To investigate cancellation effects in both EEG and MEG, quantitative indices were defined (source enhancement, cortical orientation disparity) and computed for varying values of the patch radius as well as for automatically parcellated gyri and sulci. Results were calculated for each cortical location, averaged over all subjects using a probabilistic atlas, and quantitatively compared between MEG and EEG. As expected, MEG sensors were found to be maximally sensitive to signals due to sources tangential to the scalp, and minimally sensitive to radial sources. Compared to EEG, however, MEG was found to be much more sensitive to signals generated antero-medially, notably in the anterior cingulate gyrus. Given that sources of activation cancel each other according to the orientation disparity of the cortex, this study provides useful methods and results for quantifying the effect of source orientation disparity upon source cancellation. PMID:21959078
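The core quantity in this kind of study — how much simultaneously active cortex cancels itself because of orientation disparity — can be mimicked in a few lines. The sketch below is an assumption-laden toy (random unit vectors instead of a BEM forward model and real cortical normals); it is meant only to show why a patch with aligned dipole orientations produces a stronger net signal than one whose orientations are disparate.

```python
import numpy as np

rng = np.random.default_rng(1)

def source_enhancement(orientations):
    """Ratio of the coherent (vector) sum to the incoherent (scalar) sum of
    unit dipole moments: 1 means no cancellation, near 0 means full cancellation."""
    return np.linalg.norm(orientations.sum(axis=0)) / len(orientations)

# A flat gyral crown (aligned normals) vs. a patch with high orientation disparity
aligned = np.tile([0.0, 0.0, 1.0], (100, 1))
disparate = rng.normal(size=(100, 3))
disparate /= np.linalg.norm(disparate, axis=1, keepdims=True)

print(source_enhancement(aligned))    # no cancellation
print(source_enhancement(disparate))  # strong cancellation
```

In the paper this ratio is computed per cortical patch through the anatomically correct forward model, so the cancellation additionally depends on each dipole's gain at the sensors, not just its orientation.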
14. Multipolar radiation of quantum emitters with nanowire optical antennas
Science.gov (United States)
Curto, Alberto G.; Taminiau, Tim H.; Volpe, Giorgio; Kreuzer, Mark P.; Quidant, Romain; van Hulst, Niek F.
2013-04-01
Multipolar transitions other than electric dipoles are generally too weak to be observed at optical frequencies in single quantum emitters. For example, fluorescent molecules and quantum dots have dimensions much smaller than the wavelength of light and therefore emit predominantly as electric dipoles. Here we demonstrate controlled emission of a quantum dot into multipolar radiation through selective coupling to a linear nanowire antenna. The antenna resonance tailors the interaction of the quantum dot with light, effectively creating a hybrid nanoscale source beyond the simple Hertz dipole. Our findings establish a basis for the controlled driving of fundamental modes in nanoantennas and metamaterials, for the understanding of the coupling of quantum emitters to nanophotonic devices such as waveguides and nanolasers, and for the development of innovative quantum nano-optics components with properties not found in nature.
15. Synchronized brain activity and neurocognitive function in patients with low-grade glioma: A magnetoencephalography study
OpenAIRE
Bosma, Ingeborg; Douw, Linda; Bartolomei, Fabrice; Heimans, Jan J.; Dijk, Bob W.; Postma, Tjeerd J.; Stam, Cornelis J.; Reijneveld, Jaap C.; Klein, Martin
2008-01-01
We investigated the mechanisms underlying neurocognitive dysfunction in patients with low-grade glioma (LGG) by relating functional connectivity revealed by magnetoencephalography to neurocognitive function. We administered a battery of standardized neurocognitive tests measuring six neurocognitive domains to a group of 17 LGG patients and 17 healthy controls, matched for age, sex, and educational level. Magnetoencephalography recordings were conducted during an eyes-closed “resting state”...
16. The role of angular momentum in the construction of electromagnetic multipolar fields
OpenAIRE
Tischler, Nora; Zambrana-puyalto, Xavier; Molina-terriza, Gabriel
2012-01-01
Multipolar solutions of Maxwell's equations are used in many practical applications and are essential for the understanding of light-matter interactions at the fundamental level. Unlike the set of plane wave solutions of electromagnetic fields, the multipolar solutions do not share a standard derivation or notation. As a result, expressions originating from different derivations can be difficult to compare. Some of the derivations of the multipolar solutions do not explicitl...
17. Multipolar excitations in small metallic spheres
International Nuclear Information System (INIS)
A dielectric function ε(ω, l) appropriate to a small metallic sphere is obtained within the semiclassical infinite barrier model, where l is the multipole order. An excitation diagram in the l,ω plane based on the structure of this function is proposed. It represents the spherical analog of the excitation structure of an infinite medium in the k,ω plane. 8 refs., 1 fig
18. Investigating the neural correlates of the Stroop effect with magnetoencephalography.
Science.gov (United States)
Galer, Sophie; Op De Beeck, Marc; Urbain, Charline; Bourguignon, Mathieu; Ligot, Noémie; Wens, Vincent; Marty, Brice; Van Bogaert, Patrick; Peigneux, Philippe; De Tiège, Xavier
2015-01-01
Reporting the ink color of a written word when it is itself a color name incongruent with the ink color (e.g. "red" printed in blue) induces a robust interference known as the Stroop effect. Although this effect has been the subject of numerous functional neuroimaging studies, its neuronal substrate is still a matter of debate. Here, we investigated the spatiotemporal dynamics of interference-related neural events using magnetoencephalography (MEG) and voxel-based analyses (SPM8). Evoked magnetic fields (EMFs) were acquired in 12 right-handed healthy subjects performing a color-word Stroop task. Behavioral results disclosed a classic interference effect with longer mean reaction times for incongruent than congruent stimuli. At the group level, EMFs' differences between incongruent and congruent trials spanned from 380 to 700 ms post-stimulus onset. Underlying neural sources were identified in the left pre-supplementary motor area (pre-SMA) and in the left posterior parietal cortex (PPC) confirming the role of these regions in conflict processing. PMID:24752907
19. Real-time robust signal space separation for magnetoencephalography.
Science.gov (United States)
Guo, Chenlei; Li, Xin; Taulu, Samu; Wang, Wei; Weber, Douglas J
2010-08-01
20. Complexity Measures in Magnetoencephalography: Measuring "Disorder" in Schizophrenia
Science.gov (United States)
Brookes, Matthew J.; Hall, Emma L.; Robson, Siân E.; Price, Darren; Palaniyappan, Lena; Liddle, Elizabeth B.; Liddle, Peter F.; Robinson, Stephen E.; Morris, Peter G.
2015-01-01
This paper details a methodology which, when applied to magnetoencephalography (MEG) data, is capable of measuring the spatio-temporal dynamics of ‘disorder’ in the human brain. Our method, which is based upon signal entropy, shows that spatially separate brain regions (or networks) generate temporally independent entropy time-courses. These time-courses are modulated by cognitive tasks, with an increase in local neural processing characterised by localised and transient increases in entropy in the neural signal. We explore the relationship between entropy and the more established time-frequency decomposition methods, which elucidate the temporal evolution of neural oscillations. We observe a direct but complex relationship between entropy and oscillatory amplitude, which suggests that these metrics are complementary. Finally, we provide a demonstration of the clinical utility of our method, using it to shed light on aberrant neurophysiological processing in schizophrenia. We demonstrate significantly increased task-induced entropy change in patients (compared to controls) in multiple brain regions, including a cingulo-insula network, bilateral insula cortices and a right fronto-parietal network. These findings demonstrate potential clinical utility for our method and support a recent hypothesis that schizophrenia can be characterised by abnormalities in the salience network (a well-characterised distributed network comprising bilateral insula and cingulate cortices). PMID:25886553
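As a rough illustration of an entropy time-course, the sketch below computes the windowed Shannon entropy of a signal's amplitude distribution. This is a generic stand-in: the paper's actual metric and its MEG source-space pipeline are more involved, and every parameter here (window, step, bin count, the test signals) is an arbitrary assumption.

```python
import numpy as np

def windowed_entropy(x, win=256, step=64, bins=16):
    """Shannon entropy (bits) of the amplitude histogram in sliding windows --
    a crude stand-in for the entropy measures used in the MEG literature."""
    out = []
    for start in range(0, len(x) - win + 1, step):
        counts, _ = np.histogram(x[start:start + win], bins=bins)
        p = counts[counts > 0] / counts.sum()
        out.append(-(p * np.log2(p)).sum())
    return np.array(out)

rng = np.random.default_rng(3)
regular = np.sin(np.arange(2048) * 0.3)        # ordered oscillation: lower entropy
noisy = rng.uniform(-1.0, 1.0, size=2048)      # disordered signal: higher entropy
print(windowed_entropy(regular).mean(), windowed_entropy(noisy).mean())
```

Localised, transient increases in 'disorder' of the kind the paper reports would appear here as bumps in the entropy time-course returned by `windowed_entropy`.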
1. Design and performance of the LANL 158-channel magnetoencephalography system
Energy Technology Data Exchange (ETDEWEB)
Matlachov, A. N. (Andrei N.); Kraus, Robert H., Jr.; Espy, M. A. (Michelle A.); Best, E. D. (Elaine D.); Briles, M. Carolyn; Raby, E. Y. (Eric Y.); Flynn, E. R.
2002-01-01
Design and performance for a recently completed whole-head magnetoencephalography (MEG) system using a superconducting imaging-surface (SIS) surrounding an array of SQUID magnetometers are reported. The helmet-like SIS is hemispherical in shape with a brim. The SIS images nearby sources while shielding sensors from ambient magnetic noise. The shielding factor depends on magnetometer position and orientation. Typical shielding values of 200 in the central sulcus area have been observed. Nine reference channels form three vector magnetometers, which are placed outside the SIS. Signal channels consist of 149 SQUID magnetometers with 0.84 nT/Φ₀ field sensitivity and less than 3 fT/√Hz noise. Typical SQUID-to-room-temperature separations are about 20 mm in the cooled state. Twelve 16-channel flux-lock loop units are connected to two 96-channel control units, allowing up to 192 total SQUID channels. The control unit includes signal conditioning circuits as well as system test and control circuits. After conditioning, all signals are fed to a 192-channel, 24-bit data acquisition system capable of sampling up to 48 kSa/s per channel. The SIS-MEG system enables high-quality human functional brain data to be recorded in a one-layer magnetically shielded room.
2. Cortical locations of maximal spindle activity: magnetoencephalography (MEG) study.
Science.gov (United States)
Gumenyuk, Valentina; Roth, Thomas; Moran, John E; Jefferson, Catherine; Bowyer, Susan M; Tepley, Norman; Drake, Christopher L
2009-06-01
The aim of this study was to determine the main cortical regions related to maximal spindle activity of sleep stage 2 in healthy individual subjects during a brief morning nap using magnetoencephalography (MEG). Eight volunteers (mean age: 26.1 +/- 8.7, six women), all right-handed and free of any medical, psychiatric or sleep disorders, were studied. Whole-head 148-channel MEG and a conventional polysomnography montage (EEG; C3, C4, O1 and O2 scalp electrodes and EOG, EMG and ECG electrodes) were used for data collection. Sleep MEG/EEG spindles were visually identified during 15 min of stage 2 sleep for each participant. The distribution of brain activity corresponding to each spindle was calculated using a combination of independent component analysis and a current source density technique superimposed upon individual MRIs. The absolute maximum of spindle activation was localized to frontal, temporal and parietal lobes. However, the most common cortical regions for maximal source spindle activity were the precentral and/or postcentral areas across all individuals. The present study suggests that maximal spindle activity localized to these two regions may represent a single event for two types of spindle frequency: slow (at 12 Hz) and fast (at 14 Hz) within global thalamocortical coherence. PMID:19645968
3. Using variance information in magnetoencephalography measures of functional connectivity.
Science.gov (United States)
Hall, Emma L; Woolrich, Mark W; Thomaz, Carlos E; Morris, Peter G; Brookes, Matthew J
2013-02-15
The use of magnetoencephalography (MEG) to assess long range functional connectivity across large scale distributed brain networks is gaining popularity. Recent work has shown that electrodynamic networks can be assessed using both seed based correlation or independent component analysis (ICA) applied to MEG data and further that such metrics agree with fMRI studies. To date, techniques for MEG connectivity assessment have typically used a variance normalised approach, either through the use of Pearson correlation coefficients or via variance normalisation of envelope timecourses prior to ICA. Here, we show that the use of variance information (i.e. data that have not been variance normalised) in source space projected Hilbert envelope time series yields important spatial information, and is of significant functional relevance. Further, we show that employing this information in functional connectivity analyses improves the spatial delineation of network nodes using both seed based and ICA approaches. The use of variance is particularly important in MEG since the non-independence of source space voxels (brought about by the ill-posed MEG inverse problem) means that spurious signals can exist in areas of low signal variance. We therefore suggest that this approach be incorporated into future studies. PMID:23165323
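The distinction the authors draw can be demonstrated on synthetic data: a Pearson correlation between two Hilbert-envelope time-courses discards each node's variance, whereas the raw covariance keeps it. The toy signals below (a shared slow amplitude modulation on two noisy carriers) are assumptions for illustration, not MEG data or the authors' pipeline.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
fs = 250
t = np.arange(0, 10, 1 / fs)

# Two "sources" whose oscillatory amplitude shares a slow modulation
common = np.abs(np.sin(2 * np.pi * 0.2 * t))
a = common * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
b = 0.3 * common * np.sin(2 * np.pi * 10 * t + 1.0) + 0.1 * rng.normal(size=t.size)

env_a = np.abs(hilbert(a))   # Hilbert envelope time series
env_b = np.abs(hilbert(b))

r = np.corrcoef(env_a, env_b)[0, 1]   # variance-normalised connectivity metric
cov = np.cov(env_a, env_b)[0, 1]      # retains each node's envelope variance
print(f"Pearson r = {r:.2f}, covariance = {cov:.4f}")
```

Note that although node `b` oscillates at only 30% of the amplitude of `a`, the Pearson coefficient treats the two symmetrically; only the un-normalised covariance reflects that difference in envelope variance.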
4. BRICS and the myth of the multipolar world
Directory of Open Access Journals (Sweden)
Takis Fotopoulos
2014-12-01
The aim of this article is to show that the BRICS countries not only don’t form part of a multi-polar world, but in reality are far from sovereign states in any sense of the word. In fact, if their real goal was indeed the creation of an alternative pole of sovereign nation-states, they should have planned at the outset to break their direct dependence on the globalized capitalist market economy, cutting their ties with global institutions controlled by the Transnational Elite (WTO, IMF and World Bank), and moving towards self-reliant economies, so that they could regain their sovereignty.
5. Multipolar localized resonances for multi-band metamaterial perfect absorbers
Science.gov (United States)
Dayal, Govind; Ramakrishna, S. Anantha
2014-09-01
A metamaterial structure, comprising metallic circular micro-discs (gold or aluminum) separated from a metallic thin film by a dielectric zinc sulphide film, behaves as a multi-band perfect absorber at infrared wavelengths due to the excitation of multipole resonances. With micro-discs of 3.2 μm diameter, the fabricated metamaterial absorber shows peak absorbance of over 90% in multiple selected bands spanning the 3-14 μm wavelength range. Absorption bands corresponding to the different resonance modes have been measured, and computational simulations show these resonances originate from the higher-order multipolar resonances of the disc.
6. Multipolar Electrode and Preamplifier Design for ENG-Signal Acquisition
Science.gov (United States)
Soulier, Fabien; Gouyet, Lionel; Cathébras, Guy; Bernard, Serge; Guiraud, David; Bertrand, Yves
Cuff electrodes have several advantages for in situ recording of ENG signals. They are easy to implant and not very invasive for the patient. Nevertheless, they are subject to background parasitic noise, especially the EMG generated by the muscles. We show that the use of cuff electrodes with large numbers of poles can increase their sensitivity and their selectivity with respect to efficient noise rejection. We investigate several configurations and compare the performance of a tripolar cuff electrode versus a multipolar one in numerical simulations.
7. A multipolar SR motor and its application in EV
International Nuclear Information System (INIS)
In order to bring out the advanced features of EVs, a direct-drive (DD), in-wheel (IW) layout has been considered, but it requires more motors than the conventional layout and the motors must operate in a harsh environment. Because switched reluctance motors (SRMs) are simple and robust, we have developed a new outer-rotor-type multipolar SRM suitable for DD-IW EVs through simulations and experiments. We have implemented the developed SRMs in a prototype EV. To our knowledge this is the first in-vehicle research of its kind; the development process and the road-test results will provide many useful guidelines for future developments
8. Magnetoencephalography reveals early activation of V4 in grapheme-color synesthesia.
Science.gov (United States)
Brang, D; Hubbard, E M; Coulson, S; Huang, M; Ramachandran, V S
2010-10-15
Grapheme-color synesthesia is a neurological phenomenon in which letters and numbers (graphemes) consistently evoke particular colors (e.g. A may be experienced as red). The cross-activation theory proposes that synesthesia arises as a result of cross-activation between posterior temporal grapheme areas (PTGA) and color processing area V4, while the disinhibited feedback theory proposes that synesthesia arises from disinhibition of pre-existing feedback connections. Here we used magnetoencephalography (MEG) to test whether V4 and PTGA activate nearly simultaneously, as predicted by the cross-activation theory, or whether V4 activation occurs only after the initial stages of grapheme processing, as predicted by the disinhibited feedback theory. Using our high-resolution MEG source imaging technique (VESTAL), PTGA and V4 regions of interest (ROIs) were separately defined, and activity in response to the presentation of achromatic graphemes was measured. Activation levels in PTGA did not significantly differ between synesthetes and controls (suggesting similar grapheme processing mechanisms), whereas activation in V4 was significantly greater in synesthetes. In synesthetes, PTGA activation exceeded baseline levels beginning 105-109 ms, and V4 activation did so 5 ms later, suggesting nearly simultaneous activation of these areas. Results are discussed in the context of an updated version of the cross-activation model, the cascaded cross-tuning model of grapheme-color synesthesia. PMID:20547226
9. Estimation of Soil Moisture with L-band Multi-polarization Radar
Science.gov (United States)
Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.
2004-01-01
Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band, multi-polarization, 40° incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components: the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at the different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.
10. The Role of Angular Momentum in the Construction of Electromagnetic Multipolar Fields
Science.gov (United States)
Tischler, Nora; Zambrana-Puyalto, Xavier; Molina-Terriza, Gabriel
2012-01-01
Multipolar solutions of Maxwell's equations are used in many practical applications and are essential for the understanding of light-matter interactions at the fundamental level. Unlike the set of plane wave solutions of electromagnetic fields, the multipolar solutions do not share a standard derivation or notation. As a result, expressions…
11. Transition between viscous dipolar and inertial multipolar dynamos
Science.gov (United States)
Oruba, Ludivine; Dormy, Emmanuel
2014-10-01
We investigate the transition from steady dipolar to reversing multipolar dynamos. The Earth has been argued to lie close to this transition, which could offer a scenario for geomagnetic reversals. We show that the transition between dipolar and multipolar dynamos is characterized by a three-term balance (as opposed to the usually assumed two-term balance), which involves the non-gradient parts of the inertial, viscous and Coriolis forces. We introduce from this equilibrium the sole parameter Ro E^(-1/3) ≡ Re E^(2/3), which accurately describes the transition for a wide database of 132 fully three-dimensional direct numerical simulations of spherical rotating dynamos (courtesy of U. Christensen). This resolves earlier contradictions in the literature on the relevant two-term balance at the transition. Considering only a two-term balance between the non-gradient parts of the Coriolis and inertial forces provides the classical Ro/ℓ*_u. The transition can equivalently be described by Re ℓ*_u², which corresponds to the two-term balance between the non-gradient parts of the inertial and viscous forces.
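The transition parameter above can be checked with a short numerical sketch. Since the Rossby, Reynolds and Ekman numbers are related by Ro = Re·E, the two expressions Ro E^(-1/3) and Re E^(2/3) are the same quantity; the (Ro, E) values below are hypothetical illustrations, not taken from the simulation database.

```python
# Transition parameter for dipolar vs. multipolar dynamos, following the
# balance described above: T = Ro * E**(-1/3) = Re * E**(2/3).
# The sample (Ro, E) values are hypothetical, for illustration only.

def transition_parameter(Ro, E):
    """Ro: Rossby number, E: Ekman number."""
    return Ro * E ** (-1.0 / 3.0)

def transition_parameter_via_Re(Ro, E):
    Re = Ro / E  # Reynolds number, since Ro = Re * E
    return Re * E ** (2.0 / 3.0)

runs = [(2e-3, 1e-4), (5e-2, 1e-4), (1e-2, 1e-5)]
for Ro, E in runs:
    t1 = transition_parameter(Ro, E)
    t2 = transition_parameter_via_Re(Ro, E)
    assert abs(t1 - t2) < 1e-9 * max(t1, t2)  # same parameter, two forms
    regime = "multipolar-like" if t1 > 1.0 else "dipolar-like"
    print(f"Ro={Ro:g}, E={E:g}: T={t1:.3g} ({regime})")
```

Per the abstract, the critical value of this parameter is of order unity, which is why the sketch compares T against 1.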
12. Anatomy of the Binary Black Hole Recoil: A Multipolar Analysis
Science.gov (United States)
Schnittman, Jeremy; Buonanno, Alessandra; vanMeter, James R.; Baker, John G.; Boggs, William D.; Centrella, Joan; Kelly, Bernard J.; McWilliams, Sean T.
2007-01-01
We present a multipolar analysis of the recoil velocity computed in recent numerical simulations of binary black hole coalescence, for both unequal masses and non-zero, non-precessing spins. We show that multipole moments up to and including l = 4 are sufficient to accurately reproduce the final recoil velocity (to ≈98%) and that only a few dominant modes contribute significantly to it (≈95%). We describe how the relative amplitude and, more importantly, the relative phase of these few modes control the way in which the recoil builds up throughout the inspiral, merger, and ring-down phases. We also find that the numerical results can be reproduced, to a high level of accuracy, by an effective Newtonian formula for the multipole moments obtained by replacing the radial separation in the Newtonian formula with an effective radius computed from the numerical data. Beyond the merger, the numerical results are reproduced by a superposition of three Kerr quasi-normal modes. Analytic formulae, obtained by expressing the multipole moments in terms of the fundamental QNMs of a Kerr BH, are able to explain the onset and amount of "anti-kick" for each of the simulations. Lastly, we apply this multipolar analysis to understand the remarkable difference between the amplitudes of planar and non-planar kicks for equal-mass spinning black holes.
13. Anatomy of the binary black hole recoil: A multipolar analysis
International Nuclear Information System (INIS)
We present a multipolar analysis of the gravitational recoil computed in recent numerical simulations of binary black hole coalescence, for both unequal masses and nonzero, nonprecessing spins. We show that multipole moments up to and including l=4 are sufficient to accurately reproduce the final recoil velocity (within ~2%) and that only a few dominant modes contribute significantly to it (within ~5%). We describe how the relative amplitudes, and more importantly, the relative phases, of these few modes control the way in which the recoil builds up throughout the inspiral, merger, and ringdown phases. We also find that the numerical results can be reproduced by an 'effective Newtonian' formula for the multipole moments obtained by replacing the radial separation in the Newtonian formulas with an effective radius computed from the numerical data. Beyond the merger, the numerical results are reproduced by a superposition of three Kerr quasinormal modes. Analytic formulas, obtained by expressing the multipole moments in terms of the fundamental quasinormal modes of a Kerr black hole, are able to explain the onset and amount of 'antikick' for each of the simulations. Lastly, we apply this multipolar analysis to help explain the remarkable difference between the amplitudes of planar and nonplanar kicks for equal-mass spinning black holes
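The post-merger behavior described above, a superposition of three damped Kerr quasinormal modes, can be illustrated with a toy ringdown waveform. The amplitudes, damping times and frequencies below are invented for illustration; they are not the values fitted in the paper.

```python
import math

# Toy ringdown: a superposition of three damped sinusoids (quasinormal modes),
# h(t) = sum_i A_i * exp(-t / tau_i) * cos(omega_i * t + phi_i).
# All mode parameters here are hypothetical, for illustration only.
modes = [
    # (A_i, tau_i [s], omega_i [rad/s], phi_i)
    (1.00, 0.010, 2 * math.pi * 250.0, 0.0),
    (0.40, 0.007, 2 * math.pi * 420.0, 1.2),
    (0.15, 0.005, 2 * math.pi * 600.0, 2.5),
]

def ringdown(t):
    return sum(A * math.exp(-t / tau) * math.cos(w * t + phi)
               for A, tau, w, phi in modes)

# Sample the waveform: the envelope decays roughly exponentially,
# so late-time amplitudes are much smaller than early ones.
ts = [i * 1e-4 for i in range(500)]  # 0 to 50 ms
h = [ringdown(t) for t in ts]
early = max(abs(x) for x in h[:100])   # peak over 0-10 ms
late = max(abs(x) for x in h[400:])    # peak over 40-50 ms
print(f"peak 0-10 ms: {early:.3f}, peak 40-50 ms: {late:.6f}")
```

A fit of such a superposition to the numerical waveform beyond the merger is the kind of reconstruction the abstract describes.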
14. Hepatic radiofrequency ablation using multiple probes: vivo and in vivo comparative studies of monopolar versus multipolar modes
International Nuclear Information System (INIS)
We wanted to compare the efficiency of multipolar radiofrequency ablation (RFA) using three perfused-cooled electrodes with the multiple overlapping and simultaneous monopolar techniques for creating an ablation zone in ex vivo bovine livers and in vivo porcine livers. In the ex vivo experiments, we used a 200 W generator (Valleylab, CC-3 model) and three perfused-cooled electrodes or internally cooled electrodes to create 30 coagulation zones by performing consecutive monopolar RFA (group A, n=10), simultaneous monopolar RFA (group B, n=10) or multipolar RFA (group C, n=10) in explanted bovine livers. In the consecutive mode, three ablation spheres were created by sequentially applying 150 watts of radiofrequency (RF) energy to the internally cooled electrodes for 12 minutes each, for a total of 36 minutes. In the simultaneous monopolar and multipolar modes, RF energy was concurrently applied to the three perfused-cooled electrodes for 20 minutes at 150 watts with instillation of 6% hypertonic saline at 2 mL/min. During RFA, we measured the temperature of the treated area at its center. The changes in impedance, current and liver temperature during RFA, as well as the dimensions of the thermal ablation zones, were compared among the three groups. In the in vivo experiments, three coagulations were created by performing multipolar RFA in a pig via laparotomy, using the same parameters as in the ex vivo study. In the ex vivo experiments, the impedance gradually decreased during RFA in groups B and C, but in group A the impedance increased during RFA, which induced activation of the pulsed RF technique. In groups A, B and C, the mean final temperatures were 80 ± 10 °C, 69 ± 18 °C and 79 ± 12 °C, respectively, and the mean ablation-zone volumes were … cm³ (group A), 44.9 ± 12.7 cm³ (group B) and 78.9 ± 6.9 cm³ (group C).
For the multiple-probe RFA, the multipolar mode with hypertonic saline instillation was more efficient in generating large areas of thermal ablation than either the consecutive or simultaneous monopolar modes
15. The Neural Dynamics of Fronto-Parietal Networks in Childhood Revealed using Magnetoencephalography.
Science.gov (United States)
Astle, Duncan E; Luckhoo, Henry; Woolrich, Mark; Kuo, Bo-Cheng; Nobre, Anna C; Scerif, Gaia
2014-11-19
Our ability to hold information in mind is limited, requires a high degree of cognitive control, and is necessary for many subsequent cognitive processes. Children, in particular, are highly variable in how, trial-by-trial, they manage to recruit cognitive control in service of memory. Fronto-parietal networks, typically recruited under conditions where this cognitive control is needed, undergo protracted development. We explored, for the first time, whether dynamic changes in fronto-parietal activity could account for children's variability in tests of visual short-term memory (VSTM). We recorded oscillatory brain activity using magnetoencephalography (MEG) as 9- to 12-year-old children and adults performed a VSTM task. We combined temporal independent component analysis (ICA) with general linear modeling to test whether the strength of fronto-parietal activity correlated with VSTM performance on a trial-by-trial basis. In children, but not adults, slow-frequency theta (4-7 Hz) activity within a right-lateralized fronto-parietal network in anticipation of the memoranda predicted the accuracy with which those memory items were subsequently retrieved. These findings suggest that inconsistent use of anticipatory control mechanisms contributes significantly to trial-to-trial variability in VSTM maintenance performance. PMID:25410426
16. Cytokinesis failure and successful multipolar mitoses drive aneuploidy in glioblastoma cells.
Science.gov (United States)
Telentschak, Sergej; Soliwoda, Mark; Nohroudi, Klaus; Addicks, Klaus; Klinz, Franz-Josef
2015-04-01
Glioblastoma (GB) is the most frequent human brain tumor and is associated with a poor prognosis. Multipolar mitoses and spindles have occasionally been observed in cultured glioblastoma cells and in glioblastoma tissues, but their mode of origin and relevance have remained unclear. In the present study, we investigated a novel GB cell line (SGB4) exhibiting mitotic aberrations and established a functional link between cytokinesis failure, centrosome amplification, multipolar mitosis and aneuploidy in glioblastoma. Long-term live cell imaging showed that >3% of mitotic SGB4 cells underwent multipolar mitosis (tripolar > tetrapolar > pentapolar). A significant number of daughter cells generated by multipolar mitosis were viable and completed several rounds of mitosis. Pedigree analysis of mitotic events revealed that in many cases a bipolar mitosis with failed cytokinesis occurred prior to a multipolar mitosis. Additionally, we observed that SGB4 cells were also able to undergo a bipolar mitosis after failed cytokinesis. Colchicine-induced mitotic arrest and metaphase spreads demonstrated that SGB4 cells had a modal chromosome number of 58, with a range of 23 to 170. Approximately 82% of SGB4 cells were hyperdiploid (47-57 chromosomes) or hypotriploid (58-68 chromosomes). In conclusion, SGB4 cells passed through multipolar cell divisions and generated viable progeny by reductive mitoses. Our results identify cytokinesis failure occurring before and after multipolar or bipolar mitoses as an important mechanism for generating chromosomal heterogeneity in glioblastoma cells. PMID:25625503
17. Transition between viscous dipolar and inertial multipolar dynamos
CERN Document Server
Oruba, Ludivine
2014-01-01
We show that the transition between steady dipolar and fluctuating multipolar dynamos is characterized by a three-term balance between the non-gradient parts of inertial, viscous and Coriolis forces. We derive from this equilibrium the sole parameter Ro E$^{-1/3} \equiv$ Re E$^{2/3}$, which accurately describes the transition for a wide database of 132 fully three-dimensional direct numerical simulations of spherical rotating dynamos (courtesy of U. Christensen). This transition can be equivalently described by Ro/$\ell^\star_u$ (resp. Re $\ell^{\star\,2}_u$), which corresponds to the two-term balance between the non-gradient part of the Coriolis force and of inertial (resp. viscous) forces. An appropriate definition of the non-dimensional dissipation length scale $\ell^\star_u$ (as introduced in Oruba and Dormy, 2014) provides a critical value of this parameter of order unity at the transition.
18. Anatomy of the binary black hole recoil: A multipolar analysis
CERN Document Server
Schnittman, Jeremy D; van Meter, James R; Baker, John G; Boggs, William D; Centrella, Joan; Kelly, Bernard J; McWilliams, Sean T
2007-01-01
We present a multipolar analysis of the gravitational recoil computed in recent numerical simulations of binary black hole (BH) coalescence, for both unequal masses and non-zero, non-precessing spins. We show that multipole moments up to and including l=4 are sufficient to accurately reproduce the final recoil velocity (within ~2%) and that only a few dominant modes contribute significantly to it (within ~5%). We describe how the relative amplitudes, and more importantly, the relative phases, of these few modes control the way in which the recoil builds up throughout the inspiral, merger, and ringdown phases. We also find that the numerical results can be reproduced by an "effective Newtonian" formula for the multipole moments obtained by replacing the radial separation in the Newtonian formulae with an effective radius computed from the numerical data. Beyond the merger, the numerical results are reproduced by a superposition of three Kerr quasi-normal modes (QNMs). Analytic formulae, obtained by expressin...
19. Rf multipolar plasma for broad and reactive ion beams
International Nuclear Information System (INIS)
Hot-cathode dc multipolar plasma sources are very efficient but have lifetime and contamination problems when they are operated with chemically active gases. As an alternative solution, the rf excitation of a triode structure immersed in a multicusp magnetic field has been developed. The structure has an internal cathode, an anode around which the magnet lines run, and a third electrode which is either the target electrode in the case of 'plasma processing' or the beam-forming electrode in the case of 'ion beam processing'. The source has been operated with oxygen and fluorocarbon gases without any lifetime problems. The discharge may be run down to 10⁻⁴ torr (within the source chamber) and creates a plasma which is homogeneous to ±1.5% over a 175 mm diameter section and which delivers at the beam-forming electrode a current density of about 1 mA cm⁻² for 500 W rf power. (author)
20. Multipolar Black Body Radiation Shifts for the Single Ion Clocks
CERN Document Server
Arora, Bindiya; Sahoo, B K
2011-01-01
Appraising the projected $10^{-18}$ fractional uncertainty of optical frequency standards based on singly ionized ions, we estimate the black-body radiation (BBR) shifts due to the magnetic dipole (M1) and electric quadrupole (E2) multipoles of the magnetic and electric fields, respectively. Multipolar scalar polarizabilities are determined for the singly ionized calcium (Ca$^+$) and strontium (Sr$^+$) ions using the relativistic coupled-cluster method, though the theory can be applied to any single-ion clock proposal. The expected energy shifts for the respective clock transitions are estimated to be $4.38(3) \times 10^{-4}$ Hz for Ca$^+$ and $9.50(7) \times 10^{-5}$ Hz for Sr$^+$. These shifts are large enough that they must be taken into account if the frequency standards are to achieve the foreseen $10^{-18}$ precision goal.
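The quoted absolute shifts can be compared with the $10^{-18}$ goal by dividing by the clock transition frequency. The sketch below uses approximate transition frequencies (Ca$^+$ at ~729 nm, i.e. ~411 THz; Sr$^+$ at ~674 nm, i.e. ~445 THz); these are well-known values but are assumptions not stated in the abstract itself.

```python
# Fractional BBR shift = absolute shift / clock transition frequency.
# The shifts are those quoted in the abstract; the clock frequencies are
# approximate values assumed here, not stated in the abstract.
clocks = {
    "Ca+": {"shift_hz": 4.38e-4, "nu_hz": 4.11e14},  # ~729 nm transition
    "Sr+": {"shift_hz": 9.50e-5, "nu_hz": 4.45e14},  # ~674 nm transition
}
for ion, d in clocks.items():
    frac = d["shift_hz"] / d["nu_hz"]
    print(f"{ion}: fractional BBR shift ~ {frac:.2e}")
```

With these assumed frequencies, the Ca$^+$ shift sits at roughly the $10^{-18}$ level and the Sr$^+$ shift a few parts in $10^{19}$, which is why such multipolar BBR contributions matter for the stated precision goal.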
1. Simulated multipolarized MAPSAR images to distinguish agricultural crops
Directory of Open Access Journals (Sweden)
Wagner Fernando Silva
2012-06-01
Full Text Available Many researchers have shown the potential of Synthetic Aperture Radar (SAR images for agricultural applications, particularly for monitoring regions with limitations in terms of acquiring cloud free optical images. Recently, Brazil and Germany began a feasibility study on the construction of an orbital L-band SAR sensor referred to as MAPSAR (Multi-Application Purpose SAR. This sensor provides L-band images in three spatial resolutions and polarimetric, interferometric and stereoscopic capabilities. Thus, studies are needed to evaluate the potential of future MAPSAR images. The objective of this study was to evaluate multipolarized MAPSAR images simulated by the airborne SAR-R99B sensor to distinguish coffee, cotton and pasture fields in Brazil. Discrimination among crops was evaluated through graphical and cluster analysis of mean backscatter values, considering single, dual and triple polarizations. Planting row direction of coffee influenced the backscatter and was divided into two classes: parallel and perpendicular to the sensor look direction. Single polarizations had poor ability to discriminate the crops. The overall accuracies were less than 59 %, but the understanding of the microwave interaction with the crops could be explored. Combinations of two polarizations could differentiate various fields of crops, highlighting the combination VV-HV that reached 78 % overall accuracy. The use of three polarizations resulted in 85.4 % overall accuracy, indicating that the classes pasture and parallel coffee were fully discriminated from the other classes. These results confirmed the potential of multipolarized MAPSAR images to distinguish the studied crops and showed considerable improvement in the accuracy of the results when the number of polarizations was increased.
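The gain from adding polarizations can be illustrated with a toy nearest-centroid classifier on mean backscatter values, in the spirit of the cluster analysis described above. The class means and samples below (VV, HH, HV in dB) are invented for illustration and are not the measured SAR-R99B values.

```python
# Toy nearest-centroid classification of crop fields from mean backscatter.
# Channels are (VV, HH, HV) in dB; all numbers are hypothetical.
CENTROIDS = {
    "coffee":  (-8.0,  -9.0, -14.0),
    "cotton":  (-8.0,  -7.0, -12.0),
    "pasture": (-12.0, -11.0, -18.0),
}
SAMPLES = [  # (true label, (VV, HH, HV))
    ("coffee",  (-8.2,  -9.1, -14.2)),
    ("cotton",  (-8.1,  -7.2, -12.1)),
    ("pasture", (-11.8, -10.9, -17.9)),
    ("cotton",  (-7.9,  -6.8, -11.8)),
]

def classify(x, channels):
    """Assign x to the class with the nearest centroid over the given channels."""
    def dist(c):
        return sum((x[i] - CENTROIDS[c][i]) ** 2 for i in channels)
    return min(CENTROIDS, key=dist)

def accuracy(channels):
    hits = sum(classify(x, channels) == label for label, x in SAMPLES)
    return hits / len(SAMPLES)

acc_vv = accuracy([0])         # single polarization (VV only)
acc_all = accuracy([0, 1, 2])  # VV + HH + HV
print(f"VV only: {acc_vv:.2f}, three polarizations: {acc_all:.2f}")
```

On this constructed example the VV channel alone cannot separate the two classes that share the same VV mean, while all three channels classify every sample correctly, mirroring the abstract's finding that accuracy improves as polarizations are added.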
2. Simulated multipolarized MAPSAR images to distinguish agricultural crops
Scientific Electronic Library Online (English)
Wagner Fernando, Silva; Bernardo Friedrich Theodor, Rudorff; Antonio Roberto, Formaggio; Waldir Renato, Paradella; José Claudio, Mura.
2012-06-01
Full Text Available SciELO Brazil | Language: English Many researchers have shown the potential of Synthetic Aperture Radar (SAR) images for agricultural applications, particularly for monitoring regions with limitations in terms of acquiring cloud free optical images. Recently, Brazil and Germany began a feasibility study on the construction of an orbital L-band SAR sensor referred to as MAPSAR (Multi-Application Purpose SAR). This sensor provides L-band images in three spatial resolutions and polarimetric, interferometric and stereoscopic capabilities. Thus, studies are needed to evaluate the potential of future MAPSAR images. The objective of this study was to evaluate multipolarized MAPSAR images simulated by the airborne SAR-R99B sensor to distinguish coffee, cotton and pasture fields in Brazil. Discrimination among crops was evaluated through graphical and cluster analysis of mean backscatter values, considering single, dual and triple polarizations. Planting row direction of coffee influenced the backscatter and was divided into two classes: parallel and perpendicular to the sensor look direction. Single polarizations had poor ability to discriminate the crops. The overall accuracies were less than 59 %, but the understanding of the microwave interaction with the crops could be explored. Combinations of two polarizations could differentiate various fields of crops, highlighting the combination VV-HV that reached 78 % overall accuracy. The use of three polarizations resulted in 85.4 % overall accuracy, indicating that the classes pasture and parallel coffee were fully discriminated from the other classes. These results confirmed the potential of multipolarized MAPSAR images to distinguish the studied crops and showed considerable improvement in the accuracy of the results when the number of polarizations was increased.
3. High-resolution imaging and spectroscopy of multipolar plasmonic resonances in aluminum nanoantennas.
Science.gov (United States)
Martin, Jérôme; Kociak, Mathieu; Mahfoud, Zackaria; Proust, Julien; Gérard, Davy; Plain, Jérôme
2014-10-01
We report on the high resolution imaging of multipolar plasmonic resonances in aluminum nanoantennas using electron energy loss spectroscopy (EELS). Plasmonic resonances ranging from near-infrared to ultraviolet (UV) are measured. The spatial distributions of the multipolar resonant modes are mapped and their energy dispersion is retrieved. The losses in the aluminum antennas are studied through the full width at half-maximum of the resonances, unveiling the weight of both interband and radiative damping mechanisms of the different multipolar resonances. In the blue-UV spectral range, high order resonant modes present a quality factor up to 8, two times higher than low order resonant modes at the same energy. This study demonstrates that near-infrared to ultraviolet tunable multipolar plasmonic resonances in aluminum nanoantennas with relatively high quality factors can be engineered. Aluminum nanoantennas are thus an appealing alternative to gold or silver ones in the visible and can be efficiently used for UV plasmonics. PMID:25207386
4. Average multipolarity of continuum transitions in nuclei at high angular momentum
International Nuclear Information System (INIS)
The multipolarity of continuum transitions deexciting high-spin states has been deduced from measured conversion coefficients. The investigated ¹⁴⁶Nd(²⁰Ne, 4n or 5n)¹⁶²,¹⁶¹Yb reactions were selected by gating on discrete lines. The average multipolarity gradually changes from E2 at 0.5 MeV to E1 above 1.5 MeV. (Auth.)
5. Inferring task-related networks using independent component analysis in magnetoencephalography.
Science.gov (United States)
Luckhoo, H; Hale, J R; Stokes, M G; Nobre, A C; Morris, P G; Brookes, M J; Woolrich, M W
2012-08-01
A novel framework for analysing task-positive data in magnetoencephalography (MEG) is presented that can identify task-related networks. Techniques that combine beamforming, the Hilbert transform and temporal independent component analysis (ICA) have recently been applied to resting-state MEG data and have been shown to extract resting-state networks similar to those found in fMRI. Here we extend this approach in two ways. First, we systematically investigate optimisation of time-frequency windows for connectivity measurement. This is achieved by estimating the distribution of functional connectivity scores between nodes of known resting-state networks and contrasting it with a distribution of artefactual scores that are entirely due to spatial leakage caused by the inverse problem. We find that functional connectivity, both in the resting-state and during a cognitive task, is best estimated via correlations in the oscillatory envelope in the 8-20 Hz frequency range, temporally down-sampled with windows of 1-4s. Second, we combine ICA with the general linear model (GLM) to incorporate knowledge of task structure into our connectivity analysis. The combination of ICA with the GLM helps overcome problems of these techniques when used independently: namely, the interpretation and separation of interesting independent components from those that represent noise in ICA and the correction for multiple comparisons when applying the GLM. We demonstrate the approach on a 2-back working memory task and show that this novel analysis framework is able to elucidate the functional networks involved in the task beyond that which is achieved using the GLM alone. We find evidence of localised task-related activity in the area of the hippocampus, which is difficult to detect reliably using standard methods. Task-positive ICA, coupled with the GLM, has the potential to be a powerful tool in the analysis of MEG data. PMID:22569064
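The envelope-correlation measure at the heart of this framework can be sketched in miniature: band-limited signals are converted to amplitude envelopes, down-sampled into windows of a few seconds, and correlated. The sketch below substitutes a crude rectify-and-smooth envelope for the Hilbert envelope and uses synthetic co-modulated oscillations; it illustrates the measure, not the authors' pipeline.

```python
import math

# Envelope correlation in miniature: rectify-and-average stands in for the
# Hilbert envelope; non-overlapping 1 s windows stand in for the 1-4 s
# down-sampling. All signals are synthetic.
FS, DUR = 100, 20  # 100 Hz sampling, 20 s
N = FS * DUR
t = [i / FS for i in range(N)]

def osc(mod_freq, mod_phase, carrier_phase):
    """A 10 Hz carrier with slow sinusoidal amplitude modulation."""
    return [(1 + 0.8 * math.sin(2 * math.pi * mod_freq * x + mod_phase))
            * math.sin(2 * math.pi * 10.0 * x + carrier_phase) for x in t]

sig_a = osc(0.2, 0.0, 0.0)   # two signals sharing one envelope
sig_b = osc(0.2, 0.0, 1.0)
sig_c = osc(0.31, 2.0, 0.5)  # independent envelope

def envelope_windows(sig, win_s=1.0):
    """Rectify, then average within non-overlapping windows."""
    w = int(win_s * FS)
    rect = [abs(x) for x in sig]
    return [sum(rect[i:i + w]) / w for i in range(0, N, w)]

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

ea, eb, ec = (envelope_windows(s) for s in (sig_a, sig_b, sig_c))
print(f"co-modulated: r={pearson(ea, eb):.2f}, independent: r={pearson(ea, ec):.2f}")
```

The two signals sharing an envelope correlate strongly after envelope extraction even though their raw carriers are out of phase, which is the property the 8-20 Hz envelope-correlation measure exploits.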
6. Study of the multipolarity distribution in 12C and 16O nuclei up to 30MeV excitation energy by proton and alpha inelastic scattering
International Nuclear Information System (INIS)
The high-lying states of the 12C and 16O nuclei were investigated and the distributions of the different multipolarities extracted up to 30 MeV excitation energy. The fine structure of the giant dipole resonance was studied and the dipole cross section compared to that expected from the model of Satchler. Other multipolarities (2+, 3-, 4+ T=0) were also observed. The observed cross sections exhaust a significant fraction of the energy-weighted sum rule (EWSR) but never exceed 50% (50% for 1- T=1, 20 to 40% for 2+ T=0, approximately 15% for 3- T=0 and 4+ T=0). The distributions observed compare fairly well with those expected from nuclear-structure calculations, except for 2+ T=0 in 16O, where complete disagreement is observed between experiment and theory
7. Word repetition priming induced oscillations in auditory cortex: a magnetoencephalography study
OpenAIRE
Tavabi, Kambiz; Embick, David; Roberts, Timothy P. L.
2011-01-01
Magnetoencephalography was used in a passive repetition priming paradigm. Words in two frequency bins (high/low) were presented to subjects auditorily. Subjects' brain responses to these stimuli were analyzed using synthetic aperture magnetometry. The main finding is that single-word repetition of low-frequency word pairs significantly attenuated the post-second-word event-related desynchronization in the theta-alpha (5-15 Hz) bands, 200-600 ms post second-word stimulus onset. Peak signif...
8. Magnetoencephalography non-invasively reveals a unique neurophysiological profile of focal-onset epileptic spasms
OpenAIRE
Kakisaka, Yosuke; Gupta, Ajay; Enatsu, Rei; Wang, Zhong I.; Alexopoulos, Andreas V.; Mosher, John C.; Dubarry, Anne-sophie; Hino-fukuyo, Naomi; Burgess, Richard C.
2013-01-01
Epilepsy is defined as a disorder of the brain characterized by an enduring predisposition to experience epileptic seizures and the neurobiological, cognitive, psychological, and social difficulties relating to the condition. An epileptic spasm (ES) is a type of seizure characterized by clusters of short contractions involving axial muscles and proximal segments. However, the precise mechanism of ESs remains unknown. Despite the potential of magnetoencephalography (MEG) as a tool for investig...
10. Cortical magnetoencephalography of deep brain stimulation for the treatment of postural tremor
OpenAIRE
Connolly, Allison T.; Bajwa, Jawad A.; Johnson, Matthew D.
2012-01-01
The effects of deep brain stimulation (DBS) on motor cortex circuitry in Essential tremor (ET) and Parkinson’s disease (PD) patients are not well understood, in part, because most imaging modalities have difficulty capturing and localizing motor cortex dynamics on the same temporal scale as motor symptom expression. Here, we report on the use of magnetoencephalography (MEG) to characterize sources of postural tremor activity within the brain of an ET/PD patient and the effects of bilateral ...
11. FMRP regulates multipolar to bipolar transition affecting neuronal migration and cortical circuitry.
Science.gov (United States)
La Fata, Giorgio; Gärtner, Annette; Domínguez-Iturza, Nuria; Dresselaers, Tom; Dawitz, Julia; Poorthuis, Rogier B; Averna, Michele; Himmelreich, Uwe; Meredith, Rhiannon M; Achsel, Tilmann; Dotti, Carlos G; Bagni, Claudia
2014-12-01
Deficiencies in fragile X mental retardation protein (FMRP) are the most common cause of inherited intellectual disability, fragile X syndrome (FXS), with symptoms manifesting during infancy and early childhood. Using a mouse model for FXS, we found that Fmrp regulates the positioning of neurons in the cortical plate during embryonic development, affecting their multipolar-to-bipolar transition (MBT). We identified N-cadherin, which is crucial for MBT, as an Fmrp-regulated target in embryonic brain. Furthermore, spontaneous network activity and high-resolution brain imaging revealed defects in the establishment of neuronal networks at very early developmental stages, further confirmed by an unbalanced excitatory and inhibitory network. Finally, reintroduction of Fmrp or N-cadherin in the embryo normalized early postnatal neuron activity. Our findings highlight the critical role of Fmrp in the developing cerebral cortex and might explain some of the clinical features observed in patients with FXS, such as alterations in synaptic communication and neuronal network connectivity. PMID:25402856
12. Magnetar Giant Flares in Multipolar Magnetic Fields --- II. Flux Rope Eruptions With Current Sheets
CERN Document Server
Huang, Lei
2014-01-01
We propose a physical mechanism to explain giant flares and radio afterglows in terms of a magnetospheric model containing both a helically twisted flux rope and a current sheet (CS). With the CS present, we solve a mixed boundary-value problem to obtain the magnetospheric field, based on a domain decomposition method. We investigate the properties of the equilibrium curve of the flux rope when the CS is present in background multipolar fields. In response to variations at the magnetar surface, it evolves quasi-statically through stable equilibrium states. The loss of equilibrium occurs at a critical point and, beyond that point, it erupts catastrophically. New features show up when the CS is considered. In particular, we find two kinds of physical behavior, i.e., catastrophic state transition and catastrophic escape. Magnetic energy is released during state transitions, and the released magnetic energy is sufficient to drive giant flares. The flux rope would go away from the magnetar quasi-statically, which is ...
13. The role of angular momentum in the construction of electromagnetic multipolar fields
International Nuclear Information System (INIS)
Multipolar solutions of Maxwell’s equations are used in many practical applications and are essential for the understanding of light-matter interactions at the fundamental level. Unlike the set of plane wave solutions of electromagnetic fields, the multipolar solutions do not share a standard derivation or notation. As a result, expressions originating from different derivations can be difficult to compare. Some of the derivations of the multipolar solutions do not explicitly show their relation to the angular momentum operators, thus hiding important properties of these solutions. In this paper, the relation between two of the most common derivations of this set of solutions is explicitly shown and their relation to the angular momentum operators is exposed. (paper)
14. Connexin 43 controls the multipolar phase of neuronal migration to the cerebral cortex.
Science.gov (United States)
Liu, Xiuxin; Sun, Lin; Torii, Masaaki; Rakic, Pasko
2012-05-22
The prospective pyramidal neurons, migrating from the proliferative ventricular zone to the overlying cortical plate, assume multipolar morphology while passing through the transient subventricular zone. Here, we show that this morphogenetic transformation, from bipolar to multipolar and then back to bipolar again, is associated with expression of connexin 43 (Cx43) and that knockdown of Cx43 retards, whereas its overexpression enhances, this morphogenetic process. In addition, we have observed that knockdown of Cx43 reduces expression of p27, whereas overexpression of p27 rescues the effect of Cx43 knockdown in the multipolar neurons. Furthermore, the functional gap junction/hemichannel domain and the C-terminal domain of Cx43 independently enhance the expression of p27 and promote the morphological transformation and migration of the multipolar neurons in the SVZ/IZ. Collectively, these results indicate that Cx43 regulates the passage of migrating neurons through their multipolar stage via p27 signaling and that interference with this process, by genetic and/or environmental factors, may cause cortical malformations. PMID:22566616
15. On the exterior magnetic field and silent sources in magnetoencephalography
OpenAIRE
George Dassios; Fotini Kariotou
2004-01-01
Two main results are included in this paper. The first one deals with the leading asymptotic term of the magnetic field outside any conductive medium. In accord with physical reality, it is proved mathematically that the leading approximation is a quadrupole term which means that the conductive brain tissue weakens the intensity of the magnetic field outside the head. The second one concerns the orientation of the silent sources when the geometry of the brain model is not a sphere but an elli...
16. Spherical tensor multipolar electrostatics and smooth particle mesh Ewald summation: a theoretical study.
Science.gov (United States)
Zielinski, François; Popelier, Paul L A
2014-07-01
The point-charge approximation, typically used by classical molecular mechanics force-fields, can be overcome by a multipolar expansion. For decades, multipole moments were used only in the context of the rigid body approximation, but recently it has become possible to combine multipolar electrostatics with molecular flexibility. The program DL_MULTI, which is derived from DL_POLY_2, includes efficient multipolar Ewald functionality up to the hexadecapole moment but the code is restricted to rigid bodies. The incorporation of flexibility into DL_MULTI would cause too large an impact on its architecture whereas the package DL_POLY_4 offers a more attractive and sustainable route to handle multipolar electrostatics. This package inherently handles molecular flexibility, which warrants sufficiently transferable atoms or atoms that are "knowledgeable" about their chemical environment (as made possible by quantum chemical topology and machine learning). DL_MULTI uses the spherical multipole formalism, which is mathematically more involved than the Cartesian one but which is more compact. DL_POLY_4 uses the computationally efficient method of smooth particle mesh Ewald (SPME) summation, which has also been parallelized by others. Therefore, combining the strengths of DL_POLY_4 and DL_MULTI poses the challenge of merging SPME with multipolar electrostatics by spherical multipole. In an effort to recast as clearly as possible the principles behind DL_MULTI, its key equations have been reformulated by the more streamlined route involving the algebra of complex numbers, and some of these equations' peculiarities clarified. This article explores theoretically the repercussions of the merging of SPME with spherical multipole electrostatics (as implemented in DL_MULTI). Difficulties in design and implementation of possible future code are discussed. PMID:24958301
17. On the exterior magnetic field and silent sources in magnetoencephalography
Directory of Open Access Journals (Sweden)
Fotini Kariotou
2004-04-01
Two main results are included in this paper. The first one deals with the leading asymptotic term of the magnetic field outside any conductive medium. In accord with physical reality, it is proved mathematically that the leading approximation is a quadrupole term which means that the conductive brain tissue weakens the intensity of the magnetic field outside the head. The second one concerns the orientation of the silent sources when the geometry of the brain model is not a sphere but an ellipsoid which provides the best possible mathematical approximation of the human brain. It is shown that what characterizes a dipole source as “silent” is not the collinearity of the dipole moment with its position vector, but the fact that the dipole moment lives in the Gaussian image space at the point where the position vector meets the surface of the ellipsoid. The appropriate representation for the spheroidal case is also included.
18. Synchronized brain activity and neurocognitive function in patients with low-grade glioma: A magnetoencephalography study
Science.gov (United States)
Bosma, Ingeborg; Douw, Linda; Bartolomei, Fabrice; Heimans, Jan J.; van Dijk, Bob W.; Postma, Tjeerd J.; Stam, Cornelis J.; Reijneveld, Jaap C.; Klein, Martin
2008-01-01
We investigated the mechanisms underlying neurocognitive dysfunction in patients with low-grade glioma (LGG) by relating functional connectivity revealed by magnetoencephalography to neurocognitive function. We administered a battery of standardized neurocognitive tests measuring six neurocognitive domains to a group of 17 LGG patients and 17 healthy controls, matched for age, sex, and educational level. Magnetoencephalography recordings were conducted during an eyes-closed “resting state,” and synchronization likelihood (a measure of statistical correlation between signals) was computed from the delta to gamma frequency bands to assess functional connectivity between different brain areas. We found that, compared with healthy controls, LGG patients performed more poorly in psychomotor function, attention, information processing, and working memory. LGG patients also had significantly higher long-distance synchronization scores in the delta, theta, and lower gamma frequency bands than did controls. In contrast, patients displayed a decline in synchronization likelihood in the lower alpha frequency band. Within the delta, theta, and lower and upper gamma bands, increasing short- and long-distance connectivity was associated with poorer neurocognitive functioning. In summary, LGG patients showed a complex overall pattern of differences in functional resting-state connectivity compared with healthy controls. The significant correlations between neurocognitive performance and functional connectivity in various frequencies and across multiple brain areas suggest that the observed neurocognitive deficits in these patients can possibly be attributed to differences in functional connectivity due to tumor and/or treatment. PMID:18650489
19. Magnetoencephalography demonstrates multiple asynchronous generators during human sleep spindles.
Science.gov (United States)
Dehghani, Nima; Cash, Sydney S; Rossetti, Andrea O; Chen, Chih Chuan; Halgren, Eric
2010-07-01
Sleep spindles are approximately 1 s bursts of 10-16 Hz activity that occur during stage 2 sleep. Spindles are highly synchronous across the cortex and thalamus in animals, and across the scalp in humans, implying correspondingly widespread and synchronized cortical generators. However, prior studies have noted occasional dissociations of the magnetoencephalogram (MEG) from the EEG during spindles, although detailed studies of this phenomenon have been lacking. We systematically compared high-density MEG and EEG recordings during naturally occurring spindles in healthy humans. As expected, EEG was highly coherent across the scalp, with consistent topography across spindles. In contrast, the simultaneously recorded MEG was not synchronous, but varied strongly in amplitude and phase across locations and spindles. Overall, average coherence between pairs of EEG sensors was approximately 0.7, whereas MEG coherence was approximately 0.3 during spindles. Whereas two principal components explained approximately 50% of EEG spindle variance, >15 were required for MEG. Each PCA component for MEG typically involved several widely distributed locations, which were relatively coherent with each other. These results show that, in contrast to current models based on animal experiments, multiple asynchronous neural generators are active during normal human sleep spindles and are visible to MEG. It is possible that these multiple sources may overlap sufficiently in different EEG sensors to appear synchronous. Alternatively, EEG recordings may reflect diffusely distributed synchronous generators that are less visible to MEG. An intriguing possibility is that MEG preferentially records from the focal core thalamocortical system during spindles, and EEG from the distributed matrix system. PMID:20427615
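The PCA comparison reported in this abstract (two principal components capturing ~50% of EEG spindle variance, versus more than 15 for MEG) rests on a standard variance-explained computation. The following is a minimal numpy sketch of that computation on invented synthetic data, not the authors' pipeline; the "EEG-like" and "MEG-like" arrays are toy stand-ins.

```python
import numpy as np

def explained_variance_ratio(data: np.ndarray) -> np.ndarray:
    """Fraction of total variance carried by each principal component
    of a (samples x sensors) data array."""
    centered = data - data.mean(axis=0)
    # Singular values of the centered data give the per-component variances.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return var / var.sum()

# Toy demo: a highly synchronous "EEG-like" array concentrates its variance
# in one component, while an asynchronous "MEG-like" array spreads it out.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
common = np.sin(2 * np.pi * 13 * t)  # one shared spindle-band oscillation
eeg_like = np.outer(common, rng.uniform(0.5, 1.5, 20)) \
    + 0.05 * rng.standard_normal((500, 20))
meg_like = rng.standard_normal((500, 20))

print(explained_variance_ratio(eeg_like)[0])  # close to 1: one component dominates
print(explained_variance_ratio(meg_like)[0])  # small: variance is spread out
```

The contrast in the first component's share mirrors the qualitative finding, though the real analysis was done on sensor recordings, not synthetic sinusoids.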
20. Multipolar radiofrequency ablation using internally cooled electrodes in ex vivo bovine liver: Correlation between volume of coagulation and amount of applied energy
International Nuclear Information System (INIS)
Purpose: To evaluate the relationship between applied energy and volume of coagulation induced by multipolar radiofrequency (RF) ablation. Methods and materials: Multipolar RF ablations (n = 80) were performed in ex vivo bovine liver. Three bipolar applicators with two electrodes located on each applicator shaft were placed in a triangular array. The power output (75–225 W) and the distance between the applicators (2, 3, 4, 5 cm) were systematically varied. The volume of confluent white coagulation and the amount of applied energy were assessed. Based on our experimental data, the relationship between the volume of coagulation and applied energy was assessed by nonlinear regression analysis. The variability explained by the model was determined by the parameter r². Results: The volume of coagulation increases with higher amounts of applied energy. The maximum amount of energy was applied at a power output of 75 W and an applicator distance of 5 cm. The corresponding maximum volume of coagulation was 324 cm³ and required an application of 453 kJ. The relationship between amount of applied energy (E) and volume (V) of coagulation can be described by the function V = 4.39·E^0.7 (r² = 0.88). By approximation, the volume of coagulation can be calculated by the linear function V = 0.61·E + 40.7 (r² = 0.87). Conclusion: Ex vivo, the relationship between volume of coagulation and amount of applied energy can be described by mathematical modeling. The amount of applied energy correlates with the volume of coagulation and may be a useful parameter to monitor multipolar RF ablation.
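The two regression fits quoted in this abstract (V = 4.39·E^0.7 and V = 0.61·E + 40.7, with E in kJ and V in cm³) can be evaluated directly. A minimal sketch using only the coefficients reported above; the function names are ours, not the authors':

```python
def volume_power_model(energy_kj: float) -> float:
    """Nonlinear fit from the abstract: V = 4.39 * E^0.7 (r^2 = 0.88)."""
    return 4.39 * energy_kj ** 0.7

def volume_linear_model(energy_kj: float) -> float:
    """Linear approximation from the abstract: V = 0.61 * E + 40.7 (r^2 = 0.87)."""
    return 0.61 * energy_kj + 40.7

# The maximum reported application of 453 kJ produced ~324 cm^3 of coagulation;
# both fitted models reproduce that measurement within a few percent.
print(volume_power_model(453.0))   # ~317.5 cm^3
print(volume_linear_model(453.0))  # 317.03 cm^3
```

That both models land within roughly 2% of the measured 324 cm³ at the largest energy is consistent with the near-identical r² values the authors report.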
1. Spatiotemporal neural interactions underlying continuous drawing movements as revealed by magnetoencephalography.
Science.gov (United States)
Christopoulos, Vassilios N; Leuthold, Arthur C; Georgopoulos, Apostolos P
2012-10-01
Continuous and sequential movements are controlled by widely distributed brain regions. A series of studies have contributed to understanding the functional role of these regions in a variety of visuomotor tasks. However, little is known about the neural interactions underpinning continuous movements. In the current study, we examine the spatiotemporal neural interactions underlying continuous drawing movements and the association of them with behavioral components. We conducted an experiment in which subjects copied a pentagon continuously for ~45 s using an XY joystick, while neuromagnetic fluxes were recorded from their head using a 248-sensor whole-head magnetoencephalography (MEG) device. Each sensor time series was rendered stationary and non-autocorrelated by applying an autoregressive integrated moving average model and taking the residuals. We used the directional variability of the movement as a behavioral measure of the controls generated. The main objective of this study was to assess the relation between neural interactions and the variability of movement direction. That is, we divided the continuous recordings into consecutive periods (i.e., time-bins) of 51 steps duration and computed the pairwise cross-correlations between the prewhitened time series in each time-bin. The circular standard deviation of the movement direction within each time-bin provides an estimate of the directional variability of the 51-ms trajectory segment. We looked at the association between neural interactions and variability of movement direction, separately for each pair of sensors, by running a cross-correlation analysis between the strength of the MEG pairwise cross-correlations and the circular standard deviations. 
We identified two types of neuronal networks: in one, the neural interactions are correlated with the directional variability of the movement at negative time-lags (feedforward), and in the other, the neural interactions are correlated with the directional variability of the movement at positive time-lags (feedback). Sensors associated mostly with feedforward processes are distributed in the left hemisphere and the right occipital-temporal junction, whereas sensors related to feedback processes are distributed in the right hemisphere and the left cerebellar hemisphere. These results are in line with findings from a series of previous studies showing that specific brain regions are involved in feedforward and feedback control processes to plan, perform, and correct movements. Additionally, we looked at whether changes in movement direction modulate the neural interactions. Interestingly, we found a preponderance of sensors associated with changes in movement direction over the right hemisphere-ipsilateral to the moving hand. These sensors exhibit stronger coupling with the rest of the sensors for trajectory segments with high rather than low directional movement variability. We interpret these results as evidence that ipsilateral cortical regions are recruited for continuous movements when the curvature of the trajectory increases. To the best of our knowledge, this is the first study that shows how neural interactions are associated with a behavioral control parameter in continuous and sequential movements. PMID:22923206
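The binning-and-variability step of the pipeline described above (consecutive 51-step time-bins, circular standard deviation of movement direction per bin) can be sketched in a few lines. A hedged numpy illustration: the 51-step bin size comes from the abstract, everything else (function names, synthetic directions) is invented for the demo.

```python
import numpy as np

def circular_std(angles: np.ndarray) -> float:
    """Circular standard deviation sqrt(-2 ln R) of directions in radians,
    where R is the mean resultant length."""
    r = np.abs(np.mean(np.exp(1j * angles)))
    return np.sqrt(-2.0 * np.log(r))

def directional_variability(directions: np.ndarray, bin_size: int = 51) -> np.ndarray:
    """Circular std of movement direction in consecutive non-overlapping bins,
    mirroring the 51-step time-bins described in the abstract."""
    n_bins = len(directions) // bin_size
    return np.array([circular_std(directions[i * bin_size:(i + 1) * bin_size])
                     for i in range(n_bins)])

# Demo: a straight trajectory segment has near-zero directional variability,
# a wandering one has much more.
rng = np.random.default_rng(3)
straight = np.full(102, 0.5) + 0.01 * rng.standard_normal(102)
wandering = rng.uniform(-np.pi, np.pi, 102)
print(directional_variability(straight))   # near 0 in both bins
print(directional_variability(wandering))  # large in both bins
```

The per-bin values computed this way are what the study then cross-correlates against the strength of the MEG sensor-pair correlations.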
2. Wnt Signaling Regulates Multipolar-to-Bipolar Transition of Migrating Neurons in the Cerebral Cortex.
Science.gov (United States)
Boitard, Michael; Bocchi, Riccardo; Egervari, Kristof; Petrenko, Volodymyr; Viale, Beatrice; Gremaud, Stéphane; Zgraggen, Eloisa; Salmon, Patrick; Kiss, Jozsef Z
2015-03-01
The precise timing of pyramidal cell migration from the ventricular germinal zone to the cortical plate is essential for establishing cortical layers, and migration errors can lead to neurodevelopmental disorders underlying psychiatric and neurological diseases. Here, we report that Wnt canonical as well as non-canonical signaling is active in pyramidal precursors during radial migration. We demonstrate using constitutive and conditional genetic strategies that transient downregulation of canonical Wnt/β-catenin signaling during the multipolar stage plays a critical role in polarizing and orienting cells for radial migration. In addition, we show that reduced canonical Wnt signaling is triggered cell autonomously by time-dependent expression of Wnt5A and activation of non-canonical signaling. We identify ephrin-B1 as a canonical Wnt-signaling-regulated target in control of the multipolar-to-bipolar switch. These findings highlight the critical role of Wnt signaling activity in neuronal positioning during cortical development. PMID:25732825
3. Wnt Signaling Regulates Multipolar-to-Bipolar Transition of Migrating Neurons in the Cerebral Cortex
Directory of Open Access Journals (Sweden)
Michael Boitard
2015-03-01
The precise timing of pyramidal cell migration from the ventricular germinal zone to the cortical plate is essential for establishing cortical layers, and migration errors can lead to neurodevelopmental disorders underlying psychiatric and neurological diseases. Here, we report that Wnt canonical as well as non-canonical signaling is active in pyramidal precursors during radial migration. We demonstrate using constitutive and conditional genetic strategies that transient downregulation of canonical Wnt/β-catenin signaling during the multipolar stage plays a critical role in polarizing and orienting cells for radial migration. In addition, we show that reduced canonical Wnt signaling is triggered cell autonomously by time-dependent expression of Wnt5A and activation of non-canonical signaling. We identify ephrin-B1 as a canonical Wnt-signaling-regulated target in control of the multipolar-to-bipolar switch. These findings highlight the critical role of Wnt signaling activity in neuronal positioning during cortical development.
4. The emerging multi-polar world and China's grand game
Energy Technology Data Exchange (ETDEWEB)
Gupta, Rajan [Los Alamos National Laboratory
2011-01-19
This talk outlines a scenario describing an emerging multipolar world that is aligned with geographical regions. The stability and security of this multipolar world are examined with respect to demographics, trade (economics), resource constraints, and development. In particular, I focus on Asia, which has two large countries, China and India, competing for resources and markets, and examine the emerging regional relations, opportunities and threats. These relationships must overcome many hurdles - the Subcontinent is in a weak position politically and strategically and faces many threats, and China's growing power could help stabilize it or create new threats. Since the fate of 1.5 billion (2.4 billion by 2050) people depends on how the Subcontinent evolves, this talk is meant to initiate a discussion of what China and India can do to help the region develop and stabilize.
5. Calculation of multipolar exchange interactions in spin-orbital coupled systems.
Science.gov (United States)
Pi, Shu-Ting; Nanguneri, Ravindra; Savrasov, Sergey
2014-02-21
A new method of computing multipolar exchange interactions in spin-orbit coupled systems is developed using a multipolar tensor expansion of the density matrix in local density approximation+U electronic structure calculations. Within the mean field approximation, exchange constants can be mapped onto a series of total energy calculations by the pair-flip approximation technique. The application to uranium dioxide shows an antiferromagnetic superexchange coupling in dipoles but a ferromagnetic one in quadrupoles, which is very different from past studies. Further calculation of the spin-lattice interaction indicates that it is of the same order as the superexchange and characterizes the overall behavior of the quadrupolar part as a competition between them. PMID:24579631
6. Multipolar, magnetic and vibrational lattice dynamics in the low temperature phase of uranium dioxide
OpenAIRE
Caciuffo, R.; Santini, P.; Carretta, S.; Amoretti, G.; Hiess, A.; Magnani, N.; Regnault, L. -p; Lander, G. H.
2013-01-01
We report the results of inelastic neutron scattering experiments performed with triple-axis spectrometers to investigate the low-temperature collective dynamics in the ordered phase of uranium dioxide. The results are in excellent agreement with the predictions of mean-field RPA calculations emphasizing the importance of multipolar superexchange interactions. By comparing neutron scattering intensities in different polarization channels and at equivalent points in different...
7. Connexin 43 controls the multipolar phase of neuronal migration to the cerebral cortex
OpenAIRE
Liu, Xiuxin; Sun, Lin; Torii, Masaaki; Rakic, Pasko
2012-01-01
The prospective pyramidal neurons, migrating from the proliferative ventricular zone to the overlying cortical plate, assume multipolar morphology while passing through the transient subventricular zone. Here, we show that this morphogenetic transformation, from bipolar to multipolar and then back to bipolar again, is associated with expression of connexin 43 (Cx43) and that knockdown of Cx43 retards, whereas its overexpression enhances, this morphogenetic process. In addition, we ha...
8. Angular distribution ratios. A method for determining gamma-ray multipolarities in projectile fragmentation reactions
CERN Document Server
Dombrádi, Z; Timar, J; Azaiez, F; Sorlin, O; Amorini, F; Belleguic, M; Baiborodin, D; Bauchet, A; Becker, F
2003-01-01
The angular distribution of a gamma ray emitted from an aligned state is inhomogeneous, and its pattern is characteristic of the amount of angular momentum transferred by the gamma transition. Depending on the relative weight of the different components, alignments of 15-35% have already been observed for different fragments, suggesting the possibility of measuring inhomogeneous gamma-ray angular distributions, useful for multipolarity determination. (R.P.)
9. Determination of the multipolarity of prompt electromagnetic transitions from angular distributions of conversion electrons. Pt. 1
International Nuclear Information System (INIS)
The formalism for the angular distribution of conversion electrons from aligned states is described for transitions of mixed multipole order. The β angular distribution function provides an excellent method for assigning multipolarities E1, M1, E2 and (M1 + E2) to prompt decay lines. The applicability of the method is investigated for different spins, electron energies and Z-values. The influence of attenuation factors in angular distribution measurements is discussed and the effect on the multipole assignment is examined. (orig.)
10. An axosomatic and axodendritic multipolar neuron in the lizard cerebral cortex.
OpenAIRE
Bernabeu, A.; Martinez-guijarro, F. J.; La Iglesia, J. A.; Lopez-garcia, C.
1994-01-01
The morphology and synaptic organisation of a type of multipolar neuron of the lizard cerebral cortex were studied by Golgi impregnation, intracellular injection of horseradish peroxidase, electron microscopy, and immunocytochemistry. It is a GABA-immunoreactive interneuron and most likely parvalbumin-immunoreactive. Its conspicuous axonal arbor is characterised by an initial segment arising from the soma or from a juxtasomatic dendritic segment. The initial axon segment ramifies and gives ri...
11. Equations of motion in scalar-tensor theories of gravity: A covariant multipolar approach
OpenAIRE
Obukhov, Yuri N.; Puetzfeld, Dirk
2014-01-01
We discuss the dynamics of extended test bodies for a large class of scalar-tensor theories of gravitation. A covariant multipolar Mathisson-Papapetrou-Dixon type of approach is used to derive the equations of motion in a systematic way for both Jordan and Einstein formulations of these theories. The results obtained provide the framework to experimentally test scalar-tensor theories by means of extended test bodies.
12. Multi-user Linear Precoding for Multi-polarized Massive MIMO System under Imperfect CSIT
OpenAIRE
Park, Jaehyun; Clerckx, Bruno
2014-01-01
The space limitation and the channel acquisition prevent Massive MIMO from being easily deployed in a practical setup. Motivated by current deployments of LTE-Advanced, the use of multi-polarized antennas can be an efficient solution to address the space constraint. Furthermore, the dual-structured precoding, in which a preprocessing based on the spatial correlation and a subsequent linear precoding based on the short-term channel state information at the transmitter (CSIT) ...
13. Denoising and Frequency Analysis of Noninvasive Magnetoencephalography Sensor Signals for Functional Brain Mapping
CERN Document Server
Ukil, A
2015-01-01
Magnetoencephalography (MEG) is an important noninvasive, nonhazardous technology for functional brain mapping, measuring the magnetic fields due to the intracellular neuronal current flow in the brain. However, most often, the inherent level of noise in the MEG sensor data collection process is large enough to obscure the signal(s) of interest. In this paper, a denoising technique based on the wavelet transform and the multiresolution signal decomposition technique along with thresholding is presented, substantiated by application results. Thereafter, different frequency analyses are performed on the denoised MEG signals to identify the major frequencies of the brain oscillations present in the denoised signals. Time-frequency plots (spectrograms) of the denoised signals are also provided.
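The core idea of wavelet denoising described above — decompose, threshold the detail coefficients, reconstruct — can be illustrated with a one-level Haar transform. This is a deliberately minimal numpy stand-in for the full multilevel multiresolution decomposition the paper uses, with a synthetic signal rather than MEG data:

```python
import numpy as np

def haar_denoise(x: np.ndarray, threshold: float) -> np.ndarray:
    """One-level Haar wavelet denoising by soft-thresholding the detail
    coefficients. A minimal sketch of the decompose/threshold/reconstruct
    scheme; real pipelines use multilevel decompositions and other wavelets."""
    assert len(x) % 2 == 0
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # low-pass (approximation)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # high-pass (detail)
    # Soft threshold: shrink small detail coefficients (mostly noise) to zero.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse Haar transform.
    out = np.empty_like(x, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Demo: a slow oscillation buried in high-frequency noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(512)
denoised = haar_denoise(noisy, threshold=0.3)

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_noisy, mse_denoised)  # the denoised error should be smaller
```

Because a slow oscillation contributes almost nothing to the Haar detail coefficients, thresholding removes mostly noise, which is the same intuition behind the multilevel scheme in the paper.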
14. Analysis of magnetoencephalography recordings from Alzheimer's disease patients using embedding entropies.
Science.gov (United States)
Gomez, Carlos; Poza, Jesus; Monge, Jesus; Fernandez, Alberto; Hornero, Roberto
2014-08-01
The aim of this study was to examine the magnetoencephalography (MEG) background activity in Alzheimer's disease (AD) using three embedding entropies: approximate entropy (ApEn), sample entropy (SampEn), and fuzzy entropy (FuzzyEn). These three methods measure time-series regularity. Five minutes of recording were acquired with a 148-channel whole-head magnetometer from 36 AD patients and 24 elderly control subjects. Our results showed that MEG activity was more regular in AD patients than in controls. Additionally, FuzzyEn revealed statistically significant differences between the two groups, whereas ApEn and SampEn did not. The better discriminating results of FuzzyEn in comparison with the other entropy algorithms suggest that it is more efficient for the characterization of MEG activity in AD. PMID:25570055
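The three embedding entropies compared in this study share one template: count how often patterns of length m repeat within a tolerance, then ask how often they still repeat at length m+1. A minimal sample entropy (SampEn) sketch in its textbook form — not the study's exact parameters, and not the fuzzy-membership variant used for FuzzyEn:

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """SampEn = -ln(A/B), where B counts template matches of length m and A of
    length m+1, within a Chebyshev tolerance of r times the signal's std."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(length: int) -> int:
        # All overlapping templates of the given length.
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to every later template.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= tol))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# A periodic signal is more predictable -> lower SampEn than white noise,
# mirroring the "more regular MEG activity in AD" interpretation above.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
regular = np.sin(2 * np.pi * t)
irregular = rng.standard_normal(1000)
se_regular = sample_entropy(regular)
se_irregular = sample_entropy(irregular)
print(se_regular, se_irregular)  # the regular signal yields the lower value
```

Lower entropy for the sine than for white noise is the direction of the effect the authors report for AD patients versus controls.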
15. Large-scale spontaneous fluctuations and correlations in brain electrical activity observed with magnetoencephalography.
Science.gov (United States)
Liu, Zhongming; Fukunaga, Masaki; de Zwart, Jacco A; Duyn, Jeff H
2010-05-15
Knowledge about the intrinsic functional architecture of the human brain has been greatly expanded by the extensive use of resting-state functional magnetic resonance imaging (fMRI). However, the neurophysiological correlates and origins of spontaneous fMRI signal changes remain poorly understood. In the present study, we characterized the power modulations of spontaneous magnetoencephalography (MEG) rhythms recorded from human subjects during wakeful rest (with eyes open and eyes closed) and light sleep. Through spectral, correlation and coherence analyses, we found that the power of resting-state MEG rhythms demonstrated ultraslow fluctuations that were synchronized over a large spatial distance, especially between bilaterally homologous regions in opposite hemispheres. These observations are in line with the known spatio-temporal properties of spontaneous fMRI signals, and further suggest that the coherent power modulation of spontaneous rhythmic activity reflects the electrophysiological signature of the large-scale functional networks previously observed with fMRI in the resting brain. PMID:20123024
16. Use of magnetoencephalography (MEG) to study functional brain networks in neurodegenerative disorders.
Science.gov (United States)
Stam, C J
2010-02-15
The pathophysiological mechanisms underlying clinical symptoms in neurodegenerative disorders such as Parkinson's disease (PD) and Alzheimer's disease (AD) are incompletely understood. Magnetoencephalography (MEG) is a relatively new functional neuroimaging technique, which allows the simultaneous recording of the brain's magnetic activity from large arrays of sensors covering the whole head. MEG studies in PD and AD have identified characteristic patterns of abnormal oscillatory activity in different frequency bands. Furthermore, MEG studies aimed at the characterization of distributed functional networks have demonstrated distinct patterns of abnormal connectivity in demented and non-demented PD, as well as in AD. In PD abnormal oscillatory activity and disturbed connectivity may respond differently to dopaminergic treatment. Further studies in this field could benefit from new technological developments such as ultra low field MRI and from the application of a well-defined theoretical framework such as graph theory to the study of disturbed brain networks. PMID:19729174
17. POWER-SHIFTS IN THE GLOBAL ECONOMY. TRANSITION TOWARDS A MULTIPOLAR WORLD ORDER
Directory of Open Access Journals (Sweden)
Ion IGNAT
2013-12-01
The paper aims to analyze the new realities and trends related to the new polarity of the global economy, and thus the reconfiguration of global power centers, a process characterized by two simultaneous trends: the rise of new powers and the relative decline of traditional powers. At the beginning of the 21st century, global power is undergoing two major changes: on the one hand, a transition from West to East, from the Atlantic to the Asia-Pacific, and on the other hand, a diffusion from state to non-state actors. Current global economic power has a multipolar distribution, shared between the United States, the European Union, Japan and the BRICs, with no balance of power between these poles; the rising countries, China especially, harbor strong ambitions and rival the traditional powers represented by the developed countries. The evolution of the main macroeconomic indicators published by the most important global organizations shows a gradual transition towards a multipolar world. The United States is and will remain for a long period of time the global economic leader. However, as China, India and Brazil grow rapidly, and Russia seeks to regain its lost status, the world is becoming multipolar.
18. RP58 Regulates the Multipolar-Bipolar Transition of Newborn Neurons in the Developing Cerebral Cortex
Directory of Open Access Journals (Sweden)
Chiaki Ohtaka-Maruyama
2013-02-01
Accumulating evidence suggests that many brain diseases are associated with defects in neuronal migration, suggesting that this step of neurogenesis is critical for brain organization. However, the molecular mechanisms underlying neuronal migration remain largely unknown. Here, we identified the zinc-finger transcriptional repressor RP58 as a key regulator of neuronal migration via the multipolar-to-bipolar transition. RP58−/− neurons exhibited severe defects in the formation of leading processes and never shifted to the locomotion mode. Cre-mediated deletion of RP58 using in utero electroporation in RP58flox/flox mice revealed that RP58 functions in cell-autonomous multipolar-to-bipolar transition, independent of cell-cycle exit. Finally, we found that RP58 represses Ngn2 transcription to regulate the Ngn2-Rnd2 pathway; Ngn2 knockdown rescued migration defects of the RP58−/− neurons. Our findings highlight the critical role of RP58 in the multipolar-to-bipolar transition via suppression of the Ngn2-Rnd2 pathway in the developing cerebral cortex.
19. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography
Directory of Open Access Journals (Sweden)
Tollkötter Melanie
2006-08-01
Full Text Available Abstract Background A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.
20. Does IQ affect the functional brain network involved in pseudoword reading in students with reading disability? A magnetoencephalography study
OpenAIRE
Simos, Panagiotis G.; Fletcher, Jack M.; Papanicolaou, Andrew C.
2014-01-01
The study examined whether individual differences in performance and verbal IQ affect the profiles of reading-related regional brain activation in 127 students experiencing reading difficulties and typical readers. Using magnetoencephalography in a pseudoword read-aloud task, we compared brain activation profiles of students experiencing word-level reading difficulties who did (n=29) or did not (n=36) meet the IQ-reading achievement discrepancy criterion. Typical readers assigned to a lower-I...
1. Localization of Interictal Epileptiform Activity Using Magnetoencephalography with Synthetic Aperture Magnetometry in Patients with a Vagus Nerve Stimulator
OpenAIRE
Stapleton-Kotloski, Jennifer R.; Kotloski, Robert J.; Boggs, Jane A.; Popli, Gautam; O'Donovan, Cormac A.; Couture, Daniel E.; Cornell, Cassandra; Godwin, Dwayne W.
2014-01-01
Magnetoencephalography (MEG) provides useful and non-redundant information in the evaluation of patients with epilepsy, and in particular, during the pre-surgical evaluation of pharmaco-resistant epilepsy. Vagus nerve stimulation (VNS) is a common treatment for pharmaco-resistant epilepsy. However, interpretation of MEG recordings from patients with a VNS is challenging due to the severe magnetic artifacts produced by the VNS. We used synthetic aperture magnetometry (g2) [SAM(g2)], an adaptiv...
2. Auditory and Cognitive Deficits Associated with Acquired Amusia after Stroke: A Magnetoencephalography and Neuropsychological Follow-Up Study
OpenAIRE
Särkämö, Teppo; Tervaniemi, Mari; Soinila, Seppo; Autti, Taina; Silvennoinen, Heli M.; Laine, Matti; Hietanen, Marja; Pihko, Elina
2010-01-01
Acquired amusia is a common disorder after damage to the middle cerebral artery (MCA) territory. However, its neurocognitive mechanisms, especially the relative contribution of perceptual and cognitive factors, are still unclear. We studied cognitive and auditory processing in the amusic brain by performing neuropsychological testing as well as magnetoencephalography (MEG) measurements of frequency and duration discrimination using magnetic mismatch negativity (MMNm) recordings. Fifty-three p...
3. The value of magnetoencephalography for seizure-onset zone localization in magnetic resonance imaging-negative partial epilepsy
OpenAIRE
Jung, Julien; Bouet, Romain; Delpuech, Claude; Ryvlin, Philippe; Isnard, Jean; Guenot, Marc; Bertrand, Olivier; Hammers, Alexander; Mauguière, François
2013-01-01
Surgical treatment of epilepsy is a challenge for patients with non-contributive brain magnetic resonance imaging. However, surgery is feasible if the seizure-onset zone is precisely delineated through intracranial electroencephalography recording. We recently described a method, volumetric imaging of epileptic spikes, to delineate the spiking volume of patients with focal epilepsy using magnetoencephalography. We postulated that the extent of the spiking volume delineated with volumetric ima...
4. Saccadic Preparation in the Frontal Eye Field Is Modulated by Distinct Trial History Effects as Revealed by Magnetoencephalography
OpenAIRE
Lee, Adrian K. C.; Hämäläinen, Matti S.; Dyckman, Kara A.; Barton, Jason J. S.; Manoach, Dara S.
2010-01-01
Optimizing outcomes involves rapidly and continuously adjusting behavior based on context. While most behavioral studies focus on immediate task conditions, responses to events are also influenced by recent history. We used magnetoencephalography and a saccadic paradigm to investigate the neural bases of 2 trial history effects that are well characterized in the behavioral eye movement literature: task-switching and the prior-antisaccade effect. We found that switched trials were associated w...
5. Magnetoencefalografía: mapeo de la dinámica espaciotemporal de la actividad neuronal / Magnetoencephalography: mapping the spatiotemporal dynamics of neuronal activity
Scientific Electronic Library Online (English)
Yang, Zhang; Wenbo, Zhang; Vicenta, Reynoso Alcántara; Juan, Silva-Pereyra.
2014-01-01
Full Text Available Magnetoencephalography is a noninvasive neuroimaging technique that measures, with great temporal accuracy, the magnetic fields on the surface of the head produced by neuronal currents in brain regions. This technique is extremely useful in basic and clinical research because it can also locate the sources of neural activity in the brain. This review covers the basic biophysics of the method and discusses findings on processes such as speech perception, auditory attention and the integration of visual and auditory information, which are important in this type of research. It also illustrates the advantages and limitations of magnetoencephalography and outlines new trends in research with this technique.
6. Left atrial voltage remodeling after pulmonary venous isolation with multipolar radiofrequency ablation
Directory of Open Access Journals (Sweden)
Francesco Laurenzi
2013-11-01
Full Text Available Purpose: Pulmonary vein isolation (PVI) is the accepted primary endpoint for catheter ablation of atrial fibrillation (AF). The aim of this study was to evaluate the level of PVI by PVAC, a multipolar circular catheter utilizing bipolar/unipolar radiofrequency (RF) energy. Methods: Twenty patients with paroxysmal AF underwent PVAC ablation. PVI was validated by voltage reduction and pacing tests. Before and after RF ablation, left atrium (LA) and PV electroanatomic mapping (EAM) were performed with the EnSite NavX system. Voltage abatement was considered for potentials 24mm: 9/20 (45%) vs 11/57 (19%), p
7. The vapor-liquid interface potential of (multi)polar fluids and its influence on ion solvation
Science.gov (United States)
Horváth, Lorand; Beu, Titus; Manghi, Manoel; Palmeri, John
2013-04-01
The interface between the vapor and liquid phase of quadrupolar-dipolar fluids is the seat of an electric interfacial potential whose influence on ion solvation and distribution is not yet fully understood. To obtain further microscopic insight into water specificity we first present extensive classical molecular dynamics simulations of a series of model liquids with variable molecular quadrupole moments that interpolates between SPC/E water and a purely dipolar liquid. We then pinpoint the essential role played by the competing multipolar contributions to the vapor-liquid and the solute-liquid interface potentials in determining an important ion-specific direct electrostatic contribution to the ionic solvation free energy for SPC/E water—dominated by the quadrupolar and dipolar parts—beyond the dominant polarization one. Our results show that the influence of the vapor-liquid interfacial potential on ion solvation is strongly reduced due to the strong partial cancellation brought about by the competing solute-liquid interface potential.
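Schematically (in our own notation and sign conventions, which are not quoted from the paper), the competition described above can be summarized as:

```latex
% Schematic decomposition of the direct electrostatic contribution to the
% solvation free energy of an ion of charge q (notation is ours):
\begin{equation}
  \Delta G_{\mathrm{elec}} \;\simeq\; q\,\phi_{\mathrm{net}},
  \qquad
  \phi_{\mathrm{net}} \;=\; \phi_{\mathrm{vl}} + \phi_{\mathrm{sl}},
\end{equation}
```

where the vapor-liquid potential \(\phi_{\mathrm{vl}}\) and the competing solute-liquid potential \(\phi_{\mathrm{sl}}\) each carry quadrupolar and dipolar contributions; their partial cancellation in \(\phi_{\mathrm{net}}\) is what reduces the influence of the vapor-liquid interfacial potential on ion solvation.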
8. The value of magnetoencephalography for seizure-onset zone localization in magnetic resonance imaging-negative partial epilepsy
Science.gov (United States)
Bouet, Romain; Delpuech, Claude; Ryvlin, Philippe; Isnard, Jean; Guenot, Marc; Bertrand, Olivier; Hammers, Alexander; Mauguière, François
2013-01-01
Surgical treatment of epilepsy is a challenge for patients with non-contributive brain magnetic resonance imaging. However, surgery is feasible if the seizure-onset zone is precisely delineated through intracranial electroencephalography recording. We recently described a method, volumetric imaging of epileptic spikes, to delineate the spiking volume of patients with focal epilepsy using magnetoencephalography. We postulated that the extent of the spiking volume delineated with volumetric imaging of epileptic spikes could predict the localizability of the seizure-onset zone by intracranial electroencephalography investigation and outcome of surgical treatment. Twenty-one patients with non-contributive magnetic resonance imaging findings were included. All patients underwent intracerebral electroencephalography investigation through stereotactically implanted depth electrodes (stereo-electroencephalography) and magnetoencephalography with delineation of the spiking volume using volumetric imaging of epileptic spikes. We evaluated the spatial congruence between the spiking volume determined by magnetoencephalography and the localization of the seizure-onset zone determined by stereo-electroencephalography. We also evaluated the outcome of stereo-electroencephalography and surgical treatment according to the extent of the spiking volume (focal, lateralized but non-focal or non-lateralized). For all patients, we found a spatial overlap between the seizure-onset zone and the spiking volume. For patients with a focal spiking volume, the seizure-onset zone defined by stereo-electroencephalography was clearly localized in all cases and most patients (6/7, 86%) had a good surgical outcome. Conversely, stereo-electroencephalography failed to delineate a seizure-onset zone in 57% of patients with a lateralized spiking volume, and in the two patients with bilateral spiking volume. 
Four of the 12 patients with non-focal spiking volumes were operated upon; none became seizure-free. Overall, patients with focal magnetoencephalography results obtained with volumetric imaging of epileptic spikes are good surgical candidates, and the implantation strategy should incorporate the results of volumetric imaging of epileptic spikes. Conversely, patients with non-focal magnetoencephalography results are less likely to have a localizable seizure-onset zone, and stereo-electroencephalography is not advised unless clear localizing information is provided by other presurgical investigation methods. PMID:24014520
9. Magnetoencephalography based on high-Tc superconductivity: a closer look into the brain?
CERN Document Server
Öisjöen, F; Figueras, G A; Chukharkin, M L; Kalabukhov, A; Hedström, A; Elam, M; Winkler, D
2011-01-01
Magnetoencephalography (MEG) enables the study of brain activity by recording the magnetic fields generated by neural currents and has become an important technique for neuroscientists in research and clinical settings. Unlike the liquid-helium cooled low-Tc superconducting quantum interference devices (SQUIDs) that have been at the heart of modern MEG systems since their invention, high-Tc SQUIDs can operate with liquid nitrogen cooling. The relaxation of thermal insulation requirements allows for a reduction in the stand-off distance between the sensor and the room-temperature environment from a few centimeters to less than a millimeter, where MEG signal strength is significantly higher. Despite this advantage, high-Tc SQUIDs have only been used for proof-of-principle MEG recordings of well-understood evoked activity. Here we show high-Tc SQUID-based MEG may be capable of providing novel information about brain activity due to the close proximity of the sensor to the head. We have performed single- and two-...
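A back-of-the-envelope sketch of why the reduced stand-off matters, assuming the roughly 1/r² falloff of a current-dipole field in a homogeneous medium; the depth and stand-off numbers are illustrative assumptions, not figures from the paper:

```python
# Rough illustration (not from the paper): relative MEG signal gain from a
# smaller sensor stand-off, assuming the field of a point current dipole in
# a homogeneous medium falls off as 1/r^2 with total source-sensor distance.

def relative_signal(depth_mm: float, standoff_mm: float) -> float:
    """Field amplitude (arbitrary units) ~ 1/r^2 at r = depth + stand-off."""
    r = depth_mm + standoff_mm
    return 1.0 / r ** 2

DEPTH = 30.0                           # assumed cortical source depth in mm
low_tc = relative_signal(DEPTH, 20.0)  # ~2 cm stand-off (liquid-helium dewar)
high_tc = relative_signal(DEPTH, 1.0)  # ~1 mm stand-off (liquid nitrogen)
gain = high_tc / low_tc
print(f"signal gain from reduced stand-off: {gain:.2f}x")
```

For these assumed numbers the gain is a factor of a few; shallower sources benefit even more, which is the practical argument for high-Tc sensors.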
10. Cognitive impairments in schizophrenia as assessed through activation and connectivity measures of magnetoencephalography (MEG) data
Directory of Open Access Journals (Sweden)
LeightonBHinkley
2010-11-01
Full Text Available The cognitive dysfunction present in patients with schizophrenia is thought to be driven in part by disorganized connections between higher-order cortical fields. Although studies utilizing EEG, PET and fMRI have contributed significantly to our understanding of these mechanisms, magnetoencephalography (MEG) possesses great potential to answer long-standing questions linking brain interactions to cognitive operations in the disorder. Many experimental paradigms employed in EEG and fMRI are readily extendible to MEG and have expanded our understanding of the neurophysiological architecture present in schizophrenia. Source reconstruction techniques, such as adaptive spatial filtering, take advantage of the spatial localization abilities of MEG, allowing us to evaluate which specific structures contribute to atypical cognition in schizophrenia. Finally, both bivariate and multivariate functional connectivity metrics of MEG data are useful for understanding how these interactions in the brain are impaired in schizophrenia, and how cognitive and clinical outcomes are affected as a result. We also present here data from our own laboratory that illustrates how some of these novel functional connectivity measures, specifically imaginary coherence (IC), are quite powerful in relating disconnectivity in the brain to characteristic behavioral findings in the disorder.
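One of the metrics named in this abstract, imaginary coherence, has a compact definition: the imaginary part of the normalized cross-spectrum, which discards the zero-lag coupling produced by field spread. A minimal sketch of the computation (segment length and the simulated signals are our own assumptions, not taken from the study):

```python
import numpy as np

# Sketch of imaginary coherence: the imaginary part of the normalized
# cross-spectrum between two signals, estimated from FFT segments. Zero-lag
# (volume-conduction) mixing produces a purely real cross-spectrum, so it
# does not contribute to this metric.
def imaginary_coherence(x, y, fs, n_per_seg=256):
    n_seg = min(x.size, y.size) // n_per_seg
    Sxx = Syy = Sxy = 0.0
    for k in range(n_seg):
        seg = slice(k * n_per_seg, (k + 1) * n_per_seg)
        X = np.fft.rfft(x[seg])
        Y = np.fft.rfft(y[seg])
        Sxx = Sxx + np.abs(X) ** 2        # auto-spectra, summed over segments
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)        # cross-spectrum
    coherency = Sxy / np.sqrt(Sxx * Syy)
    freqs = np.fft.rfftfreq(n_per_seg, 1.0 / fs)
    return freqs, np.imag(coherency)

# Two sensors seeing the same source with zero lag (pure field spread):
# ordinary coherence would be near 1, imaginary coherence stays near 0.
rng = np.random.default_rng(4)
s = rng.standard_normal(4096)
x = s + 0.1 * rng.standard_normal(4096)
y = 0.8 * s + 0.1 * rng.standard_normal(4096)
freqs, icoh = imaginary_coherence(x, y, fs=256.0)
```

This insensitivity to instantaneous mixing is precisely why the metric is attractive for MEG disconnectivity studies.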
11. Differential spectral power alteration following acupuncture at different designated places revealed by magnetoencephalography
Science.gov (United States)
You, Youbo; Bai, Lijun; Dai, Ruwei; Xue, Ting; Zhong, Chongguang; Liu, Zhenyu; Wang, Hu; Feng, Yuanyuan; Wei, Wenjuan; Tian, Jie
2012-03-01
As an ancient therapeutic technique in Traditional Chinese Medicine, acupuncture has been used increasingly in modern society to treat a range of clinical conditions as an alternative and complementary therapy. However, acupoint specificity, lying at the core of acupuncture, still faces many controversies. Considering previous neuroimaging studies on acupuncture have mainly employed functional magnetic resonance imaging, which only measures the secondary effect of neural activity on cerebral metabolism and hemodynamics, in the current study, we adopted an electrophysiological measurement technique named magnetoencephalography (MEG) to measure the direct neural activity. 28 healthy college students were recruited in this study. We filtered MEG data into 5 consecutive frequency bands (delta, theta, alpha, beta and gamma band) and grouped 140 sensors into 10 main brain regions (left/right frontal, central, temporal, parietal and occipital regions). Fast Fourier Transformation (FFT) based spectral analysis approach was further performed to explore the differential band-limited power change patterns of acupuncture at Stomach Meridian 36 (ST36) using a nearby nonacupoint (NAP) as control condition. Significantly increased delta power and decreased alpha as well as beta power in bilateral frontal ROIs were observed following stimulation at ST36. Compared with ST36, decreased alpha power in left and right central, right parietal as well as right temporal ROIs were detected in NAP group. Our research results may provide additional evidence for acupoint specificity.
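The band-limited power computation described above (FFT-based spectral analysis over named frequency bands) can be sketched as follows; the band edges are conventional assumptions, since the abstract does not list them:

```python
import numpy as np

# Sketch of FFT-based band-limited power for one sensor. Band edges are
# conventional assumptions (not given in the abstract).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_power(signal: np.ndarray, fs: float) -> dict:
    """Mean spectral power of a signal in each frequency band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic example: a 10 Hz oscillation should dominate the alpha band.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
powers = band_power(x, fs)
```

A sensor-group version would simply average such per-sensor band powers over the 140 sensors grouped into the 10 regions of interest.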
12. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.
Science.gov (United States)
Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo
2015-01-01
Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. PMID:25363137
13. Functional mapping of the sensorimotor cortex: combined use of magnetoencephalography, functional MRI, and motor evoked potentials
Energy Technology Data Exchange (ETDEWEB)
Morioka, T. [Dept. of Neurosurgery, Neurological Inst., Kyushu Univ., Fukuoka (Japan)]; Fujii, K. [Dept. of Neurosurgery, Neurological Inst., Kyushu Univ., Fukuoka (Japan)]; Fukui, M. [Dept. of Neurosurgery, Neurological Inst., Kyushu Univ., Fukuoka (Japan)]; Mizushima, A. [Dept. of Radiology, Kyushu Univ., Fukuoka (Japan)]; Matsumoto, S. [Dept. of Radiology, Kyushu Univ., Fukuoka (Japan)]; Hasuo, K. [Dept. of Radiology, Kyushu Univ., Fukuoka (Japan)]; Yamamoto, T. [Dept. of Otolaryngology, Kyushu Univ., Fukuoka (Japan)]; Tobimatsu, S. [Dept. of Clinical Neurophysiology, Neurological Inst., Kyushu Univ., Fukuoka (Japan)]
1995-10-01
Combined use of magnetoencephalography (MEG), functional magnetic resonance imaging (f-MRI), and motor evoked potentials (MEPs) was carried out on one patient in an attempt to localise precisely a structural lesion to the central sulcus. A small cyst in the right frontoparietal region was thought to be the cause of generalised seizures in an otherwise asymptomatic woman. First the primary sensory cortex was identified with magnetic source imaging (MSI) of somatosensory evoked magnetic fields using MEG and MRI. Second, the motor area of the hand was identified using f-MRI during handsqueezing. Then transcranial magnetic stimulation localised the hand motor area on the scalp, which was mapped onto the MRI. There was a good agreement between MSI, f-MRI and MEP as to the location of the sensorimotor cortex and its relationship to the lesion. Multimodality mapping techniques may thus prove useful in the precise localisation of cortical lesions, and in the preoperative determination of the best treatment for peri-rolandic lesions. (orig.)
14. Functional mapping of the sensorimotor cortex: combined use of magnetoencephalography, functional MRI, and motor evoked potentials
International Nuclear Information System (INIS)
Combined use of magnetoencephalography (MEG), functional magnetic resonance imaging (f-MRI), and motor evoked potentials (MEPs) was carried out on one patient in an attempt to localise precisely a structural lesion to the central sulcus. A small cyst in the right frontoparietal region was thought to be the cause of generalised seizures in an otherwise asymptomatic woman. First the primary sensory cortex was identified with magnetic source imaging (MSI) of somatosensory evoked magnetic fields using MEG and MRI. Second, the motor area of the hand was identified using f-MRI during handsqueezing. Then transcranial magnetic stimulation localised the hand motor area on the scalp, which was mapped onto the MRI. There was a good agreement between MSI, f-MRI and MEP as to the location of the sensorimotor cortex and its relationship to the lesion. Multimodality mapping techniques may thus prove useful in the precise localisation of cortical lesions, and in the preoperative determination of the best treatment for peri-rolandic lesions. (orig.)
15. The neural processing of musical instrument size information in the brain investigated by magnetoencephalography
Science.gov (United States)
Rupp, Andre; van Dinther, Ralph; Patterson, Roy D.
2005-04-01
The specific cortical representation of size was investigated by recording auditory evoked fields (AEFs) elicited by changes of instrument size and pitch. In Experiment 1, a French horn and one scaled to double the size played a three note melody around F3 or its octave, F4. Many copies of these four melodies were played in random order and the AEF was measured continuously. A similar procedure was applied to saxophone sounds in a separate run. In Experiment 2, the size and type of instrument (French horn and saxophone) were varied without changing the octave. AEFs were recorded in five subjects using magnetoencephalography and evaluated by spatio-temporal source analysis with one equivalent dipole in each hemisphere. The morphology of the source waveforms revealed that each note within the melody elicits a well-defined P1-N1-P2 AEF-complex with adaptation for the 2nd and 3rd note. At the transition of size, pitch, or both, a larger AEF-complex was evoked. However, size changes elicited a stronger N1 than pitch changes. Furthermore, this size-related N1 enhancement was larger for French horn than saxophone. The results indicate that the N1 plays an important role in the specific representation of instrument size.
16. Changes in language-specific brain activation after therapy for aphasia using magnetoencephalography: a case study.
Science.gov (United States)
Breier, Joshua I; Maher, Lynn M; Schmadeke, Stephanie; Hasan, Khader M; Papanicolaou, Andrew C
2007-06-01
A patient with chronic aphasia underwent functional imaging during a language comprehension task using magnetoencephalography (MEG) before and after constraint induced language therapy (CILT). In the pre- and immediate post-treatment (TX) scans MEG activity sources were observed within right hemisphere only, and were located in areas homotopic to left hemisphere language areas. There was a significant increase in activation in these areas between the two sessions. This change was not observed in an age-matched patient with chronic aphasia who underwent sequential language testing and MEG scanning across a similar time period without being administered therapy. In the 3-month post-TX scan bilateral activation was observed, including significant activation within the left temporal lobe. The changes in the spatial parameters of the maps of receptive language function after therapy were accompanied by improvement in language function. Results provide support, in the same individual, for a role for both hemispheres in recovery of language function after therapy for chronic aphasia. PMID:17786776
17. A precorrected-FFT method to accelerate the solution of the forward problem in magnetoencephalography.
Science.gov (United States)
Tissari, Satu; Rahola, Jussi
2003-02-21
Accurate localization of brain activity recorded by magnetoencephalography (MEG) requires that the forward problem, i.e. the magnetic field caused by a dipolar source current in a homogeneous volume conductor, be solved precisely. We have used the Galerkin method with piecewise linear basis functions in the boundary element method to improve the solution of the forward problem. In addition, we have replaced the direct method, i.e. the LU decomposition, by a modern iterative method to solve the dense linear system of equations arising from the boundary element discretization. In this paper we describe a precorrected-FFT method which we have combined with the iterative method to accelerate the solution of the forward problem and to avoid the explicit formation of the dense coefficient matrix. For example, with a triangular mesh of 18,000 triangles, the CPU time to solve the forward problem was decreased from 3.5 h to less than 5 min, and the computer memory requirements were decreased from 1.3 GB to 156 MB. The method makes it possible to solve quickly significantly larger problems with widely-used workstations. PMID:12630746
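The speed-up rests on a standard observation: for a translation-invariant kernel sampled on a regular grid, the dense matrix-vector product inside the iterative solver is a convolution, which the FFT evaluates in O(N log N). A toy 1-D demonstration of that core idea (not the authors' BEM implementation, which additionally projects arbitrary element locations onto a grid and applies a local "precorrection"):

```python
import numpy as np

# Toy 1-D illustration of FFT-accelerated matrix-vector products: when the
# interaction kernel depends only on separation, the dense matvec is a
# convolution and never requires forming the N x N matrix explicitly.
n = 64
charges = np.random.default_rng(1).standard_normal(n)
kernel = 1.0 / (1.0 + np.arange(n))   # smoothed 1/r-like interaction k(|i-j|)

# Direct O(N^2) summation: phi_i = sum_j k(|i-j|) q_j
direct = np.array([sum(kernel[abs(i - j)] * charges[j] for j in range(n))
                   for i in range(n)])

# FFT-based O(N log N) evaluation: embed in a length-2N circular convolution
# to avoid wrap-around, then truncate back to N entries.
k_full = np.concatenate([kernel, [0.0], kernel[:0:-1]])  # symmetric, len 2n
q_full = np.concatenate([charges, np.zeros(n)])
fft_result = np.fft.ifft(np.fft.fft(k_full) * np.fft.fft(q_full)).real[:n]
```

The same principle, applied to the grid-projected far-field part of the boundary element matvec, is what cuts the reported solve time from hours to minutes while avoiding the dense coefficient matrix.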
18. A constrained ICA approach for real-time cardiac artifact rejection in magnetoencephalography.
Science.gov (United States)
Breuer, Lukas; Dammers, Jürgen; Roberts, Timothy P L; Shah, N Jon
2014-02-01
Recently, magnetoencephalography (MEG)-based real-time brain-computer interfaces (BCI) have been developed to enable novel and promising methods of neuroscience research and therapy. Artifact rejection prior to source localization greatly enhances the localization accuracy; however, many BCI approaches neglect real-time artifact removal because of its time-consuming processing. With cardiac artifact rejection for real-time analysis (CARTA), we introduce a novel algorithm capable of real-time cardiac artifact (CA) rejection. The method is based on constrained independent component analysis (ICA), where a priori information about the underlying source signal is used to optimize and accelerate signal decomposition. In CARTA, this is performed by estimating the subject's individual density distribution of the cardiac activity, which leads to a subject-specific signal decomposition algorithm. We show that the new method is capable of effectively reducing CAs within one iteration and a time delay of 1 ms. In contrast, Infomax and Extended Infomax ICA did not converge until seven iterations, while FastICA needed at least ten. CARTA was tested and applied to data from three different but most common MEG systems (4-D-Neuroimaging, VSM MedTech Inc., and Elekta Neuromag). The new method therefore contributes to reliable signal analysis utilizing BCI approaches. PMID:24001953
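A heavily simplified stand-in for the idea of reference-guided artifact removal (this is NOT the CARTA algorithm itself; constrained ICA adapts the artifact component to the data, whereas the sketch below simply regresses a known reference waveform out of each channel):

```python
import numpy as np

# Simplified reference-guided cardiac artifact rejection: given an a priori
# cardiac reference waveform, project it out of each MEG channel by
# least-squares regression. The shared idea with constrained ICA is using
# prior knowledge of the cardiac activity to guide the removal.
def remove_cardiac(meg: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """meg: (channels, samples); reference: (samples,). Returns cleaned data."""
    gains = meg @ reference / (reference @ reference)  # per-channel artifact gain
    return meg - np.outer(gains, reference)

# Synthetic check: brain noise plus channel-specific cardiac contamination.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 2000)
cardiac = np.sign(np.sin(2 * np.pi * 1.2 * t))   # crude 72-bpm artifact shape
brain = rng.standard_normal((5, t.size))
contaminated = brain + np.outer(rng.uniform(1.0, 3.0, 5), cardiac)
cleaned = remove_cardiac(contaminated, cardiac)
```

The regression is a single matrix product per channel, which is what makes reference-guided schemes attractive for the millisecond latency budgets of real-time BCI pipelines.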
19. NMR relaxation rate and dynamical structure factors in nematic and multipolar liquids of frustrated spin chains under magnetic fields
OpenAIRE
Sato, Masahiro; Momoi, Tsutomu; Furusaki, Akira
2008-01-01
Recently, it has been shown that spin nematic (quadrupolar) or higher multipolar correlation functions exhibit a quasi long-range order in the wide region of the field-induced Tomonaga-Luttinger-liquid (TLL) phase in spin-1/2 zigzag chains. In this paper, we point out that the temperature dependence of the NMR relaxation rate 1/T_1 in these multipolar TLLs is qualitatively different from that in more conventional TLLs of one-dimensional quantum magnets (e.g., the spin-1/2 He...
20. Multipolar hepatic radiofrequency ablation using up to six applicators: preliminary results
Energy Technology Data Exchange (ETDEWEB)
Bruners, P.; Schmitz-Rode, T. [RWTH Aachen (Germany). Lehrstuhl fuer Angewandte Medizintechnik; Guenther, R.W.; Mahnken, A. [Universitaetsklinikum RWTH Aachen (Germany). Klinik fuer Radiologische Diagnostik
2008-03-15
Purpose: to evaluate the clinical feasibility and safety of hepatic radiofrequency (RF) ablation using a multipolar RF system permitting the simultaneous use of up to six electrodes. Materials and methods: ten patients (3 female, 7 male, mean age 61) suffering from 29 hepatic metastases (range: 1-5) of different tumors were treated with a modified multipolar RF system (CelonLab Power, Celon Medical Instruments, Teltow, Germany) operating four to six needle-shaped internally cooled RF applicators. The procedure duration, applied energy and generator output were recorded during the intervention. The treatment result and procedure-related complications were analyzed. The achieved coagulation volume was calculated on the basis of contrast-enhanced CT scans 24 hours after RF ablation. Results: complete tumor ablation was achieved in all cases as determined by the post-interventional lack of contrast enhancement in the target region, using four applicators in five patients, five applicators in one patient and six applicators in four patients. A mean energy deposition of 353.9 ± 176.2 kJ resulted in a mean coagulation volume of 115.9 ± 79.5 cm³. The mean procedure duration was 74.9 ± 21.2 minutes. Four patients showed an intraabdominal hemorrhage which necessitated further interventional treatment (embolization; percutaneous histoacryl injection) in two patients. (orig.)
1. Multipolar electromagnetic fields around neutron stars: exact vacuum solutions and related properties
CERN Document Server
Petri, Jerome
2015-01-01
The magnetic field topology in the surrounding of neutron stars is one of the key questions in pulsar magnetospheric physics. A very extensive literature exists about the assumption of a dipolar magnetic field but very little progress has been made in attempts to include multipolar components in a self-consistent way. In this paper, we study the effect of multipolar electromagnetic fields anchored in the star. We give exact analytical solutions in closed form for any order $l$ and apply them to the retarded point quadrupole ($l=2$), hexapole ($l=3$) and octopole ($l=4$), a generalization of the retarded point dipole ($l=1$). We also compare the Poynting flux from each multipole and show that the spin down luminosity depends on the ratio $R/r_{\\rm L}$, $R$ being the neutron star radius and $r_{\\rm L}$ the light-cylinder radius. Therefore the braking index also depends on $R/r_{\\rm L}$. As such multipole fields possess very different topology, most importantly smaller length scales compared to the dipolar field...
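The dependence on $R/r_{\rm L}$ mentioned above can be motivated by the standard multipole-radiation scaling; the expressions below are our order-of-magnitude sketch, not the paper's exact closed-form coefficients:

```latex
% Order-of-magnitude estimate for the spin-down luminosity of a multipole of
% order l (standard radiation scaling; coefficients omitted):
\begin{equation}
  L_l \sim c\, B_l^{2} R^{2} \left(\frac{R}{r_{\rm L}}\right)^{2l+2},
  \qquad r_{\rm L} = \frac{c}{\Omega},
\end{equation}
% so, relative to the dipole (l = 1),
\begin{equation}
  \frac{L_l}{L_{1}} \sim \left(\frac{R}{r_{\rm L}}\right)^{2(l-1)}.
\end{equation}
```

For $R \ll r_{\rm L}$ the higher multipoles are therefore strongly suppressed at the light cylinder, which is consistent with both the spin-down luminosity and the braking index inheriting a dependence on $R/r_{\rm L}$.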
2. Anisotropic multipolar exchange interactions in systems with strong spin-orbit coupling
Science.gov (United States)
Pi, Shu-Ting; Nanguneri, Ravindra; Savrasov, Sergey
2014-07-01
We introduce a theoretical framework for computations of anisotropic multipolar exchange interactions found in many spin-orbit coupled magnetic systems and propose a method to extract these coupling constants using a density functional total energy calculation. This method is developed using a multipolar expansion of local density matrices for correlated orbitals that are responsible for magnetic degrees of freedom. Within the mean-field approximation, we show that each coupling constant can be recovered from a series of total energy calculations via what we call the "pair-flip" technique. This technique flips the relative phase of a pair of multipoles and computes the corresponding total energy cost associated with the given exchange constant. To test it, we apply our method to uranium dioxide, which is a system known to have pseudospin J =1 superexchange induced dipolar, and superexchange plus spin-lattice induced quadrupolar orderings. Our calculation reveals that the superexchange and spin-lattice contributions to the quadrupolar exchange interactions are about the same order with ferro- and antiferromagnetic contributions, respectively. This highlights a competition rather than a cooperation between them. Our method could be a promising tool to explore magnetic properties of rare-earth compounds and hidden-order materials.
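The "pair-flip" energy mapping can be written schematically; this is the generic mean-field version for a single bond, in our own notation (the paper's multipolar formulation is more general):

```latex
% Schematic pair-flip mapping: comparing the four relative-phase
% configurations of one pair of multipole moments isolates the coupling
\begin{equation}
  J_{KK'} \;=\; \tfrac{1}{4}\left( E_{++} + E_{--} - E_{+-} - E_{-+} \right),
\end{equation}
```

where $E_{\pm\pm}$ denote total energies computed with the two multipole moments on the chosen pair set with parallel or antiparallel relative phase while all other moments are held fixed.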
3. An analysis of the electromagnetic field in multi-polar linear induction system
International Nuclear Information System (INIS)
In this paper a new method for determination of the electromagnetic field vectors in a multi-polar linear induction system (LIS) is described. The analysis of the electromagnetic field has been carried out using four-dimensional electromagnetic potentials in conjunction with the theory of magnetic loops. The electromagnetic field vectors are determined in Minkowski space as elements of the Maxwell tensor. The results obtained are compared with those obtained from an analysis by the finite element method (FEM). With the method presented in this paper one can determine the electromagnetic field vectors in the multi-polar linear induction system using a four-dimensional potential. An advantage of this method is that it yields analytical expressions for the electromagnetic field vectors. These results are valid for linear media, and the derived expressions remain valid at high speeds of movement. The results for the investigated linear induction system are comparable to those obtained by the finite element method. The investigation may be continued with the determination of other characteristics such as drag force, levitation force, etc. The method proposed in this paper for the analysis of a linear induction system can be used for optimization calculations. (Author)
4. Oscillatory neuronal dynamics associated with manual acupuncture: a magnetoencephalography study using beamforming analysis
Directory of Open Access Journals (Sweden)
AzizAsghar
2012-11-01
Magnetoencephalography (MEG) enables non-invasive recording of neuronal activity, with reconstruction methods providing estimates of underlying brain source locations and oscillatory dynamics from externally recorded neuromagnetic fields. The aim of our study was to use MEG to determine the effect of manual acupuncture on neuronal oscillatory dynamics. A major problem in MEG investigations of manual acupuncture is the absence of onset times for each needle manipulation. Given that beamforming (spatial filtering) analysis is not dependent upon stimulus-driven responses being phase-locked to stimulus onset, we postulated that beamforming could reveal source locations and induced changes in neuronal activity during manual acupuncture. In a beamformer analysis, a two-minute period of manual acupuncture needle manipulation delivered to the ipsilateral right LI-4 (Hegu) acupoint was contrasted with a two-minute baseline period. We considered oscillatory power changes in the theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz) and gamma (30-100 Hz) frequency bands. We found significant decreases in beta band power in the contralateral primary somatosensory cortex and superior frontal gyrus. In the ipsilateral cerebral hemisphere, we found significant power decreases in beta and gamma frequency bands in only the superior frontal gyrus. No significant power modulations were found in the theta and alpha bands. Our results indicate that beamforming is a useful analytical tool to reconstruct underlying neuronal activity associated with manual acupuncture. Our main finding was of beta power decreases in primary somatosensory cortex and superior frontal gyrus, which opens up a line of future investigation regarding whether this contributes towards an underlying mechanism of acupuncture.
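A minimal sketch of the band-power computation on a reconstructed source time course (synthetic single-channel signal with an assumed 250 Hz sampling rate; in practice a beamformer output would replace `signal`):

```python
import numpy as np

# Minimal band-power sketch (synthetic signal, assumed sampling rate; a
# beamformer-reconstructed source time course would replace `signal`).
fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)  # 20 Hz -> beta

spec = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 100)}
power = {name: spec[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}
# The 20 Hz component concentrates nearly all the power in the beta band.
```

A condition contrast (needling vs. baseline) would then compare these band powers between the two epochs at each reconstructed source location.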
5. Producing speech with a newly learned morphosyntax and vocabulary: a magnetoencephalography study.
Science.gov (United States)
Hultén, Annika; Karvonen, Leena; Laine, Matti; Salmelin, Riitta
2014-08-01
Ten participants learned a miniature language (Anigram), which they later employed to verbally describe a pictured event. Using magnetoencephalography, the cortical dynamics of sentence production in Anigram was compared with that in the native tongue from the preparation phase up to the production of the final word. At the preparation phase, a cartoon image with two animals prompted the participants to plan either the corresponding simple sentence (e.g., "the bear hits the lion") or a grammar-free list of the two nouns ("the bear, the lion"). For the newly learned language, this stage induced stronger left angular and adjacent inferior parietal activations than for the native language, likely reflecting a higher load on lexical retrieval and STM storage. The preparation phase was followed by a cloze task where the participants were prompted to produce the last word of the sentence or word sequence. Production of the sentence-final word required retrieval of rule-based inflectional morphology and was accompanied by increased activation of the left middle superior temporal cortex that did not differ between the two languages. Activation of the right temporal cortex during the cloze task suggested that this area plays a role in integrating word meanings into the sentence frame. The present results indicate that, after just a few days of exposure, the newly learned language harnesses the neural resources for multiword production much the same way as the native tongue and that the left and right temporal cortices seem to have functionally different roles in this processing. PMID:24392893
6. [Magnetoencephalography: a method for the study of brain function in neurosurgery].
Science.gov (United States)
Braun, Christoph
2007-01-01
Magnetoencephalography (MEG) is a non-invasive method for the study of electro-magnetic brain activity. Using multi-channel recordings the topography of the magnetic field can be recorded above the scalp with a temporal resolution of less than one millisecond. The method is suitable for the description and localization of cortical brain functions. The magnetic field strength that can be measured at up to 300 sensors is in the range of a few femtotesla (10^-15 T) to some picotesla (10^-12 T). In order to measure these low magnetic fields, highly sensitive SQUID detectors are used on the one hand; on the other hand, appropriate shielding equipment is employed to reduce the effects of noise. Besides brain responses evoked by internal and external events (event-related magnetic fields), state-dependent oscillatory brain activity can be recorded with MEG (spontaneous activity). Slow cortical oscillations in the range of 1 to 4 Hz are generated by damage of brain tissue and in the surroundings of brain tumors. In neurosurgery these activities can be used to monitor therapeutic success. Furthermore, oscillatory activities provide information about cortical regions involved in motor control. The measurement of motor-related activities allows for the identification of recovery processes and reorganization after brain injury. Event-related magnetic brain responses are used in pre-surgical diagnosis and planning of treatment in epilepsy. In addition, they can be utilized to assess alterations in the functional organization of the cortex following injuries, tumor growth and neurosurgical interventions. PMID:18254551
7. The Neural Mechanisms of Re-Experiencing Mental Fatigue Sensation: A Magnetoencephalography Study
Science.gov (United States)
Ishii, Akira; Karasuyama, Takuma; Kikuchi, Taiki; Tanaka, Masaaki; Yamano, Emi; Watanabe, Yasuyoshi
2015-01-01
There have been several studies which have tried to clarify the neural mechanisms of fatigue sensation; however fatigue sensation has multiple aspects. We hypothesized that past experience related to fatigue sensation is an important factor which contributes to future formation of fatigue sensation through the transfer to memories that are located within specific brain structures. Therefore, we aimed to investigate the neural mechanisms of fatigue sensation related to memory. In the present study, we investigated the neural activity caused by re-experiencing the fatigue sensation that had been experienced during a fatigue-inducing session. Thirteen healthy volunteers participated in fatigue and non-fatigue experiments in a crossover fashion. In the fatigue experiment, they performed a 2-back test session for 40 min to induce fatigue sensation, a rest session for 15 min to recover from fatigue, and a magnetoencephalography (MEG) session in which they were asked to re-experience the state of their body with fatigue that they had experienced in the 2-back test session. In the non-fatigue experiment, the participants performed a free session for 15 min, a rest session for 15 min, and an MEG session in which they were asked to re-experience the state of their body without fatigue that they had experienced in the free session. Spatial filtering analyses of oscillatory brain activity showed that the delta band power in the left Brodmann’s area (BA) 39, alpha band power in the right pulvinar nucleus and the left BA 40, and beta band power in the left BA 40 were lower when they re-experienced the fatigue sensation than when they re-experienced the fatigue-free sensation, indicating that these brain regions are related to re-experiencing the fatigue sensation. Our findings may help clarify the neural mechanisms underlying fatigue sensation. PMID:25826300
8. Plasma diffusion through a two-dimensional magnetic field. Application to multipolar discharge
International Nuclear Information System (INIS)
In this work, a theory of collisional plasma diffusion through a two-dimensional magnetic field is presented. This study makes it possible to define two types of diffusion domains: the weak-field domain, where diffusion is practically isotropic, and the strong-field domain, where diffusion occurs only parallel to the field lines. The field inversion and the ion confinement by the ambipolar electric field, perpendicular to the field lines, are also explained. The theory is applied to a multipolar discharge. A sheath thickness can be defined, which is the width of the region in which plasma diffusion is limited by the magnetic field; little dependence on the magnetic field is found. All these results have been observed experimentally. The numerical solution of the diffusion equation yields the density and potential profiles. The density in the middle of the plasma is compared with and without the multicusp field.
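The two diffusion domains described above match the classical behaviour of the diffusion coefficients in a magnetized, collisional plasma (quoted here in textbook form as context, not as the paper's own derivation):

```latex
D_\parallel = D_0,
\qquad
D_\perp = \frac{D_0}{1 + (\omega_c \tau)^2},
```

with $\omega_c$ the gyrofrequency and $\tau$ the collision time: for $\omega_c\tau \ll 1$ (weak field) diffusion is practically isotropic, while for $\omega_c\tau \gg 1$ (strong field) cross-field transport is suppressed and diffusion proceeds essentially along the field lines.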
9. Dual-symmetric Lagrangians in quantum electrodynamics: I. Conservation laws and multi-polar coupling
International Nuclear Information System (INIS)
By using a complex field with a symmetric combination of electric and magnetic fields, a first-order covariant Lagrangian for Maxwell's equations is obtained, similar to the Lagrangian for the Dirac equation. This leads to a dual-symmetric quantum electrodynamic theory with an infinite set of local conservation laws. The dual symmetry is shown to correspond to a helical phase, conjugate to the conserved helicity. There is also a scaling symmetry, conjugate to the conserved entanglement. The results include a novel form of the photonic wavefunction, with a well-defined helicity number operator conjugate to the chiral phase, related to the fundamental dual symmetry. Interactions with charged particles can also be included. Transformations from minimal coupling to multi-polar or more general forms of coupling are particularly straightforward using this technique. The dual-symmetric version of quantum electrodynamics derived here has potential applications to nonlinear quantum optics and cavity quantum electrodynamics
10. Role of pairing degrees of freedom and higher multipolarity deformations in spontaneous fission process
International Nuclear Information System (INIS)
Spontaneous fission (Tsf) and alpha-decay half-lives (Tα) of the heaviest nuclei with atomic number 100 ≤ Z ≤ 114 are calculated on the basis of the deformed Woods-Saxon potential. The calculations of Tsf are performed in the WKB approximation, using the multi-dimensional dynamic-programming method (MDP). We have examined three different effects: the effect of higher even-multipolarity shape parameters (β6 and β8), the role of reflection asymmetry (β3 and β5) and the influence of pairing degrees of freedom (Δp and Δn). Alpha-decay half-lives Tα have been calculated by the Viola-Seaborg (V-S) formula with the parameters adjusted to the latest experimental data
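For context, the WKB estimate underlying such spontaneous-fission half-lives has the standard one-dimensional form (a textbook sketch; the paper minimizes the corresponding action in a multi-dimensional deformation space):

```latex
T_{\rm sf} = \frac{\ln 2}{n\,P},
\qquad
P = \left[1 + e^{2S(L)}\right]^{-1},
\qquad
S(L) = \int_{s_{\rm in}}^{s_{\rm out}}
\sqrt{\frac{2}{\hbar^2}\,B_{\rm eff}(s)\left[V(s) - E\right]}\;\mathrm{d}s,
```

where $n$ is the number of assaults on the fission barrier per unit time, $B_{\rm eff}$ the effective collective inertia along the fission path, and $V$ the deformation-energy barrier.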
11. Substitution effect on the multipolar transitions in Pr(FexRu1-x)4P12
International Nuclear Information System (INIS)
Substitution effect has been studied for the transition metal site on both of the metal-insulator transition in PrRu4P12 and the anomalous nonmagnetic transition (probably of multipolar origin) in PrFe4P12 by mutual alloying. It has been found that both of the transitions are significantly suppressed by the substitution. This observation supports the scenario that the expected Fermi-surface-nesting instability is an essential ingredient for both of the orderings. In the nonordered state, logarithmic temperature dependence in the electrical resistivity indicative of Kondo-like scatterings has been found only around the high Fe concentration, suggesting important roles of the Fe 3d electrons for the strongly correlated behavior
12. An automated algorithm for determining conduction velocity, wavefront direction and origin of focal cardiac arrhythmias using a multipolar catheter.
Science.gov (United States)
Roney, Caroline H; Cantwell, Chris D; Qureshi, Norman A; Ali, Rheeda L; Chang, Eugene T Y; Phang Boon Lim; Sherwin, Spencer J; Peters, Nicholas S; Siggers, Jennifer H; Fu Siong Ng
2014-08-01
Determining locations of focal arrhythmia sources and quantifying myocardial conduction velocity (CV) are two major challenges in clinical catheter ablation cases. CV, wave-front direction and focal source location can be estimated from multipolar catheter data, but currently available methods are time-consuming, limited to specific electrode configurations, and can be inaccurate. We developed automated algorithms to rapidly identify CV from multipolar catheter data with any arrangement of electrodes, whilst providing estimates of wavefront direction and focal source position, which can guide the catheter towards a focal arrhythmic source. We validated our methods using simulations on realistic human left atrial geometry. We subsequently applied them to clinically-acquired intracardiac electrogram data, where CV and wavefront direction were accurately determined in all cases, whilst focal source locations were correctly identified in 2/3 cases. Our novel automated algorithms can potentially be used to guide ablation of focal arrhythmias in real-time in cardiac catheter laboratories. PMID:25570274
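One common way to obtain CV and wavefront direction from multipolar catheter data, sketched here under the assumption of a locally planar wavefront (the paper's automated algorithm additionally estimates the focal source position and supports arbitrary electrode arrangements):

```python
import numpy as np

# Planar-wavefront fit (assumed details, not the paper's exact algorithm):
# fit t = t0 + p . x to electrode positions and activation times.
# The fitted slowness vector p gives CV = 1/|p| and direction p/|p|.

def fit_wavefront(positions, times):
    """positions: (n, 2) electrode coordinates in mm; times: (n,) activation times in ms."""
    A = np.column_stack([positions, np.ones(len(times))])
    sol, *_ = np.linalg.lstsq(A, times, rcond=None)
    slowness = sol[:2]                         # ms/mm
    cv = 1.0 / np.linalg.norm(slowness)        # mm/ms, numerically equal to m/s
    direction = slowness / np.linalg.norm(slowness)
    return cv, direction

# Synthetic check: a plane wave travelling along +x at 0.5 m/s.
pos = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], dtype=float)
times = pos[:, 0] / 0.5                        # t = x / v
cv, direction = fit_wavefront(pos, times)
```

With three or more non-collinear electrodes the least-squares fit is determined, so the same routine works for any catheter geometry once electrode coordinates are known.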
13. Multipolar radiofrequency ablation using 4–6 applicators simultaneously: A study in the ex vivo bovine liver
International Nuclear Information System (INIS)
In this study the volume and shape of coagulation zones after multipolar radiofrequency ablation (RFA) with simultaneous use of 4–6 applicators in the ex vivo bovine liver were investigated. The RF-applicators were positioned in 13 different configurations to simulate ablation of large solitary tumors and simultaneous ablation of multiple lesions with 120 kJ of applied energy/session. In total, 110 coagulation zones were induced. Standardized measurements of the volume and shape of the coagulation zones were carried out on magnetic resonance images and statistically analyzed. The coagulation zones induced with solitary applicators and with 2 applicators were imperceptibly small and incomplete, respectively. At 20 mm applicator distance, the total ablated volume was significantly larger if all applicators were arranged in a single group compared to placement in 2 distant applicator groups, each consisting of 3 applicators (p = .001). The mean total coagulated volume ranged from immeasurably small (if 6 solitary applicators were applied simultaneously) to 74.7 cc (if 6 applicators at 30 mm distance between neighboring applicators were combined to a single group). Applicator distance, number and positioning array impacted time and shape. The coagulation zones surrounding groups with 4–6 applicators were regularly shaped, homogeneous and completely fused, and the axial diameters were almost constant. In conclusion, multipolar RFA with 4–6 applicators is feasible. The multipolar simultaneous mode should be applied for large and solitary lesions only, small and multiple tumors should be ablated consecutively in standard multipolar mode with up to 3 applicators
14. ADAM17 is critical for multipolar exit and radial migration of neuronal intermediate progenitor cells in mice cerebral cortex.
Science.gov (United States)
Li, Qingyu; Zhang, Zhengyu; Li, Zengmin; Zhou, Mei; Liu, Bin; Pan, Le; Ma, Zhixing; Zheng, Yufang
2013-01-01
15. ADAM17 Is Critical for Multipolar Exit and Radial Migration of Neuronal Intermediate Progenitor Cells in Mice Cerebral Cortex
OpenAIRE
Li, Qingyu; Zhang, Zhengyu; Li, Zengmin; Zhou, Mei; Liu, Bin; Pan, Le; Ma, Zhixing; Zheng, Yufang
2013-01-01
The radial migration of neuronal progenitor cells is critical for the development of cerebral cortex layers. They go through a critical step transforming from multipolar to bipolar before outward migration. A Disintegrin and Metalloprotease 17 (ADAM17) is a transmembrane protease which can process many substrates involved in cell-cell interaction, including Notch, ligands of EGFR, and some cell adhesion molecules. In this study, we used in utero electroporation to knock down or overexpress AD...
16. Dynamic FoxG1 expression coordinates the integration of multipolar pyramidal neuron precursors into the cortical plate
OpenAIRE
Miyoshi, Goichi; Fishell, Gord
2012-01-01
Pyramidal cells of the cerebral cortex are born in the ventricular zone and migrate radially through the intermediate zone to enter into the cortical plate. In the intermediate zone, these migrating precursors are able to move tangentially and initiate the extension of their axons by transiently adopting a characteristic multipolar morphology. We observe that expression of the forkhead transcription factor FoxG1 is dynamically regulated during this transitional period. By utilizing conditiona...
17. An iterative algorithm for sparse and constrained recovery with applications to divergence-free current reconstructions in magneto-encephalography
CERN Document Server
Loris, Ignace
2012-01-01
We propose an iterative algorithm for the minimization of an $\ell_1$-norm penalized least squares functional, under additional linear constraints. The algorithm is fully explicit: it uses only matrix multiplications with the three matrices present in the problem (in the linear constraint, in the data misfit part and in the penalty term of the functional). None of the three matrices must be invertible. Convergence is proven in a finite-dimensional setting. We apply the algorithm to a synthetic problem in magneto-encephalography where it is used for the reconstruction of divergence-free current densities subject to a sparsity promoting penalty on the wavelet coefficients of the current densities. We discuss the effects of imposing zero divergence and of imposing joint sparsity (of the vector components of the current density) on the current density reconstruction.
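The flavour of such a scheme can be sketched as a soft-thresholded gradient iteration followed by projection onto the linear constraint (an illustrative variant, not the paper's proven algorithm; for brevity this sketch builds the projector from a pseudoinverse, whereas the paper's fully explicit method needs no inversion at all):

```python
import numpy as np

# Illustrative variant (assumed details): soft-thresholded gradient step for
# ||K x - y||^2 + lam * ||x||_1, followed by projection onto A x = 0.

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def constrained_ista(K, y, A, lam, step, n_iter=500):
    x = np.zeros(K.shape[1])
    P = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A   # projector onto null(A)
    for _ in range(n_iter):
        x = soft_threshold(x - step * K.T @ (K @ x - y), step * lam)
        x = P @ x                                    # enforce A x = 0
    return x

# Tiny demo: sparse denoising under the constraint that components sum to zero.
K = np.eye(4)
y = np.array([1.0, -0.2, 0.05, -0.5])
A = np.ones((1, 4))
x = constrained_ista(K, y, A, lam=0.1, step=0.5)
```

In the MEG application, `K` would be the lead-field-times-wavelet-synthesis operator and `A` the discrete divergence, both applied matrix-free.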
18. SQUID-based systems for co-registration of ultra-low field nuclear magnetic resonance images and magnetoencephalography
International Nuclear Information System (INIS)
The ability to perform magnetic resonance imaging (MRI) in ultra-low magnetic fields (ULF) of ~100 μT, using superconducting quantum interference device (SQUID) detection, has enabled a new class of magnetoencephalography (MEG) instrumentation capable of recording both anatomical (via the ULF MRI) and functional (biomagnetic) information about the brain. The combined ULF MRI/MEG instrument allows both structural and functional information to be co-registered to a single coordinate system and acquired in a single device. In this paper we discuss the considerations and challenges required to develop a combined ULF MRI/MEG device, including pulse sequence development, magnetic field generation, SQUID operation in an environment of pulsed pre-polarization, and optimization of pick-up coil geometries for MRI in different noise environments. We also discuss the design of a “hybrid” ULF MRI/MEG system under development in our laboratory that uses SQUID pick-up coils separately optimized for MEG and ULF MRI.
19. The Slope Imaging Multi-Polarization Photon-Counting Lidar: Development and Performance Results
Science.gov (United States)
Dabney, Phillip
2010-01-01
The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is an airborne instrument developed to demonstrate laser altimetry measurement methods that will enable more efficient observations of topography and surface properties from space. The instrument was developed through the NASA Earth Science Technology Office Instrument Incubator Program with a focus on cryosphere remote sensing. The SIMPL transmitter is an 11 kHz, 1064 nm, plane-polarized micropulse laser that is frequency doubled to 532 nm and split into four push-broom beams. The receiver employs single-photon, polarimetric ranging at 532 and 1064 nm using Single Photon Counting Modules in order to achieve simultaneous sampling of surface elevation, slope, roughness and depolarizing scattering properties, the latter used to differentiate surface types. Data acquired over ice-covered Lake Erie in February 2009 document SIMPL's measurement performance and capabilities, demonstrating differentiation of open water and several ice cover types. ICESat-2 will employ several of the technologies advanced by SIMPL, including micropulse, single-photon ranging in a multi-beam, push-broom configuration operating at 532 nm.
20. Neutron star deformation due to arbitrary-order multipolar magnetic fields
CERN Document Server
Mastrano, Alpha; Melatos, Andrew
2013-01-01
Certain multi-wavelength observations of neutron stars, such as intermittent radio emissions from rotation-powered pulsars beyond the pair-cascade death line, the pulse profile of the magnetar SGR 1900+14 after its 1998 August 27 giant flare, and X-ray spectral features of PSR J0821-4300 and SGR 0418+5729, suggest that the magnetic fields of non-accreting neutron stars are not purely dipolar and may contain higher-order multipoles. Here, we calculate the ellipticity of a non-barotropic neutron star with (i) a quadrupole poloidal-toroidal field, and (ii) a purely poloidal field containing arbitrary multipoles, deriving the relation between the ellipticity and the multipole amplitudes. We present, as a worked example, a purely poloidal field comprising dipole, quadrupole, and octupole components. We show the correlation between field energy and ellipticity for each multipole, that the l=4 multipole has the lowest energy, and that l=5 has the lowest ellipticity. We show how a mixed multipolar field creates an ob...
1. Effective plasma confinement by applying multipolar magnetic fields in an internal linear inductively coupled plasma system
International Nuclear Information System (INIS)
A novel internal-type linear inductive antenna referred to as a 'double comb-type antenna' was used for a large-area plasma source with a substrate area of 880 mm × 660 mm, and the effect of plasma confinement by applying a multi-polar magnetic field was investigated. High-density plasmas on the order of 3.18×10^11 cm^-3, which is 50% higher than that obtained for the source without the magnetic field, could be obtained at a pressure of 15 mTorr Ar and an inductive power of 5000 W with good plasma stability. A plasma uniformity of less than 3% could also be obtained within the substrate area. When SiO2 film was etched using the double comb-type antenna, an average etch rate of about 2100 Å/min could be obtained with an etch uniformity of 5.4% on the substrate area using 15 mTorr SF6, 5000 W of rf power, and -34 V of dc bias voltage
2. The Evolutionary Dynamics of Biofuel Value Chains : From Unipolar and Government-Driven to Multipolar Governance
DEFF Research Database (Denmark)
Ponte, Stefano
2014-01-01
In this paper I propose to push the frontier of global value chain (GVC) governance analysis through the concept of ‘polarity’. Much of the existing GVC literature has focused on ‘unipolar’ value chains, where one group of ‘lead firms’ inhabiting a specific function in a chain plays a dominant role in governing it. Some scholars have explored the dynamics of governance in GVCs characterized as ‘bipolar’, where two sets of actors in different functional positions both drive the chain. I expand this direction further to suggest conceptualizing governance within a continuum between unipolarity and multipolarity. Empirically, I do so by examining the evolutionary dynamics of governance in biofuel value chains, with specific focus on the key regulatory and institutional features that facilitated their emergence and expansion. First, I examine the formation, evolution, and governance of three national/regional value chains (in Brazil, the US, and the EU); then, I provide evidence to support a trend towards the increasing but still partial formation of a global biofuel value chain and examine its governance traits.
3. CT-guided Bipolar and Multipolar Radiofrequency Ablation (RF Ablation) of Renal Cell Carcinoma: Specific Technical Aspects and Clinical Results
International Nuclear Information System (INIS)
Purpose. This study was designed to evaluate the clinical efficacy of CT-guided bipolar and multipolar radiofrequency ablation (RF ablation) of renal cell carcinoma (RCC) and to analyze specific technical aspects between both technologies. Methods. We included 22 consecutive patients (3 women; age 74.2 ± 8.6 years) after 28 CT-guided bipolar or multipolar RF ablations of 28 RCCs (diameter 2.5 ± 0.8 cm). Procedures were performed with a commercially available RF system (Celon AG Olympus, Berlin, Germany). Technical aspects of RF ablation procedures (ablation mode [bipolar or multipolar], number of applicators and ablation cycles, overall ablation time and deployed energy, and technical success rate) were analyzed. Clinical results (local recurrence-free survival and local tumor control rate, renal function [glomerular filtration rate (GFR)]) and complication rates were evaluated. Results. Bipolar RF ablation was performed in 12 procedures and multipolar RF ablation in 16 procedures (2 applicators in 14 procedures and 3 applicators in 2 procedures). One ablation cycle was performed in 15 procedures and two ablation cycles in 13 procedures. Overall ablation time and deployed energy were 35.0 ± 13.6 min and 43.7 ± 17.9 kJ. Technical success rate was 100 %. Major and minor complication rates were 4 % and 14 %. At an imaging follow-up of 15.2 ± 8.8 months, local recurrence-free survival was 14.4 ± 8.8 months and local tumor control rate was 93 %. GFR did not deteriorate after RF ablation (50.8 ± 16.6 ml/min/1.73 m² before RF ablation vs. 47.2 ± 11.9 ml/min/1.73 m² after RF ablation; not significant). Conclusions. CT-guided bipolar and multipolar RF ablation of RCC has a high rate of clinical success and low complication rates. At short-term follow-up, clinical efficacy is high without deterioration of the renal function.
4. CT-guided Bipolar and Multipolar Radiofrequency Ablation (RF Ablation) of Renal Cell Carcinoma: Specific Technical Aspects and Clinical Results
Energy Technology Data Exchange (ETDEWEB)
Sommer, C. M., E-mail: christof.sommer@med.uni-heidelberg.de [University Hospital Heidelberg, INF 110, Department of Diagnostic and Interventional Radiology (Germany); Lemm, G.; Hohenstein, E. [Minimally Invasive Therapies and Nuclear Medicine, SLK Kliniken Heilbronn GmbH, Clinic for Radiology (Germany); Bellemann, N.; Stampfl, U. [University Hospital Heidelberg, INF 110, Department of Diagnostic and Interventional Radiology (Germany); Goezen, A. S.; Rassweiler, J. [Clinic for Urology, SLK Kliniken Heilbronn GmbH (Germany); Kauczor, H. U.; Radeleff, B. A. [University Hospital Heidelberg, INF 110, Department of Diagnostic and Interventional Radiology (Germany); Pereira, P. L. [Minimally Invasive Therapies and Nuclear Medicine, SLK Kliniken Heilbronn GmbH, Clinic for Radiology (Germany)
2013-06-15
Purpose. This study was designed to evaluate the clinical efficacy of CT-guided bipolar and multipolar radiofrequency ablation (RF ablation) of renal cell carcinoma (RCC) and to analyze specific technical aspects between both technologies. Methods. We included 22 consecutive patients (3 women; age 74.2 ± 8.6 years) after 28 CT-guided bipolar or multipolar RF ablations of 28 RCCs (diameter 2.5 ± 0.8 cm). Procedures were performed with a commercially available RF system (Celon AG Olympus, Berlin, Germany). Technical aspects of RF ablation procedures (ablation mode [bipolar or multipolar], number of applicators and ablation cycles, overall ablation time and deployed energy, and technical success rate) were analyzed. Clinical results (local recurrence-free survival and local tumor control rate, renal function [glomerular filtration rate (GFR)]) and complication rates were evaluated. Results. Bipolar RF ablation was performed in 12 procedures and multipolar RF ablation in 16 procedures (2 applicators in 14 procedures and 3 applicators in 2 procedures). One ablation cycle was performed in 15 procedures and two ablation cycles in 13 procedures. Overall ablation time and deployed energy were 35.0 ± 13.6 min and 43.7 ± 17.9 kJ. Technical success rate was 100 %. Major and minor complication rates were 4 % and 14 %. At an imaging follow-up of 15.2 ± 8.8 months, local recurrence-free survival was 14.4 ± 8.8 months and local tumor control rate was 93 %. GFR did not deteriorate after RF ablation (50.8 ± 16.6 ml/min/1.73 m² before RF ablation vs. 47.2 ± 11.9 ml/min/1.73 m² after RF ablation; not significant). Conclusions. CT-guided bipolar and multipolar RF ablation of RCC has a high rate of clinical success and low complication rates. At short-term follow-up, clinical efficacy is high without deterioration of the renal function.
5. Evaluation of focused multipolar stimulation for cochlear implants in acutely deafened cats
Science.gov (United States)
George, Shefin S.; Wise, Andrew K.; Shivdasani, Mohit N.; Shepherd, Robert K.; Fallon, James B.
2014-12-01
Objective. The conductive nature of the fluids and tissues of the cochlea can lead to broad activation of spiral ganglion neurons using contemporary cochlear implant stimulation configurations such as monopolar (MP) stimulation. The relatively poor spatial selectivity is thought to limit implant performance, particularly in noisy environments. Several current focusing techniques have been proposed to reduce the spread of activation with the aim towards achieving improved clinical performance. Approach. The present research evaluated the efficacy of focused multipolar (FMP) stimulation, a relatively new focusing technique in the cochlea, and compared its efficacy to both MP stimulation and tripolar (TP) stimulation. The spread of neural activity across the inferior colliculus (IC), measured by recording the spatial tuning curve, was used as a measure of spatial selectivity. Adult cats (n = 6) were acutely deafened and implanted with an intracochlear electrode array before multi-unit responses were recorded across the cochleotopic gradient of the contralateral IC. Recordings were made in response to acoustic and electrical stimulation using the MP, TP and FMP configurations. Main results. FMP and TP stimulation resulted in greater spatial selectivity than MP stimulation. However, thresholds were significantly higher (p < 0.001) for FMP and TP stimulation compared to MP stimulation. There were no differences found in spatial selectivity and threshold between FMP and TP stimulation. Significance. The greater spatial selectivity of FMP and TP stimulation would be expected to result in improved clinical performance. However, further research will be required to demonstrate the efficacy of these modes of stimulation after longer durations of deafness.
6. Analysis of the electromagnetic excitation of the discharge in an ECR multipolar plasma source
International Nuclear Information System (INIS)
The excitation of the electron gas by 2.45 GHz microwave input power is studied using simulation and analysis techniques for a multipolar permanent magnet ECR plasma source. This source produces a low-pressure, high-density and low-temperature plasma that is useful for many plasma processing applications. The efficiency, uniformity and temperature of the plasma are all influenced by the transfer of input microwave energy to the discharge and have all been studied experimentally in previous work. In this paper, an analysis of this excitation is done which uses numerical simulation techniques to understand the absorption of microwave power by the discharge as a function of permanent magnet configuration (8 pole/8 magnet, 4 pole/8 magnet and all like poles pointing inward), electromagnetic excitation mode (TE211 and TE311), and orientation of the static magnetic field relative to the excitation mode. The plasma source studied has a microwave cavity diameter of 7 inches and a discharge diameter of 12.5 cm. The excitation of the plasma has been studied by examining self-consistently (1) the electromagnetic fields inside the resonator cavity with a discharge present and (2) the energy absorption by the electrons due to the microwave electric field. The electromagnetic fields are studied using a time-domain finite-difference solution of the Maxwell equations in a cylindrical cavity with a magnetized plasma present. The absorption of microwave energy by the electrons is investigated using plasma theory that incorporates collisionless heating.
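To illustrate the time-domain finite-difference (FDTD) idea this abstract refers to, here is a toy 1D leapfrog update of E and H on a staggered grid in free space with normalized units. This is a minimal sketch only; the paper's solver works in a cylindrical cavity with a magnetized plasma, and the grid size, step count and Gaussian source below are invented for illustration.

```python
import math

# Toy 1D FDTD: leapfrog updates of E and H on a staggered grid.
# Free space, normalized units, Courant number 0.5 (illustrative values).
N, STEPS, S = 200, 120, 0.5   # grid points, time steps, Courant number

E = [0.0] * N
H = [0.0] * (N - 1)
for t in range(STEPS):
    for i in range(N - 1):            # H update from the spatial difference of E
        H[i] += S * (E[i + 1] - E[i])
    for i in range(1, N - 1):         # E update from the spatial difference of H
        E[i] += S * (H[i] - H[i - 1])
    E[N // 2] += math.exp(-((t - 30) / 8.0) ** 2)  # soft Gaussian source

peak = max(abs(e) for e in E)
print(peak > 0.0)  # the injected pulse has propagated into the grid
```

The same leapfrog structure, with material terms added to the update equations, is what lets such solvers model energy absorption by a plasma self-consistently.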
7. Invariant Form of Hyperfine Interaction with Multipolar Moments - Observation of Octupolar Moments in NpO$_{2}$ and CeB$_{6}$ by NMR -
CERN Document Server
Sakai, O; Shiba, H; Sakai, Osamu; Shiina, Ryousuke; Shiba, Hiroyuki
2004-01-01
The invariant form of the hyperfine interaction between multipolar moments and the nuclear spin is derived, and applied to discuss possibilities to identify the antiferro-octupolar (AFO) moments by NMR experiments. The ordered phase of NpO$_{2}$ and the phase IV of Ce$_{1-x}$La$_{x}$B$_{6}$ are studied in detail. Recent $^{17}$O NMR data for polycrystalline samples of NpO$_{2}$ are discussed theoretically from our formulation. The observed features of the splitting of the $^{17}$O NMR spectrum into a sharp line and a broad line, their intensity ratio, and the magnetic field dependence of the shift and of the width can be consistently explained on the basis of the triple-$\mathbf{q}$ AFO ordering model proposed by Paixão et al. Thus, the present theory shows that the $^{17}$O NMR spectrum gives strong support to the model. The 4 O sites in the fcc NpO$_2$ become inequivalent due to the secondary triple-$\mathbf{q}$ ordering of AF-quadrupoles: one cubic and three non-cubic sites. It turns out that the hyperfine field due ...
8. The Slope Imaging Multi-polarization Photon-counting Lidar: an Advanced Technology Airborne Laser Altimeter
Science.gov (United States)
Dabney, P.; Harding, D. J.; Huss, T.; Valett, S.; Yu, A. W.; Zheng, Y.
2009-12-01
The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is an airborne laser altimeter developed through the NASA Earth Science Technology Office Instrument Incubator Program with a focus on cryosphere remote sensing. The SIMPL instrument incorporates a variety of advanced technologies in order to demonstrate measurement approaches of potential benefit for improved airborne laser swath mapping and spaceflight laser altimeter missions. SIMPL incorporates beam splitting, single-photon ranging and polarimetry technologies at green and near-infrared wavelengths in order to achieve simultaneous sampling of surface elevation, slope, roughness and scattering properties, the latter used to differentiate surface types. The transmitter is a 1 nsec pulse width, 11 kHz, 1064 nm microchip laser, frequency doubled to 532 nm and split into four plane-polarized beams using a birefringent calcite crystal in order to maintain co-alignment of the two colors. The 16-channel receiver splits the received energy for each beam into the two colors, and each color is split into energy parallel and perpendicular to the transmit polarization plane, thereby providing a measure of backscatter depolarization. The depolarization ratio is sensitive to the proportions of specular reflection and surface and volume scattering, and is a function of wavelength. The ratio can differentiate, for example, water, young translucent ice, older granular ice and snow. The solar background count rate is controlled by spatial filtering using a pinhole array and by spectral filtering using temperature-controlled narrow bandwidth filters. The receiver is fiber coupled to 16 Single Photon Counting Modules (SPCMs). To avoid range biases due to the long dead time of these detectors, the probability of detection per laser fire on each channel is controlled to be below 30%, using mechanical irises and flight altitude.
Event timers with 0.1 nsec resolution, in combination with the narrow transmit pulse, yield single-photon ranging precision of 8 cm. The high speed, high throughput data system is capable of recording 22 million time-tagged photon detection events per second. At typical aircraft flight speeds, each of the 16 channels acquires a single photon range every 5 to 15 cm along the four profiles, providing a highly sampled measure of surface roughness. The nominal flight altitude is 5 km, yielding 10 m spacing between the four beam profiles and providing a measure of surface slope at 10 m length scales. The altitude is currently constrained by the low signal level of the NIR cross-polarized channels. SIMPL's measurements of surface elevation, roughness, slope and type are of value in characterizing ice sheet surfaces and sea ice, including their melt state. Capabilities will be illustrated using data acquired over Lake Erie ice cover in February, 2009.
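A back-of-the-envelope sketch of the single-photon ranging described above: a two-way photon time of flight converts to a one-way range, and the 0.1 ns event-timer resolution sets the range quantization. The example time tag below is invented for illustration; only the speed of light and the quoted 0.1 ns resolution come from the abstract.

```python
# Single-photon ranging: one-way range from a two-way time of flight.
C = 299_792_458.0  # speed of light, m/s

def photon_range_m(t_fire_ns, t_detect_ns):
    """One-way range (m) from laser-fire and photon-detection time tags in ns."""
    return C * (t_detect_ns - t_fire_ns) * 1e-9 / 2.0

# One 0.1 ns timer tick corresponds to ~1.5 cm of range quantization:
tick_m = C * 0.1e-9 / 2.0
print(round(tick_m * 100, 1), "cm")  # 1.5 cm

# A return ~33,356 ns after the fire sits near the 5 km nominal flight altitude:
print(round(photon_range_m(0.0, 33356.4)), "m")  # 5000 m
```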
9. Does IQ affect the functional brain network involved in pseudoword reading in students with reading disability? A magnetoencephalography study
Directory of Open Access Journals (Sweden)
Panagiotis G Simos
2014-01-01
The study examined whether individual differences in performance and verbal IQ affect the profiles of reading-related regional brain activation in 127 students experiencing reading difficulties and typical readers. Using magnetoencephalography in a pseudoword read-aloud task, we compared brain activation profiles of students experiencing word-level reading difficulties who did (n=29) or did not (n=36) meet the IQ-reading achievement discrepancy criterion. Typical readers assigned to a lower-IQ (n=18) or a higher-IQ (n=44) subgroup served as controls. Minimum norm estimates of regional cortical activity revealed that the degree of hypoactivation in the left superior temporal and supramarginal gyri in both RD subgroups was not affected by IQ. Moreover, IQ did not moderate the positive association between degree of activation in the left fusiform gyrus and phonological decoding ability. We did find, however, that the hypoactivation of the left pars opercularis in RD was restricted to lower-IQ participants. In accordance with previous morphometric and fMRI studies, degree of activity in inferior frontal and inferior parietal regions correlated with IQ across reading ability subgroups. Results are consistent with current views questioning the relevance of IQ measures and IQ-discrepancy criteria in the diagnosis of dyslexia.
10. SQUID-based systems for co-registration of ultra-low field nuclear magnetic resonance images and magnetoencephalography
Energy Technology Data Exchange (ETDEWEB)
Matlashov, A.N., E-mail: matlach@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS-D454, Los Alamos, NM 87545 (United States); Burmistrov, E.; Magnelind, P.E.; Schultz, L.; Urbaitis, A.V.; Volegov, P.L.; Yoder, J.; Espy, M.A. [Los Alamos National Laboratory, P.O. Box 1663, MS-D454, Los Alamos, NM 87545 (United States)
2012-11-20
The ability to perform magnetic resonance imaging (MRI) in ultra-low magnetic fields (ULF) of ≈100 µT, using superconducting quantum interference device (SQUID) detection, has enabled a new class of magnetoencephalography (MEG) instrumentation capable of recording both anatomical (via the ULF MRI) and functional (biomagnetic) information about the brain. The combined ULF MRI/MEG instrument allows both structural and functional information to be co-registered to a single coordinate system and acquired in a single device. In this paper we discuss the considerations and challenges required to develop a combined ULF MRI/MEG device, including pulse sequence development, magnetic field generation, SQUID operation in an environment of pulsed pre-polarization, and optimization of pick-up coil geometries for MRI in different noise environments. We also discuss the design of a 'hybrid' ULF MRI/MEG system under development in our laboratory that uses SQUID pick-up coils separately optimized for MEG and ULF MRI.
11. Localization of Interictal Epileptiform Activity Using Magnetoencephalography with Synthetic Aperture Magnetometry in Patients with a Vagus Nerve Stimulator
Science.gov (United States)
Stapleton-Kotloski, Jennifer R.; Kotloski, Robert J.; Boggs, Jane A.; Popli, Gautam; O’Donovan, Cormac A.; Couture, Daniel E.; Cornell, Cassandra; Godwin, Dwayne W.
2014-01-01
Magnetoencephalography (MEG) provides useful and non-redundant information in the evaluation of patients with epilepsy, and in particular, during the pre-surgical evaluation of pharmaco-resistant epilepsy. Vagus nerve stimulation (VNS) is a common treatment for pharmaco-resistant epilepsy. However, interpretation of MEG recordings from patients with a VNS is challenging due to the severe magnetic artifacts produced by the VNS. We used synthetic aperture magnetometry (g2) [SAM(g2)], an adaptive beamformer that maps excess kurtosis, to localize interictal spikes on the coregistered MRI image despite the presence of contaminating VNS artifact. We present a series of eight patients with a VNS who underwent MEG recording. Localization of interictal epileptiform activity by SAM(g2) is compared to invasive electrophysiologic monitoring and other localizing approaches. While the raw MEG recordings were uninterpretable, analysis of the recordings with SAM(g2) identified foci of peak kurtosis and source signal activity that were unaffected by the VNS artifact. SAM(g2) analysis of MEG recordings in patients with a VNS produces interpretable results and expands the use of MEG for the pre-surgical evaluation of epilepsy. PMID:25505894
12. Assessment of language dominance by event-related oscillatory changes in an auditory language task: magnetoencephalography study.
Science.gov (United States)
Lee, Seo-Young; Kim, June Sic; Chung, Chun Kee; Lee, Sang Kun; Kim, Won Sup
2010-08-01
The authors investigated the oscillatory changes induced by an auditory language task to assess hemispheric dominance of language. Magnetoencephalography studies were conducted during word listening in 6 normal right-handed volunteers and 13 epilepsy patients who underwent the Wada test. We carried out a time-frequency analysis of event-related desynchronization (ERD)/event-related synchronization (ERS) and intertrial coherence. We localized ERD/ERS on each subject's magnetic resonance images using a beamformer. We compared ERD/ERS values between the left and right sides of regions of interest in inferior frontal and superior temporal areas. We assessed the target frequency range that correlated best with the Wada test results. In all normal subjects, gamma ERD was lateralized to the left side in both the inferior frontal and superior temporal areas. In epilepsy patients, the concordance rate of gamma ERD and the Wada test results was 76.9% for the inferior frontal area and 69.2% for the superior temporal area. Gamma ERD can be considered an indicator of language function, although it was not sufficient to replace the Wada test in the evaluation of epilepsy patients. The gamma ERD value of the inferior frontal area was more reliable for the assessment of language dominance compared with that obtained in the superior temporal area. PMID:20634707
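A minimal sketch of the ERD measure this study lateralizes: band power during the task interval relative to a pre-stimulus baseline, with negative values indicating desynchronization (the classic Pfurtscheller convention). The power values below are invented for illustration.

```python
# Event-related desynchronization in percent:
# negative result = power drop during the task (desynchronization).
def erd_percent(power_task, power_baseline):
    return (power_task - power_baseline) / power_baseline * 100.0

baseline_gamma = 4.0   # hypothetical mean gamma-band power before word onset (a.u.)
task_gamma = 3.0       # hypothetical mean gamma-band power during word listening (a.u.)
print(erd_percent(task_gamma, baseline_gamma))  # -25.0, i.e. 25% gamma ERD
```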
13. Excitatory cortical neurons with multipolar shape establish neuronal polarity by forming a tangentially oriented axon in the intermediate zone.
Science.gov (United States)
Hatanaka, Yumiko; Yamauchi, Kenta
2013-01-01
The formation of axon-dendrite polarity is crucial for neurons to establish the proper information flow within the brain. Although the processes of neuronal polarity formation have been extensively studied using neurons in dissociated culture, the corresponding developmental processes in vivo are still unclear. Here, we illuminate the initial steps of morphological polarization of excitatory cortical neurons in situ, by sparsely labeling their neuroepithelial progenitors using in utero electroporation and then examining their neuronal progeny in brain sections and in slice cultures. Morphological analysis showed that an axon-like long tangential process formed in progeny cells in the intermediate zone (IZ). Time-lapse imaging analysis using slice culture revealed that progeny cells with multipolar shape, after alternately extending and retracting their short processes for several hours, suddenly elongated a long process tangentially. These cells then transformed into a bipolar shape, extending a pia-directed leading process, and migrated radially leaving the tangential process behind, which gave rise to an "L-shaped" axon. Our findings suggest that neuronal polarity in these cells is established de novo from a nonpolarized stage in vivo and indicate that excitatory cortical neurons with multipolar shape in the IZ initiate axon outgrowth before radial migration into the cortical plate. PMID:22267309
14. Si(3P) + OH(X2Π) Interaction: Long-Range Multipolar Potentials of the Eighteen Spin-Orbit States
Science.gov (United States)
Bussery-Honvault, Béatrice; Dayou, Fabrice
2009-09-01
Eighteen spin-orbit states are generated from the open-shell/open-shell Si(3P) + OH(X2Π) interacting system. We present here the behavior of the associated long-range intermolecular potentials, following a multipolar expansion of the Coulombic interaction treated up to second order of perturbation theory, giving rise to a series of terms varying as R^-n. In the present work, we have considered the electrostatic dipole-quadrupole (n = 4) and quadrupole-quadrupole (n = 5) interactions, as well as the induced dipole-induced dipole dispersion (n = 6) and dipole-induced dipole induction (n = 6) contributions. The diatomic OH is kept fixed at its ground-state-averaged distance, ⟨r⟩v=0 = 1.865 bohr, so that the long-range potentials are two-dimensional potential energy surfaces (PESs) that depend on the intermolecular distance R and on the bending angle θ = ∠SiGH, where G represents the mass center of OH. From the calculated properties of the monomers, such as the dipole and quadrupole moments and static and dynamic polarizabilities, we have determined and tabulated the long-range coefficients of the multipolar expansion of the potentials for each matrix element. The isolated monomer spin-orbit splittings have been included in the final matrix, whose diagonalization gives rise to 18 adiabatic potentials. The adiabatic states have then been compared to potential energies given by supermolecular ab initio calculations, showing good overall agreement.
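To make the truncated multipolar series concrete, here is a sketch that evaluates a potential of the form V(R) = Σ C_n / R^n with the n = 4, 5, 6 terms the abstract lists. The coefficient values are placeholders for illustration, not the tabulated C_n of the paper, and the angular dependence is omitted.

```python
# Truncated long-range multipolar expansion: V(R) = sum over n of C_n / R^n.
def long_range_potential(R, coeffs):
    """coeffs maps the inverse power n to its long-range coefficient C_n."""
    return sum(c_n / R**n for n, c_n in coeffs.items())

# n=4: dipole-quadrupole, n=5: quadrupole-quadrupole,
# n=6: dispersion + induction -- made-up atomic-unit values.
coeffs = {4: -12.0, 5: 30.0, 6: -150.0}
print(round(long_range_potential(10.0, coeffs), 6))  # -0.00105 at R = 10 bohr
```

In the paper's scheme, a matrix of such expansions (one per matrix element) plus the monomer spin-orbit splittings is diagonalized at each (R, θ) to obtain the 18 adiabatic potentials.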
15. Comparative analysis of transverse intrafascicular multichannel, longitudinal intrafascicular and multipolar cuff electrodes for the selective stimulation of nerve fascicles
Science.gov (United States)
Badia, Jordi; Boretius, Tim; Andreu, David; Azevedo-Coste, Christine; Stieglitz, Thomas; Navarro, Xavier
2011-06-01
The selection of a suitable nerve electrode for neuroprosthetic applications implies a trade-off between invasiveness and selectivity, wherein the ultimate goal is achieving the highest selectivity for a high number of nerve fascicles with the least invasiveness and potential damage to the nerve. The transverse intrafascicular multichannel electrode (TIME) is intended to be transversally inserted into the peripheral nerve and to be useful to selectively activate subsets of axons in different fascicles within the same nerve. We present a comparative study of TIME, longitudinal intrafascicular (LIFE) and multipolar cuff electrodes for the selective stimulation of small nerves. The electrodes were implanted on the rat sciatic nerve, and the activation of gastrocnemius, plantar and tibialis anterior muscles was recorded by EMG signals. Thus, the study allowed us to ascertain the selectivity of stimulation at the interfascicular and also at the intrafascicular level. The results of this study indicate that (1) intrafascicular electrodes (LIFE and TIME) provide excitation circumscribed to the implanted fascicle, whereas extraneural electrodes (cuffs) predominantly excite nerve fascicles located superficially; (2) the minimum threshold for muscle activation with TIME and LIFE was significantly lower than with cuff electrodes; (3) TIME allowed us to selectively activate the three tested muscles when stimulating through different active sites of one device, both at inter- and intrafascicular levels, whereas selective activation using a multipolar cuff (with a longitudinal tripolar stimulation configuration) was only possible for two muscles, at the interfascicular level, and LIFE did not selectively activate more than one muscle in the implanted nerve fascicle.
16. Calculation of vapor-liquid equilibrium and PVTx properties of geological fluid system with SAFT-LJ EOS including multi-polar contribution. Part III. Extension to water-light hydrocarbons systems
Science.gov (United States)
Sun, Rui; Lai, Shaocong; Dubessy, Jean
2014-01-01
The SAFT-LJ EOS improved by Sun and Dubessy (2010, 2012) is extended to water-light hydrocarbon systems. Light hydrocarbons (including CH4, C2H6, C3H8 and nC4H10) are modeled as chain molecules without multi-polar moments. The contributions of the shape of molecules and the main intermolecular interactions existing in water-light hydrocarbon systems (including repulsive and attractive forces between Lennard-Jones segments, the hydrogen-bonding force and the multi-polar interaction between water molecules) to the residual Helmholtz energy were accounted for by this EOS. The adjustable parameters for the interactions of H2O-CH4, H2O-C2H6, H2O-C3H8, and H2O-nC4H10 pairs were evaluated from mutual solubility data of binary water-hydrocarbon systems at vapor-liquid equilibria. Comparison with the experimental data shows this SAFT-LJ EOS can represent vapor-liquid (and liquid-liquid) equilibria of binary water-light hydrocarbon systems well over a wide P-T range. The accuracy of this EOS for the mutual solubilities of methane, ethane, propane and water is generally within the experimental uncertainty. Moreover, the model is able to accurately predict the vapor-liquid equilibria and PVTx properties of multi-component systems composed of water, light hydrocarbons and CO2. To our knowledge, this EOS is the first among SAFT-type EOSs allowing quantitative calculation of the mutual solubilities of water and light hydrocarbons over a wide P-T range. This work indicates that a molecular-based EOS combined with a conventional mixing rule can well describe the thermodynamic behavior of highly non-ideal systems such as water-light hydrocarbon mixtures, except in the critical region, for which long-range density fluctuations cannot be taken into account by this analytical model.
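The SAFT-style bookkeeping described above can be sketched schematically: the residual Helmholtz energy is a sum of independent contributions (Lennard-Jones segments, chain formation, association, multipolar terms). The numbers below are invented stand-ins; a real EOS evaluates each term from temperature, density and composition.

```python
# Schematic SAFT decomposition of the reduced residual Helmholtz energy.
# All values are hypothetical, chosen only to show the additive structure.
contributions = {
    "lj_segments": -1.75,   # repulsion/attraction between LJ segments
    "chain":        0.25,   # chain term for the light hydrocarbon
    "association": -1.00,   # hydrogen bonding between water molecules
    "multipolar":  -0.25,   # dipolar/quadrupolar water interactions
}

a_res = sum(contributions.values())  # reduced residual Helmholtz energy
print(a_res)  # -2.75
```

Pressure, fugacities and hence mutual solubilities then follow from density and composition derivatives of this single additive quantity, which is why each physical effect can be parameterized separately.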
17. Injection and Confinement of Plasma in a Stellarator with a Multipolar (ℓ = 2) Helical Field
International Nuclear Information System (INIS)
We give the results of external injection of plasma into a closed magnetic trap and of the investigation of the effect of helical fields on the maintenance of the plasma. The "L-I" apparatus consists of a toroidal magnetic trap of stellarator type with a continuous "double-thread" multipolar (ℓ = 2) helical field. The large diameter of the torus is 120 cm and the diameter of the vacuum chamber section 10 cm. The maximum value of the longitudinal field H is 10^4 Oe. The magnetic field of the stellarator is variable in time, to enable a study of adiabatic heating of the plasma in a trap of this type. The L-I stellarator and low-energy electron beams were used to investigate the structure of the magnetic surfaces. The method made it possible to determine the existence and form of closed magnetic surfaces over a wide range of the ratio of the helical and longitudinal fields. Resonance perturbations of the magnetic surfaces were detected that led to splitting of the latter and the formation of rosettes. Magnetic measurements confirmed the theoretical postulates regarding the magnetic surfaces and the effect of perturbations in resonance and non-resonance cases. Filling of the trap with plasma was effected by injecting plasma jets from spark guns into the transverse magnetic field. The total number of charged particles generated at each injection was ≈5 × 10^14. Injection could be made both while the field was growing, with subsequent adiabatic compression of the plasma, and while the field was quasi-constant. Filling of the trap took place over a time of the order of tens of µs. The initial density of the plasma was ≈10^11 cm^-3, and the electron temperature ≈15 eV. The density of the plasma was measured by the resonance ultra-high-frequency method and its distribution over the section was determined by twin Langmuir probes. The experiments showed the effective influence of a helical field on the plasma.
In the absence of a helical field, the density distribution was non-symmetrical relative to the centre of the chamber and the plasma drifted towards the external wall of the torus; its lifetime was of the order of 100 to 200 µs. When a helical field was applied, the density distribution was symmetrical about the axis of the chamber and was determined by the form of the magnetic surfaces; the density fall-off time constant was ≈1 to 2 ms. The measured lifetime of the plasma when the apparatus is working as a stellarator cannot be explained by conventional diffusion. The spectrum of oscillations in the plasma electric fields was studied, and we discuss the various mechanisms capable of explaining the anomalously high plasma diffusion rates that we observed. (author)
18. Temporal lobe seizure recorded by magnetoencephalography: case report
Directory of Open Access Journals (Sweden)
Carlos Amo
2004-09-01
Ictal onset localization is an important factor in the presurgical evaluation of epilepsy. This paper describes the localization of a seizure onset recorded by magnetoencephalography (MEG) from a 12-year-old male patient who suffered from complex partial drug-resistant seizures. MRI revealed a 20 mm diameter lesion located in the left hippocampus. Scalp EEG showed left temporal theta waves. Interictal MEG recordings detected isolated spike-wave activity posterior and inferior to the MRI lesion. Ictal MEG showed continuous spike-wave activity (2 Hz). Dipole localization placed the seizure onset in the inferior left temporal gyrus, at the same localization as the interictal MEG activity. This ictal activity spreads bilaterally to frontal areas. Intrasurgical electrocorticography recording confirmed the interictal MEG results.
19. Auditory and cognitive deficits associated with acquired amusia after stroke: a magnetoencephalography and neuropsychological follow-up study.
Science.gov (United States)
Särkämö, Teppo; Tervaniemi, Mari; Soinila, Seppo; Autti, Taina; Silvennoinen, Heli M; Laine, Matti; Hietanen, Marja; Pihko, Elina
2010-01-01
Acquired amusia is a common disorder after damage to the middle cerebral artery (MCA) territory. However, its neurocognitive mechanisms, especially the relative contribution of perceptual and cognitive factors, are still unclear. We studied cognitive and auditory processing in the amusic brain by performing neuropsychological testing as well as magnetoencephalography (MEG) measurements of frequency and duration discrimination using magnetic mismatch negativity (MMNm) recordings. Fifty-three patients with a left (n = 24) or right (n = 29) hemisphere MCA stroke (MRI verified) were investigated 1 week, 3 months, and 6 months after the stroke. Amusia was evaluated using the Montreal Battery of Evaluation of Amusia (MBEA). We found that amusia caused by right hemisphere damage (RHD), especially to temporal and frontal areas, was more severe than amusia caused by left hemisphere damage (LHD). Furthermore, the severity of amusia was found to correlate with weaker frequency MMNm responses only in amusic RHD patients. Additionally, within the RHD subgroup, the amusic patients who had damage to the auditory cortex (AC) showed worse recovery on the MBEA as well as weaker MMNm responses throughout the 6-month follow-up than the non-amusic patients or the amusic patients without AC damage. Furthermore, the amusic patients both with and without AC damage performed worse than the non-amusic patients on tests of working memory, attention, and cognitive flexibility. These findings suggest domain-general cognitive deficits to be the primary mechanism underlying amusia without AC damage, whereas amusia with AC damage is associated with both auditory and cognitive deficits. PMID:21152040
20. Application of high-quality SiO2 grown by multipolar ECR source to Si/SiGe MISFET
Science.gov (United States)
Sung, K. T.; Li, W. Q.; Li, S. H.; Pang, S. W.; Bhattacharya, P. K.
1993-01-01
A 5 nm-thick SiO2 gate oxide was grown on a Si(p+)/Si0.8Ge0.2 modulation-doped heterostructure at 26 °C with an oxygen plasma generated by a multipolar electron cyclotron resonance source. The ultrathin oxide has a breakdown field above 12 MV/cm and a fixed charge density of about 3 × 10^10 cm^-2. Leakage current as low as 1 µA was obtained with the gate biased at 4 V. The MISFET with a 0.25 × 25 µm² gate shows a maximum drain current of 41.6 mA/mm and a peak transconductance of 21 mS/mm.
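A quick consistency check on the figures quoted above: a 12 MV/cm breakdown field across a 5 nm oxide corresponds to a roughly 6 V breakdown voltage, comfortably above the 4 V gate bias used for the leakage measurement.

```python
# Breakdown voltage = breakdown field x oxide thickness (unit bookkeeping).
E_breakdown = 12e6      # V/cm, from the abstract
t_oxide_cm = 5e-7       # 5 nm expressed in cm
V_breakdown = E_breakdown * t_oxide_cm
print(round(V_breakdown, 2), "V")  # 6.0 V
```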
1. Properties of highly electronegative plasmas produced in a multipolar magnetic-confined device with a transversal magnetic filter
DEFF Research Database (Denmark)
Draghici, Mihai; Stamate, Eugen
2010-01-01
Highly electronegative plasmas were produced in Ar/SF6 gas mixtures in a dc discharge with multipolar magnetic confinement and a transversal magnetic filter. A Langmuir probe and mass spectrometry were used for plasma diagnostics. Plasma potential drift, the influence of small or large area biased electrodes on plasma parameters, the formation of the negative ion sheath and etching rates by positive and negative ions have been investigated for different experimental conditions. When the electron temperature was reduced below 1 eV, the negative-ion-to-electron density ratio exceeded 100 even for very low amounts of SF6 gas. The plasma potential drift could be controlled by proper wall conditioning. A large electrode biased positively had no effect on the plasma potential for negative-ion-to-electron density ratios larger than 50. At similar or higher electronegativities, a negative ion sheath could be formed by applying a positive bias of a few hundred volts.
2. Imaging of biogenic and anthropogenic ocean surface films by the multifrequency/multipolarization SIR-C/X-SAR
Science.gov (United States)
Gade, Martin; Alpers, Werner; Hühnerfuss, Heinrich; Masuko, Harunobu; Kobayashi, Tatsuharu
1998-08-01
Results from the analyses of several spaceborne imaging radar-C/X-band synthetic aperture radar (SIR-C/X-SAR) images are presented, which were acquired during the two SIR-C/X-SAR missions in April and October 1994 by the L-, C-, and X-band multipolarization SAR aboard the space shuttle Endeavour. The images, showing natural (biogenic) surface slicks as well as man-made (anthropogenic) mineral oil spills, were analyzed with the aim of studying whether or not active radar techniques can be applied to discriminating between these two kinds of surface films. Controlled slick experiments were carried out during both shuttle missions in the German Bight of the North Sea as well as in the northern part of the Sea of Japan and the Kuroshio Stream region, where surface films of different viscoelastic properties were deployed within the swath of the shuttle radars. The results show that the damping behavior of the same substance is strongly dependent on wind speed. At high wind speed (8-12 m/s) the ratio of the radar backscatter from a slick-free and a slick-covered water surface (damping ratio) is smaller than at low to moderate wind speeds (4-7 m/s). At 12 m/s, only slight differences in the damping behavior of different substances were measured by SIR-C/X-SAR. Furthermore, several SAR scenes from various parts of the world's oceans showing radar signatures of biogenic as well as anthropogenic surface films at low to moderate wind speeds are analyzed. The damping behavior of these different kinds of oceanic surface films varies particularly at L-band, where the biogenic surface films exhibit larger damping. Results of polarimetric studies from multipolarization SAR images showing various surface films are presented. These results indicate that Bragg scattering as well as specular reflection contribute to the backscattered radar signal at low incidence angles (up to 30°).
It is concluded that at low to moderate wind speeds, multifrequency radar techniques seem to be capable of discriminating between the different surface films, whereas at high wind conditions a discrimination seems to be difficult.
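The damping ratio discussed above is simply the normalized radar cross section (NRCS) of slick-free water divided by that of slick-covered water, usually quoted in dB. The NRCS values below are invented for illustration; only the wind-speed regimes come from the abstract.

```python
import math

# Damping ratio in dB: backscatter from clean water over backscatter
# from slick-covered water (hypothetical NRCS values, linear units).
def damping_ratio_db(sigma0_clean, sigma0_slick):
    return 10.0 * math.log10(sigma0_clean / sigma0_slick)

# Strong film damping at low-to-moderate wind (4-7 m/s)...
print(round(damping_ratio_db(0.020, 0.002), 1))  # 10.0 dB
# ...and a much smaller contrast at high wind (8-12 m/s):
print(round(damping_ratio_db(0.020, 0.010), 1))  # 3.0 dB
```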
3. A novel strategy for targeted killing of tumor cells: Induction of multipolar acentrosomal mitotic spindles with a quinazolinone derivative mdivi-1.
Science.gov (United States)
Wang, Jingnan; Li, Jianfeng; Santana-Santos, Lucas; Shuda, Masahiro; Sobol, Robert W; Van Houten, Bennett; Qian, Wei
2015-02-01
Traditional antimitotic drugs for cancer chemotherapy often have undesired toxicities to healthy tissues, limiting their clinical application. Developing novel agents that specifically target tumor cell mitosis is needed to minimize the toxicity and improve the efficacy of this class of anticancer drugs. We discovered that mdivi-1 (mitochondrial division inhibitor-1), which was originally reported as an inhibitor of the mitochondrial fission protein Drp1, specifically disrupts M phase cell cycle progression only in human tumor cells, but not in non-transformed fibroblasts or epithelial cells. The antimitotic effect of mdivi-1 is Drp1 independent, as mdivi-1 induces M phase abnormalities in both Drp1 wild-type and Drp1 knockout SV40-immortalized/transformed MEF cells. We also identified that the tumor transformation process required for the antimitotic effect of mdivi-1 is downstream of SV40 large T and small t antigens, but not hTERT-mediated immortalization. Mdivi-1 induces multipolar mitotic spindles in tumor cells regardless of their centrosome numbers. Acentrosomal spindle poles, which do not contain the bona fide centrosome components γ-tubulin and centrin-2, were found to contribute to the spindle multipolarity induced by mdivi-1. Gene expression profiling revealed that the genes involved in oocyte meiosis and assembly of acentrosomal microtubules are highly expressed in tumor cells. We further identified that tumor cells have enhanced activity in the nucleation and assembly of acentrosomal kinetochore-attaching microtubules. Mdivi-1 inhibited the integration of acentrosomal microtubule-organizing centers into centrosomal asters, resulting in the development of acentrosomal mitotic spindles preferentially in tumor cells. The formation of multipolar acentrosomal spindles leads to gross genome instability and Bax/Bak-dependent apoptosis.
Taken together, our studies indicate that inducing multipolar spindles composed of acentrosomal poles in mitosis could achieve a tumor-specific antimitotic effect, and mdivi-1 thus represents a novel class of compounds acting as acentrosomal spindle inducers (ASI). PMID:25458053
4. Occurrence of multipolar mitoses and association with Aurora-A/-B kinases and p53 mutations in aneuploid esophageal carcinoma cells
Directory of Open Access Journals (Sweden)
Münch Claudia
2011-04-01
Full Text Available Abstract Background Aurora kinases and loss of p53 function are implicated in the carcinogenesis of aneuploid esophageal cancers. Their association with occurrence of multipolar mitoses in the two main histotypes of aneuploid esophageal squamous cell carcinoma (ESCC) and Barrett's adenocarcinoma (BAC) remains unclear. Here, we investigated the occurrence of multipolar mitoses, Aurora-A/-B gene copy numbers and expression/activation as well as p53 alterations in aneuploid ESCC and BAC cancer cell lines. Results A control esophageal epithelial cell line (EPC-hTERT) had normal Aurora-A and -B gene copy numbers and expression, was p53 wild type and displayed bipolar mitoses. In contrast, both ESCC (OE21, Kyse-410) and BAC (OE33, OE19) cell lines were aneuploid and displayed elevated gene copy numbers of Aurora-A (chromosome 20 polysomy: OE21, OE33, OE19; gene amplification: Kyse-410) and Aurora-B (chromosome 17 polysomy: OE21, Kyse-410). Aurora-B gene copy numbers were not elevated in OE19 and OE33 cells despite chromosome 17 polysomy. Aurora-A expression and activity (Aurora-A/phosphoT288) was not directly linked to gene copy numbers and was highest in Kyse-410 and OE33 cells. Aurora-B expression and activity (Aurora-B/phosphoT232) was higher in OE21 and Kyse-410 than in OE33 and OE19 cells. The mitotic index was highest in OE21, followed by OE33 > OE19 > Kyse-410 and EPC-hTERT cells. Multipolar mitoses occurred with high frequency in OE33 (13.8 ± 4.2%), followed by OE21 (7.7 ± 5.0%) and Kyse-410 (6.3 ± 2.0%) cells. Single multipolar mitoses occurred in OE19 (1.0 ± 1.0%) cells. Distinct p53 mutations and p53 protein expression patterns were found in all esophageal cancer cell lines, but complete functional p53 inactivation occurred in OE21 and OE33 only. Conclusions High Aurora-A expression alone is not associated with overt multipolar mitoses in aneuploid ESCC and BAC cancer cells, as specifically shown here for OE21 and OE33 cells, respectively.
Additional p53 loss of function mutations are necessary for this to occur, at least for invasive esophageal cancer cells. Further assessment of Aurora kinases and p53 interactions in cells or tissue specimens derived from non-invasive dysplasia (ESCC) or intestinal metaplasia (BAC) is necessary to disclose a potential causative role of Aurora kinases and p53 in the development of aneuploid, invasive esophageal cancers.
5. The Swath Imaging Multi-polarization Photon-counting Lidar (SIMPL): A Pathfinder for the LIDAR Surface Topography (LIST) Mission
Science.gov (United States)
Dabney, P.; Harding, D.; Abshire, J.; Seas, A.; Sun, X.; Shuman, C.; Scambos, T.
2007-12-01
The Swath Imaging Multi-polarization Photon-counting Lidar (SIMPL) is an airborne prototype in development to demonstrate laser altimetry measurement methods and components that enable efficient, high-resolution, swath mapping of topography and surface properties from space. This demonstration is advancing technologies that are applicable to the global elevation mapping objectives (5 m spatial resolution, 10 cm vertical precision) of the LIDAR Surface Topography (LIST) mission recommended by the National Research Council in the Earth Science Decadal Survey report to NASA and NOAA. The main focus of this instrument development, sponsored by the NASA Earth Science and Technology Office Instrument Incubator Program, is to demonstrate an approach for detailed monitoring of ice sheet, sea ice and glacier change from a spacecraft in low Earth orbit. Although it currently emphasizes polar-region cryosphere objectives, the SIMPL approach is also applicable in other applications, including measuring changes in land topography, forest height and structure, and inland water and snow cover height and extent. SIMPL employs short-pulse (1 ns) fiber laser transmitters operating at 1064 nm and 532 nm, a beam splitter to divide the energy into four parallel beams displaced cross-track, single photon counting module (SPCM) detectors, and high precision timing electronics. Measuring the polarization state of the received signal relative to the laser transmit pulse provides the depolarization ratio of the surface returns at 532 and 1064 nm, in order to differentiate surface types based on their scattering properties. Results of laboratory testing of a single beam breadboard and the design and implementation of the four-beam flight instrument will be described.
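The depolarization ratio described above is simply the ratio of cross-polarized to co-polarized return energy. A minimal sketch of how it could be estimated from photon-counting range histograms (all arrays and values below are hypothetical illustrations, not SIMPL data):

```python
def depolarization_ratio(cross_pol_counts, co_pol_counts):
    """Depolarization ratio: cross-polarized over co-polarized return energy.

    For a photon-counting lidar the per-range-bin counts are proportional
    to received energy, so the ratio of (background-subtracted) summed
    counts estimates the ratio.  Values near 0 indicate specular surfaces
    (smooth ice, water); larger values indicate volume scattering
    (snow, vegetation canopies).
    """
    cross = sum(cross_pol_counts)
    co = sum(co_pol_counts)
    if co == 0:
        raise ValueError("no co-polarized signal")
    return cross / co

# Hypothetical surface-return histograms (photon counts per range bin):
smooth_ice = depolarization_ratio([2, 5, 3], [40, 120, 60])    # low ratio
snow       = depolarization_ratio([30, 80, 50], [45, 110, 70]) # higher ratio
print(round(smooth_ice, 3), round(snow, 3))
```

The same ratio would be formed independently for the 532 nm and 1064 nm channels to separate surface types by their scattering properties.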
6. A Rússia na ordem mundial: com o Ocidente, com o Oriente ou um pólo autônomo em um mundo multipolar? [Russia in the world order: with the West, with the East, or as an autonomous pole in a multipolar world?]
Scientific Electronic Library Online (English)
Alexander, Zhebit.
2003-06-01
Full Text Available SciELO Brazil | Language: Portuguese. Abstract (English version): The article seeks to define Russia's place and role in contemporary international relations in recent years. While discussing the traditional dilemma of Russian foreign policy - Westernism versus Orientalism - the author analyses the scenario of multipolarity backed by the new Russian foreign policy concept and relates it to the pragmatism and multilateralism that characterize the international posture of Putin's Russia, offering several considerations on the impact of the September 11, 2001 terrorist attacks on the United States on Russian foreign policy. The pragmatic attitude and multi-vector nature of today's Russian foreign policy contribute, according to the author, to strengthening Russia's international position, in contrast with the loss or uncertain nature of the alliances and relationships of the post-Soviet transition period.
8. Unusual pressure dependence of the multipolar interactions in CexLa1-xB6
International Nuclear Information System (INIS)
We performed a mean field calculation of the magnetization under pressure for the four sublattice model to understand the unusual pressure effect of CeB6. The calculated results are in good agreement with the experimental results, and the canted ferromagnetic ground state is predicted to appear at higher pressure. We studied the electrical resistivity of Ce0.75La0.25B6 under pressure. We found that phase III is rapidly suppressed by pressure and that the IV-I transition temperature T(IV-I) increases with pressure. At P=0.6 GPa, a direct phase transition from IV to II is found, which may provide a clue to understanding phase IV
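As an illustration of the self-consistent scheme underlying such mean field calculations, here is a minimal single-order-parameter solver (a generic textbook analogue, not the four-sublattice multipolar model of the paper):

```python
import math

def mean_field_magnetization(t_reduced, tol=1e-10):
    """Solve the generic mean-field self-consistency m = tanh(m / t)
    by fixed-point iteration, with t = T/Tc in reduced units.

    This is the simplest single-sublattice analogue: the four-sublattice
    treatment couples several order parameters (and pressure-dependent
    multipolar interactions), but follows the same self-consistent loop.
    """
    m = 1.0                      # start from the saturated solution
    for _ in range(10000):
        m_new = math.tanh(m / t_reduced)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

print(round(mean_field_magnetization(0.5), 4))   # ordered phase: m close to 1
print(round(mean_field_magnetization(1.5), 4))   # above Tc: m collapses to 0
```

Pressure enters such models through the exchange and multipolar coupling constants, shifting the transition temperatures the abstract describes.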
9. Radio phase characteristics of terrain from multipolarized synthetic aperture radar data
Science.gov (United States)
Zebker, H. A.; Held, D. N.
1985-01-01
Recent advances in digital data acquisition and signal processing technology permit simultaneous measurement of the complex (amplitude and phase) radar backscatter from several polarization-diverse antennas. While absolute phase measurements remain to be analyzed in detail, the differential phase of signals polarized parallel and perpendicular to the plane of incidence provides information on the scattering mechanisms that dominate the interaction of the radio waves with the terrain. Analysis of phase backscatter maps from a typical urban area yields a bimodal distribution with the two peaks separated by approximately 180 degrees, highly indicative of a dominant simple geometric one bounce-two bounce mechanism. Some maps of agricultural areas exhibit a similar distribution; however, other agricultural areas yield a distribution that, while still bimodal, consists of two peaks separated by about 110 degrees. Still other agricultural areas exhibit a more complex distribution. All of the observed phase shifts appear to be independent of incidence angle from at least 20 degrees to 55 degrees; therefore the 110-degree shifts are inconsistent with both the geometric model used for the urban area and with common dielectric slab models.
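The bimodal co-polarized phase distribution described above can be reproduced with a toy simulation: odd-bounce (single-bounce) scatterers cluster near 0 degrees of HH-VV phase difference, while double-bounce scatterers cluster near 180 degrees. All parameters below (amplitudes, noise level, class mix) are illustrative assumptions:

```python
import cmath, math, random

def copol_phase_diff(hh, vv):
    """Phase of HH relative to VV, in degrees in (-180, 180]."""
    return cmath.phase(hh * vv.conjugate()) * 180.0 / math.pi

random.seed(0)
samples = []
for _ in range(500):
    amp = random.uniform(0.5, 1.5)
    noise = random.gauss(0.0, 0.15)       # phase noise, radians (assumed)
    if random.random() < 0.5:             # one-bounce scatterer
        phi = noise
    else:                                 # two-bounce scatterer
        phi = math.pi + noise
    vv = complex(amp, 0.0)
    hh = amp * cmath.exp(1j * phi)
    samples.append(copol_phase_diff(hh, vv))

near_zero = sum(1 for p in samples if abs(p) < 45)
near_180  = sum(1 for p in samples if abs(p) > 135)
print(near_zero, near_180)   # two well-separated modes, ~180 deg apart
```

The 110-degree peak separation reported for some agricultural areas is precisely what this simple geometric picture cannot produce, which is the paper's point.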
10. Experimental investigation of microwave interaction with magnetoplasma in miniature multipolar configuration using impedance measurements
Energy Technology Data Exchange (ETDEWEB)
Dey, Indranuj, E-mail: indranuj@aees.kyushu-u.ac.jp; Toyoda, Yuji; Yamamoto, Naoji; Nakashima, Hideki [Department of Advanced Energy Engineering Science, Kyushu University, Kasuga 816-8580 (Japan)
2014-09-15
A miniature microwave plasma source employing both radial and axial magnetic fields for plasma confinement has been developed for micro-propulsion applications. Plasma is initiated by launching microwaves via a short monopole antenna to circumvent geometrical cutoff limitations. The amplitude and phase of the forward and reflected microwave power are measured to obtain the complex reflection coefficient, from which the equivalent impedance of the plasma source is determined. The effect of the critical plasma density condition is reflected in the measurements and provides insight into the working of the miniature plasma source. A basic impedance calculation model is developed to help in understanding the experimental observations. From experiment and theory, it is seen that the equivalent impedance magnitude is controlled by the coaxial discharge boundary conditions, and the phase is influenced primarily by the plasma-immersed antenna impedance.
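The equivalent-impedance step described above follows the standard transmission-line relation Z = Z0 (1 + Γ)/(1 − Γ), with Γ formed from the measured forward/reflected powers and their relative phase. A sketch with hypothetical measurement values (the 50-ohm reference and the numbers are assumptions, not the paper's data):

```python
import cmath, math

def reflection_coefficient(p_fwd, p_ref, phase_deg):
    """Complex reflection coefficient from measured powers and relative phase.

    |Gamma| = sqrt(P_reflected / P_forward); the measured relative phase
    of the reflected wave gives its angle.
    """
    mag = (p_ref / p_fwd) ** 0.5
    return mag * cmath.exp(1j * math.pi * phase_deg / 180.0)

def equivalent_impedance(gamma, z0=50.0):
    """Standard transmission-line relation Z = Z0 (1 + Gamma) / (1 - Gamma)."""
    return z0 * (1 + gamma) / (1 - gamma)

# Hypothetical measurement: 10 W forward, 2.5 W reflected, 60 deg phase.
g = reflection_coefficient(10.0, 2.5, 60.0)
z = equivalent_impedance(g)
print(f"|Gamma| = {abs(g):.2f}, Z = {z.real:.1f} + {z.imag:.1f}j ohm")
```

A matched load (Γ = 0) returns exactly Z0; as the plasma density crosses the critical value, both |Γ| and the phase of Γ shift, which is how the density condition shows up in the impedance data.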
12. Magnetar Giant Flares in Multipolar Magnetic Fields --- I. Fully and Partially Open Eruptions of Flux Ropes
CERN Document Server
Huang, Lei
2014-01-01
We propose a catastrophic eruption model for a magnetar's enormous energy release during giant flares, in which a toroidal and helically twisted flux rope is embedded within a force-free magnetosphere. The flux rope stays in stable equilibrium states initially and evolves quasi-statically. Once the loss-of-equilibrium point is reached, the flux rope can no longer sustain a stable equilibrium and erupts catastrophically. During the process, the magnetic energy stored in the magnetosphere is rapidly released as the result of destabilization of the global magnetic topology. The magnetospheric energy that could be accumulated is of vital importance for the outbursts of magnetars. We carefully establish the fully open fields and partially open fields for various boundary conditions at the magnetar surface and study the relevant energy thresholds. By investigating the magnetic energy accumulated at the critical catastrophic point, we find that it is possible to drive fully open eruptions for dipole dominated background...
13. Establishment of M1 multipolarity of a 6.5 μN² resonance in 172Yb at Eγ = 3.3 MeV
Energy Technology Data Exchange (ETDEWEB)
Schiller, A; Voinov, A; Algin, E; Becker, J A; Bernstein, L A; Garrett, P E; Guttormsen, M; Nelson, R O; Rekstad, J; Siem, S
2004-02-04
Two-step-cascade spectra in 172Yb have been measured after thermal neutron capture. They are compared to calculations based on experimental values of the level density and radiative strength function (RSF) obtained from the 173Yb(3He,αγ)172Yb reaction. The multipolarity of a 6.5(15) μN² resonance at Eγ = 3.3(1) MeV in the RSF is determined to be M1 by this comparison.
14. ZnO thin films with c-axis orientation prepared on the room temperature substrate by the ECR multipolar plasma sputtering method
International Nuclear Information System (INIS)
Zinc oxide (ZnO) films with c-axis orientation have been prepared on room temperature substrates by a reactive sputtering deposition utilizing an electron-cyclotron-resonance multipolar (ECRM) plasma apparatus built with Nd-Fe-B magnets and 2.45 GHz, TE10 mode microwaves. The plasma distributions in the axial direction were found to be sensitive to the magnetic field configurations in the plasma cavity. XRD, TEM and SEM analyses indicated that the deposited ZnO films were nanometre-sized, smooth and dense, with high c-axis orientation
15. The Fate of Sub-micron Circumplanetary Dust Grains II: Multipolar Fields
CERN Document Server
Jontof-Hutter, Daniel
2012-01-01
We study the radial and vertical stability of dust grains launched with all charge-to-mass ratios at arbitrary distances from rotating planets with complex magnetic fields. We show that the aligned dipole magnetic field model analyzed by Jontof-Hutter and Hamilton (2012) is an excellent approximation in most cases, but that fundamentally new physics arises with the inclusion of non-axisymmetric magnetic field terms. In particular, large numbers of distant negatively-charged dust grains, stable in a magnetic dipole, can be driven to escape by a more complex field. We trace the origin of the instability to overlapping Lorentz resonances which are extremely powerful when the gravitational and electromagnetic forces on a dust grain are comparable. These resonances enable a dust grain to tap the spin energy of the planet to power its escape. We also explore the relatively minor influence of different launch speeds and the far more important effects of variable grain charge. Only the latter are capable of significa...
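The regime in which "the gravitational and electromagnetic forces on a dust grain are comparable" can be located with an order-of-magnitude estimate. The sketch below assumes a spherical grain (charge q = 4πε0·a·φ), an aligned dipole field B = B0 (R/r)³, and relative velocity between Keplerian motion and rigid corotation; all planetary and grain parameters are illustrative Jupiter-like assumptions, not values from the paper:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def force_ratio(grain_radius, potential, r, planet):
    """Ratio of Lorentz to gravitational force magnitude on a dust grain."""
    M, R, B0, spin = planet["M"], planet["R"], planet["B0"], planet["spin"]
    rho = 1000.0                              # grain density, kg/m^3 (assumed)
    m = 4.0 / 3.0 * math.pi * grain_radius**3 * rho
    q = 4.0 * math.pi * EPS0 * grain_radius * potential  # spherical grain
    v_kep = math.sqrt(G * M / r)              # Keplerian orbital speed
    v_cor = spin * r                          # rigid corotation speed
    B = B0 * (R / r) ** 3                     # aligned-dipole field strength
    f_lorentz = abs(q) * abs(v_kep - v_cor) * B
    f_gravity = G * M * m / r**2
    return f_lorentz / f_gravity

# Hypothetical Jupiter-like parameters (B0 = equatorial surface field, tesla):
jupiter = {"M": 1.9e27, "R": 7.15e7, "B0": 4.2e-4, "spin": 1.76e-4}
for a_um in (0.01, 0.1, 1.0):
    print(a_um, force_ratio(a_um * 1e-6, 5.0, 2.0 * jupiter["R"], jupiter))
```

Since q/m scales as 1/a², the ratio grows rapidly as the grain shrinks; with these assumed parameters it passes through order unity for grains around a tenth of a micron, which is exactly the sub-micron regime where the Lorentz resonances discussed above are most powerful.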
16. Dynamic causal modelling of distributed electromagnetic responses
OpenAIRE
Daunizeau, J.; Kiebel, S. J.; Friston, K. J.
2009-01-01
In this note, we describe a variant of dynamic causal modelling for evoked responses as measured with electroencephalography or magnetoencephalography (EEG and MEG). We depart from equivalent current dipole formulations of DCM, and extend it to provide spatiotemporal source estimates that are spatially distributed. The spatial model is based upon neural-field equations that model neuronal activity on the cortical manifold. We approximate this description of electrocortical activity with a set...
17. Brain activity is related to individual differences in the number of items stored in auditory short-term memory for pitch: evidence from magnetoencephalography.
Science.gov (United States)
Grimault, Stephan; Nolden, Sophie; Lefebvre, Christine; Vachon, François; Hyde, Krista; Peretz, Isabelle; Zatorre, Robert; Robitaille, Nicolas; Jolicoeur, Pierre
2014-07-01
We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from non-ASTM related systems. MEG revealed neural activity in frontal, temporal, and parietal cortices that increased with a greater number of items effectively held in memory by the participants during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortices in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in the capacity of ASTM. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain. PMID:24642285
18. A theoretical analysis of HLA-DRbeta1*0301-CLIP complex using the first three multipolar moments of the electrostatic field.
Science.gov (United States)
Balbín, Alejandro; Cárdenas, Constanza; Villaveces, José Luis; Patarroyo, Manuel E
2006-09-01
Interactions between the HLA-DRbeta1*0301 molecule and several occupying peptides obtained from computational substitutions made to the CLIP peptide are studied. The exploration was carried out using a vector composed of the first three terms of the multipolar expansion of the electrostatic field, namely, charge (q), dipole (d) and quadrupole (C). Comparisons between pocket-peptide interactions established that the binding pockets for this HLA molecule are ordered in terms of their importance for binding peptides, as follows: P1 > P4 > P6 > P7 > P9. A set of electrostatically distinct amino acids that determine interaction stability and specificity were identified for each pocket. The beta74R residue was especially identified as being the key amino acid mediating the occupying peptide binding for pocket 4; this residue has been recently associated with Graves' disease. PMID:16872734
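The three terms of the descriptor vector used above (charge q, dipole d, quadrupole C) can be computed directly from a point-charge representation of a molecular fragment. A minimal sketch with a hypothetical two-charge system, using the primitive (untraced) quadrupole convention — the actual study derived these moments from quantum-chemical charge distributions:

```python
def multipole_moments(charges, positions):
    """First three terms of the multipolar expansion of a point-charge set:
    total charge q, dipole vector d, and primitive quadrupole tensor C,
    with C[i][j] = sum_k q_k * x_i * x_j (origin-dependent for q != 0).
    """
    q = sum(charges)
    d = [sum(qk * pk[i] for qk, pk in zip(charges, positions))
         for i in range(3)]
    C = [[sum(qk * pk[i] * pk[j] for qk, pk in zip(charges, positions))
          for j in range(3)] for i in range(3)]
    return q, d, C

# Hypothetical two-charge system: a unit dipole along z.
q, d, C = multipole_moments([+1.0, -1.0], [(0, 0, 0.5), (0, 0, -0.5)])
print(q, d)   # 0.0 [0.0, 0.0, 1.0]
```

Comparing such (q, d, C) vectors between a pocket and candidate peptide residues is the kind of electrostatic similarity measure the paper uses to rank binding-pocket importance.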
19. Parametrized post-Newtonian theory of reference frames, multipolar expansions and equations of motion in the N-body problem
International Nuclear Information System (INIS)
Post-Newtonian relativistic theory of astronomical reference frames based on Einstein's general theory of relativity was adopted by the General Assembly of the International Astronomical Union in 2000. This theory is extended in the present paper by taking into account all relativistic effects caused by the presumable existence of a scalar field and parametrized by the two parameters, β and γ, of the parametrized post-Newtonian (PPN) formalism. We use a general class of the scalar-tensor (Brans-Dicke type) theories of gravitation to work out PPN concepts of global and local reference frames for an astronomical N-body system. The global reference frame is a standard PPN coordinate system. A local reference frame is constructed in the vicinity of a weakly self-gravitating body (a sub-system of the bodies) that is a member of the astronomical N-body system. Such a local inertial frame is required for unambiguous derivation of the equations of motion of the body in the field of other members of the N-body system and for construction of adequate algorithms for data analysis of various gravitational experiments conducted in ground-based laboratories and/or on board spacecraft in the solar system. We assume that the bodies comprising the N-body system have weak gravitational fields and move slowly. At the same time we do not impose any specific limitations on the distribution of density, velocity and the equation of state of the body's matter. Scalar-tensor equations of the gravitational field are solved by making use of the post-Newtonian approximations so that the metric tensor and the scalar field are obtained as functions of the global and local coordinates. A correspondence between the local and global coordinate frames is found by making use of the asymptotic expansion matching technique. 
This technique allows us to find a class of the post-Newtonian coordinate transformations between the frames as well as equations of translational motion of the origin of the local frame along with the law of relativistic precession of its spatial axes. These transformations depend on the PPN parameters β and γ, generalize the general relativistic transformations of the IAU 2000 resolutions, and should be used in the data processing of solar system gravitational experiments aimed at detecting the presence of the scalar field. These PPN transformations are also applicable in precise time-keeping metrology, celestial mechanics, astrometry, geodesy and navigation. We consider a multipolar post-Newtonian expansion of the gravitational and scalar fields and construct a set of internal and external gravitational multipoles depending on the parameters β and γ. These PPN multipoles generalize the Thorne-Blanchet-Damour multipoles defined in harmonic coordinates of the general theory of relativity. The PPN multipoles of the scalar-tensor theory of gravity are split into three classes: active, conformal, and scalar multipoles. Only two of them are algebraically independent, and we chose to work with the conformal and active multipoles. We derive the laws of conservation of the multipole moments and show that they must be formulated in terms of the conformal multipoles. We then focus on the law of conservation of the body's linear momentum, which is defined as a time derivative of the conformal dipole moment of the body in the local coordinates. 
We prove that the local force violating the law of conservation of the body's linear momentum depends exclusively on the active multipole moments of the body, along with a few other terms which depend on the internal structure of the body and are responsible for the violation of the strong principle of equivalence (the Nordtvedt effect). The PPN translational equations of motion of extended bodies in the global coordinate frame, with all gravitational multipoles taken into account, are derived from the law of conservation of the body's linear momentum supplemented by the law of motion of the origin of the local frame derived from the matching procedure. We use these equations to analyze...
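For reference, the roles of the two PPN parameters can be read off the standard PPN metric of a static, spherically symmetric body (a textbook result quoted here for context, not taken from the abstract):

```latex
g_{00} = -1 + \frac{2U}{c^{2}} - 2\beta\,\frac{U^{2}}{c^{4}} + \mathcal{O}(c^{-6}),
\qquad
g_{ij} = \delta_{ij}\left(1 + 2\gamma\,\frac{U}{c^{2}}\right) + \mathcal{O}(c^{-4}),
\qquad
U = \frac{GM}{r}.
```

Here γ measures how much space curvature is produced by unit rest mass and β the degree of nonlinearity in the superposition law for gravity; general relativity corresponds to β = γ = 1, in which limit the transformations discussed above reduce to those of the IAU 2000 resolutions.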
20. Binary black hole coalescence in the extreme-mass-ratio limit: Testing and improving the effective-one-body multipolar waveform
International Nuclear Information System (INIS)
We discuss the properties of the effective-one-body (EOB) multipolar gravitational waveform emitted by nonspinning black-hole binaries of masses μ and M in the extreme-mass-ratio limit μ/M = ν [...] -4 rad, and maintain a remarkably accurate phase coherence during the long inspiral (~33 orbits), accumulating only about -2x10^-3 rad until the last stable orbit, i.e. Δφ/φ ≈ -5.95x10^-6. We obtain such accuracy without calibrating the analytically resummed EOB waveform to numerical data, which indicates the aptitude of the EOB waveform for studies concerning the Laser Interferometer Space Antenna. We then improve the behavior of the EOB waveform around merger by introducing and tuning next-to-quasicircular corrections in both the gravitational wave amplitude and phase. For each multipole we tune only four next-to-quasicircular parameters by requiring compatibility between EOB and Regge-Wheeler-Zerilli waveforms at the light ring. The resulting phase difference around the merger time is as small as ±0.015 rad, with a fractional amplitude agreement of 2.5%. This suggests that next-to-quasicircular corrections to the phase can be a useful ingredient in comparisons between EOB and numerical-relativity waveforms.
1. Modeling Choices in Nuclear Warfighting: Two Classroom Simulations on Escalation and Retaliation
Science.gov (United States)
Schofield, Julian
2013-01-01
Two classroom simulations--"Superpower Confrontation" and "Multipolar Asian Simulation"--are used to teach and test various aspects of the Borden versus Brodie debate on the Schelling versus Lanchester approach to nuclear conflict modeling and resolution. The author applies a Schelling test to segregate high from low empathic students, and assigns…
2. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations
OpenAIRE
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first conducted for an autoregressive model with 5 latent variables (brain regions), each defined by 3 indicators (successive activity time bins). A series...
3. Inspiral-merger-ringdown multipolar waveforms of nonspinning black-hole binaries using the effective-one-body formalism
OpenAIRE
Pan, Yi; Buonanno, Alessandra; Boyle, Michael; Buchman, Luisa T.; Kidder, Lawrence E.; Pfeiffer, Harald P.; Scheel, Mark A.
2011-01-01
We calibrate an effective-one-body (EOB) model to numerical-relativity simulations of mass ratios 1, 2, 3, 4, and 6, by maximizing phase and amplitude agreement of the leading (2,2) mode and of the subleading modes (2,1), (3,3), (4,4) and (5,5). Aligning the calibrated EOB waveforms and the numerical waveforms at low frequency, the phase difference of the (2,2) mode between model and numerical simulation remains below 0.1 rad throughout the evolution for all mass ratios cons...
5. Inspiral-merger-ringdown multipolar waveforms of nonspinning black-hole binaries using the effective-one-body formalism
International Nuclear Information System (INIS)
We calibrate an effective-one-body (EOB) model to numerical-relativity simulations of mass ratios 1, 2, 3, 4, and 6, by maximizing phase and amplitude agreement of the leading (2, 2) mode and of the subleading modes (2, 1), (3, 3), (4, 4) and (5, 5). Aligning the calibrated EOB waveforms and the numerical waveforms at low frequency, the phase difference of the (2, 2) mode between model and numerical simulation remains below about 0.1 rad throughout the evolution for all mass ratios considered. The fractional amplitude difference at peak amplitude of the (2, 2) mode is 2% and grows to 12% during the ringdown. Using the Advanced LIGO noise curve we study the effectualness and measurement accuracy of the EOB model, and stress the relevance of modeling the higher-order modes for parameter estimation. We find that the effectualness, measured by the mismatch between the EOB and numerical-relativity polarizations which include only the (2, 2) mode, is smaller than 0.2% for binaries with total mass 20-200 Msun and mass ratios 1, 2, 3, 4, and 6. When numerical-relativity polarizations contain the strongest seven modes, and stellar-mass black holes with masses less than 50 Msun are considered, the mismatch for mass ratio 6 (1) can be as high as 7% (0.2%) when only the EOB (2, 2) mode is included, and an upper bound of the mismatch is 0.5% (0.07%) when all the four subleading EOB modes calibrated in this paper are taken into account. For binaries with intermediate-mass black holes with masses greater than 50 Msun the mismatches are larger. We also determine for which signal-to-noise ratios the EOB model developed here can be used to measure binary parameters with systematic biases smaller than statistical errors due to detector noise.
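The mismatch quoted above is one minus the normalized overlap of two waveforms, in general maximized over relative time and phase shifts and weighted by the detector noise spectrum. A simplified flat-noise sketch that maximizes only over a constant phase (the chirp-like series below are hypothetical, not the actual EOB/NR data):

```python
import cmath, math

def overlap(h1, h2):
    """Normalized inner product of two complex waveform series (flat noise).

    Taking the modulus of the inner product maximizes over a constant
    relative phase; a full match would also maximize over time shifts
    and weight by the detector noise power spectral density.
    """
    inner = sum(a * b.conjugate() for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(abs(a) ** 2 for a in h1))
    n2 = math.sqrt(sum(abs(b) ** 2 for b in h2))
    return abs(inner) / (n1 * n2)

def mismatch(h1, h2):
    return 1.0 - overlap(h1, h2)

# Hypothetical chirp-like signals differing by a small secular phase drift:
N = 2000
h_num = [cmath.exp(1j * 0.001 * n * n) for n in range(N)]
h_eob = [cmath.exp(1j * (0.001 * n * n + 1e-5 * n)) for n in range(N)]
print(f"mismatch = {mismatch(h_num, h_eob):.2e}")
```

Even a slow phase drift of a few hundredths of a radian over the series produces only a tiny mismatch, which is why sub-percent mismatches still correspond to visibly dephased waveforms near merger.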
6. An experimental evaluation of a 12.5-cm diameter multipolar microwave electron cyclotron resonance plasma source
Science.gov (United States)
Mak, Peng Un
Using the 17.78 cm dia. resonant cavity, a new baseplate has been built to accommodate a 12.5 cm dia. plasma discharge at the bottom of the cavity. This plasma discharge, called MPDR TM-13, was comprehensively investigated by several experimental techniques versus a number of selected input variables. The experimental techniques include the micro-coax electric field probes, a gridded energy analyzer, single and double Langmuir probes. The input variables consisted of the variations of reactor design (side feed, end feed, and four different strong magnet configurations), input microwave power, chamber pressure, and the microwave tuning positions. Experiments were carried out mostly in argon gas. This plasma source behavior and performance were evaluated by the measurements of the magnitude and spatial variation of the electric field in the applicator, and the measurements of the plasma density, electron temperature, plasma potential and the ion energy in the downstream discharge region. Microwave coupling efficiency and ion production cost are two main measures of the source performance. Experimental results demonstrate that both the side feed excited with TE211 mode and end feed excited with TM011 mode operating with the 8P/8M magnet configuration provide the best overall performance. Both show similar, excellent coupling efficiencies (~98%) and ion production costs. High plasma density (>10^11 cm^-3) and low ion impinging energy (12 to 25 eV) on a substrate were readily achieved in both reactors. However, it is easier to maintain a discharge at very low pressure regimes (<1 mTorr) with the side feed applicator while the end feed applicator produces a more uniform plasma. Based on the electric field measurements, a microwave equivalent circuit for the end feed applicator was developed. The high density behavior of the MPDR TM-3 was also experimentally investigated by examining discharge hysteresis phenomena versus the tuning, pressure, and input power. 
Control strategies using plasma internal variables for reproducible plasma processing were demonstrated in an argon soft-sputter oxide application. Finally, a comparison of the experimental performance of four different ECR reactors with the performance predicted by global discharge models indicates that these global models are useful in understanding plasma source behavior and in plasma reactor design.
7. Imagens multipolarizadas do sensor Palsar/Alos na discriminação das fases fenológicas da cana-de-açúcar / Multipolarized Palsar/Alos images to discriminate sugarcane phenological phases
Directory of Open Access Journals (Sweden)
Michelle Cristina Araujo Picoli
2012-09-01
8. Imagens multipolarizadas do sensor Palsar/Alos na discriminação das fases fenológicas da cana-de-açúcar / Multipolarized Palsar/Alos images to discriminate sugarcane phenological phases
Scientific Electronic Library Online (English)
Michelle Cristina Araujo, Picoli; Rubens Augusto, Lamparelli; Edson Eyji, Sano; Jansle Vieira, Rocha.
1307-13-01
9. Phase Transition in Size- and Charge-Asymmetric Model Electrolytes
OpenAIRE
Khomkin, A. L.; Mulenko, I. A.
2003-01-01
A theoretical model of vapor-liquid phase transition in a system of charged hard cores of different diameters is suggested (with the parameters of the transition obtained in a number of studies using the Monte Carlo method). The model is based on the assumption that, in the neighborhood of the critical point, the system of charged cores is a mixture of multipolarly interacting neutral complexes.
10. Clinical applications of magnetoencephalography in epilepsy
Directory of Open Access Journals (Sweden)
Ray Amit
2010-01-01
Full Text Available Magnetoencephalography (MEG) is being used with increasing frequency in the pre-surgical evaluation of patients with epilepsy. One of the major advantages of this technique over the EEG is the lack of distortion of MEG signals by the skull and intervening soft tissue. In addition, the MEG preferentially records activity from tangential sources, thus recording activity predominantly from sulci, which is not contaminated by activity from apical gyral (radial) sources. While the MEG is probably more sensitive than the EEG in detecting inter-ictal spikes, especially in some locations such as the superficial frontal cortex and the lateral temporal neocortex, both techniques are usually complementary to each other. The diagnostic accuracy of MEG source localization is usually better as compared to scalp EEG localization. Functional localization of eloquent cortex is another major application of the MEG. The combination of high spatial and temporal resolution of this technique makes it an extremely helpful tool for accurate localization of visual, somatosensory and auditory cortices as well as complex cognitive functions like language. Potential future applications include lateralization of memory function.
11. Scaling laws, force balances and dynamo generation mechanisms in numerical dynamo models: influence of boundary conditions
Science.gov (United States)
Dharmaraj, G.; Stanley, S.; Qu, A. C.
2014-10-01
We investigate the influence of different thermal and velocity boundary conditions on numerical geodynamo models. We concentrate on the implications for magnetic field morphology, heat transport scaling laws, force balances and generation mechanisms. The field morphology most strongly depends on the local Rossby number, but there is some variation in the dipolarity of the field with boundary condition. Scaling laws also depend on the boundary conditions, but a diffusivity-free scaling is a good first-order approximation for all our dipolar models. Our multipolar models, however, obey different scaling laws from dipolar models, implying a different force balance in these models. We find that our dipolar models have a stronger degree of Lorentz-Coriolis balance compared to our multipolar models, which have a stronger degree of Lorentz-inertial balance. The models with a stronger Lorentz-Coriolis dominance can be generated by either αω, α²ω or α² mechanisms, whereas the models with a stronger Lorentz-inertial balance are all α² dynamos. These results imply that some caution is necessary when extrapolating results from dynamo models to Earth-like parameters, since the choice of boundary conditions can have important effects.
12. Some New Applications of Weyl's Multipolarization Operators
CERN Document Server
Towber, J
2001-01-01
In Weyl's "The Classical Groups", he introduces some remarkable differential operators, which he calls "quasi-compositions" of the polarization operators Dij. In the present paper, an equivalent combinatorial formulation is obtained for these operators, and is then used to obtain explicit formulas for the differentials in certain complexes (constructed by Zelevinsky, and further studied by Verma, Akin et al.) which furnish higher syzygies for the Pluecker equations, and also for the defining relations for Weyl modules.
13. Modelling Sporangiospore-yeast transformation of Dimorphomyces strain.
Science.gov (United States)
Omoifo, C O
1996-01-01
Two types of buffered media, a strictly defined ammonium sulphate-basal salts medium and a complex peptone-basal salts medium, were used for the cultivation of Dimorphomyces pleomorphis, one of two dimorphic fungi isolated from fermenting juice of soursop fruit, Annona muricata L. The growth count was taken every twenty-four hours. Transient morphologies were observed to change from sporangiospores through enlarged globose cells, to granular particles and, eventually, polar budding yeast cells in the strictly defined medium at 15 degrees, 20 degrees, or 37 degrees C, but the complex medium casually terminally induced polar budding yeast cells and multipolar budding yeastlike cells in between the growth phases, at 15 degrees and 20 degrees C, while mainly multipolar budding yeastlike morphology was observed at elevated temperature. There was an obvious influence of the nutritional factor on morphological expression (p < 0.01). After analysis of variance, the growth data could not be fitted to a predictive quadratic polynomial model because the organism's response curves were incongruent with basic assumptions of the model. Furthermore, a stepwise regression analysis gave very low coefficients of determination, r2, for the interactive combinations. They were therefore considered unfit for the data. Construction of the pH profiles led to inference being drawn from the chemiosmotic theory and the polyelectrolyte theory to account for the behaviour in the buffered multiionic media. It was also thought that inherent cellular mitotic division and glycolytic activity led to a prelogarithmic growth response. PMID:9676041
14. Computer Simulations of ER Fluids in the DID Model
Science.gov (United States)
Yu, K. W.
Theoretical investigations on electrorheological (ER) fluids are usually concentrated on monodisperse systems. Real ER fluids must be polydisperse in nature, i.e., the suspended particles can have various sizes and/or different dielectric constants. An initial approach for these studies would be the point-dipole (PD) approximation, which is known to err considerably when the particles approach and finally touch due to multipolar interactions. A dipole-induced-dipole (DID) model is shown to be both more accurate than the PD model and easy to use. The DID model is applied to simulate the athermal aggregation of particles in ER fluids and the aggregation time is found to deviate significantly as compared to the PD model. Moreover, the inclusion of DID force further complicates the results because the symmetry between positive and negative contrasts will be broken by the presence of dipole-induced interactions.
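As background to the comparison above, the baseline point-dipole (PD) interaction that the DID model improves upon can be sketched as follows. This is the standard PD result for two identical polarized dielectric spheres in a uniform field, not code from the paper; all parameter values in the usage below are illustrative assumptions.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pd_radial_force(a, eps_p, eps_f, E, r, theta):
    """Radial component of the point-dipole interaction force between two
    identical dielectric spheres of radius a (particle permittivity eps_p)
    in a fluid of relative permittivity eps_f, under applied field E.
    r is the center-to-center distance, theta the angle between the line
    of centers and the field.  Negative return values mean attraction.
    This is the textbook PD approximation; the DID correction discussed
    in the abstract adds image-dipole terms on top of it."""
    beta = (eps_p - eps_f) / (eps_p + 2.0 * eps_f)       # Clausius-Mossotti factor
    p = 4.0 * math.pi * EPS0 * eps_f * beta * a**3 * E   # induced dipole moment
    pref = 3.0 * p**2 / (4.0 * math.pi * EPS0 * eps_f * r**4)
    return -pref * (3.0 * math.cos(theta)**2 - 1.0)
```

The sign structure (attraction along the field, repulsion perpendicular to it) is what drives chain formation in the athermal aggregation simulations mentioned above; the DID model modifies the magnitude, especially near contact.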
15. Probabilistic forward model for electroencephalography source analysis
Energy Technology Data Exchange (ETDEWEB)
Plis, Sergey M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); George, John S [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Jun, Sung C [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Ranken, Doug M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Volegov, Petr L [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Schmidt, David M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2007-09-07
Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates.
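The Cramér–Rao argument invoked above can be illustrated generically: for a (linearized) Gaussian observation model, the bound on any unbiased estimator's covariance is the inverse of the Fisher information. The toy model below is a minimal sketch under that assumption only; it is not the authors' EEG forward model or their specific parameterization.

```python
import numpy as np

def cramer_rao_bound(jacobian, noise_var):
    """Cramér-Rao lower bound on the covariance of any unbiased estimator
    for y = f(theta) + n with i.i.d. Gaussian noise of variance noise_var,
    linearized through the Jacobian J = df/dtheta evaluated at the true
    parameters.  Fisher information: I = J^T J / noise_var; bound = I^{-1}."""
    J = np.asarray(jacobian, dtype=float)
    fisher = J.T @ J / noise_var
    return np.linalg.inv(fisher)
```

Appending a nearly collinear column to the Jacobian (a crude stand-in for a conductivity parameter that trades off against dipole parameters) inflates the diagonal of the bound, which is the kind of effect the authors quantify when arguing against joint dipole-conductivity estimation.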
16. Probabilistic forward model for electroencephalography source analysis
International Nuclear Information System (INIS)
Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates.
17. Modeling versus accuracy in EEG and MEG data
Energy Technology Data Exchange (ETDEWEB)
Mosher, J.C.; Huang, M. [Los Alamos National Lab., NM (United States); Leahy, R.M. [Univ. of Southern California, Los Angeles, CA (United States); Spencer, M.E. [Signal Processing Solutions, Redondo Beach, CA (United States)
1997-07-30
The widespread availability of high-resolution anatomical information has placed a greater emphasis on accurate electroencephalography and magnetoencephalography (collectively, E/MEG) modeling. A more accurate representation of the cortex, inner skull surface, outer skull surface, and scalp should lead to a more accurate forward model and hence improve inverse modeling efforts. The authors examine a few topics in this paper that highlight some of the problems of forward modeling, then discuss the impacts these results have on the inverse problem. The authors begin by assuming a perfect head model, that of the sphere, then show the lower bounds on localization accuracy of dipoles within this perfect forward model. For more realistic anatomy, the boundary element method (BEM) is a common numerical technique for solving the boundary integral equations. For a three-layer BEM, the computational requirements can be too intensive for many inverse techniques, so they examine a few simplifications. They quantify errors in generating this forward model by defining a regularized percentage error metric. The authors then apply this metric to a single layer boundary element solution, a multiple sphere approach, and the common single sphere model. They conclude with an MEG localization demonstration on a novel experimental human phantom, using both BEM and multiple spheres.
18. Magnetoencephalography evidence for different brain subregions serving two musical cultures.
Science.gov (United States)
Matsunaga, Rie; Yokosawa, Koichi; Abe, Jun-ichi
2012-12-01
Individuals who have been exposed to two different musical cultures (bimusicals) can be differentiated from those exposed to only one musical culture (monomusicals). Just as bilingual speakers handle the distinct language-syntactic rules of each of two languages, bimusical listeners handle two distinct musical-syntactic rules (e.g., tonal schemas) in each musical culture. This study sought to determine specific brain activities that contribute to differentiating two culture-specific tonal structures. We recorded magnetoencephalogram (MEG) responses of bimusical Japanese nonmusicians and amateur musicians as they monitored unfamiliar Western melodies and unfamiliar, but traditional, Japanese melodies, both of which contained tonal deviants (out-of-key tones). Previous studies with Western monomusicals have shown that tonal deviants elicit an early right anterior negativity (mERAN) originating in the inferior frontal cortex. In the present study, tonal deviants in both Western and Japanese melodies elicited mERANs with characteristics fitted by dipoles around the inferior frontal gyrus in the right hemisphere and the premotor cortex in the left hemisphere. Comparisons of the nature of mERAN activity to Western and Japanese melodies showed differences in the dipoles' locations but not in their peak latency or dipole strength. These results suggest that the differentiation between a tonal structure of one culture and that of another culture correlates with localization differences in brain subregions around the inferior frontal cortex and the premotor cortex. PMID:23063935
19. Magnetoencephalography of frontotemporal dementia: spatiotemporally localized changes during semantic decisions
OpenAIRE
Hughes, Laura E.; Nestor, Peter J.; Hodges, John R.; Rowe, James B.
2011-01-01
Behavioural variant frontotemporal dementia is a neurodegenerative disorder with dysfunction and atrophy of the frontal lobes leading to changes in personality, behaviour, empathy, social conduct and insight, with relative preservation of language and memory. As novel treatments begin to emerge, biomarkers of frontotemporal dementia will become increasingly important, including functionally relevant neuroimaging indices of the neurophysiological basis of cognition. We used magnetoencephalography...
20. Noise cancellation in magnetoencephalography and electroencephalography with isolated reference sensors
Science.gov (United States)
Kraus Jr., Robert H.; Espy, Michelle A.; Matlachov, Andrei; Volegov, Petr
2010-06-01
An apparatus measures electromagnetic signals from a weak signal source. A plurality of primary sensors is placed in functional proximity to the weak signal source with an electromagnetic field isolation surface arranged adjacent the primary sensors and between the weak signal source and sources of ambient noise. A plurality of reference sensors is placed adjacent the electromagnetic field isolation surface and arranged between the electromagnetic isolation surface and sources of ambient noise.
1. Electro-magneto-encephalography for the three-shell model: numerical implementation via splines for distributed current in spherical geometry
International Nuclear Information System (INIS)
The basic inverse problems for the functional imaging techniques of electroencephalography (EEG) and magnetoencephalography (MEG) consist in estimating the neuronal current in the brain from the measurement of the electric potential on the scalp and of the magnetic field outside the head. Here we present a rigorous derivation of the relevant formulae for a three-shell spherical model in the case of independent as well as simultaneous MEG and EEG measurements. Furthermore, we introduce an explicit and stable technique for the numerical implementation of these formulae via splines. Numerical examples are presented using the locations and the normal unit vectors of the real 102 magnetometers and 70 electrodes of the Elekta Neuromag (R) system. These results may have useful implications for the interpretation of the reconstructions obtained via the existing approaches. (paper)
2. An empirical model and an inversion technique for radar scattering from bare soil surfaces
Science.gov (United States)
Oh, Yisok; Sarabandi, Kamal; Ulaby, Fawwaz T.
1992-01-01
Polarimetric radar measurements were conducted for bare soil surfaces under a variety of roughness and moisture conditions at L-, C-, and X-band frequencies at incidence angles ranging from 10 to 70 deg. Using a laser profiler and dielectric probes, a complete and accurate set of ground truth data were collected for each surface condition, from which accurate measurements were made of the rms height, correlation length, and dielectric constant. Based on knowledge of the scattering behavior in limiting cases and the experimental observations, an empirical model was developed which was found to yield very good agreement with the backscattering measurements of this study, as well as with measurements reported in other investigations. An inversion technique for predicting the rms height of the surface and its moisture content from multipolarized radar observations is developed on the basis of the model.
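The inversion works because the model's polarization ratios decouple roughness and moisture. One widely cited form of the co- and cross-polarized ratios from this empirical (Oh et al., 1992) model is sketched below; the exact expressions are quoted from common secondary sources and should be checked against the original paper before use.

```python
import numpy as np

def oh_model_ratios(theta, ks, eps_r):
    """Co-polarized (p = hh/vv) and cross-polarized (q = hv/vv) backscatter
    ratios for bare soil in the empirical model of Oh, Sarabandi & Ulaby
    (1992), as commonly quoted.  theta: incidence angle in radians;
    ks: radar wavenumber times rms surface height; eps_r: soil relative
    permittivity (may be complex).  Forms quoted from secondary sources."""
    sqrt_eps = np.sqrt(eps_r)
    gamma0 = np.abs((1.0 - sqrt_eps) / (1.0 + sqrt_eps))**2  # nadir Fresnel reflectivity
    p = (1.0 - (2.0 * theta / np.pi)**(1.0 / (3.0 * gamma0)) * np.exp(-ks))**2
    q = 0.23 * np.sqrt(gamma0) * (1.0 - np.exp(-ks))
    return p, q
```

Because q depends only on ks and the nadir reflectivity while p adds an independent incidence-angle dependence, two measured ratios can in principle be inverted for rms height and moisture, which is the strategy the abstract describes.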
3. Evaluation of multiple-sphere head models for MEG source localization
Energy Technology Data Exchange (ETDEWEB)
Lalancette, M; Cheyne, D [Department of Diagnostic Imaging, The Hospital for Sick Children, 555 University Ave., Toronto, Ontario M5G 1X8 (Canada); Quraan, M, E-mail: marc.lalancette@sickkids.ca, E-mail: douglas.cheyne@utoronto.ca [Krembil Neuroscience Centre, Toronto Western Research Institute, University Health Network, Toronto, Ontario M5T 2S8 (Canada)
2011-09-07
Magnetoencephalography (MEG) source analysis has largely relied on spherical conductor models of the head to simplify forward calculations of the brain's magnetic field. Multiple- (or overlapping, local) sphere models, where an optimal sphere is selected for each sensor, are considered an improvement over single-sphere models and are computationally simpler than realistic models. However, there is limited information available regarding the different methods used to generate these models and their relative accuracy. We describe a variety of single- and multiple-sphere fitting approaches, including a novel method that attempts to minimize the field error. An accurate boundary element method simulation was used to evaluate the relative field measurement error (12% on average) and dipole fit localization bias (3.5 mm) of each model over the entire brain. All spherical models can contribute in the order of 1 cm to the localization bias in regions of the head that depart significantly from a sphere (inferior frontal and temporal). These spherical approximation errors can give rise to larger localization differences when all modeling effects are taken into account and with more complex source configurations or other inverse techniques, as shown with a beamformer example. Results differed noticeably depending on the source location, making it difficult to recommend a fitting method that performs best in general. Given these limitations, it may be advisable to expand the use of realistic head models.
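The "local sphere" idea — fit one sphere to the head surface under each sensor — rests on an ordinary sphere fit. A minimal algebraic least-squares version is sketched below; it is one of many possible fitting criteria, and not the field-error-minimizing method the paper proposes.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit (center, radius) to an (N, 3)
    point cloud, e.g. scalp or inner-skull surface points beneath one MEG
    sensor.  Rewrites |x - c|^2 = r^2 as the linear system
    |x|^2 = 2 c.x + (r^2 - |c|^2) and solves it in the least-squares
    sense.  A common building block for multiple-sphere head models."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])  # unknowns: c (3,) and d = r^2 - |c|^2
    b = (P**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

The algebraic criterion is linear and fast but weights points differently from a geometric (orthogonal-distance) fit, which is one reason different fitting approaches, as the abstract notes, can localize sources noticeably differently in non-spherical head regions.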
4. Aminoacid zwitterions in solution: Geometric, energetic, and vibrational analysis using density functional theory-continuum model calculations
Science.gov (United States)
Tortonda, Francisco R.; Pascual-Ahuir, Juan-Luis; Silla, Estanislao; Tuñón, Iñaki; Ramírez, Francisco J.
1998-07-01
The chemistry of the amino acids glycine and alanine in solution is explored using a hybrid three-parameter density functional (B3PW91) together with a continuum model. Geometries, energies, and vibrational spectra of glycine and alanine zwitterions are studied at the B3PW91/6-31+G** level and the results compared with those obtained at the HF and MP2/6-31+G** levels. Solvent effects are incorporated by means of an ellipsoidal cavity model with a multipolar expansion (up to sixth order) of the solute's electrostatic potential. Our results confirm the validity of the B3PW91 functional for studying amino acid chemistry in solution. Taking into account the more favorable scaling behavior of density functional techniques with respect to correlated ab initio methods, these studies could be extended to larger systems.
5. Refined modeling of superconducting double helical coils using finite element analyses
International Nuclear Information System (INIS)
Double helical coils are becoming more and more attractive for accelerator magnets and other applications. Conceptually, a sinusoidal modulation of the longitudinal position of the turns allows virtually any multipolar field to be produced and maximizes the effectiveness of the supplied ampere turns. Being intrinsically three-dimensional, the modeling of such structures is very complicated, and several approaches, with different degrees of complexity, can be used. In this paper we present various possibilities for solving the magnetostatic problem of a double helical coil, through both finite element analyses and direct integration of the Biot–Savart law, showing the limits and advantages of each solution and the corresponding information which can be derived. (paper)
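The "direct integration of the Biot–Savart law" route mentioned above can be sketched for a single modulated winding as follows. The geometry, the modulation z(φ) = pitch·φ/2π + A·sin(nφ), and all parameter values are illustrative assumptions, not a real magnet design or the paper's finite element models.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def helical_coil_field(point, R=0.05, turns=10, pitch=0.01,
                       A=0.005, n=1, I=100.0, steps=20000):
    """Magnetic field (T) at `point` from one helical winding of radius R
    whose axial position is sinusoidally modulated:
        z(phi) = pitch*phi/(2*pi) + A*sin(n*phi),
    carrying current I.  Computed by direct Biot-Savart summation over
    discretized current elements; n sets the multipole order of the
    modulation (n = 1 gives a dipole-type term)."""
    phi = np.linspace(0.0, 2.0 * np.pi * turns, steps)
    pts = np.stack([R * np.cos(phi), R * np.sin(phi),
                    pitch * phi / (2.0 * np.pi) + A * np.sin(n * phi)], axis=1)
    pts[:, 2] -= pts[:, 2].mean()            # center the coil axially
    dl = np.diff(pts, axis=0)                # current elements I*dl
    mid = 0.5 * (pts[1:] + pts[:-1])         # element midpoints
    rvec = np.asarray(point, dtype=float) - mid
    rnorm = np.linalg.norm(rvec, axis=1)
    dB = MU0 * I / (4.0 * np.pi) * np.cross(dl, rvec) / rnorm[:, None]**3
    return dB.sum(axis=0)
```

A single winding retains a solenoid-like axial field; in double helical magnets a second, oppositely modulated layer is superposed so the axial components cancel and the chosen multipolar transverse field remains, which is the design freedom the abstract refers to.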
6. Probing Electronic Correlations in Actinide Materials Using Multipolar Transitions
Energy Technology Data Exchange (ETDEWEB)
Bradley, J.A.; Gupta, S. Sen; Seidler, G.T.; Moore, K.T.; Haverkort, M.W.; Sawatzky, G.A.; Conradson, S.D.; Clark, D.L.; Kozimor, S.A.; Boland, K.S. (UWASH); (MXPL-SS); (LLNL); (UBC); (LANL)
2010-07-28
We report nonresonant inelastic x-ray scattering from the semicore 5d levels of several actinide compounds. Dipole-forbidden, high-multipole features form a rich bound-state spectrum dependent on valence electron configuration and spin-orbit and Coulomb interactions. Cross-material comparisons, together with the anomalously high Coulomb screening required for agreement between atomic-multiplet theory and experiment, demonstrate sensitivity to the neighboring electronic environment, such as is needed to address longstanding questions of electronic localization and bonding in 5f compounds.
7. Multipolar second-harmonic generation from films of chalcogenide glasses
Science.gov (United States)
Slablab, A.; Koskinen, K.; Czaplicki, R.; Karunakaran, N. T.; Sebastian, I.; Chandran, C. Pradeep; Kailasnath, M.; Radhakrishnan, P.; Kauranen, M.
2014-05-01
Chalcogenide glasses are amorphous semiconductors with a number of interesting properties required for photonic devices. In particular, their optical properties can be tuned through the change of the glass composition. We investigate second-order nonlinear optical properties of chalcogenide glass (Ge27Se64Sb9) thin films fabricated by thermal evaporation. The strong second-harmonic generation observed for the samples investigated is analyzed as a function of incident polarization. Furthermore, the role of multipole effects in second-harmonic generation is also studied by using two beams at the fundamental frequency. Our results suggest that higher-multipole effects are present and contribute significantly to the second-harmonic response of the chalcogenide samples.
8. Menzel 3: a Multipolar Nebula in the Making
Science.gov (United States)
Guerrero, Martín A.; Chu, You-Hua; Miranda, Luis F.
2004-10-01
The nebula Menzel 3 (Mz 3) has arguably the most complex bipolar morphology, consisting of three nested pairs of bipolar lobes and an equatorial ellipse. Its three pairs of bipolar lobes share the same axis of symmetry but have very different opening angles and morphologies: the innermost pair of bipolar lobes shows closed-lobe morphology, whereas the other two have open lobes with cylindrical and conical shapes, respectively. We have carried out high-dispersion spectroscopic observations of Mz 3 and detected distinct kinematic properties among the different morphological components. The expansion characteristics of the two outer pairs of lobes suggest that they originated in an explosive event, whereas the innermost pair of lobes resulted from the interaction of a fast wind with the surrounding material. The equatorial ellipse is associated with a fast equatorial outflow, which is unique among bipolar nebulae. The dynamical ages of the different structures in Mz 3 suggest episodic bipolar ejections, and the distinct morphologies and kinematics among these different structures reveal fundamental changes in the system between these episodic ejections.
9. The filamentary multi-polar planetary nebula NGC 5189
Scientific Electronic Library Online (English)
L., Sabin; R., Vázquez; J. A., López; Ma. T., García-Díaz; G., Ramos-Larios.
2012-10-01
Full Text Available We present a set of optical and infrared images combined with long-slit, medium and high dispersion spectra of the southern planetary nebula (PN) NGC 5189. The complex morphology of this PN is puzzling and has not been studied in detail so far. Our investigation reveals the presence of a new dense and cold infrared torus (alongside the optical one) which probably generated one of the two optically seen bipolar outflows and which might be responsible for the twisted appearance of the optical torus via an interaction process. The high-resolution MES-AAT spectra clearly show the presence of filamentary and knotty structures as well as three expanding bubbles. Our findings therefore suggest that NGC 5189 is a quadrupolar nebula with multiple sets of symmetrical condensations in which the interaction of outflows has determined its complex morphology.
10. China's Soft Diplomacy in an Emerging Multi-polar World
DEFF Research Database (Denmark)
Schmidt, Johannes Dragsbæk
Keynote presentation for the conference "The Growing Prominence of China on the World Stage: Exploring the Political, Economic, and Cultural Relations of China and Global Stakeholders", International Conference, Berlin, September 15th-18th, 2011, held parallel to the "Berlin - Asia Pacific Weeks Conference 2011".
11. A pair spectrometer for measuring multipolarities of energetic nuclear transitions
CERN Document Server
Gulyás, J; Krasznahorkay, A J; Csatlós, M; Csige, L; Gácsi, Z; Hunyadi, M; Krasznahorkay, A; Vitéz, A; Tornyi, T G
2015-01-01
A multi-detector array has been designed and constructed for the simultaneous measurement of energy- and angular correlations of electron-positron pairs. Experimental results are obtained over a wide angular range for high-energy transitions in 16O, 12C and 8Be. A comparison with GEANT simulations demonstrates that angular correlations between 50 and 180 degrees of the electron-positron pairs in the energy range between 6 and 18 MeV can be determined with sufficient resolution and efficiency.
12. THE FILAMENTARY MULTI-POLAR PLANETARY NEBULA NGC5189
Directory of Open Access Journals (Sweden)
L. Sabin
2012-01-01
Full Text Available We present a set of optical and infrared images combined with long-slit, medium and high dispersion spectra of the southern planetary nebula (PN) NGC 5189. The complex morphology of this PN is puzzling and has not been studied in detail so far. Our investigation reveals the presence of a new dense and cold infrared torus (alongside the optical one) which probably generated one of the two optically seen bipolar outflows and which might be responsible for the twisted appearance of the optical torus via an interaction process. The high-resolution MES-AAT spectra clearly show the presence of filamentary and knotty structures as well as three expanding bubbles. Our findings therefore suggest that NGC 5189 is a quadrupolar nebula with multiple sets of symmetrical condensations in which the interaction of outflows has determined its complex morphology.
13. The filamentary Multi-Polar Planetary Nebula NGC 5189
CERN Document Server
Sabin, L; López, J A; García-Díaz, Ma T; Ramos-Larios, G
2012-01-01
We present a set of optical and infrared images combined with long-slit, medium and high dispersion spectra of the southern planetary nebula (PN) NGC 5189. The complex morphology of this PN is puzzling and has not been studied in detail so far. Our investigation reveals the presence of a new dense and cold infrared torus (alongside the optical one) which probably generated one of the two optically seen bipolar outflows and which might be responsible for the twisted appearance of the optical torus via an interaction process. The high-resolution MES-AAT spectra clearly show the presence of filamentary and knotty structures as well as three expanding bubbles. Our findings therefore suggest that NGC 5189 is a quadrupolar nebula with multiple sets of symmetrical condensations in which the interaction of outflows has determined the complex morphology.
14. Mercado Simbólico: um modelo de comunicação para políticas públicas / The symbolic market: a communication model for public policies / Mercado Simbólico: un modelo de comunicación para políticas públicas
Scientific Electronic Library Online (English)
Inesita Soares de, Araújo.
2004-02-01
15. A variational Bayes spatiotemporal model for electromagnetic brain mapping.
Science.gov (United States)
Nathoo, F S; Babul, A; Moiseev, A; Virji-Babul, N; Beg, M F
2014-03-01
In this article, we present a new variational Bayes approach for solving the neuroelectromagnetic inverse problem arising in studies involving electroencephalography (EEG) and magnetoencephalography (MEG). This high-dimensional spatiotemporal estimation problem involves the recovery of time-varying neural activity at a large number of locations within the brain, from electromagnetic signals recorded at a relatively small number of external locations on or near the scalp. Framing this problem within the context of spatial variable selection for an underdetermined functional linear model, we propose a spatial mixture formulation where the profile of electrical activity within the brain is represented through location-specific spike-and-slab priors based on a spatial logistic specification. The prior specification accommodates spatial clustering in brain activation, while also allowing for the inclusion of auxiliary information derived from alternative imaging modalities, such as functional magnetic resonance imaging (fMRI). We develop a variational Bayes approach for computing estimates of neural source activity, and incorporate a nonparametric bootstrap for interval estimation. The proposed methodology is compared with several alternative approaches through simulation studies, and is applied to the analysis of a multimodal neuroimaging study examining the neural response to face perception using EEG, MEG, and fMRI. PMID:24354514
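The spike-and-slab construction with a spatial logistic prior described above can be sketched numerically. The following toy 1-D example is illustrative only: the coordinates, sizes, and logistic coefficients are invented, not taken from the paper; it merely shows how auxiliary spatial information can enter through the logistic link.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "cortex" of V candidate source locations.
V = 200
coords = np.linspace(0.0, 1.0, V)

# Spatial logistic prior on activation: locations near 0.5 are more likely
# to be active (stands in for fMRI-derived auxiliary information).
eta = 4.0 - 60.0 * (coords - 0.5) ** 2   # linear predictor (assumed values)
p_active = 1.0 / (1.0 + np.exp(-eta))    # logistic link

# Spike-and-slab draw: inactive sources are exactly zero (the "spike"),
# active sources receive a Gaussian "slab" amplitude.
gamma = rng.random(V) < p_active
amplitude = np.where(gamma, rng.normal(0.0, 1.0, V), 0.0)

print(f"active fraction: {gamma.mean():.2f}")
```

The key property the sketch demonstrates is exact sparsity: every location with `gamma == False` has amplitude identically zero, while the spatial prior concentrates activation where the auxiliary information suggests it.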
16. Mathematical framework for large-scale brain network modeling in The Virtual Brain.
Science.gov (United States)
Sanz-Leon, Paula; Knock, Stuart A; Spiegler, Andreas; Jirsa, Viktor K
2015-05-01
In this article, we describe the mathematical framework of the computational model at the core of the tool The Virtual Brain (TVB), designed to simulate collective whole-brain dynamics by virtualizing brain structure and function, allowing simultaneous outputs of a number of experimental modalities such as electro- and magnetoencephalography (EEG, MEG) and functional magnetic resonance imaging (fMRI). The implementation allows for a systematic exploration and manipulation of every underlying component of a large-scale brain network model (BNM), such as the neural mass model governing the local dynamics or the structural connectivity constraining the space-time structure of the network couplings. Here, a consistent notation for the generalized BNM is given, so that in this form the equations represent a direct link between the mathematical description of BNMs and the components of the numerical implementation in TVB. Finally, we summarize the forward models implemented for mapping simulated neural activity (EEG, MEG, stereotactic electroencephalogram (sEEG), fMRI), identifying their advantages and limitations. PMID:25592995
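As a rough sketch of the generic coupled form such a BNM takes — not TVB's actual neural mass models or API; the local dynamics, coupling function, and all parameters below are invented — one can Euler-integrate dx_i/dt = -x_i + g Σ_j C_ij tanh(x_j) over an assumed structural connectivity matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical structural connectivity for N regions (weights in [0, 1]).
N = 8
C = rng.random((N, N))
np.fill_diagonal(C, 0.0)          # no self-connections

# Toy local dynamics: a leaky node driven by a sigmoidal sum of inputs.
# This is only the generic coupled form, not a TVB neural mass model.
g, dt, steps = 0.5, 0.01, 5000    # global coupling, step size, iterations
x = rng.normal(0.0, 0.1, N)
for _ in range(steps):
    x = x + dt * (-x + g * C @ np.tanh(x))

print("steady-state activity:", np.round(x, 3))
```

Because tanh saturates, the activity settles to a bounded fixed point even when the linearization around the origin is unstable; swapping in different local dynamics or coupling kernels is exactly the kind of component exchange the TVB framework formalizes.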
17. Modelling
CERN Document Server
Spädtke, P
2013-01-01
Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources will be shown together with a suitable model to describe the physical device. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$ sources), together with some remarks on beam transport.
18. Evaluation of different measures of functional connectivity using a neural mass model.
Science.gov (United States)
David, Olivier; Cosmelli, Diego; Friston, Karl J
2004-02-01
We use a neural mass model to address some important issues in characterising functional integration among remote cortical areas using magnetoencephalography or electroencephalography (MEG or EEG). In a previous paper [Neuroimage (in press)], we showed how the coupling among cortical areas can modulate the MEG or EEG spectrum and synchronise oscillatory dynamics. In this work, we exploit the model further by evaluating different measures of statistical dependencies (i.e., functional connectivity) among MEG or EEG signals that are mediated by neuronal coupling. We have examined linear and nonlinear methods, including phase synchronisation. Our results show that each method can detect coupling but with different sensitivity profiles that depend on (i) the frequency specificity of the interaction (broad vs. narrow band) and (ii) the nature of the coupling (linear vs. nonlinear). Our analyses suggest that methods based on the concept of generalised synchronisation are the most sensitive when interactions encompass different frequencies (broadband analyses). In the context of narrow-band analyses, mutual information was found to be the most sensitive way to disclose frequency-specific couplings. Measures based on generalised synchronisation and phase synchronisation are the most sensitive to nonlinear coupling. These different sensitivity profiles mean that the choice of coupling measure can have dramatic effects on the cortical networks identified. We illustrate this using a single-subject MEG study of binocular rivalry and highlight the greater recovery of statistical dependencies among cortical areas in the beta band when mutual information is used. PMID:14980568
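One of the measure families compared above, phase synchronisation, is commonly quantified by the phase locking value (PLV): the modulus of the time-averaged phasor of the instantaneous phase difference. The sketch below uses synthetic signals with assumed parameters, not the authors' data or pipeline:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
fs, T = 250.0, 4.0
t = np.arange(0.0, T, 1.0 / fs)

# Two 10 Hz signals with a fixed phase lag plus noise: strongly coupled.
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 10 * t + 0.8) + 0.3 * rng.normal(size=t.size)
# An unrelated signal at a different frequency: not phase-locked to x.
z = np.sin(2 * np.pi * 17 * t + rng.random() * 6.28) + 0.3 * rng.normal(size=t.size)

def plv(a, b):
    """Phase locking value: |mean phasor| of the instantaneous phase difference."""
    phase_a = np.angle(hilbert(a))
    phase_b = np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

print(f"PLV(x, y) = {plv(x, y):.2f}")   # near 1: locked
print(f"PLV(x, z) = {plv(x, z):.2f}")   # much smaller: drifting phase
```

The PLV is bounded in [0, 1] and, as the abstract notes, is sensitive only to the phase relationship; narrow-band filtering is normally applied before the Hilbert transform in real MEG/EEG analyses.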
19. Modelling
International Nuclear Information System (INIS)
This last volume in the series of textbooks on environmental isotopes in the hydrological cycle provides an overview of the basic principles of existing conceptual formulations of modelling approaches. While some of the concepts provided in Chapter 2 and Chapter 3 are of general validity for the quantitative interpretation of isotope data, the modelling methodologies commonly employed for incorporating isotope data into evaluations specifically related to groundwater systems are given in this volume together with some illustrative examples. Development of conceptual models for quantitative interpretations of isotope data in hydrogeology and the assessment of their limitations and field verification has been given priority in the research and development efforts of the IAEA during the last decade. Several Co-ordinated Research Projects on this specific topic were implemented and results published by the IAEA. Based on these efforts and contributions made by a number of scientists involved in this specific field, the IAEA has published two Technical Documents entitled ''Mathematical models and their applications to isotope studies in groundwater studies -- IAEA TECDOC-777, 1994'' and ''Manual on Mathematical models in isotope hydrogeology -- IAEA TECDOC-910, 1996''. Results of a recently completed Co-ordinated Research Project by the IAEA entitled ''Use of isotopes for analysis of flow and transport dynamics in groundwater systems'' will also soon be published by the IAEA. This is the reason why the IAEA was involved in the co-ordination required for preparation of this volume; the material presented is a condensed overview prepared by some of the scientists who were involved in the above-cited IAEA activities. This volume VI, providing such an overview, was included in the series to make the series self-sufficient in its coverage of the field of isotope hydrology.
A special chapter on the methodologies and concepts related to geochemical modelling in groundwater systems would have been most desirable to include. The reader is referred to IAEA-TECDOC-910 and other relevant publications for guidance in this specific field
20. A Skew-t space-varying regression model for the spectral analysis of resting state brain activity.
Science.gov (United States)
Ismail, Salimah; Sun, Wenqi; Nathoo, Farouk S; Babul, Arif; Moiseev, Alexader; Beg, Mirza Faisal; Virji-Babul, Naznin
2013-08-01
It is known that in many neurological disorders such as Down syndrome, main brain rhythms shift their frequencies slightly, and characterizing the spatial distribution of these shifts is of interest. This article reports on the development of a Skew-t mixed model for the spatial analysis of resting state brain activity in healthy controls and individuals with Down syndrome. Time series of oscillatory brain activity are recorded using magnetoencephalography, and spectral summaries are examined at multiple sensor locations across the scalp. We focus on the mean frequency of the power spectral density, and use space-varying regression to examine associations with age, gender and Down syndrome across several scalp regions. Spatial smoothing priors are incorporated based on a multivariate Markov random field, and the markedly non-Gaussian nature of the spectral response variable is accommodated by the use of a Skew-t distribution. A range of models representing different assumptions on the association structure and response distribution are examined, and we conduct model selection using the deviance information criterion. Our analysis suggests region-specific differences between healthy controls and individuals with Down syndrome, particularly in the left and right temporal regions, and produces smoothed maps indicating the scalp topography of the estimated differences. PMID:22614763
1. A simple model of the chaotic eccentricity of Mercury
CERN Document Server
Boué, Gwenaël; Farago, François
2012-01-01
Mercury's eccentricity is chaotic and can increase so much that collisions with Venus or the Sun become possible (Laskar, 1989, 1990, 1994, 2008, Batygin & Laughlin, 2008, Laskar & Gastineau, 2009). This chaotic behavior results from an intricate network of secular resonances, but in this paper, we show that a simple integrable model with only one degree of freedom is actually able to reproduce the large variations in Mercury's eccentricity, with the correct amplitude and timescale. We show that this behavior occurs in the vicinity of the separatrices of the resonance g1-g5 between the precession frequencies of Mercury and Jupiter. However, the main contribution does not come from the direct interaction between these two planets. It is due to the excitation of Venus' orbit at Jupiter's precession frequency g5. We use a multipolar model that is not expanded with respect to Mercury's eccentricity, but because of the proximity of Mercury and Venus, the Hamiltonian is expanded up to order 20 and more in t...
2. Current focussing in cochlear implants: An analysis of neural recruitment in a computational model.
Science.gov (United States)
Kalkman, Randy K; Briaire, Jeroen J; Frijns, Johan H M
2015-04-01
Several multipolar current focussing strategies are examined in a computational model of the implanted human cochlea. The model includes a realistic spatial distribution of cell bodies of the auditory neurons throughout Rosenthal's canal. Simulations are performed of monopolar, (partial) tripolar and phased array stimulation. Excitation patterns, estimated thresholds, electrical dynamic range, excitation density and neural recruitment curves are determined and compared. The main findings are: (I) Current focussing requires electrical field interaction to induce spatially restricted excitation patterns. For perimodiolar electrodes the distance to the neurons is too small to have sufficient electrical field interaction, which results in neural excitation near non-centre contacts. (II) Current focussing only produces spatially restricted excitation patterns when there is little or no excitation occurring in the peripheral processes, either because of geometrical factors or due to neural degeneration. (III) The model predicts that neural recruitment with electrical stimulation is a three-dimensional process; regions of excitation not only expand in apical and basal directions, but also by penetrating deeper into the spiral ganglion. (IV) At equal loudness certain differences between the spatial excitation patterns of various multipoles cannot be simulated in a model containing linearly aligned neurons of identical morphology. Introducing a form of variability in the neurons, such as the spatial distribution of cell bodies in the spiral ganglion used in this study, is therefore essential in the modelling of spread of excitation. This article is part of a Special Issue entitled . PMID:25528491
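The (partial) tripolar configuration compared above can be illustrated with a hypothetical weighting scheme: the centre contact carries the full current and the two flanking contacts each return a fraction σ/2, with any remainder returning through a far-field ground (σ = 0 is monopolar, σ = 1 full tripolar). This is a generic sketch, not the computational model's implementation:

```python
# Minimal sketch of (partial) tripolar current weights on a linear array;
# sigma is the focussing coefficient (0 = monopolar, 1 = full tripolar).
# All names and numbers here are illustrative assumptions.
def tripolar_weights(n_contacts, center, sigma):
    """Per-contact current weights; the residual 1 - sigma returns extracochlearly."""
    w = [0.0] * n_contacts
    w[center] = 1.0
    if center > 0:
        w[center - 1] = -sigma / 2.0
    if center < n_contacts - 1:
        w[center + 1] = -sigma / 2.0
    return w

w = tripolar_weights(12, 5, 0.8)    # partial tripole, sigma = 0.8
print(w[4:7], f"far-field return fraction = {sum(w):.1f}")
```

Increasing σ narrows the electrical field at the cost of higher thresholds, which is one reason the abstract emphasizes the dependence of focussing on electrode-to-neuron distance and electrical field interaction.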
3. A Model for the Escape of Solar-Flare Accelerated Particles
CERN Document Server
Masson, Sophie; DeVore, C Rick
2013-01-01
Impulsive solar energetic particle (SEP) bursts are frequently observed in association with so-called eruptive flares consisting of a coronal mass ejection (CME) and a flare. These highly prompt SEPs are believed to be accelerated by the flare rather than by a CME shock, but in the standard flare model the accelerated particles should remain trapped in the corona or in the ejected plasmoid. In this case, however, the particles would reach the Earth only after a delay of many hours to a few days. We present a new model that can account for the prompt injection of energetic particles onto open interplanetary magnetic flux tubes. The basic idea underlying the model is that magnetic reconnection between the ejection and the external open field allows for the release of the energetic particles. We demonstrate the model using 2.5D MHD simulations of a CME/flare event. The model system consists of a multipolar field with a coronal null point and with photospheric shear imposed at a polarity inversion line, as in ...
4. The structure of 193Au within the Interacting Boson Fermion Model
International Nuclear Information System (INIS)
A γγ angular correlation experiment investigating the nucleus 193Au is presented. In this work the level scheme of 193Au is extended by new level information on spins, multipolarities and newly observed states. The new results are compared with theoretical predictions from a general Interacting Boson Fermion Model (IBFM) calculation for the positive-parity states. The experimental data are in good agreement with an IBFM calculation using all proton orbitals between the shell closures at Z=50 and Z=126. As a dominant contribution of the d3/2 orbital to the wave function of the lowest excited states is observed, a truncated model of the IBFM using a Bose–Fermi symmetry is applied to describe 193Au. Using the parameters of a fit performed for 193Au, the level scheme of 192Pt, the supersymmetric partner of 193Au, is predicted but shows too small a boson seniority splitting. We obtained a common fit by including states observed in 192Pt. With the new parameters a supersymmetric description of both nuclei is established
5. Dynamics of nuclear fluid. IV. Some spin and isospin properties in the hydrodynamical model
International Nuclear Information System (INIS)
In the hydrodynamical model, the nuclear spin and isospin symmetry energies and the speeds of spin sound and isospin sound are pertinent to the determination of the static and dynamic properties of finite nuclei. We examine these quantities with a generalized Skyrme interaction. From the explicit expressions we obtain for these quantities, we find an interesting algebraic relation connecting the symmetry energies and likewise the sound speeds. Relevant collective vibrational energies of various types and different multipolarities are evaluated with the known sets of Skyrme interactions, in order to provide information for future amendments or selections among the sets of interactions and for the confrontation of the hydrodynamical model with experiment. We further investigate the dispersion of sound waves due to the range of the nuclear interaction. In particular, the ''plasma oscillation'' arising from the long-range Coulomb interaction is found to lead to a simple modification of the energies of the isoscalar and isovector collective vibrational states. When applied to the nuclear giant dipole and monopole resonances, the inclusion of the plasma oscillation gives an improved agreement between the hydrodynamical and the experimental giant dipole state energies and modifies the nuclear incompressibility extracted from measured giant monopole energies by as much as 15%
6. The structure of {sup 193}Au within the Interacting Boson Fermion Model
Energy Technology Data Exchange (ETDEWEB)
Thomas, T., E-mail: tim.thomas@ikp.uni-koeln.de [Institute for Nuclear Physics, University of Cologne, Zülpicher Straße 77, D-50937 Köln (Germany); WNSL, Yale University, P.O. Box 208120, New Haven, CT 06520-8120 (United States); Bernards, C. [Institute for Nuclear Physics, University of Cologne, Zülpicher Straße 77, D-50937 Köln (Germany); WNSL, Yale University, P.O. Box 208120, New Haven, CT 06520-8120 (United States); Régis, J.-M.; Albers, M.; Fransen, C.; Jolie, J.; Heinze, S.; Radeck, D.; Warr, N.; Zell, K.-O. [Institute for Nuclear Physics, University of Cologne, Zülpicher Straße 77, D-50937 Köln (Germany)
2014-02-15
A γγ angular correlation experiment investigating the nucleus {sup 193}Au is presented. In this work the level scheme of {sup 193}Au is extended by new level information on spins, multipolarities and newly observed states. The new results are compared with theoretical predictions from a general Interacting Boson Fermion Model (IBFM) calculation for the positive-parity states. The experimental data are in good agreement with an IBFM calculation using all proton orbitals between the shell closures at Z=50 and Z=126. As a dominant contribution of the d{sub 3/2} orbital to the wave function of the lowest excited states is observed, a truncated model of the IBFM using a Bose–Fermi symmetry is applied to describe {sup 193}Au. Using the parameters of a fit performed for {sup 193}Au, the level scheme of {sup 192}Pt, the supersymmetric partner of {sup 193}Au, is predicted but shows too small a boson seniority splitting. We obtained a common fit by including states observed in {sup 192}Pt. With the new parameters a supersymmetric description of both nuclei is established.
7. Direction of magnetoencephalography sources associated with feedback and feedforward contributions in a visual object recognition task.
Science.gov (United States)
Ahlfors, Seppo P; Jones, Stephanie R; Ahveninen, Jyrki; Hämäläinen, Matti S; Belliveau, John W; Bar, Moshe
2015-01-12
Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depend on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356
8. Visual stress–induced migraine aura compared to spontaneous aura studied by magnetoencephalography
OpenAIRE
Welch, K. Michael A.; Bowyer, Susan M.; Aurora, Sheena K.; Moran, John E.; Tepley, Norman
2001-01-01
DC MEG shifts, similar and complex in waveform, were observed in visually induced migraine with aura patients similar to spontaneous aura but not controls. Multiple cortical areas were activated in visually induced and spontaneous aura patients. In normal subjects activation was only observed in the primary visual cortex. Results support a spreading depression–like neuroelectric event as the basis of migraine aura that can arise spontaneously or be vis...
9. Alpha-band hypersynchronization in progressive mild cognitive impairment: a magnetoencephalography study.
Science.gov (United States)
López, María Eugenía; Bruña, Ricardo; Aurtenetxe, Sara; Pineda-Pardo, José Ángel; Marcos, Alberto; Arrazola, Juan; Reinoso, Ana Isabel; Montejo, Pedro; Bajo, Ricardo; Maestú, Fernando
2014-10-29
People with mild cognitive impairment (MCI) are at high risk of developing Alzheimer's disease (AD; Petersen et al., 2001). Nonetheless, there is a lack of studies on how functional connectivity patterns may distinguish between progressive (pMCI) and stable (sMCI) MCI patients. To examine whether there were differences in functional connectivity between groups, MEG eyes-closed recordings from 30 sMCI and 19 pMCI subjects were compared. The average conversion time of the pMCI patients was 1 year, so they were considered fast converters. To this end, functional connectivity in different frequency bands was assessed with the phase locking value in source space. Then the significant differences between the two groups were correlated with neuropsychological scores and entorhinal, parahippocampal, and hippocampal volumes. The groups did not differ in age, gender, or educational level. pMCI patients obtained lower scores in episodic and semantic memory and also in executive functioning. At the structural level, there were no differences in hippocampal volume, although some were found in left entorhinal volume between the two groups. Additionally, pMCI patients exhibited higher synchronization in the alpha band between the right anterior cingulate and temporo-occipital regions than sMCI subjects. This hypersynchronization was inversely correlated with cognitive performance, both hippocampal volumes, and left entorhinal volume. The increase in phase synchronization between the right anterior cingulate and temporo-occipital areas may be predictive of conversion from MCI to AD. PMID:25355209
10. Development of Theory of Mind Stimuli in Magnetoencephalography for Nursing Evaluation
Directory of Open Access Journals (Sweden)
Sungwon Park
2009-09-01
Full Text Available We introduce the development of animation stimuli for theory of mind (ToM) testing in magnetoencephalography (MEG). We discuss the apparatus for presenting the animation stimuli and a technical problem, namely the eye-movement signal generated by following the triangles in the animations, and its rejection using independent component analysis (ICA). With the ToM animations and the apparatus, we conducted MEG measurements on 8 normal controls and 6 schizophrenic patients. We present a preliminary assessment of the developed animation stimuli as a tool for the ToM test, obtained by scoring in the follow-up interview after the MEG measurement.
11. The influence of low-grade glioma on resting state oscillatory brain activity: a magnetoencephalography study
OpenAIRE
Bosma, I.; Stam, C.; Douw, L.; Bartolomei, F.; Heimans, J.; Dijk, B.; Postma, T.; Klein, M.; Reijneveld, J.
2008-01-01
Purpose: In the present MEG-study, power spectral analysis of oscillatory brain activity was used to compare resting state brain activity in both low-grade glioma (LGG) patients and healthy controls. We hypothesized that LGG patients show local as well as diffuse slowing of resting state brain activity compared to healthy controls and that particularly global slowing correlates with neurocognitive dysfunction. Patient and methods Resting state MEG recordings were obtai...
12. Internal conversion coefficients in the Hartree-Fock atomic model. Calculations and experiments for 199Hg
International Nuclear Information System (INIS)
The internal conversion coefficients were calculated for the transitions in 199Hg using both Hartree-Fock and Hartree-Fock-Slater atomic models. The relative conversion line intensities were measured with the magnetic spectrometers in Prague and Heidelberg. The multipolarities were determined to be: M1 + (0.20 ± 0.03)% E2, pure E2 and M1 + (13.4 ± 0.4)% E2 for the 50, 158 and 208 keV transitions, respectively. Allowing for the nuclear structure effect in the M1 component we obtained: M1 + (0.15 ± 0.03)% E2, λ = 2.4 ± 1.0 for the 50 keV and M1 + (10.9 ± 0.7)% E2, λ = 3.8 ± 0.5 for the 208 keV transitions. Very good agreement was found between theory and experiment for the atomic subshells K, Lsub(1-3), Msub(1-5), N, and O + P. (orig.)
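The quoted E2 admixtures relate to the M1/E2 mixing ratio δ through the standard identity %E2 = 100·δ²/(1 + δ²). The sketch below (illustrative only; it does not touch the record's λ penetration parameters) round-trips the 208 keV value:

```python
import math

def e2_fraction(delta):
    """E2 admixture (in %) of a mixed M1+E2 transition with mixing ratio delta."""
    return 100.0 * delta ** 2 / (1.0 + delta ** 2)

def mixing_ratio(e2_percent):
    """|delta| recovered from a quoted E2 percentage."""
    f = e2_percent / 100.0
    return math.sqrt(f / (1.0 - f))

# The 208 keV transition quoted as M1 + (13.4 +- 0.4)% E2 corresponds to
# a mixing ratio of magnitude:
print(f"|delta(208 keV)| ~ {mixing_ratio(13.4):.3f}")
```

Note that the percentage determines only |δ|; the sign of δ requires the angular correlation data themselves.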
13. Prediction of vapor-liquid equilibrium and PVTx properties of geological fluid system with SAFT-LJ EOS including multi-polar contribution. Part II: Application to H2O-NaCl and CO2-H2O-NaCl System
Science.gov (United States)
Sun, Rui; Dubessy, Jean
2012-07-01
The SAFT-LJ equation of state improved by Sun and Dubessy (2010) can represent the vapor-liquid equilibrium and PVTx properties of the CO2-H2O system over a wide P-T range because it accounts for the energetic contribution of the main types of molecular interactions in terms of reliable molecular based models. Assuming that NaCl fully dissociates into individual ions (spherical Na+ and Cl-) in water and adopting the restricted primitive model of mean spherical approximation to account for the energetic contribution due to long-range electrostatic forces between ions, this study extends the improved SAFT-LJ EOS to the H2O-NaCl and the CO2-H2O-NaCl systems at temperatures below 573 K. The EOS parameters for the interactions between ion and ion and between ion and water were determined from the mean ionic activity coefficient data and the density data of the H2O-NaCl system. The parameters for the interactions between ion and CO2 were evaluated from CO2 solubility data of the CO2-H2O-NaCl system. Comparison with the experimental data shows that this model can predict the mean ionic activity coefficient, osmotic coefficient, saturation pressure, and density of aqueous NaCl solution and can predict the vapor-liquid equilibrium and PVTx properties of the CO2-H2O-NaCl system over the range from 273 to 573 K, from 0 to 1000 bar, and from 0 to 6 mol/kg NaCl with high accuracy.
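For orientation, the long-range electrostatic contribution that the MSA restricted primitive model captures can be contrasted with the far simpler Debye-Hückel limiting law for the mean ionic activity coefficient, log10 γ± = -A·|z₊z₋|·√I with A ≈ 0.509 for water at 25 °C. This is emphatically not the EOS used in the paper, only a low-ionic-strength reference point:

```python
import math

# Debye-Hueckel limiting law -- a much simpler electrostatic model than the
# MSA restricted primitive model used in the paper, valid only at low ionic
# strength. A = 0.509 (kg/mol)^0.5 for water at 25 C (log10 convention).
A = 0.509

def log10_gamma_pm(z_plus, z_minus, ionic_strength):
    """Limiting-law mean ionic activity coefficient (log10)."""
    return -A * abs(z_plus * z_minus) * math.sqrt(ionic_strength)

# Dilute NaCl (1:1 electrolyte): ionic strength I equals the molality m.
for m in (0.001, 0.01):
    g = 10 ** log10_gamma_pm(1, -1, m)
    print(f"m = {m} mol/kg: gamma_pm ~ {g:.3f}")
```

The limiting law degrades quickly with concentration, which is why models like the MSA, with finite ion sizes, are needed over the 0-6 mol/kg range the paper covers.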
14. Joint EEG/fMRI state space model for the detection of directed interactions in human brains--a simulation study.
Science.gov (United States)
Lenz, Michael; Musso, Mariachristina; Linke, Yannick; Tüscher, Oliver; Timmer, Jens; Weiller, Cornelius; Schelter, Björn
2011-11-01
An often addressed challenge in neuroscience research is the assignment of different tasks to specific brain regions. In many cases several brain regions are activated during a single task. Therefore, one is also interested in the temporal evolution of brain activity to infer causal relations between activated brain regions. These causal relations may be described by a directed, task specific network which consists of activated brain regions as vertices and directed edges. The edges describe the causal relations. Inference of the task specific brain network from measurements like electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) is challenging, due to the low spatial resolution of the former and the low temporal resolution of the latter. Here, we present a simulation study investigating a possible combined analysis of simultaneously measured EEG and fMRI data to address the challenge specified above. A nonlinear state space model is used to distinguish between the underlying brain states and the (simulated) EEG/fMRI measurements. We make use of a modified unscented Kalman filter and a corresponding unscented smoother for the estimation of the underlying neural activity. Model parameters are estimated using an expectation-maximization algorithm, which exploits the partial linearity of our model. Inference of the brain network structure is then achieved using directed partial correlation, a measure for Granger causality. The results indicate that the convolution effect of the fMRI forward model poses a major challenge for parameter estimation and reduces the influence of the fMRI in combined EEG-fMRI models. It remains to be investigated whether other models or similar combinations of other modalities, such as EEG and magnetoencephalography, can increase the profit of the promising idea of combining various modalities. PMID:22027197
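The predict/update structure underlying the filtering step can be sketched with a scalar linear Kalman filter. The paper's model is nonlinear and uses the unscented variant together with EM for parameter estimation; the sketch below is only the linear skeleton, with all parameters invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal linear state-space sketch: a hidden "neural" state x_t observed
# through one noisy channel. Assumed values, not the paper's model.
a, q, r = 0.95, 0.1, 0.5          # transition coeff., process/observation noise
T = 300
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = x_true[t] + rng.normal(0.0, np.sqrt(r))

x_est, P = 0.0, 1.0
estimates = np.zeros(T)
for t in range(1, T):
    # Predict step.
    x_pred = a * x_est
    P_pred = a * P * a + q
    # Update step with the observation.
    K = P_pred / (P_pred + r)      # Kalman gain
    x_est = x_pred + K * (y[t] - x_pred)
    P = (1.0 - K) * P_pred
    estimates[t] = x_est

raw_err = np.mean((y - x_true) ** 2)
filt_err = np.mean((estimates - x_true) ** 2)
print(f"raw obs. MSE = {raw_err:.3f}, filtered MSE = {filt_err:.3f}")
```

The unscented filter replaces the linear predict/update equations with sigma-point propagation through the nonlinear state and observation functions, but the alternating structure is the same.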
15. The role of extracellular conductivity profiles in compartmental models for neurons: particulars for layer 5 pyramidal cells.
Science.gov (United States)
Wang, Kai; Riera, Jorge; Enjieu-Kadji, Herve; Kawashima, Ryuta
2013-07-01
With the rapid increase in the number of technologies aimed at observing electric activity inside the brain, scientists have felt the urge to create proper links between intracellular- and extracellular-based experimental approaches. Biophysical models at both physical scales have been formalized under assumptions that impede the creation of such links. In this work, we address this issue by proposing a multicompartment model that allows the introduction of complex extracellular and intracellular resistivity profiles. This model accounts for the geometrical and electrotonic properties of any type of neuron through the combination of four devices: the integrator, the propagator, the 3D connector, and the collector. In particular, we applied this framework to model the tufted pyramidal cells of layer 5 (PCL5) in the neocortex. Our model was able to reproduce the decay and delay curves of backpropagating action potentials (APs) in this type of cell with better agreement with experimental data. We used the voltage drops of the extracellular resistances at each compartment to approximate the local field potentials generated by a PCL5 located in close proximity to linear microelectrode arrays. Based on the voltage drops produced by backpropagating APs, we were able to estimate the current multipolar moments generated by a PCL5. By adding external current sources in parallel to the extracellular resistances, we were able to create a sensitivity profile of PCL5 to electric current injections from nearby microelectrodes. In our model for PCL5, the kinetics and spatial profile of each ionic current were determined based on a literature survey, and the geometrical properties of these cells were evaluated experimentally. 
We concluded that the inclusion of the extracellular space in the compartmental models of neurons as an extra electrotonic medium is crucial for the accurate simulation of both the propagation of the electric potentials along the neuronal dendrites and the neuronal reactivity to an electrical stimulation using external microelectrodes. PMID:23607554
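The role of the inter-compartment resistances can be sketched with a minimal two-compartment passive model. This is not the four-device PCL5 model of the paper, and all constants below are arbitrary: current injected into one RC compartment spreads through an axial resistance into its neighbour, and both settle to steady-state voltages set by the resistor network.

```python
import numpy as np

# Two passive compartments, each an RC unit, coupled by an axial resistance.
# Units are arbitrary; the point is the structure, not the physiology.
C_m, R_m, R_a = 1.0, 10.0, 5.0    # membrane capacitance/resistance, axial res.
I_inj = 2.0                       # constant current into compartment 0
dt, steps = 0.01, 20000
V = np.zeros(2)
for _ in range(steps):
    axial = (V[1] - V[0]) / R_a               # current from compartment 1 into 0
    dV0 = (-V[0] / R_m + axial + I_inj) / C_m
    dV1 = (-V[1] / R_m - axial) / C_m
    V = V + dt * np.array([dV0, dV1])

print("steady-state voltages:", np.round(V, 2))
```

Introducing an extracellular resistance in series with each membrane branch, as the paper advocates, would let the same network expose the extracellular voltage drops used there to approximate local field potentials.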
16. CAM3 bias over the Arctic region during northern winter studied with a linear stationary model
Science.gov (United States)
Grotjahn, Richard; Pan, Lin-Lin; Tribbia, Joseph
2011-08-01
This study builds upon two prior papers, which examine Arctic region bias of CAM3 (NCAR Community Atmosphere Model version 3) simulations during winter. CAM3 output is compared with ECMWF (European Centre for Medium-Range Weather Forecasts) 40 year reanalysis (ERA-40) data. Our prior papers considered the temperature and the vorticity equation terms and demonstrated that diabatic, transient, and linear terms dominate nonlinear bias terms over most areas of interest. Accordingly, this paper uses a linearized form of the model's dynamical core equations to study aspects of the forcing that lead to the CAM3 biases. We treat the model's long term winter bias as a solution to a linear stationary wave model (LSWM). Key features of the bias in the vorticity, temperature, and ln of surface pressure (=q) fields are shown at medium resolution. The important features found at medium resolution are captured at the much lower LSWM resolution. The Arctic q bias has two key features: excess q over the Barents Sea and a missing Beaufort High (negative maximum q bias) to the north of Alaska and eastern Siberia. The forcing fields are calculated by the LSWM. Horizontal advection tends to create multi-polar combinations of negative and positive extrema in the forcing. The positive and negative areas of forcing approximately match corresponding areas in the bias. There is a broad relation between cold bias with elevated q bias, as expected from classical theory. Forcing in related quantities: near surface vorticity and surface pressure combine to produce the sea level pressure bias.
17. CAM3 bias over the Arctic region during northern winter studied with a linear stationary model
Energy Technology Data Exchange (ETDEWEB)
Grotjahn, Richard [University of California, Department of Land, Air and Water Resources, Davis, CA (United States); Pan, Lin-Lin; Tribbia, Joseph [National Center for Atmospheric Research, Boulder, CO (United States)
2011-08-15
This study builds upon two prior papers, which examine Arctic region bias of CAM3 (NCAR Community Atmosphere Model version 3) simulations during winter. CAM3 output is compared with ECMWF (European Centre for Medium-Range Weather Forecasts) 40 year reanalysis (ERA-40) data. Our prior papers considered the temperature and the vorticity equation terms and demonstrated that diabatic, transient, and linear terms dominate nonlinear bias terms over most areas of interest. Accordingly, this paper uses a linearized form of the model's dynamical core equations to study aspects of the forcing that lead to the CAM3 biases. We treat the model's long term winter bias as a solution to a linear stationary wave model (LSWM). Key features of the bias in the vorticity, temperature, and ln of surface pressure (=q) fields are shown at medium resolution. The important features found at medium resolution are captured at the much lower LSWM resolution. The Arctic q bias has two key features: excess q over the Barents Sea and a missing Beaufort High (negative maximum q bias) to the north of Alaska and eastern Siberia. The forcing fields are calculated by the LSWM. Horizontal advection tends to create multi-polar combinations of negative and positive extrema in the forcing. The positive and negative areas of forcing approximately match corresponding areas in the bias. There is a broad relation between cold bias with elevated q bias, as expected from classical theory. Forcing in related quantities: near surface vorticity and surface pressure combine to produce the sea level pressure bias. (orig.)
18. A modelling study to inform specification and optimal electrode placement for imaging of neuronal depolarization during visual evoked responses by electrical and magnetic detection impedance tomography
International Nuclear Information System (INIS)
Electrical impedance tomography (EIT) has the potential to achieve non-invasive functional imaging of fast neuronal activity in the human brain due to opening of ion channels during neuronal depolarization. Local changes of resistance in the cerebral cortex are about 1%, but the size and location of changes recorded on the scalp are unknown. The purpose of this work was to develop an anatomically realistic finite element model of the adult human head and use it to predict the amplitude and topography of changes on the scalp, and so inform specification for an in vivo measuring system. A detailed anatomically realistic finite element (FE) model of the head was produced from high resolution MRI. Simulations were performed for impedance changes in the visual cortex during evoked activity with recording of scalp potentials by electrodes or magnetic flux density by magnetoencephalography (MEG) in response to current injected with electrodes. The predicted changes were validated by recordings in saline filled tanks and with boundary voltages measured on the human scalp. Peak changes were 1.03 ± 0.75 µV (0.0039 ± 0.0034%) and 27 ± 13 fT (0.2 ± 0.5%) respectively, which yielded an estimated peak signal-to-noise ratio of about 4 for in vivo averaging over 10 min and 1 mA current injection. The largest scalp changes were over the occipital cortex. This modelling suggests, for the first time, that reproducible changes could be recorded on the scalp in vivo in single channels, although a higher SNR would be desirable for accurate image production. The findings suggest that an in vivo study is warranted in order to determine signal size but methods to improve SNR, such as prolonged averaging or other signal processing may be needed for accurate image production
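The quoted peak signal-to-noise ratio of about 4 rests on a standard averaging argument: the evoked signal adds coherently across repeated trials while uncorrelated noise accumulates only as √N, so SNR grows by √N. A minimal sketch with hypothetical numbers (the per-trial noise level and trial rate below are illustrative assumptions, not values from the study):

```python
import math

def snr_after_averaging(signal, noise_per_trial, n_trials):
    """SNR after averaging n_trials epochs: uncorrelated noise shrinks as 1/sqrt(N)."""
    return signal / (noise_per_trial / math.sqrt(n_trials))

# Hypothetical numbers only: a 1 uV evoked impedance change against 6 uV of
# single-trial noise, averaged over 10 min at ~1 trial/s (600 trials).
snr = snr_after_averaging(signal=1.0, noise_per_trial=6.0, n_trials=600)
print(round(snr, 1))  # ~4.1 with these assumed values
```

The same relation shows why "prolonged averaging" is suggested at the end of the abstract: quadrupling the recording time would roughly double the SNR.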
19. Use of the isolated problem approach for multi-compartment BEM models of electro-magnetic source imaging
International Nuclear Information System (INIS)
The isolated problem approach (IPA) is a method used in the boundary element method (BEM) to overcome numerical inaccuracies caused by the high-conductivity difference in the skull and the brain tissues in the head. Haemaelaeinen and Sarvas (1989 IEEE Trans. Biomed. Eng. 36 165-71) described how the source terms can be updated to overcome these inaccuracies for a three-layer head model. Meijs et al (1989 IEEE Trans. Biomed. Eng. 36 1038-49) derived the integral equations for the general case where there are an arbitrary number of layers inside the skull. However, the IPA is used in the literature only for three-layer head models. Studies that use complex boundary element head models that investigate the inhomogeneities in the brain or model the cerebrospinal fluid (CSF) do not make use of the IPA. In this study, the generalized formulation of the IPA for multi-layer models is presented in terms of integral equations. The discretized versions of these equations are presented in two different forms. In a previous study (Akalin-Acar and Gencer 2004 Phys. Med. Biol. 49 5011-28), we derived formulations to calculate the electroencephalography and magnetoencephalography transfer matrices assuming a single layer in the skull. In this study, the transfer matrix formulations are updated to incorporate the generalized IPA. The effects of the IPA are investigated on the accuracy of spherical and realistic models when the CSF layer and a tumour tissue are included in the model. It is observed that, in the spherical model, for a radial dipole 1 mm close to the brain surface, the relative difference measure (RDM*) drops from 1.88 to 0.03 when IPA is used. For the realistic model, the inclusion of the CSF layer does not change the field pattern significantly. However, the inclusion of an inhomogeneity changes the field pattern by 25% for a dipole oriented towards the inhomogeneity.
The effect of the IPA is also investigated when there is an inhomogeneity in the brain. In addition to a considerable change in the scale of the potentials, the field pattern also changes by 15%. The computation times are presented for the multi-layer realistic head model
20. Sequence of multipolar transitions: A scenario for URu2Si2
CERN Document Server
2005-01-01
d- and f-shells support a large number of local degrees of freedom: dipoles, quadrupoles, octupoles, hexadecapoles, etc. Usually, the ordering of any multipole component leaves the system sufficiently symmetrical to allow a second symmetry breaking transition. To classify the possibilities, one has to construct the symmetry group of the first ordered phase, and then re-classify the order parameters in the new symmetry. While this is straightforward for dipole or quadrupole order, it is less familiar for octupole order. We give a group theoretical analysis, and some illustrative mean field calculations, for the case when a second ordering transition follows T(xyz) octupolar ordering in a tetragonal system. If quadrupoles appear in the second phase transition, they must be accompanied by a time-reversal-odd multipole as an induced order parameter. For O(xy), O(xz), or O(yz) quadrupoles, this would be one of the components of J, which should be easy either to check or to rule out. However, a pre-existing octupol...
1. Reshaping Europe In A Multipolar World: Can The EU Rise To The Challenge?
Directory of Open Access Journals (Sweden)
Dean Carroll
2011-09-01
Globalisation and the emergence of economic players such as Brazil, Russia, India and China (BRIC) have led to predictions that US hegemony will quickly decline as a new world order emerges. With the European Union (EU) also facing a downgrading of its own status – as economic, political and cultural power shifts from west to east – now is the time to ensure the Union has a strategy in place to remain an influential global actor despite its lack of natural resources and member state sovereign debt arising from the 2008/9 economic crisis. Only concerted efforts at institutional future-proofing (or widening and deepening plus) by the EU and a global vision for the supranational body will ensure its survival and prosperity.
2. Multipolar universal relations between f-mode frequency and tidal deformability of compact stars
CERN Document Server
Chan, T K; Leung, P T; Lin, L -M
2014-01-01
Though individual stellar parameters of compact stars usually demonstrate obvious dependence on the equation of state (EOS), EOS-insensitive universal formulas relating these parameters remarkably exist. In the present paper, we explore the inter-relationship between two such formulas, namely the f-I relation connecting the $f$-mode quadrupole oscillation frequency $\\omega_2$ and the moment of inertia $I$, and the I-Love-Q relations relating $I$, the quadrupole tidal deformability $\\lambda_2$, and the quadrupole moment $Q$, which have been proposed by Lau et al. [Astrophys. J. {\\bf 714}, 1234 (2010)], and Yagi and Yunes [Science, {\\bf 341}, 365 (2013)], respectively. A relativistic universal relation between $\\omega_l$ and $\\lambda_l$ with the same angular momentum $l=2,3,\\ldots$, the so called "diagonal f-Love relation" that holds for realistic compact stars and stiff polytropic stars, is unveiled here. An in-depth investigation in the Newtonian limit is further carried out to pinpoint its underlying physica...
3. Relevance of Triple Coupling of Multipolar Order Parameters in URu_2Si_2
Science.gov (United States)
Koga, M.; Cox, D. L.
2000-03-01
We investigate stability of the tiny staggered magnetic dipole moment observed in URu_2Si_2, provided that an antiferroquadrupolar ordering takes place as a primary effect. A ferromagnetic octupolar ordering plays an important role in stabilizing the moment, and the stability conditions are very sensitive to the crystalline-electronic-field (CEF) energy levels of U and the RKKY couplings between the U sites. Our mean-field solution shows that the tiny dipolar ordering is hardly realized for the three lowest-lying CEF singlets proposed by Santini and Amoretti. Alternatively, we stress relevance of the non-Kramers doublets to obtain the tiny moment at low temperatures. We suggest that by combining short range quadrupolar order with the two-channel Kondo effect we can explain the magnetic susceptibility above the quadrupolar ordering temperature.
4. Treatment of atrial fibrillation with radiofrequency ablation and simultaneous multipolar mapping of the pulmonary veins
Directory of Open Access Journals (Sweden)
Rocha Neto Almino C.
2001-01-01
OBJECTIVE: To demonstrate the feasibility and safety of simultaneous catheterization and mapping of the 4 pulmonary veins for ablation of atrial fibrillation. METHODS: Ten patients, 8 with paroxysmal atrial fibrillation and 2 with persistent atrial fibrillation, refractory to at least 2 antiarrhythmic drugs and without structural cardiopathy, were consecutively studied. Through the transseptal insertion of 2 long sheaths, 4 pulmonary veins were simultaneously catheterized with octapolar microcatheters. After identification of arrhythmogenic foci radiofrequency was applied under angiographic or ultrasonographic control. RESULTS: During 17 procedures, 40 pulmonary veins were mapped, 16 of which had local ectopic activity, related or not with the triggering of atrial fibrillation paroxysms. At the end of each procedure, suppression of arrhythmias was obtained in 8 patients, and elimination of pulmonary vein potentials was accomplished in 4. During the clinical follow-up of 9.6±3 months, 7 patients remained in sinus rhythm, 5 of whom were using antiarrhythmic drugs that had previously been ineffective. None of the patients had pulmonary hypertension or evidence of stenosis in the pulmonary veins. CONCLUSION: Selective and simultaneous catheterization of the 4 pulmonary veins with microcatheters for simultaneous recording of their electrical activity is a feasible and safe procedure that may help ablation of atrial fibrillation.
5. Multi-polarization C-band SAR for soil moisture estimation
Science.gov (United States)
Brown, R. J.; Brisco, B.
1991-01-01
Previous studies of synthetic aperture radar (SAR) imagery have shown qualitative relationships between radar backscatter and soil moisture. However, to be able to use these data in operational programs it will be necessary to establish quantitatively how the radar return is related to soil moisture and the effects of surface roughness, soil type, and vegetation cover and growth stage, as a function of frequency and polarization. To this end, a multi-year experiment began in 1990 as a cooperative venture amongst the Canada Center (Agriculture Canada), and the Universities of Guelph, Sherbrooke, Laval, and Waterloo. During 1990, SAR imagery was acquired during two periods (May and Jun.) to correspond to times of minimal and substantial vegetation cover. SAR data were acquired on three days in May and on four days in Jul. to cover different soil moisture conditions. This unique comprehensive data set is used to investigate the relationships between soil moisture and radar backscatter. The experiment and data collected are described, and a preliminary qualitative interpretation of the relationship between soil moisture and image tone is provided.
6. Antihydrogen formation dynamics in a multipolar neutral anti-atom trap
CERN Document Server
Andresen, G B; Bowe, P D; Bray, C; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Fajans, J; Fujiwara, M C; Gill, D R; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jørgensen, L V; Kerrigan, S J; Kurchaninov, L; Lambo, R; Madsen, N; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Seif El Nasr, S; Silveira, D M; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S; Yamazaki, Y
2010-01-01
Antihydrogen production in a neutral atom trap formed by an octupole-based magnetic field minimum is demonstrated using field-ionization of weakly bound anti-atoms. Using our unique annihilation imaging detector, we correlate antihydrogen detection by imaging and by field-ionization for the first time. We further establish how field-ionization causes radial redistribution of the antiprotons during antihydrogen formation and use this effect for the first simultaneous measurements of strongly and weakly bound antihydrogen atoms. Distinguishing between these provides critical information needed in the process of optimizing for trappable antihydrogen. These observations are of crucial importance to the ultimate goal of performing CPT tests involving antihydrogen, which likely depends upon trapping the anti-atom.
7. The effect of antiferromagnetic interchain coupling on multipolar phases in quasi-1D quantum helimagnets
International Nuclear Information System (INIS)
Coupled s = 1/2 frustrated Heisenberg chains with ferromagnetic nearest-neighbor and antiferromagnetic next-nearest-neighbor exchange interactions in high magnetic field are studied by density-matrix renormalization group (DMRG) and hard-core boson (HCB) approaches at T = 0. First, we propose an appropriate one-dimensional array for the construction of a 3D system to be studied with the DMRG method and demonstrate the performance by comparing the ground-state energy to the exact solution. Next, the binding energy of multimagnon bound state is calculated as a function of interchain coupling. We find that the multimagnon bound state is easily destroyed by weak interchain coupling. In the 2-magnon phase the DMRG results are supported by the HCB approach.
8. Multipolar universal relations between f -mode frequency and tidal deformability of compact stars
Science.gov (United States)
Chan, T. K.; Sham, Y.-H.; Leung, P. T.; Lin, L.-M.
2014-12-01
Though individual stellar parameters of compact stars usually demonstrate obvious dependence on the equation of state (EOS), EOS-insensitive universal formulas relating these parameters remarkably exist. In the present paper, we explore the interrelationship between two such formulas, namely the f-I relation connecting the f-mode quadrupole oscillation frequency ω2 and the moment of inertia I, and the I-Love-Q relations relating I, the quadrupole tidal deformability λ2, and the quadrupole moment Q, which have been proposed by Lau, Leung, and Lin [Astrophys. J. 714, 1234 (2010)] and Yagi and Yunes [Science 341, 365 (2013)], respectively. A relativistic universal relation between ωl and λl with the same angular momentum l = 2, 3, …, the so-called "diagonal f-Love relation" that holds for realistic compact stars and stiff polytropic stars, is unveiled here. An in-depth investigation in the Newtonian limit is further carried out to pinpoint its underlying physical mechanism and hence leads to a unified f-I-Love relation. We reach the conclusion that these EOS-insensitive formulas stem from a common physical origin: compact stars can be considered as quasi-incompressible when they react to slow time variations introduced by f-mode oscillations, tidal forces and rotations.
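Universal relations of the I-Love-Q type are conventionally expressed as EOS-insensitive polynomial fits in log-log space, ln y = Σᵢ aᵢ (ln x)ⁱ. The sketch below shows only this functional form; the coefficients are made up for illustration and are NOT the published fits from Yagi and Yunes or Lau et al.:

```python
import math

def universal_fit(x, coeffs):
    """Evaluate a universal relation of the form ln y = sum_i a_i * (ln x)**i."""
    lx = math.log(x)
    return math.exp(sum(a * lx**i for i, a in enumerate(coeffs)))

# Illustrative coefficients only -- not any published I-Love or f-Love fit.
coeffs = [1.5, 0.05, 0.02]
y = universal_fit(10.0, coeffs)  # hypothetical dimensionless pairing y(x)
```

In practice such fits are obtained by regressing dimensionless star properties computed for many candidate equations of state; the "universality" is the observation that the scatter about one fit stays at the percent level across EOSs.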
9. Multipolar representation of Maxwell and Schroedinger equations: Lagrangian and Hamiltonian formalisms: Examples
International Nuclear Information System (INIS)
The development of quantum engineering has put forward new theoretical problems. The behaviour of a single mesoscopic cell (device) can usually be described by the equations of quantum mechanics. However, if experimenters gather hundreds of thousands of similar cells, an artificial medium arises that needs to be described by new electromagnetic equations. The same problem arises when we try to describe e.g. the sublattice structure of such complex substances as perovskites. It is demonstrated that the inherent primacy of the vector potential in quantum systems leads to a generalization of the equations of electromagnetism by introducing toroid polarizations into them. To derive the equations of motion the Lagrangian and the Hamiltonian formalisms are used. Some examples where electromagnetic properties of molecules are described by the toroid moment are pointed out. (author). 26 refs, 7 figs
10. Multipolarity or cosmopolitanism? : A critique of Mouffe from a hegemony-theoretical perspective
DEFF Research Database (Denmark)
Hansen, Allan Dreyer
In a series of publications Chantal Mouffe (2004, 2005a, 2005b, 2008, 2009, 2013) has criticized cosmopolitanism for its lack of conceptualization of power, conflict and struggle, in short of politics. Even though this critique is largely well placed, the conclusions Mouffe draws from the analysis are flawed. As she puts it, if a cosmopolitan democracy “was ever realized, it could only signify the world hegemony of a dominant power that would have been able to impose its conception of the world on the entire planet and which, identifying its interests with those of humanity, would treat any disagreement as an illegitimate challenge to its ‘rational’ leadership”. Mouffe, On the Political pp. 106–7. I argue that Mouffe paradoxically seems to be using a traditional 'realist' conceptualization of hegemony, signifying simply domination. Against this I argue that a post-structuralist understanding of hegemony – as developed by Mouffe herself and Laclau in Hegemony and Socialist Strategy (Laclau and Mouffe, 1985) – precisely allows us to see the distance between universal values, such as freedom and equality for all, and their actual interpretation and use. The fact that the West is using democracy and human rights as legitimating devices for non-democratic goals should not make us abandon the realization of these values on the global scale as the political goal.
11. Multipolar permanent-magnet synchronous generators intended for wind power plants
Science.gov (United States)
Kovalev, L. K.; Kovalev, K. L.; Tulinova, Ye. Ye.; Ivanov, N. S.
2012-12-01
The analytical method of calculating two-dimensional magnetic fields in the active section of permanent-magnet synchronous electrical rotating machines, as applied to their use in the wind energy industry, has been developed. The analytical relationships for calculating distribution of two-dimensional magnetic fields and determining output parameters with due regard for geometry of the active section, the number of pairs of poles, and magnetic characteristics of materials have been obtained. The criteria dependences needed for calculating the electromotive force and main inductive reactance of permanent-magnet synchronous electric machines, with consideration for the geometry of a machine and electrophysical properties of materials being used, have been derived. The procedure of evaluating parameters of permanent-magnet synchronous generators for large-size wind power plants is presented.
12. Multipolarity of the 228.5-keV transition in 80Y
International Nuclear Information System (INIS)
We have unambiguously characterized the deexcitation of the 228.5-keV T1/2=4.7-s isomer in 80Y as an M3 transition. This result determines, in conjunction with other experimental data, the spin and parity of the 228.5-keV isomer and the 80Y ground state as 1- and 4-, respectively. (c) 2000 The American Physical Society
13. Use of the isolated problem approach for multi-compartment BEM models of electro-magnetic source imaging
Energy Technology Data Exchange (ETDEWEB)
Gencer, Nevzat G; Akalin-Acar, Zeynep [Department of Electrical and Electronics Engineering, Brain Research Laboratory, Middle East Technical University, 06531 Ankara (Turkey)
2005-07-07
The isolated problem approach (IPA) is a method used in the boundary element method (BEM) to overcome numerical inaccuracies caused by the high-conductivity difference in the skull and the brain tissues in the head. Haemaelaeinen and Sarvas (1989 IEEE Trans. Biomed. Eng. 36 165-71) described how the source terms can be updated to overcome these inaccuracies for a three-layer head model. Meijs et al (1989 IEEE Trans. Biomed. Eng. 36 1038-49) derived the integral equations for the general case where there are an arbitrary number of layers inside the skull. However, the IPA is used in the literature only for three-layer head models. Studies that use complex boundary element head models that investigate the inhomogeneities in the brain or model the cerebrospinal fluid (CSF) do not make use of the IPA. In this study, the generalized formulation of the IPA for multi-layer models is presented in terms of integral equations. The discretized versions of these equations are presented in two different forms. In a previous study (Akalin-Acar and Gencer 2004 Phys. Med. Biol. 49 5011-28), we derived formulations to calculate the electroencephalography and magnetoencephalography transfer matrices assuming a single layer in the skull. In this study, the transfer matrix formulations are updated to incorporate the generalized IPA. The effects of the IPA are investigated on the accuracy of spherical and realistic models when the CSF layer and a tumour tissue are included in the model. It is observed that, in the spherical model, for a radial dipole 1 mm close to the brain surface, the relative difference measure (RDM*) drops from 1.88 to 0.03 when IPA is used. For the realistic model, the inclusion of the CSF layer does not change the field pattern significantly. However, the inclusion of an inhomogeneity changes the field pattern by 25% for a dipole oriented towards the inhomogeneity. The effect of the IPA is also investigated when there is an inhomogeneity in the brain.
In addition to a considerable change in the scale of the potentials, the field pattern also changes by 15%. The computation times are presented for the multi-layer realistic head model.
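The RDM* figures above (1.88 dropping to 0.03 when the IPA is used) quantify the topography mismatch between computed and reference potentials. One common convention, sketched here for illustration (the paper's exact variant may differ), is the Euclidean distance between unit-normalized field vectors, which ranges from 0 (identical patterns) to 2 (opposite patterns):

```python
import math

def rdm_star(v_ref, v_test):
    """RDM*: Euclidean distance between the unit-normalized field vectors.

    0 means identical topographies; 2 is the maximum (anti-parallel fields).
    Amplitude differences are deliberately factored out by the normalization.
    """
    n_ref = math.sqrt(sum(x * x for x in v_ref))
    n_test = math.sqrt(sum(x * x for x in v_test))
    return math.sqrt(sum((a / n_ref - b / n_test) ** 2
                         for a, b in zip(v_ref, v_test)))

print(rdm_star([1.0, 0.0], [2.0, 0.0]))   # 0.0: same pattern, different scale
print(rdm_star([1.0, 0.0], [-1.0, 0.0]))  # 2.0: opposite patterns
```

Because RDM* ignores overall scale, it is usually reported together with a magnitude-ratio measure; the abstract's separate remark about "a considerable change in the scale of the potentials" reflects exactly that split.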
14. Damping of the giant resonances in a fluid-dynamical model
International Nuclear Information System (INIS)
The relevance of the anharmonic coupling between the normal modes to the damping of the giant resonances is investigated in a fluid-dynamical model. It is found that this mechanism leads to a weak damping which, however, increases very drastically with the wavevector, implying a very short lifetime for high-multipolarity modes. (Author)
15. The impact of the new Earth gravity models on the measurement of the Lense-Thirring effect with a new satellite
CERN Document Server
Iorio, L
2005-01-01
In this paper we investigate the opportunities offered by the new Earth gravity models from the dedicated CHAMP and, especially, GRACE missions to the project of measuring the general relativistic Lense-Thirring effect with a new Earth's artificial satellite. It turns out that it would be possible to abandon the stringent, and expensive, requirements on the orbital geometry of the originally proposed LARES mission (same semimajor axis a=12270 km of the existing LAGEOS and inclination i=70 deg) by inserting the new spacecraft in a relatively low, and cheaper, orbit (a=7500-8000 km, i\sim 70 deg) and suitably combining its node Omega with those of LAGEOS and LAGEOS II in order to cancel out the first even zonal harmonic coefficients of the multipolar expansion of the terrestrial gravitational potential J_2, J_4 along with their temporal variations. The total systematic error due to the mismodelling in the remaining even zonal harmonics would amount to \sim 1% and would be insensitive to departures of the inclinat...
16. Human in vitro reporter model of neuronal development and early differentiation processes
Directory of Open Access Journals (Sweden)
Bogdahn Ulrich
2008-02-01
Background During developmental and adult neurogenesis, doublecortin is an early neuronal marker expressed when neural stem cells assume a neuronal cell fate. To understand mechanisms involved in early processes of neuronal fate decision, we investigated cell lines for their capacity to induce expression of doublecortin upon neuronal differentiation and develop in vitro reporter models using doublecortin promoter sequences. Results Among various cell lines investigated, the human teratocarcinoma cell line NTERA-2 was found to fulfill our criteria. Following induction of differentiation using retinoic acid treatment, we observed a 16-fold increase in doublecortin mRNA expression, as well as strong induction of doublecortin polypeptide expression. The acquisition of a neuronal precursor phenotype was also substantiated by the establishment of a multipolar neuronal morphology and expression of additional neuronal markers, such as Map2, βIII-tubulin and neuron-specific enolase. Moreover, stable transfection in NTERA-2 cells of reporter constructs encoding fluorescent or luminescent genes under the control of the doublecortin promoter allowed us to directly detect induction of neuronal differentiation in cell culture, such as following retinoic acid treatment or mouse Ngn2 transient overexpression. Conclusion Induction of doublecortin expression in differentiating NTERA-2 cells suggests that these cells accurately recapitulate some of the very early events of neuronal determination. Hence, the use of reporter genes under the control of the doublecortin promoter in NTERA-2 cells will help us to investigate factors involved early in the course of neuronal differentiation processes. Moreover, the ease of detecting the induction of a neuronal program in this model will permit high-throughput screening for compounds acting on the early neuronal differentiation mechanisms.
17. How neurons migrate: a dynamic in-silico model of neuronal migration in the developing cortex
Directory of Open Access Journals (Sweden)
Skoblov Nikita
2011-09-01
Background Neuronal migration, the process by which neurons migrate from their place of origin to their final position in the brain, is a central process for normal brain development and function. Advances in experimental techniques have revealed much about many of the molecular components involved in this process. Notwithstanding these advances, how the molecular machinery works together to govern the migration process has yet to be fully understood. Here we present a computational model of neuronal migration, in which four key molecular entities, Lis1, DCX, Reelin and GABA, form a molecular program that mediates the migration process. Results The model simulated the dynamic migration process, consistent with in-vivo observations of morphological, cellular and population-level phenomena. Specifically, the model reproduced migration phases, cellular dynamics and population distributions that concur with experimental observations in normal neuronal development. We tested the model under reduced activity of Lis1 and DCX and found an aberrant development similar to observations in Lis1 and DCX silencing expression experiments. Analysis of the model gave rise to unforeseen insights that could guide future experimental study. Specifically: (1) the model revealed the possibility that under conditions of Lis1 reduced expression, neurons experience an oscillatory neuron-glial association prior to the multipolar stage; and (2) we hypothesized that observed morphology variations in rats and mice may be explained by a single difference in the way that Lis1 and DCX stimulate bipolar motility. From this we make the following predictions: (1) under reduced Lis1 and enhanced DCX expression, we predict a reduced bipolar migration in rats, and (2) under enhanced DCX expression in mice we predict a normal or a higher bipolar migration.
Conclusions We present here a system-wide computational model of neuronal migration that integrates theory and data within a precise, testable framework. Our model accounts for a range of observable behaviors and affords a computational framework to study aspects of neuronal migration as a complex process that is driven by a relatively simple molecular program. Analysis of the model generated new hypotheses and yet unobserved phenomena that may guide future experimental studies. This paper thus reports a first step toward a comprehensive in-silico model of neuronal migration.
18. Neural responses to auditory stimulus deviance under threat of electric shock revealed by spatially-filtered magnetoencephalography
OpenAIRE
Cornwell, Brian R.; Baas, Johanna M. P.; Johnson, Linda; Holroyd, Tom; Carver, Frederick W.; Lissek, Shmuel; Grillon, Christian
2007-01-01
Stimulus novelty or deviance may be especially salient in anxiety-related states due to sensitization to environmental change, a key symptom of anxiety disorders such as posttraumatic stress disorder (PTSD). We aimed to identify human brain regions that show potentiated responses to stimulus deviance during anticipatory anxiety. Twenty participants (14 men) were presented a passive oddball auditory task in which they were exposed to uniform auditory stimulation of tones with occasional deviat...
19. Chaoticity and Dissipation of Nuclear Collective Motion in a Classical Model
OpenAIRE
Baldo, M.; Burgio, G. F.; Rapisarda, A.; Schuck, P.
1996-01-01
We analyze the behavior of a gas of classical particles moving in a two-dimensional "nuclear" billiard whose multipole-deformed walls undergo periodic shape oscillations. We demonstrate that a single particle Hamiltonian containing coupling terms between the particles' motion and the collective coordinate induces a chaotic dynamics for any multipolarity, independently on the geometry of the billiard. The absence of coupling terms allows us to recover qualitatively the "wall ...
20. Models and role models.
Science.gov (United States)
Ten Cate, Jacob M
2015-01-01
Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in the present-day society triggered us to expand our knowledge in the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study separate processes which together may lead to dental caries. Also products and novel agents could be evaluated that interfere with either of the processes. Having these separate models in place, a suggestion is made to design computer models to encompass the available information. Models but also role models are of the utmost importance in bringing and guiding research and researchers. © 2015 S. Karger AG, Basel. PMID:25871413
1. Dual-frequency and Multi-polarization Shuttle Imaging Radar for Volcano Detection in Kunlun Mountain of Western China
Science.gov (United States)
Huadong, G.; Jingjuan, L.; Changlin, W.; Caho, W.; Farr, T. G.; Evans, D. L.
1996-01-01
This paper discusses the methods and mechanisms of detecting volcanoes using L- and C-band imaging radar with HH, HV and VV polarizations, and gives the spatial distribution of volcanoes more than 5300 m above sea level, their eruptive phases and analytical results on rock components.
2. The African Union (AU), New Partnership for Africa's Development (NEPAD) and regional integration in Africa in a multipolar world
OpenAIRE
Asogwa, Felix Chinwe
2014-01-01
It is trite to argue that regional integration or cooperation in Africa is deeply rooted in the historical evolution of the continent’s socio-political forces. No doubt, the trans-Atlantic slave trade created a huge social, political, economic, and cultural distortion in Africa. It was a period when millions of productive Africans were forcefully uprooted from the continent and taken to Europe and the Americas. However, the end of the slave trade opened a new vista in the efforts of peop...
3. Intense ultra-broadband down-conversion in co-doped oxide glass by multipolar interaction process.
Science.gov (United States)
Liu, Zijun; Yang, Luyun; Dai, Nengli; Chu, Yingbo; Chen, Qiaoqiao; Li, Jinyan
2013-05-20
We report that Eu(2+) can be an efficient sensitizer for Yb(3+) and a broadband absorber of blue solar spectra in an oxide glass host. The greenish 4f → 5d transition of Eu(2+) and the characteristic near-infrared emission of Yb(3+) were observed under blue-light xenon-lamp excitation. The 5d energy can be adjusted by the host, and the energy-transfer efficiency can be enhanced. The quantum efficiency is up to 163.8%. Given the broad excitation band, high absorption coefficient and excellent mechanical, thermal and chemical stability, this system can be useful as a down-conversion layer for solar cells. PMID:23736483
4. Model’s comparison
DEFF Research Database (Denmark)
Hisham Beshara Halasa, Tariq; Boklund, Anette
2012-01-01
5. Design of mini-orange spectrometers and their application to nuclear structure studies
International Nuclear Information System (INIS)
The design and properties of a mini-orange spectrometer for internal-conversion experiments are described. The application of such a system to the study of the decay of Coulomb-excited 191,193Ir nuclei is presented. Some E2/M1 mixing ratios of transitions with mixed multipolarities are deduced. The experimental energy levels and reduced matrix elements of excited 191,193Ir are compared with two model calculations, namely the particle-plus-triaxial-rotor model and the interacting boson-fermion model. A mini-orange spectrometer was also used to study the multipolarities of the decay of high-spin continuum states in 130Ce
6. Modelling the models
CERN Multimedia
Anaïs Schaeffer
2012-01-01
By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. Average transverse momentum (pT) as a function of rapidity loss Δy. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known &...
7. Polarimetric SAR Data for Urban Land Cover Classification Using Finite Mixture Model
Science.gov (United States)
2013-04-01
Image classification techniques play an important role in automatic analysis of remote sensing data. This paper demonstrates the potential of polarimetric synthetic aperture radar (PolSAR) for urban land cover mapping using an unsupervised classification approach. Analysis of PolSAR images often shows that non-Gaussian models give a better representation of the scattering vector statistics. Hence, processing algorithms based on non-Gaussian statistics should improve performance compared to complex Gaussian distributions. Several distributions could be used to model SAR image texture with different spatial correlation properties and various degrees of inhomogeneity [1-3]. Statistical properties are widely used for image segmentation and land cover classification of PolSAR data. Pixel-based approaches cluster individual pixels through analysis of their statistical properties. Those methods work well on relatively coarse spatial resolution images, but classification results based on pixelwise analysis exhibit the salt-and-pepper effect of speckle in medium and high resolution applications such as urban area monitoring [4]. Therefore, the expected improvement of the classification results is hindered by the increase of textural differences within a class. In such situations, enhancement can be made by exploiting the contextual correlation among pixels with Markov random field (MRF) models [4, 5]. The potential of MRF models to retrieve spatial contextual information is expected to improve the accuracy and reliability of image classification. Unsupervised contextual polarimetric SAR image segmentation is addressed by combining statistical modeling and spatial context within an MRF framework. We employ the stochastic expectation maximization (SEM) algorithm [6] to jointly perform clustering of the data and parameter estimation of the statistical distribution conditioned to each image cluster and the MRF model.
This classification method is applied on medium resolution L-band ALOS data from Tehran, Iran. Clustering results are presented and discussed in the full paper, also comparing the classification approach with other commonly used algorithms. References: [1] J.-S. Lee, M. Grunes, and R. Kwok, "Classification of multi-look polarimetric SAR imagery based on the complex Wishart distribution," Int. J. Remote Sens., vol. 15, no. 11, pp. 2299-2311, Jul. 1994. [2] C. C. Freitas, A. C. Frery, and A. H. Correia, "The polarimetric G0 distribution for SAR data analysis," Environmetrics, vol. 16, no. 1, pp. 13-31, Feb. 2005. [3] A. P. Doulgeris, S. N. Anfinsen, and T. Eltoft, "Automated non-Gaussian clustering of polarimetric synthetic aperture radar images," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 10, pp. 3665-3676, Oct. 2011. [4] V. Akbari, A. P. Doulgeris, G. Moser, S. N. Anfinsen, T. Eltoft, and S. Serpico, "A textural-contextual model for unsupervised segmentation of multi-polarization synthetic aperture radar images," IEEE Transactions on Geoscience and Remote Sensing, in press. [5] S. Li, "Markov Random Field Modeling in Image Analysis," 3rd ed. London, U.K., Springer-Verlag, 2009.
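The SEM iteration the abstract describes, alternating a stochastic labelling draw with parameter re-estimation, can be illustrated on a toy problem. The sketch below applies stochastic EM to a one-dimensional two-component Gaussian mixture rather than to PolSAR data, and it omits the MRF spatial term; the function name, initialization and parameter values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sem_fit(data, k=2, iters=200, seed=0):
    """Stochastic EM for a 1-D Gaussian mixture (toy illustration)."""
    rng = random.Random(seed)
    # Simple deterministic init for k=2: split the data at its median.
    med = sorted(data)[len(data) // 2]
    z = [0 if x < med else 1 for x in data]   # hard cluster labels
    pi = [1.0 / k] * k                        # mixing weights
    mu = [0.0] * k                            # component means
    sigma = [1.0] * k                         # component std devs
    for _ in range(iters):
        # M-step: re-estimate parameters from the current hard labels.
        for j in range(k):
            pts = [x for x, zx in zip(data, z) if zx == j]
            if not pts:
                continue                      # keep old params if a cluster empties
            pi[j] = len(pts) / len(data)
            mu[j] = sum(pts) / len(pts)
            var = sum((x - mu[j]) ** 2 for x in pts) / len(pts)
            sigma[j] = math.sqrt(max(var, 1e-6))
        # S-step: draw a new label for each point from its posterior.
        for n, x in enumerate(data):
            w = [pi[j] / sigma[j] * math.exp(-(x - mu[j]) ** 2 / (2 * sigma[j] ** 2))
                 for j in range(k)]
            tot = sum(w)
            if tot == 0.0:
                continue
            u = rng.random() * tot
            acc = 0.0
            for j in range(k):
                acc += w[j]
                if u <= acc:
                    z[n] = j
                    break
    return sorted(mu)
```

In the paper's setting the S-step would additionally condition each label on its neighbours through the MRF prior; here labels are drawn independently per pixel-analogue.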
8. Modeling Malaria
Science.gov (United States)
Angela B. Shiflet
In this module, we develop models of the effects of malaria on various populations of humans and mosquitoes. After considering differential equations to model a system, we create a model using the systems modeling tool STELLA. Projects involve various refinements of the model.
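A minimal version of the kind of differential-equation model the module describes can be written without STELLA. The sketch below integrates a Ross-Macdonald-style two-compartment malaria model (infected human fraction, infected mosquito fraction) with a forward-Euler step; the function name and all parameter values are illustrative assumptions, not taken from the module.

```python
def simulate_malaria(days=2000.0, dt=0.1,
                     a=0.3,    # mosquito biting rate (bites per mosquito per day)
                     b=0.5,    # probability an infectious bite infects a human
                     c=0.5,    # probability a bite on an infectious human infects the mosquito
                     m=10.0,   # mosquitoes per human
                     r=0.01,   # human recovery rate (per day)
                     g=0.1):   # mosquito death rate (per day)
    """Forward-Euler integration of a Ross-Macdonald-style malaria model."""
    ih, im = 0.01, 0.0   # infected fractions of humans and mosquitoes
    for _ in range(int(days / dt)):
        dih = a * b * m * im * (1.0 - ih) - r * ih
        dim = a * c * ih * (1.0 - im) - g * im
        ih += dt * dih
        im += dt * dim
    return ih, im
```

With these rates the basic reproduction number a²bcm/(rg) is far above 1, so the simulation settles at a high endemic level rather than dying out; refinements of the kind the projects call for would vary these parameters or add compartments.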
9. Modelling Practice
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed, and strategies are also given to improve model development towards “fit-for-purpose” models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very successful in many model-building activities. It is vital that good methodologies are adopted for both thoroughness and efficiency purposes. Asking good questions at each modelling stage can aid in getting to effective and efficient solutions in modelling practice. Modelling is very much a ‘goal oriented’ activity, under constraints of system insight, time, cost and human resources. The George Box dictum that “all models are wrong, some are useful” should be coupled with the parsimony principle to ensure optimal outcomes.
10. Model uncertainty and model inaccuracy
International Nuclear Information System (INIS)
The problem of model uncertainty versus model inaccuracy is examined in the light of the concept of the 'probability of correctness of a model under a given context' introduced by Apostolakis. To avoid possible difficulties linked with this concept, a distinction is introduced between 'predictive' models and 'constitutive' models, the former being generic in the sense that they can host the latter as submodels. A metric or distance between linear models as well as an objective of the model are introduced, from which we can give an operational definition of 'model uncertainty' (with respect to distribution of parameters of the associated constitutive models) and of 'model accuracy' with respect to a reference model. Finally the choice of a predictive model is linked to a loss function and a cost of using or defining a model
11. Model uncertainty and model inaccuracy
Energy Technology Data Exchange (ETDEWEB)
Devooght, J
1998-02-01
The problem of model uncertainty versus model inaccuracy is examined in the light of the concept of the 'probability of correctness of a model under a given context' introduced by Apostolakis. To avoid possible difficulties linked with this concept, a distinction is introduced between 'predictive' models and 'constitutive' models, the former being generic in the sense that they can host the latter as submodels. A metric or distance between linear models as well as an objective of the model are introduced, from which we can give an operational definition of 'model uncertainty' (with respect to distribution of parameters of the associated constitutive models) and of 'model accuracy' with respect to a reference model. Finally the choice of a predictive model is linked to a loss function and a cost of using or defining a model.
12. Fair Model
Science.gov (United States)
Betty Blecha
The Fair model web site includes a freely available United States macroeconomic econometric model and a multicountry econometric model. The models run on the Windows OS. Instructors can use the models to teach forecasting, run policy experiments, and evaluate historical episodes of macroeconomic behavior. The web site includes extensive documentation for both models. The simulation is for upper-division economics courses in macroeconomics or econometrics. The principal developer is Ray Fair at Yale University.
13. Player Modeling
OpenAIRE
Yannakakis, Georgios N.; Spronck, Pieter; Loiacono, Daniele; André, Elisabeth
2013-01-01
Player modeling is the study of computational models of players in games. This includes the detection, modeling, prediction and expression of human player characteristics which are manifested through cognitive, affective and behavioral patterns. This chapter introduces a holistic view of player modeling and provides a high level taxonomy and discussion of the key components of a player's model. The discussion focuses on a taxonomy of approaches for constructing a player model, the availabl...
14. Model theory
CERN Document Server
Chang, CC
2013-01-01
Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko
15. Coulomb dissociation of 8B and the low-energy cross section of the 7Be(p,gamma)8B solar fusion reaction
OpenAIRE
Schuemann, F.; Hammache, F.; Typel, S.; Uhlig, F.; Suemmerer, K.; Boettcher, I.; Cortina, D.; Foerster, A.; Gai, M.; Geissel, H.; Greife, U.; Iwasa, N.; Koczon, P.; Kohlmeyer, B.; Kulessa, R.
2003-01-01
An exclusive measurement of the Coulomb breakup of 8B into 7Be+p at 254 A MeV allowed the study of the angular correlations of the breakup particles. These correlations demonstrate clearly that E1 multipolarity dominates and that E2 multipolarity can be neglected. By using a simple single-particle model for 8B and treating the breakup in first-order perturbation theory, we extract a zero-energy S factor of S17(0) = 18.6 ± 1.2 ± 1.0 eV b.
16. Coulomb dissociation of 8B and the low-energy cross section of the 7Be(p,gamma)8B solar fusion reaction
CERN Document Server
Schuemann, F; Cortina-Gil, D; Förster, A; Gai, M; Geissel, H; Greife, U; Hammache, F; Iwasa, N; Koczón, P; Kohlmeyer, B; Kulessa, R; Kumagai, H; Kurz, N; Menzel, M; Motobayashi, T; Oeschler, H; Ozawa, A; Ploskon, M; Prokopowicz, W; Schwab, E; Senger, P; Strieder, F; Sturm, C; Sun, Z Y; Surówka, G; Sümmerer, K; Typel, S; Uhlig, F; Wagner, A; Walús, W; Sun, Zhi-Yu
2003-01-01
An exclusive measurement of the Coulomb breakup of 8B into 7Be+p at 254 A MeV allowed the study of the angular correlations of the breakup particles. These correlations demonstrate clearly that E1 multipolarity dominates and that E2 multipolarity can be neglected. By using a simple single-particle model for 8B and treating the breakup in first-order perturbation theory, we extract a zero-energy S factor of S17(0) = 18.6 ± 0.5 ± 1.0 eV b.
17. Coulomb dissociation of 8B and the low-energy cross section of the 7Be(p,gamma)8B solar fusion reaction.
Science.gov (United States)
Schümann, F; Hammache, F; Typel, S; Uhlig, F; Sümmerer, K; Böttcher, I; Cortina, D; Förster, A; Gai, M; Geissel, H; Greife, U; Iwasa, N; Koczoń, P; Kohlmeyer, B; Kulessa, R; Kumagai, H; Kurz, N; Menzel, M; Motobayashi, T; Oeschler, H; Ozawa, A; Płoskoń, M; Prokopowicz, W; Schwab, E; Senger, P; Strieder, F; Sturm, C; Sun, Zhi-Yu; Surówka, G; Wagner, A; Waluś, W
2003-06-13
An exclusive measurement of the Coulomb breakup of 8B into 7Be+p at 254 A MeV allowed the study of the angular correlations of the breakup particles. These correlations demonstrate clearly that E1 multipolarity dominates and that E2 multipolarity can be neglected. By using a simple single-particle model for 8B and treating the breakup in first-order perturbation theory, we extract a zero-energy S factor of S17(0) = 18.6 ± 1.2 ± 1.0 eV b, where the first error is experimental and the second one reflects the theoretical uncertainty in the extrapolation. PMID:12857251
18. Hydrological models are mediating models
OpenAIRE
2013-01-01
Despite the increasing role of models in hydrological research and decision-making processes, only few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" betwe...
19. Crowd modelling
OpenAIRE
Hartmann, Mikkel; Xiao, Wence; Christensen, Troels; Albrechtsen, Dan Elmkvist; Thrane, Malik; Høiland-jørgensen, Toke
2011-01-01
We look at social force models as a way to model the behaviour of human crowds, in order to evaluate how well these types of models simulate crowd behaviour, and what the models' strengths and weaknesses are. In order to do this evaluation, we implement a computer simulation of an exemplary social force model. In order to create this simulation, we pick an exemplary model that is well described in the article that presents it, and analyse it in detail, filling in details from other art...
20. Supermatrix models
International Nuclear Information System (INIS)
Random matrix models based on an integral over supermatrices are proposed as a natural extension of bosonic matrix models. The subtle nature of superspace integration allows these models to have very different properties from the analogous bosonic models. Two choices of integration slice are investigated. One leads to a perturbative structure which is reminiscent of, and perhaps identical to, the usual Hermitian matrix models. Another leads to an eigenvalue reduction which can be described by a two component plasma in one dimension. A stationary point of the model is described
1. Supermatrix Models
CERN Document Server
Yost, S A
1992-01-01
Random matrix models based on an integral over supermatrices are proposed as a natural extension of bosonic matrix models. The subtle nature of superspace integration allows these models to have very different properties from the analogous bosonic models. Two choices of integration slice are investigated. One leads to a perturbative structure which is reminiscent of, and perhaps identical to, the usual Hermitian matrix models. Another leads to an eigenvalue reduction which can be described by a two component plasma in one dimension. A stationary point of the model is described.
2. Geochemical modeling
International Nuclear Information System (INIS)
Contributions to the workshop 'Geochemical modeling', held from 19 to 20 September 1990 at the Karlsruhe Nuclear Research Centre. The report contains the programme and a selection of the lectures held at the workshop 'Geochemical modeling'. (BBR)
3. Landscape Models
Science.gov (United States)
David Marchetti
In this assignment students model different scenarios of landscape evolution using an on-line landscape evolution model. The assignment takes them through several situations involving changes in commonly modeled landscape variables like overland flow, faulting and uplift, erosivity, and drainage incision. At the end I have students devise a situation (of variables) that tests a hypothesis or the sensitivity of the model to changes in a variable. Designed for a geomorphology course Uses online and/or real-time data
4. SIR Model
Science.gov (United States)
Tony Weisstein (Truman State University; Biology)
2007-06-20
This worksheet implements an SIR (Susceptible/ Infected/ Resistant) model of epidemiology for vector-borne diseases. Up to three microbial strains with different virulence and transmission parameters can be modeled and the results graphed. Originally designed to explore coevolution of myxoma and rabbits, the model is easily generalized to other systems.
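A minimal text-only version of such an SIR model is easy to write down. The sketch below integrates the standard single-strain SIR equations with a forward-Euler step (the worksheet's vector-borne, multi-strain extension is omitted); the function name and parameter values are illustrative assumptions, not taken from the worksheet.

```python
def sir(beta=0.3, gamma=0.1, i0=0.01, days=200.0, dt=0.1):
    """Forward-Euler integration of the classic SIR epidemic model.

    beta  : transmission rate (per day)
    gamma : recovery rate (per day); R0 = beta / gamma
    """
    s, i, r = 1.0 - i0, i0, 0.0   # susceptible/infected/resistant fractions
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s += dt * ds
        i += dt * di
        r += dt * dr
        peak = max(peak, i)      # track peak prevalence
    return s, i, r, peak
```

With R0 = 3 the epidemic peaks once s falls to 1/R0 and then burns out, leaving only a few percent of the population never infected; a multi-strain variant would carry one (s, i, r, beta, gamma) set per strain.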
5. Hydrological models are mediating models
Science.gov (United States)
2013-08-01
Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It results from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction would deserve closer attention, for being rarely explicitly presented in peer-reviewed literature. We believe that devoting more importance to identifying and communicating on the many factors involved in model development might increase transparency of model building.
6. Hydrological models are mediating models
Directory of Open Access Journals (Sweden)
L. V. Babel
2013-08-01
Full Text Available Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models convey the role of instruments in scientific practice by mediating between theory and the world. It results from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction would deserve closer attention, for being rarely explicitly presented in peer-reviewed literature. We believe that devoting more importance to identifying and communicating on the many factors involved in model development might increase transparency of model building.
7. Constitutive Models
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Piccolo, Chiara
2011-01-01
This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature dependent behaviour of physical properties are introduced, as well as equation of state models (EOS) such as the SRK EOS. Modelling of liquid phase activity coefficients are also covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses the degrees of freedom, dependent and independent variables and solution strategy. Vapour-liquid and solid-liquid equilibrium is covered, and applications to droplet evaporation and kinetic models are given.
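As a concrete instance of the equation-of-state models mentioned above, the sketch below evaluates the SRK EOS for a pure component, solving the cubic in the compressibility factor Z with Newton's method. The correlations for a, b and α are the standard SRK forms; the function name and the choice of nitrogen critical properties are illustrative assumptions.

```python
import math

def srk_z(T, P, Tc, Pc, omega, R=8.314):
    """Vapor-phase compressibility factor Z from the Soave-Redlich-Kwong EOS."""
    m = 0.480 + 1.574 * omega - 0.176 * omega ** 2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc))) ** 2
    a = 0.42748 * R ** 2 * Tc ** 2 / Pc * alpha   # attraction parameter
    b = 0.08664 * R * Tc / Pc                     # co-volume
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Cubic in Z: Z^3 - Z^2 + (A - B - B^2) Z - A B = 0.
    # Newton iteration from Z = 1 converges to the vapor root.
    z = 1.0
    for _ in range(50):
        f = z ** 3 - z ** 2 + (A - B - B ** 2) * z - A * B
        fp = 3 * z ** 2 - 2 * z + (A - B - B ** 2)
        step = f / fp
        z -= step
        if abs(step) < 1e-12:
            break
    return z

# Nitrogen at ambient conditions behaves almost ideally (Z close to 1).
z_n2 = srk_z(T=300.0, P=1.0e5, Tc=126.2, Pc=3.394e6, omega=0.040)
```

A full model analysis of the kind the chapter advocates would also identify the degrees of freedom here: T, P and the three critical-point constants are the independent variables, Z the dependent one.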
8. ICRF modelling
International Nuclear Information System (INIS)
This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. The validity and limitations of three distinct types of modelling codes will be contrasted: discrete models which utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture will conclude with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs
9. Nested Models and Model Uncertainty
OpenAIRE
Kriwoluzky, Alexander; Stoltenberg, Christian A.
2009-01-01
Uncertainty about the appropriate choice among nested models is a central concern for optimal policy when policy prescriptions from those models differ. The standard procedure is to specify a prior over the parameter space ignoring the special status of some sub-models, e.g. those resulting from zero restrictions. This is especially problematic if a model's generalization could be either true progress or the latest fad found to fit the data. We propose a procedure that ensures that the spe...
10. Inter-trial effect in luminance processing revealed by magnetoencephalography / Efecto inter-ensayo en el procesamiento de iluminación revelado por magnetoencefalografía
Scientific Electronic Library Online (English)
Aki, Kondo; Katsumi, Watanabe.
2013-12-15
Full Text Available In this study, we examined whether luminance processing in the human visual system would exhibit any history effect (i.e., inter-trial modulation) in psychophysical and magnetoencephalographic (MEG) experiments. A disk was presented against a black background at various luminance levels in a randomized order. During the MEG recording, participants were instructed to rate the brightness of the disk (magnitude estimation) and to report it aloud during the inter-stimulus interval. The MEG results showed that the neuromagnetic activation around 200-220 ms after stimulus onset in the left occipito-temporal regions at a given trial was weaker when the disk luminance in the immediately prior trial was higher. An inverse inter-trial effect was also observed in the psychophysical experiment. These findings suggest that the neuromagnetic activity reflects the inter-trial modulation of luminance processing that correlates with the subjective perception of brightness.
11. Model choice versus model criticism
OpenAIRE
Robert, Christian P.; Mengersen, Kerrie; Chen, Carla
2009-01-01
The new perspectives on ABC and Bayesian model criticisms presented in Ratmann et al.(2009) are challenging standard approaches to Bayesian model choice. We discuss here some issues arising from the authors' approach, including prior influence, model assessment and criticism, and the meaning of error in ABC.
12. Ventilation Model
International Nuclear Information System (INIS)
The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future
13. Experimental and database-transferred electron-density analysis and evaluation of electrostatic forces in coumarin-102 dye.
Science.gov (United States)
Bibila Mayaya Bisseyou, Yvon; Bouhmaida, Nouhza; Guillot, Benoit; Lecomte, Claude; Lugan, Noel; Ghermani, Noureddine; Jelsch, Christian
2012-12-01
The electron-density distribution of a new crystal form of coumarin-102, a laser dye, has been investigated using the Hansen-Coppens multipolar atom model. The charge density was refined versus high-resolution X-ray diffraction data collected at 100 K and was also constructed by transferring the charge density from the Experimental Library of Multipolar Atom Model (ELMAM2). The topology of the refined charge density has been analysed within the Bader `Atoms In Molecules' theory framework. Deformation electron-density peak heights and topological features indicate that the chromen-2-one ring system has a delocalized π-electron cloud in resonance with the N (amino) atom. The molecular electrostatic potential was estimated from both experimental and transferred multipolar models; it reveals an asymmetric character of the charge distribution across the molecule. This polarization effect is due to a substantial charge delocalization within the molecule. The molecular dipole moments derived from the experimental and transferred multipolar models are also compared with the liquid and gas-phase dipole moments. The substantial molecular dipole-moment enhancements observed in the crystal environment originate from the crystal field and from intermolecular charge transfer induced and controlled by C-H···O and C-H···N intermolecular hydrogen bonds. The atomic forces were integrated over the atomic basins and compared for the two electron-density models. PMID:23165601
14. Turbulence modelling
International Nuclear Information System (INIS)
This paper is an introductory course in modelling turbulent thermohydraulics, aimed at computational fluid dynamics users. No specific knowledge other than the Navier-Stokes equations is required beforehand. Chapter I (which those who are not beginners can skip) provides basic ideas on turbulence physics and is taken up in a textbook prepared by the teaching team of the ENPC (Benque, Viollet). Chapter II describes turbulent-viscosity type modelling and the k-ε two-equation model. It provides details of the channel flow case and the boundary conditions. Chapter III describes the 'standard' (Rij-ε) Reynolds stress transport model and introduces more recent models called 'feasible'. A second paper deals with heat transfer and the effects of gravity, and returns to the Reynolds stress transport model. (author)
15. Event Modeling
DEFF Research Database (Denmark)
Bækgaard, Lars
2001-01-01
The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse are dynamic, and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects, events are phenomena that can be observed and described. For example, borrow events in a library can be characterized by their occurrence times and the participating books and borrowers. When we characterize events as information objects we focus on concepts like information structures. When viewed as change agents, events are phenomena that trigger change. For example, when a borrow event occurs, books are moved temporarily from bookcases to borrowers. When we characterize events as change agents we focus on concepts like transactions, entity processes, and workflow processes.
16. Spherical models
CERN Document Server
Wenninger, Magnus J
2014-01-01
Well-illustrated, practical approach to creating star-faced spherical forms that can serve as basic structures for geodesic domes. Complete instructions for making models from circular bands of paper with just a ruler and compass. Discusses tessellation, or tiling, and how to make spherical models of the semiregular solids and concludes with a discussion of the relationship of polyhedra to geodesic domes and directions for building models of domes. ". . . very pleasant reading." - Science. 1979 edition.
17. Mental models
Directory of Open Access Journals (Sweden)
Marco Antonio Moreira
1996-12-01
Full Text Available The mental models subject is presented particularly in the light of Johnson-Laird’s theory. Views from different authors are also presented, but the emphasis lies on Johnson-Laird’s approach, proposing mental models as a third path in the images-versus-propositions debate. In this perspective, the nature, content, and typology of mental models are discussed, as well as the issue of consciousness and computability. In addition, the methodology of research studies is presented. Essentially, the aim of the paper is to provide an introduction to the mental models topic, having science education research in mind.
18. Role of short range correlations on nuclear matrix elements of neutrinoless double beta decay
International Nuclear Information System (INIS)
Employing four different parametrizations of the pairing plus multipolar type of effective two-body interaction and three different parametrizations of Jastrow-type short-range correlations, the uncertainties in the nuclear transition matrix elements due to the exchange of light as well as heavy Majorana neutrinos for the 0+ → 0+ transition of neutrinoless positron ββ decay are estimated in the PHFB model.
19. Damping rates of surface plasmons for particles of size from nano- to micrometers; reduction of the nonradiative decay
International Nuclear Information System (INIS)
Damping rates of multipolar, localized surface plasmons (SPs) of gold and silver nanospheres of radii up to 1000 nm were found with the tools of classical electrodynamics. The significant increase in damping rates followed by noteworthy decrease for larger particles takes place along with substantial red-shift of plasmon resonance frequencies as a function of particle size. We also introduced interface damping into our modeling, which substantially modifies the plasmon damping rates of smaller particles. We demonstrate unexpected reduction of the multipolar SP damping rates in certain size ranges. This effect can be explained by the suppression of the nonradiative decay channel as a result of the lost competition with the radiative channel. We show that experimental dipole damping rates [H. Baida, et al., Nano Lett. 9(10) (2009) 3463, and C. Sönnichsen, et al., Phys. Rev. Lett. 88 (2002) 077402], and the resulting resonance quality factors can be described in a consistent and straightforward way within our modeling extended to particle sizes still unavailable experimentally. -- Highlights: • We model plasmon damping rates up to the uncommonly large particles of 1000 nm. • We demonstrate reduction of multipolar SP damping rates below its low size limit. • We show that the radiative decay competes with the nonradiative processes. • We model the quality Q-factor of SP multipolar resonances as a function of size. • We confront our size characteristics with the experimental results of other authors.
20. Study of the M7 magnetic multipoles in 51V and 59Co, and M9 in 93Nb and 209Bi by elastic scattering of high-momentum-transfer electrons
International Nuclear Information System (INIS)
The magnetization distribution of the odd-even nuclei 51V and 93Nb has been investigated by elastic electron scattering at a backward angle of 155 deg. The highest multipolarity has been mapped out accurately up to 2.86 fm-1, in the absence of background. Results are interpreted in the framework of the shell model.
1. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise
OpenAIRE
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Lin, Fa-hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.
2011-01-01
How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional ...
2. Clinical Application of Spatiotemporal Distributed Source Analysis in Presurgical Evaluation of Epilepsy
OpenAIRE
Naoaki Tanaka
2014-01-01
Magnetoencephalography (MEG), which acquires neuromagnetic fields in the brain, is a useful diagnostic tool in presurgical evaluation of epilepsy. Previous studies have shown that MEG affects the planning of intracranial electroencephalography placement and correlates with surgical outcomes by using a single dipole model. Spatiotemporal source analysis using distributed source models is an advanced method for analyzing MEG, and has been recently introduced for analyzing epileptic spikes. It has ...
3. Entrepreneurship Models.
Science.gov (United States)
Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.
This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models
4. Chameleonic σ-models
International Nuclear Information System (INIS)
(1+1)-dimensional, non-linear and (2,2)-supersymmetric σ-models are constructed in which the target space changes topology at distinguished regions of the parameter space. In particular, a σ-model formulation is provided for the recently discovered topological transitions among many of the Calabi-Yau manifolds. (orig.)
5. Zitterbewegung modeling
Energy Technology Data Exchange (ETDEWEB)
Hestenes, D. (Arizona State Univ., Tempe (United States))
1993-03-01
Guidelines for constructing point particle models of the electron with zitterbewegung and other features of the Dirac theory are discussed. Such models may at least be useful approximations to the Dirac theory, but the more exciting possibility is that this approach may lead to a more fundamental reality. 6 refs.
6. Zitterbewegung modeling
International Nuclear Information System (INIS)
Guidelines for constructing point particle models of the electron with zitterbewegung and other features of the Dirac theory are discussed. Such models may at least be useful approximations to the Dirac theory, but the more exciting possibility is that this approach may lead to a more fundamental reality. 6 refs
7. Daisyworld Model
Science.gov (United States)
James Lovelock
The simulation exercise uses a STELLA-based model called Daisyworld to explore concepts associated with Earth's energy balance and climate change. Students examine the evolution of a simplified model of an imaginary planet with only two species of life on its surface -- white and black daisies -- with different albedos. The daisies can alter the temperature of the surface where they are growing.
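For readers without STELLA, the published Watson-Lovelock equations behind Daisyworld can be integrated in a few lines. The following is a minimal Python sketch, not the STELLA model itself: the physical constants are the standard published Daisyworld values, while the seed areas, time step, and step count are arbitrary choices made here for illustration.

```python
import math

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_FLUX = 917.0       # solar flux scale used by Watson & Lovelock, W m^-2
Q_PRIME = 2.06e9     # local heat-transfer coefficient, K^4
T_OPT, DEATH = 295.5, 0.3   # optimal growth temperature (K), daisy death rate

def growth(temp_k):
    """Parabolic growth response; zero outside roughly 5-40 deg C."""
    return max(0.0, 1.0 - 0.003265 * (T_OPT - temp_k) ** 2)

def daisyworld(lum, steps=4000, dt=0.05):
    """Integrate white/black daisy areas to near steady state at luminosity lum."""
    a_w = a_b = 0.01                          # seed areas (arbitrary)
    for _ in range(steps):
        a_g = 1.0 - a_w - a_b                 # bare-ground fraction
        albedo = 0.5 * a_g + 0.75 * a_w + 0.25 * a_b
        te4 = S_FLUX * lum * (1.0 - albedo) / SIGMA      # planetary T^4
        t_w = (Q_PRIME * (albedo - 0.75) + te4) ** 0.25  # local daisy temps
        t_b = (Q_PRIME * (albedo - 0.25) + te4) ** 0.25
        a_w += dt * a_w * (a_g * growth(t_w) - DEATH)
        a_b += dt * a_b * (a_g * growth(t_b) - DEATH)
        a_w, a_b = max(a_w, 1e-3), max(a_b, 1e-3)        # keep seeds alive
    return te4 ** 0.25, a_w, a_b

temp, white, black = daisyworld(1.0)
print(f"T = {temp:.1f} K, white = {white:.2f}, black = {black:.2f}")
```

Running it at increasing luminosities shows the hallmark behaviour students see in the exercise: black daisies dominate on a cool planet, white daisies on a hot one, and the surface temperature stays near the growth optimum over a wide luminosity range.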
8. Modeling Sunspots
Science.gov (United States)
Oh, Phil Seok; Oh, Sung Jin
2013-01-01
Modeling in science has been studied by education researchers for decades and is now being applied broadly in school. It is among the scientific practices featured in the "Next Generation Science Standards" ("NGSS") (Achieve Inc. 2013). This article describes modeling activities in an extracurricular science club in a high…
9. Scale Models
Science.gov (United States)
McDonald Observatory
2011-01-01
In this activity, learners explore the relative sizes and distances of objects in the solar system. Without being informed of the expected product, learners will make a Play-doh model of the Earth-Moon system, scaled to size and distance. The facilitator reveals the true identity of the system at the conclusion of the activity. During the construction phase, learners try to guess what members of the solar system their model represents. Each group receives different amounts of Play-doh, with each group assigned a color (red, blue, yellow, white). At the end, groups set up their models and inspect the models of other groups. They report patterns of scale that they notice; as the amount of Play-doh increases, for example, so do the size and distance of the model. This resource guide includes background information about the Earth to Moon ratio and solar eclipses.
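The scaling arithmetic behind the activity is simple enough to check directly. Below is an illustrative Python sketch using standard astronomical figures; the 10 cm model Earth is an arbitrary example, not a value from the resource guide.

```python
# Illustrative scale calculation for an Earth-Moon classroom model.
# The astronomical values are standard figures, not taken from the activity guide.
EARTH_DIAMETER_KM = 12_742
MOON_DIAMETER_KM = 3_474
EARTH_MOON_DISTANCE_KM = 384_400

def scaled_model(earth_model_cm: float) -> dict:
    """Scale the Moon's size and distance to a chosen model-Earth diameter."""
    scale = earth_model_cm / EARTH_DIAMETER_KM  # cm of model per km of reality
    return {
        "moon_diameter_cm": MOON_DIAMETER_KM * scale,
        "separation_cm": EARTH_MOON_DISTANCE_KM * scale,
    }

m = scaled_model(10.0)  # a hypothetical 10 cm Play-doh Earth
print(f"Moon: {m['moon_diameter_cm']:.1f} cm across, "
      f"{m['separation_cm']:.0f} cm away")
# prints: Moon: 2.7 cm across, 302 cm away
```

The surprise most groups hit is exactly this output: at any scale, the Moon sits about 30 Earth-diameters away, much farther than most learners guess.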
10. Protein structure modeling with MODELLER.
Science.gov (United States)
Webb, Benjamin; Sali, Andrej
2014-01-01
Genome sequencing projects have resulted in a rapid increase in the number of known protein sequences. In contrast, only about one-hundredth of these sequences have been characterized at atomic resolution using experimental structure determination methods. Computational protein structure modeling techniques have the potential to bridge this sequence-structure gap. In this chapter, we present an example that illustrates the use of MODELLER to construct a comparative model for a protein with unknown structure. Automation of a similar protocol has resulted in models of useful accuracy for domains in more than half of all known protein sequences. PMID:24573470
11. OSPREY Model
Energy Technology Data Exchange (ETDEWEB)
Veronica J. Rutledge
2013-01-01
The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. 
Customers will be given access to OSPREY to use and evaluate the model.
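OSPREY itself is a MOOSE-based dispersed-plug-flow code, so the following is only a stand-in: a hedged Python sketch of the classic Thomas model, which produces the same kind of S-shaped breakthrough curve from bed capacity, feed concentration, and flow rate. All parameter values below are hypothetical and are not krypton data from the report.

```python
import math

def thomas_breakthrough(t_s, k_th, q0, m_bed, c0, flow):
    """Effluent-to-feed ratio C/C0 at time t_s (s) from the Thomas model.

    k_th: rate constant (m^3 mol^-1 s^-1), q0: bed capacity (mol/kg),
    m_bed: sorbent mass (kg), c0: feed concentration (mol/m^3),
    flow: volumetric flow (m^3/s). All values used below are hypothetical.
    """
    x = (k_th / flow) * (q0 * m_bed - c0 * flow * t_s)
    if x > 700.0:            # avoid overflow far before breakthrough
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

# Sketch a breakthrough curve and read off the 5% breakthrough time.
params = dict(k_th=5e-3, q0=1.0, m_bed=1.0, c0=1.0, flow=1e-4)
curve = [(t, thomas_breakthrough(t, **params)) for t in range(0, 20001, 2000)]
t_break = next(t for t, c in curve if c >= 0.05)
```

As in OSPREY, the breakthrough time is what sizes the column: a bed is sized so that the service period ends before `t_break` for the constituent of interest.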
12. Complete identification by the particle-rotor model of 153Gd states up to 1 MeV
International Nuclear Information System (INIS)
Measurements performed at the Institut Laue-Langevin in Grenoble regarding gamma rays and conversion electrons following thermal-neutron capture in 152Gd, together with measurements of 2 keV neutron capture in the same nucleus at the High Flux Reactor in Brookhaven, have resulted in a 100-level 153Gd scheme. For some 200 transitions in 153Gd conversion coefficients have been calculated. This enabled the determination of transition multipolarities and spin and/or parity restrictions for many levels.
13. Linear Models
CERN Document Server
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
14. Quark models
International Nuclear Information System (INIS)
This paper invites experimenters to consider the wide variety of tests suggested by the new aspects of quark models since the discovery of charm and beauty, and nonrelativistic models. Colors and flavours are counted and combined into hadrons. The current quark zoo is summarized. Models and theoretical background are studied under: qualitative QCD: strings and bags, potential models, relativistic effects, electromagnetic transitions, gluon emissions, and single quark transition descriptions. Hadrons containing quarks known before 1974 (i.e. that can be made of ''light'' quarks u, d, and s) are treated in Section III, while those containing charmed quarks and beauty (b) quarks are discussed in Section IV. Unfolding the properties of the sixth quark from information on its hadrons is seen as a future application of the methods used in this study
15. Programming models
Energy Technology Data Exchange (ETDEWEB)
Daniel, David J [Los Alamos National Laboratory; Mc Pherson, Allen [Los Alamos National Laboratory; Thorp, John R [Los Alamos National Laboratory; Barrett, Richard [SNL; Clay, Robert [SNL; De Supinski, Bronis [LLNL; Dube, Evi [LLNL; Heroux, Mike [SNL; Janssen, Curtis [SNL; Langer, Steve [LLNL; Laros, Jim [SNL
2011-01-14
A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.
16. Modeling Arcs
CERN Document Server
Insepov, Zeke; Veitzer, Seth; Mahalingam, Sudhakar
2011-01-01
Although vacuum arcs were first identified over 110 years ago, they are not yet well understood. We have since developed a model of breakdown and gradient limits that tries to explain, in a self-consistent way: arc triggering, plasma initiation, plasma evolution, surface damage and gradient limits. We use simple PIC codes for modeling plasmas, molecular dynamics for modeling surface breakdown and surface damage, and mesoscale surface thermodynamics and finite element electrostatic codes to evaluate surface properties. Since any given experiment seems to have more variables than data points, we have tried to consider a wide variety of arcing (rf structures, e-beam welding, laser ablation, etc.) to help constrain the problem, and concentrate on common mechanisms. While the mechanisms can be comparatively simple, modeling can be challenging.
17. Coupling of numerical methods for the forward problem in Magneto- and Electro-Encephalography
OpenAIRE
Olivi, Emmanuel
2011-01-01
Electro- and Magneto-Encephalography are precious tools for studying brain activity, notably due to their time resolution and their non invasive nature. Acquisitions are done on the exterior of the head (scalp electrodes for EEG, and magnetometers for MEG); in order to recover the sources responsible of the measured signal, an inverse problem must be solved, for which accurate solutions of the forward problem must be available. This requires a good modeling of the head tissues, and an appropr...
18. Environmental modeling
CERN Document Server
Holzbecher, Ekkehard
2012-01-01
The book has two aims: to introduce basic concepts of environmental modelling and to facilitate the application of the concepts using modern numerical tools such as MATLAB. It is targeted at all natural scientists dealing with the environment: process and chemical engineers, physicists, chemists, biologists, biochemists, hydrogeologists, geochemists and ecologists. MATLAB was chosen as the major computer tool for modeling, firstly because it is unique in its capabilities, and secondly because it is available in most academic institutions, in all universities and in the research departments of
19. Quasimolecular modelling
CERN Document Server
Greenspan, Donald
1991-01-01
In this book the author has tried to apply "a little imagination and thinking" to modelling dynamical phenomena from a classical atomic and molecular point of view. Nonlinearity is emphasized, as are phenomena which are elusive from the continuum mechanics point of view. FORTRAN programs are provided in the Appendices.
20. Groundwater Model
Science.gov (United States)
In this activity, students build a model to demonstrate how aquifers are formed and ground water becomes polluted. For younger students, the teacher can perform this activity as a demonstration, or older students can perform it themselves. A materials list, instructions, and extension activities are provided.
1. Why Model?
Directory of Open Access Journals (Sweden)
Olaf Wolkenhauer
2014-01-01
Full Text Available Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics is going to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of gene and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components, compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question "Why model?"
2. Why model?
Science.gov (United States)
Wolkenhauer, Olaf
2014-01-01
Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics is going to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of gene and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components, compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question "Why model?" PMID:24478728
3. Marshmallow Models
Science.gov (United States)
Lawrence Hall of Science
2010-01-01
No glue is needed for learners of any age to become marshmallow architects or engineers. Using marshmallows and water (and maybe edible decorations like peanut butter, pretzels, gumdrops, etc.), learners wet a few marshmallows at a time and stick them together bit by bit to construct whatever models they want.
OpenAIRE
Vitolins, Valdis; Kalnins, Audris
2003-01-01
Business concepts are studied using a metamodel-based approach, using UML 2.0. The Notation Independent Business concepts metamodel is introduced. The approach offers a mapping between different business modeling notations which could be used for bridging BM tools and boosting the MDA approach.
5. Model thinking.
Science.gov (United States)
Salvage, Jane
Nancy Roper, Win Logan and Alison Tierney published their ground-breaking model of nursing in 1980, sparking the evolution of nursing theory from staid, biomedical thinking to an individualised, independent approach. Jane Salvage looks back at the lasting impact their research had on her and the profession as whole. PMID:16425763
6. Biotran model
International Nuclear Information System (INIS)
The BIOTRAN model was developed at Los Alamos to help predict short- and long-term consequences to man from releases of radionuclides into the environment. It is a dynamic model that simulates on a daily and yearly basis the flux of biomass, water, and radionuclides through terrestrial and aquatic ecosystems. Biomass, water, and radionuclides are driven within the ecosystems by climate variables stochastically generated by BIOTRAN each simulation day. The climate variables influence soil hydraulics, plant growth, evapotranspiration, and particle suspension and deposition. BIOTRAN has 22 different plant growth strategies for simulating various grasses, shrubs, trees, and crops. Ruminants and humans are also dynamically simulated by using the simulated crops and forage as intake for user-specified diets. BIOTRAN has been used at Los Alamos for long-term prediction of health effects to populations following potential accidental releases of uranium and plutonium. Newly developed subroutines are described: a human dynamic physiological and metabolic model; a soil hydrology and irrigation model; limnetic nutrient and radionuclide cycling in fresh-water lakes. 7 references
7. Criticality Model
International Nuclear Information System (INIS)
The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method.
The criticality computational method will be used for evaluating the criticality potential of configurations of fissionable materials (in-package and external to the waste package) within the repository at Yucca Mountain, Nevada for all waste packages/waste forms. The criticality computational method is also applicable to preclosure configurations. The criticality computational method is a component of the methodology presented in ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003). How the criticality computational method fits in the overall disposal criticality analysis methodology is illustrated in Figure 1 (YMP 2003, Figure 3). This calculation will not provide direct input to the total system performance assessment for license application. It is to be used as necessary to determine the criticality potential of configuration classes as determined by the configuration probability analysis of the configuration generator model (BSC 2003a)
8. Molecular Modelling
Directory of Open Access Journals (Sweden)
Aarti Sharma
2009-12-01
Full Text Available The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease harmful side effects and increase potency. This article provides a detailed description of the techniques employed in molecular modelling. Molecular modelling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.
9. Modeling Overstock
OpenAIRE
Fernandes, Rui; Gouveia, Borges; Pinho, Carlos
2010-01-01
Two main problems have been emerging in supply chain management: the increasing pressure to reduce working capital and the growing variety of products. Most of the popular indicators have been developed based on a controlled environment. A new indicator is now proposed, based on the uncertainty of the demand, the flexibility of the supply chains, the evolution of the products lifecycle and the fulfillment of a required service level. The model to support the indicator will be developed wit...
10. Nuclear Models
International Nuclear Information System (INIS)
The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). Because of the above reasons, specific nuclear many-body models have been devised, each of which sheds light on some selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.
11. Model Well
Science.gov (United States)
Twin Cities Public Television, Inc.
2008-01-01
In this quick activity about pollutants and groundwater (page 2 of PDF), learners build a model well with a toilet paper tube. Learners use food coloring to simulate pollutants and observe how they can be carried by groundwater and eventually enter water sources such as wells, rivers, and streams. This activity is associated with nanotechnology and relates to linked video, DragonflyTV Nano: Water Clean-up.
12. Modelling tsunamis
International Nuclear Information System (INIS)
We doubt the relevance of soliton theory to the modelling of tsunamis, and present a case in support of an alternative view. Although the shallow-water equations do provide, we believe, an appropriate basis for this phenomenon, an asymptotic analysis of the solution for realistic variable depths, and for suitable background flows, is essential for a complete understanding of this phenomenon. In particular we explain how a number of tsunami waves can arrive at a shoreline. (letter to the editor)
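The shallow-water basis the authors defend fixes the long-wave phase speed at c = sqrt(g·h), which already explains the striking contrast between open-ocean and nearshore tsunami speeds. A quick illustrative Python check; the depths are round example values, not figures from the letter.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed(depth_m):
    """Long-wave (shallow-water) phase speed c = sqrt(g*h)."""
    return math.sqrt(G * depth_m)

for h in (4000.0, 100.0, 10.0):   # open ocean, shelf, nearshore (example depths)
    print(f"depth {h:6.0f} m -> {phase_speed(h):5.1f} m/s")
```

In 4000 m of water the wave travels at roughly 198 m/s (about 710 km/h), slowing to around 10 m/s in 10 m of water; it is this depth dependence over realistic bathymetry that the asymptotic analysis in the letter tracks.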
13. Gas Model
Science.gov (United States)
The Exploratorium
2013-01-30
This highly visual model demonstrates the atomic theory of matter which states that a gas is made up of tiny particles of atoms that are in constant motion, smashing into each other. Balls, representing molecules, move within a cage container to simulate this phenomenon. A hair dryer provides the heat to simulate the heating and cooling of gas: the faster the balls are moving, the hotter the gas. Learners observe how the balls move at a slower rate at lower "temperatures."
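The exhibit's core claim, that temperature is molecular motion, is quantified by kinetic theory: the root-mean-square speed is v = sqrt(3kT/m). A short illustrative Python check for nitrogen; the choice of gas is an assumption for the example, not something specified by the exhibit.

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27   # atomic mass unit, kg

def v_rms(temp_k, mass_amu):
    """Root-mean-square molecular speed from kinetic theory: v = sqrt(3kT/m)."""
    return math.sqrt(3.0 * K_B * temp_k / (mass_amu * AMU))

cold, hot = v_rms(300.0, 28.0), v_rms(600.0, 28.0)   # N2 at 300 K and 600 K
print(f"N2: {cold:.0f} m/s at 300 K -> {hot:.0f} m/s at 600 K")
# prints: N2: 517 m/s at 300 K -> 731 m/s at 600 K
```

Doubling the absolute temperature raises the speed by only sqrt(2), mirroring what learners see in the cage: hotter gas means faster, but not proportionally faster, balls.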
14. Modelling biodiversity
OpenAIRE
Halkos, George
2010-01-01
This study uses a sample of 71 countries and nonparametric quantile and partial regressions to model the number of threatened species (reptiles, mammals, fish, birds, trees, plants) in relation to various economic and environmental variables (GDPc, CO2 emissions, agricultural production, energy intensity, protected areas, population and income inequality). From the analysis, and due to the highly asymmetric distribution of the dependent variables, it seems that a linear regression is not adequate and...
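Quantile regression of the kind used above rests on the asymmetric "pinball" (check) loss rather than squared error, which is what makes it suitable for the heavy-tailed distributions the abstract mentions. The following sketch is illustrative only (not the paper's estimation code; all names are invented): it shows that minimizing the pinball loss over constants recovers a sample quantile.

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Check loss: underestimates are weighted tau, overestimates (1 - tau)."""
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

# The minimizer of the pinball loss over constants is the tau-th sample quantile.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # an outlier barely moves quantiles
tau = 0.5
candidates = np.linspace(0, 100, 10001)
losses = [pinball_loss(y, q, tau) for q in candidates]
best = candidates[int(np.argmin(losses))]
print(best)  # ~ 3.0, the sample median; the mean (22) would chase the outlier
```

Varying `tau` traces out different conditional quantiles, which is how the asymmetry of the response distribution is captured.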
15. Supersymmetric models
International Nuclear Information System (INIS)
This lecture was given at the KEK Summer School on August 3-6, 1993 by Professor N. Sakai. All the available experimental data at low energy can be adequately described by the standard model with the SU(3) x SU(2) x U(1) gauge group. The three different gauge coupling constants originate from the three different interactions, namely, the strong, weak and electromagnetic interactions. The three interactions described by the three different gauge groups can be truly unified if a single simple gauge group describing all three interactions is chosen. Even if the grand unified theory is not accepted, the existence of the gravitational interaction is certain. There are only two options to explain the gauge hierarchy, namely, the technicolor model and supersymmetry. As an introduction to supersymmetry, spinors and Grassmann numbers, supertransformations, unitary representations, the chiral scalar superfield and supersymmetric Lagrangian field theory are explained. Regarding the supersymmetric SU(3) x SU(2) x U(1) model, Yukawa couplings and particle content are described. It should be noted that the Higgsino (the chiral fermion associated with the Higgs scalar) in general introduces an anomaly in the gauge currents. The simplest way out of such an anomaly problem is to introduce Higgsino doublets in pairs. (K.I.)
16. Ozone modeling
International Nuclear Information System (INIS)
Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO2), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically, nitrogen oxides (NOx) and non-methane organic compounds (NMOC), in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane). Additionally, ozone formation is a function of the ratio of NMOC concentrations to NOx concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e. background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the ratio of the NMOC and NOx coordinates of the point, known as the NMOC/NOx ratio. Results obtained by the described model are presented.
17. Animal models.
Science.gov (United States)
Walker, Ellen A
2010-01-01
As clinical studies reveal that chemotherapeutic agents may impair several different cognitive domains in humans, the development of preclinical animal models is critical to assess the degree of chemotherapy-induced learning and memory deficits and to understand the underlying neural mechanisms. In this chapter, the effects of various cancer chemotherapeutic agents in rodents on sensory processing, conditioned taste aversion, conditioned emotional response, passive avoidance, spatial learning, cued memory, discrimination learning, delayed-matching-to-sample, novel-object recognition, electrophysiological recordings and autoshaping are reviewed. It appears at first glance that the effects of the cancer chemotherapy agents in these many different models are inconsistent. However, a literature is emerging that reveals subtle or unique changes in sensory processing, acquisition, consolidation and retrieval that are dose- and time-dependent. As more studies examine cancer chemotherapeutic agents alone and in combination during repeated treatment regimens, the animal models will become more predictive tools for the assessment of these impairments and the underlying neural mechanisms. The eventual goal is to collect enough data to enable physicians to make informed choices about therapeutic regimens for their patients and discover new avenues of alternative or complementary therapies that reduce or eliminate chemotherapy-induced cognitive deficits. PMID:20738016
18. Model checking
Science.gov (United States)
Dill, David L.
1995-01-01
Automatic formal verification methods for finite-state systems, also known as model checking, successfully reduce labor costs since they are mostly automatic. Model checkers explicitly or implicitly enumerate the reachable state space of a system, whose behavior is described implicitly, perhaps by a program or a collection of finite automata. Simple properties, such as mutual exclusion or absence of deadlock, can be checked by inspecting individual states. More complex properties, such as lack of starvation, require search for cycles in the state graph with particular properties. Specifications to be checked may consist of built-in properties, such as deadlock or 'unspecified receptions' of messages; another program or implicit description, to be compared with a simulation, bisimulation, or language-inclusion relation; or an assertion in one of several temporal logics. Finite-state verification tools are beginning to have a significant impact in commercial designs. There are many success stories of verification tools finding bugs in protocols or hardware controllers. In some cases, these tools have been incorporated into design methodology. Research in finite-state verification has been advancing rapidly, and is showing no signs of slowing down. Recent results include probabilistic algorithms for verification, exploitation of symmetry and independent events, and the use of symbolic representations for Boolean functions and systems of linear inequalities. One of the most exciting areas for further research is the combination of model checking with theorem-proving methods.
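The explicit state-space enumeration described above can be illustrated with a minimal breadth-first reachability search. This is a toy sketch of the general idea, not any particular verification tool, and the function names are invented for illustration:

```python
from collections import deque

def reachable_states(initial, successors):
    """Enumerate the reachable state space by breadth-first search."""
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def find_deadlocks(initial, successors):
    """A state is a deadlock if it is reachable and has no successors."""
    return {s for s in reachable_states(initial, successors) if not successors(s)}

# Toy system: a counter that can step up to 3 and then stops (a "deadlock").
succ = lambda n: [n + 1] if n < 3 else []
print(sorted(reachable_states(0, succ)))  # [0, 1, 2, 3]
print(find_deadlocks(0, succ))            # {3}
```

Real tools replace the explicit `seen` set with symbolic representations (e.g. BDDs) precisely because realistic state spaces are far too large to enumerate this way.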
19. Model Awal Dan Model Klasik Struktur Informasi
OpenAIRE
Widayati, Dwi
2010-01-01
This paper describes early models of information structure and classical models of information structure. Early models of information structure consist of (1) subject-predicate structure, (2) the early psychological model, (3) the communicative model, and (4) linguistics, psychology, and information structure. Classical models begin with the Prague school, Halliday and the American structuralists, Chafe on givenness, and Chomsky on focus and presupposition. The most characteristic feat...
20. Functional connectivity in slow-wave sleep: identification of synchronous cortical activity during wakefulness and sleep using time series analysis of electroencephalographic data.
Science.gov (United States)
Langheim, Frederick J P; Murphy, Michael; Riedner, Brady A; Tononi, Giulio
2011-12-01
Sleep is a behavioral state ideal for studying functional connectivity because it minimizes many sources of between-subject variability that confound waking analyses. This is particularly important for potential connectivity studies in mental illness where cognitive ability, internal milieu and active psychotic symptoms can vary widely across subjects. We, therefore, sought to adapt techniques applied to magnetoencephalography for use in high-density electroencephalography (EEG), the gold-standard in brain-recording methods during sleep. Autoregressive integrative moving average modeling was used to reduce spurious correlations between recording sites (electrodes) in order to identify functional networks. We hypothesized that identified network characteristics would be similar to those found with magnetoencephalography, and would demonstrate sleep stage-related differences in a control population. We analysed 60-s segments of low-artifact data from seven healthy human subjects during wakefulness and sleep. EEG analysis of eyes-closed wakefulness revealed widespread nearest-neighbor positive synchronous interactions, similar to magnetoencephalography, though less consistent across subjects. Rapid eye movement sleep demonstrated positive synchronous interactions akin to wakefulness but weaker. Slow-wave sleep (SWS), instead, showed strong positive interactions in a large left fronto-temporal-parietal cluster markedly more consistent across subjects. Comparison of connectivity from early SWS to SWS from a later sleep cycle indicated sleep-related reduction in connectivity in this region. The consistency of functional connectivity during SWS within and across subjects suggests this may be a promising technique for comparing functional connectivity between mental illness and health. PMID:21281369
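The autoregressive-modeling step described above can be sketched as follows: fit an AR model to each channel and correlate the residuals, so that shared autocorrelation no longer inflates between-channel correlations. This is a simplified stand-in for the paper's ARIMA prewhitening (plain AR fit via least squares, invented function names), not the authors' pipeline:

```python
import numpy as np

def ar_residuals(x, p=2):
    """Fit an AR(p) model by least squares and return the one-step residuals.

    Removing a channel's own temporal structure ("prewhitening") keeps
    cross-channel correlations from being inflated by autocorrelation.
    """
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

def prewhitened_correlation(x, y, p=2):
    """Correlate the AR residuals of two channels."""
    rx, ry = ar_residuals(x, p), ar_residuals(y, p)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
e1, e2 = rng.standard_normal(500), rng.standard_normal(500)
a, b = np.zeros(500), np.zeros(500)
for t in range(1, 500):            # two independent, strongly autocorrelated channels
    a[t] = 0.95 * a[t - 1] + e1[t]
    b[t] = 0.95 * b[t - 1] + e2[t]
print(abs(np.corrcoef(a, b)[0, 1]))        # raw correlation can be sizeable
print(abs(prewhitened_correlation(a, b)))  # residual correlation should be near zero
```

The residual series are much closer to white noise than the raw channels, which is the property the correlation step depends on.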
1. Towards a Multi Business Model Innovation Model
DEFF Research Database (Denmark)
Lindgren, Peter; JØrgensen, Rasmus
2012-01-01
This paper studies the evolution of business model (BM) innovations related to a multi business model framework. The paper tries to answer the research questions: • What are the requirements for a multi business model innovation model (BMIM)? • What should a multi business model innovation model look like? Different generations of BMIMs are initially studied in the context of laying the baseline for what a next-generation multi BM innovation model (BMIM) should look like. All generations of models are analyzed with the purpose of comparing the characteristics and challenges of previous generations of BMIMs. On the basis of these results and case analyses, the paper concludes by proposing a framework for a multi BMIM.
2. Model Selection Principles in Misspecified Models
CERN Document Server
Lv, Jinchi
2010-01-01
Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...
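The decomposition mentioned at the end, a negative maximum (quasi-)log-likelihood plus penalties, is the common shape of information criteria such as AIC and BIC. The sketch below illustrates that generic "fit plus complexity penalty" structure for Gaussian linear models; it is not the paper's SIC formula, and all names are illustrative:

```python
import numpy as np

def gaussian_ic(y, X, penalty_per_param):
    """Generic information criterion: -2*max log-likelihood + penalty * (#params).

    For Gaussian errors, -2*max log-likelihood reduces to n*log(RSS/n)
    up to an additive constant shared by all candidate models.
    """
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                       # coefficients + error variance
    return n * np.log(rss / n) + penalty_per_param * k

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(200)   # truth is linear
X1 = np.column_stack([np.ones(200), x])              # linear candidate
X3 = np.column_stack([np.ones(200), x, x**2, x**3])  # needlessly cubic candidate
bic = lambda y, X: gaussian_ic(y, X, np.log(len(y))) # BIC penalty: log(n) per param
print(bic(y, X1), bic(y, X3))  # BIC should favor the simpler, true model
```

Swapping `np.log(len(y))` for `2.0` gives the AIC penalty; criteria differ only in how heavily they charge for extra parameters.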
3. CISNET lung models: Comparison of model assumptions and model structures
Science.gov (United States)
McMahon, Pamela M.; Hazelton, William; Kimmel, Marek; Clarke, Lauren
2012-01-01
Sophisticated modeling techniques can be powerful tools to help us understand the effects of cancer control interventions on population trends in cancer incidence and mortality. Readers of journal articles are however rarely supplied with modeling details. Six modeling groups collaborated as part of the National Cancer Institute’s Cancer Intervention and Surveillance Modeling Network (CISNET) to investigate the contribution of US tobacco control efforts towards reducing lung cancer deaths over the period 1975 to 2000. The models included in this monograph were developed independently and use distinct, complementary approaches towards modeling the natural history of lung cancer. The models used the same data for inputs and agreed on the design of the analysis and the outcome measures. This article highlights aspects of the models that are most relevant to similarities of or differences between the results. Structured comparisons can increase the transparency of these complex models. PMID:22882887
4. Modelling Sonoluminescence
CERN Document Server
Chodos, A; Chodos, Alan; Groff, Sarah
1999-01-01
In single-bubble sonoluminescence, a bubble trapped by a sound wave in a flask of liquid is forced to expand and contract; exactly once per cycle, the bubble emits a very sharp ($< 50 ps$) pulse of visible light. This is a robust phenomenon observable to the naked eye, yet the mechanism whereby the light is produced is not well understood. One model that has been proposed is that the light is "vacuum radiation" generated by the coupling of the electromagnetic fields to the surface of the bubble. In this paper, we simulate vacuum radiation by solving Maxwell's equations with an additional term that couples the field to the bubble's motion. We show that, in the static case originally considered by Casimir, we reproduce Casimir's result. In a simple purely time-dependent example, we find that an instability occurs and the pulse of radiation grows exponentially. In the more realistic case of spherically-symmetric bubble motion, we again find exponential growth in the context of a small-radius approximation.
5. Modeling sonoluminescence
International Nuclear Information System (INIS)
In single-bubble sonoluminescence, a bubble trapped by a sound wave in a flask of liquid is forced to expand and contract; exactly once per cycle, the bubble emits a very sharp (<50 ps) pulse of visible light. This is a robust phenomenon observable to the naked eye, yet the mechanism whereby the light is produced is not well understood. One model that has been proposed is that the light is "vacuum radiation" generated by the coupling of the electromagnetic fields to the surface of the bubble. In this paper, we simulate vacuum radiation by solving Maxwell's equations with an additional term that couples the field to the bubble's motion. We show that, in the static case originally considered by Casimir [Proc. K. Ned. Akad. Wet. 51, 793 (1948)], we reproduce Casimir's result. In a simple purely time-dependent example, we find that an instability occurs and the pulse of radiation grows exponentially. In the more realistic case of spherically symmetric bubble motion, we again find exponential growth in the context of a small-radius approximation. copyright 1999 The American Physical Society
6. The IMACLIM model; Le modele IMACLIM
Energy Technology Data Exchange (ETDEWEB)
NONE
2003-07-01
This document provides annexes to the IMACLIM model which give an updated description of IMACLIM, a model allowing the design of an evaluation tool for greenhouse gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)
7. I&C Modeling in SPAR Models
Energy Technology Data Exchange (ETDEWEB)
John A. Schroeder
2012-06-01
The Standardized Plant Analysis Risk (SPAR) models for the U.S. commercial nuclear power plants currently have very limited instrumentation and control (I&C) modeling [1]. Most of the I&C components in the operating plant SPAR models are related to the reactor protection system. This was identified as a finding during the industry peer review of SPAR models. While the Emergency Safeguard Features (ESF) actuation and control system was incorporated into the Peach Bottom Unit 2 SPAR model in a recent effort [2], various approaches to expend resources for detailed I&C modeling in other SPAR models are investigated.
8. Concept Modeling vs. Data modeling in Practice
DEFF Research Database (Denmark)
Madsen, Bodil Nistrup; Erdman Thomsen, Hanne
2015-01-01
This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models. We also show how to map from the various elements in the terminological ontology to elements in the data models, and explain the differences between the models. Finally the usefulness of terminological ontologies as a prerequisite for IT development and data modeling is illustrated with examples from the Danish public sector (a user interface for drug prescription and a data model for food control).
9. CISNET lung models: Comparison of model assumptions and model structures
OpenAIRE
Mcmahon, Pamela M.; Hazelton, William; Kimmel, Marek; Clarke, Lauren
2012-01-01
Sophisticated modeling techniques can be powerful tools to help us understand the effects of cancer control interventions on population trends in cancer incidence and mortality. Readers of journal articles are however rarely supplied with modeling details.
10. Example of a stable wormhole in general relativity
OpenAIRE
Bronnikov, K. A.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.
2013-01-01
We study a static, spherically symmetric wormhole model whose metric coincides with that of the so-called Ellis wormhole but the material source of gravity consists of a perfect fluid with negative density and a source-free radial electric or magnetic field. For a certain class of fluid equations of state, it has been shown that this wormhole model is linearly stable under both spherically symmetric perturbations and axial perturbations of arbitrary multipolarity. A similar ...
11. Nucleon polarization and the nuclear charge operator
International Nuclear Information System (INIS)
Effects of nucleon polarization on the nuclear charge operator have been evaluated in a constituent quark model. At momentum transfer q ≈ 4 fm^-1, monopole, dipole and quadrupole excitations are of equal importance. In a harmonic oscillator model for 3He, all multipolarities give negative contributions, leading to an overall contribution comparable to the relativistic pair effect. The influence of realistic wave functions, coupling constants and off-shell form factors is discussed. (orig.)
12. The effect of thermal boundary conditions on dynamos driven by internal heating
OpenAIRE
Hori, K.; Wicht, J.; Christensen, U. R.
2010-01-01
The early dynamos of Mars and Earth probably operated without an inner core being present. They were thus exclusively driven by secular cooling and radiogenic heating, which can both be modeled by homogeneously distributed heat sources. Some previous dynamo simulations that explored this driving mode found dipole-dominated magnetic fields, while others reported multipolar configurations. Since these models differed both in the employed outer thermal boundary conditions and i...
13. Armas estratégicas e poder no sistema internacional: o advento das armas de energia direta e seu impacto potencial sobre a guerra e a distribuição multipolar de capacidades / Strategic weapons and power in international system: the arise of direct energy weapons and their potential impact over the war and multipolar distribution of capabilities
Scientific Electronic Library Online (English)
Fabrício Schiavo, Ávila; José Miguel, Martins; Marco, Cepik.
2009-04-01
14. Armas estratégicas e poder no sistema internacional: o advento das armas de energia direta e seu impacto potencial sobre a guerra e a distribuição multipolar de capacidades Strategic weapons and power in international system: the arise of direct energy weapons and their potential impact over the war and multipolar distribution of capabilities
Directory of Open Access Journals (Sweden)
Fabrício Schiavo Ávila
2009-04-01
15. Bayesian model comparison of solar radiation models
Energy Technology Data Exchange (ETDEWEB)
Lauret, Philippe; Riviere, Carine [Lab. de Physique du Batiment et des Systemes, Saint-Denis (France)
2008-07-01
In this paper, we propose a new statistical method: the Bayesian Model Comparison (BMC) method for selecting an adequate hourly diffuse fraction correlation. Six models are investigated and compared according to the BMC method. The selection of the best model is based on a Bayesian criterion called the Deviance Information Criterion (DIC). In this article, we demonstrate the usefulness of the DIC criterion in the model selection process and we issue a caution regarding the selection of a model with standard statistical methods. The aim of this paper is also to introduce the DIC to the solar radiation modeling community. (orig.)
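The Deviance Information Criterion used for the model comparison above can be computed directly from posterior draws: with deviance D(θ) = −2 log L(θ), DIC = D̄ + pD where pD = D̄ − D(θ̄) estimates the effective number of parameters. A minimal sketch follows (toy data and a hand-written stand-in for real MCMC output; not the authors' code):

```python
import numpy as np

def dic(theta_samples, log_lik):
    """Deviance Information Criterion from posterior draws.

    D(theta) = -2 * log-likelihood; DIC = Dbar + pD with pD = Dbar - D(theta_bar).
    Lower DIC indicates a better trade-off between fit and effective complexity.
    """
    D = np.array([-2.0 * log_lik(t) for t in theta_samples])
    Dbar = D.mean()
    pD = Dbar - (-2.0 * log_lik(np.mean(theta_samples, axis=0)))
    return Dbar + pD, pD

# Toy example: Gaussian data with unknown mean (sigma = 1 known).
data = np.array([0.8, 1.1, 0.9, 1.3, 0.7])
log_lik = lambda mu: -0.5 * np.sum((data - mu) ** 2)     # up to an additive constant
posterior_draws = np.array([0.9, 1.0, 1.1, 0.95, 1.05])  # stand-in for MCMC output
dic_value, p_d = dic(posterior_draws, log_lik)
print(dic_value, p_d)  # pD > 0: posterior spread contributes effective parameters
```

Comparing `dic_value` across candidate correlations, as the paper does, simply means selecting the model with the lowest DIC.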
16. Cognitive models embedded in system simulation models
International Nuclear Information System (INIS)
If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context.
17. Models for preequilibrium decay
International Nuclear Information System (INIS)
After a qualitative discussion of the most substantial features of preequilibrium decay, four models related to this mechanism have been presented: the exciton model, the Harp-Miller-Berne (H.M.B.) model, the hybrid model, and the geometry-dependent hybrid model (G.D.H.). This includes: formulation of the model, comparisons with experimental data, associated computer codes, and, finally, intercomparisons of models. (author)
18. Orthogonal Meta-Modeling
OpenAIRE
Katharina Gorlach; Frank Leymann
2014-01-01
This article introduces meta-modeling hierarchies in addition to the conventional meta-modeling hierarchy in a model-driven architecture. Additional hierarchies are introduced orthogonal to the conventional meta-modeling hierarchy for an appropriate correlation of information across the combined hierarchies. In particular, orthogonal meta-modeling enables the grouping of models on the same conventional meta-modeling layer based on additional semantic dependencies. For the enhancement of conventional m...
19. QSMSR QUALITATIVE MODEL
Directory of Open Access Journals (Sweden)
Tahir Abdullah
2012-02-01
Software architecture design and requirement engineering are core and independent areas of engineering. A great deal of research, education and practice is devoted to requirement elicitation and its refinement, yet it remains a major issue in engineering. The QSMSR model acts as a bridge between requirements and design, since there is a huge gap between the two areas of software architecture and requirement engineering. The QSMSR model divides into two sub-models, the qualitative model and the principal model; in this research we focus on the qualitative model, which further divides into two sub-models, the fabricated model and the classified model. The classified model forms sub-groups of roles and matches them with components. The fabricated model links the QSMSR principal model to an architecture design. In the end it provides the QSMSR architecture model of the system as output.
20. Model Checking of Boolean Process Models
OpenAIRE
Schneider, Christoph; Wehler, Joachim
2011-01-01
In the field of Business Process Management formal models for the control flow of business processes have been designed since more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are ...
1. Nonlinear empirical modeling using local PLS models
OpenAIRE
Aarhus, Lars Thore
1994-01-01
This thesis proposes some new iterative local modeling algorithms for the multivariate approximation problem (mapping from R^P to R). Partial Least Squares Regression (PLS) is used as the local linear modeling technique. The local models are interpolated by means of normalized Gaussian weight functions, providing a smooth total nonlinear model. The algorithms are tested on both artificial and real-world sets of data, yielding good predictions compared to other linear and nonlinear techniques.
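The interpolation scheme described, local linear models blended by normalized Gaussian weight functions, can be sketched as below. For brevity the local PLS fits are replaced by weighted ordinary least squares, so this is an illustration of the blending idea only, with invented function names:

```python
import numpy as np

def gaussian_weights(x, centers, width):
    """Normalized Gaussian validity functions: the weights at each x sum to one."""
    w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def fit_local_models(x, y, centers, width):
    """Fit one weighted linear model (intercept, slope) per operating regime.

    The thesis uses PLS for the local fits; plain weighted least squares is
    substituted here to keep the sketch self-contained.
    """
    W = gaussian_weights(x, centers, width)
    models = []
    for j in range(len(centers)):
        A = np.column_stack([np.ones_like(x), x]) * np.sqrt(W[:, j])[:, None]
        b = y * np.sqrt(W[:, j])
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        models.append(coef)
    return models

def predict(x, models, centers, width):
    """Blend local predictions with the same normalized weights -> smooth model."""
    W = gaussian_weights(x, centers, width)
    local = np.column_stack([m[0] + m[1] * x for m in models])
    return (W * local).sum(axis=1)

x = np.linspace(-2, 2, 100)
y = np.tanh(x)                              # a smooth nonlinear target
centers = np.array([-1.5, 0.0, 1.5])
models = fit_local_models(x, y, centers, 0.8)
yhat = predict(x, models, centers, 0.8)
print(np.max(np.abs(yhat - y)))             # small: the blended model tracks tanh
```

Because the weights are normalized, the blended prediction varies smoothly between regimes instead of jumping at regime boundaries.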
2. BIM Modeling For Contractors - Improving Model Takeoffs
OpenAIRE
André Monteiro; João Pedro Poças Martins
2012-01-01
As industry stakeholders investigate on the best uses for Building Information Modeling (BIM), its shortcomings begin to be realized, the need for modeling parameterization becomes more evident and methods to better approach these issues developed. Automatic quantity takeoff is one of the most important BIM-based features. Research conducted by the authors shows that in order to be successfully used, quantity takeoff requires specific model definition. Adapting a model for qua...
3. Automated data model evaluation
International Nuclear Information System (INIS)
The modeling process is an essential phase within information systems development and implementation. This paper presents methods and techniques for the analysis and evaluation of data model correctness. Recent methodologies and development results regarding automation of the process of model correctness analysis, and its relations with ontology tools, have been presented. Key words: Database modeling, Data model correctness, Evaluation
OpenAIRE
Markovic, Ivan
2010-01-01
This book presents a process-oriented business modeling framework based on semantic technologies. The framework consists of modeling languages, methods, and tools that allow for semantic modeling of business motivation, business policies and rules, and business processes. Quality of the proposed modeling framework is evaluated based on the modeling content of SAP Solution Composer and several real-world business scenarios.
5. China model: Energy modeling the modern dynasty
Energy Technology Data Exchange (ETDEWEB)
Shaw, J.
1996-05-01
In this paper a node-based microeconomic analysis is used to model the Chinese energy system. This model is run across multiple periods employing Lagrangian Relaxation techniques to achieve general equilibrium. Later, carbon dioxide emissions are added and the model is run to answer the question, "How can greenhouse gas emissions be reduced?"
6. Coalgebraic models for combinatorial model categories
OpenAIRE
Ching, Michael; Riehl, Emily
2014-01-01
We show that the category of algebraically cofibrant objects in a combinatorial and simplicial model category A has a model structure that is left-induced from that on A. In particular it follows that any presentable model category is Quillen equivalent (via a single Quillen equivalence) to one in which all objects are cofibrant.
7. Solicited abstract: Global hydrological modeling and models
Science.gov (United States)
Xu, Chong-Yu
2010-05-01
The origins of rainfall-runoff modeling in the broad sense can be found in the middle of the 19th century, arising in response to three types of engineering problems: (1) urban sewer design, (2) land reclamation drainage systems design, and (3) reservoir spillway design. Since then numerous empirical, conceptual and physically-based models have been developed, including event-based models using the unit hydrograph concept, Nash's linear reservoir models, the HBV model, TOPMODEL, the SHE model, etc. From the late 1980s, the evolution of global and continental-scale hydrology has placed new demands on hydrologic modellers. The macro-scale hydrological (global and regional scale) models were developed on the basis of the following motivations (Arnell, 1999). First, for a variety of operational and planning purposes, water resource managers responsible for large regions need to estimate the spatial variability of resources over large areas, at a spatial resolution finer than can be provided by observed data alone. Second, hydrologists and water managers are interested in the effects of land-use and climate variability and change over a large geographic domain. Third, there is an increasing need of using hydrologic models as a base to estimate point and non-point sources of pollution loading to streams. Fourth, hydrologists and atmospheric modellers have perceived weaknesses in the representation of hydrological processes in regional and global climate models, and developed global hydrological models to overcome the weaknesses of global climate models. Considerable progress in the development and application of global hydrological models has been achieved to date; however, large uncertainties still exist considering the model structure, including large-scale flow routing, parameterization, input data, etc.
This presentation will focus on global hydrological models, and the discussion includes (1) types of global hydrological models, (2) the procedure of global hydrological model development, (3) the state of the art of existing global hydrological models, and (4) challenges. Acknowledgment: Thanks to Lebing Gong, Elin Widén-Nilsson, and Sven Halldin of Uppsala University for the teamwork on global hydrological models.
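Among the conceptual models listed above, Nash's linear reservoir idea is simple enough to sketch: routing rainfall through a cascade of identical linear reservoirs (outflow proportional to storage) produces the smoothed, delayed hydrograph shape. This is a generic illustration with invented names, not any of the cited models:

```python
def nash_cascade(rainfall, n_reservoirs=3, k=2.0, dt=1.0):
    """Route a rainfall series through n identical linear reservoirs.

    Each reservoir obeys dS/dt = inflow - S/k (outflow proportional to
    storage), integrated here with a simple explicit Euler step. Chaining
    reservoirs smooths and delays the input -- the classic hydrograph shape.
    """
    storages = [0.0] * n_reservoirs
    outflow_series = []
    for p in rainfall:
        inflow = p
        for i in range(n_reservoirs):
            out = storages[i] / k
            storages[i] += (inflow - out) * dt
            inflow = out                      # outflow feeds the next reservoir
        outflow_series.append(inflow)
    return outflow_series

# A single rainfall pulse produces a delayed, attenuated runoff peak.
hydrograph = nash_cascade([10.0] + [0.0] * 29)
print(max(hydrograph), hydrograph.index(max(hydrograph)))
```

Mass is conserved up to what remains in storage: the summed outflow never exceeds the total rainfall input.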
8. Weather Forecast Models
Science.gov (United States)
Weather Forecast Models, from NOAA, provides links to sites posting output from many of its numerical models. These models attempt to simulate the state of the atmosphere at various times in the future.
9. Editor's Roundtable: Model behavior
Science.gov (United States)
Inez Liftig
2010-11-01
Models are manageable representations of objects, concepts, and phenomena, and are everywhere in science. Models are "thinking tools" for scientists and have always played a key role in the development of scientific knowledge. Models of the solar system,
10. Mathematics and Statistics Models
Science.gov (United States)
Developed by Bob MacKay, Clark College. What are mathematical and statistical models? These types of models are obviously related, but there are also real differences between them. Mathematical models grow out of ...
11. Dynamic vector hysteresis modeling
International Nuclear Information System (INIS)
The possibility of considering dynamic effects in three vector hysteresis models is investigated. The friction model of oriented Preisach operators which rotate due to the torque exerted by the external field, the coercive spheres model, the 3D analogue of the classical Preisach model, and a further collective model based on micromagnetic analogy are considered. Furthermore, the 'external' dynamic generalization of the static hysteresis models is introduced for the vector case
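The static scalar Preisach model that these vector and dynamic generalizations build on can be sketched as a grid of relay hysterons, each switching up when the field exceeds its α threshold and down when it falls below its β threshold (α ≥ β). A toy illustration (all names invented), showing that ascending and descending field sweeps give different outputs at the same field value:

```python
import numpy as np

class ScalarPreisach:
    """Classical scalar Preisach model: a grid of relay hysterons.

    A hysteron with thresholds (alpha >= beta) switches up when the field
    exceeds alpha and down when it drops below beta; in between it remembers
    its last state -- the source of hysteresis.
    """
    def __init__(self, n=20, h_max=1.0):
        a, b = np.meshgrid(np.linspace(-h_max, h_max, n),
                           np.linspace(-h_max, h_max, n))
        keep = a >= b                            # Preisach half-plane alpha >= beta
        self.alpha, self.beta = a[keep], b[keep]
        self.state = -np.ones(len(self.alpha))   # start negatively saturated

    def apply(self, h):
        self.state[h >= self.alpha] = 1.0
        self.state[h <= self.beta] = -1.0
        return self.state.mean()                 # normalized output

m = ScalarPreisach()
up = [m.apply(h) for h in np.linspace(-1, 1, 21)]    # ascending field sweep
down = [m.apply(h) for h in np.linspace(1, -1, 21)]  # descending sweep
print(up[10], down[10])  # different outputs at h = 0: the loop does not retrace
```

Replaying any field history through `apply` reproduces the memory effect; the dynamic models in the abstract add rate-dependent terms on top of this static kernel.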
12. Modeling of geothermal systems
Energy Technology Data Exchange (ETDEWEB)
Bodvarsson, G.S.; Pruess, K.; Lippmann, M.J.
1985-03-01
During the last decade the use of numerical modeling for geothermal resource evaluation has grown significantly, and new modeling approaches have been developed. In this paper we present a summary of the present status in numerical modeling of geothermal systems, emphasizing recent developments. Different modeling approaches are described and their applicability discussed. The various modeling tasks, including natural-state, exploitation, injection, multi-component and subsidence modeling, are illustrated with geothermal field examples. 99 refs., 14 figs.
13. Predictive Models for Music
OpenAIRE
Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy
2008-01-01
Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...
14. Practical Marginalized Multilevel Models
OpenAIRE
Griswold, Michael E.; Swihart, Bruce J.; Caffo, Brian S.; Zeger, Scott L.
2013-01-01
Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel model...
15. Survival modelling with frailty
OpenAIRE
Lynch, Joseph
2011-01-01
In the survival analysis literature, the standard model for data analysis is the semi-parametric Proportional Hazard (PH) model of Cox (1972). MacKenzie (1996) introduced the Generalised Time Dependent Logistic (GTDL) family of non-PH parametric survival models, which compete with Cox’s PH model. This thesis develops the GTDL model side-by-side with the PH Weibull model. In many datasets, some attributes that might be deemed relevant may not be available. The effect of ...
16. Assessing Financial Model Risk
OpenAIRE
Barrieu, Pauline; Scandolo, Giacomo
2013-01-01
Model risk has a huge impact on any risk measurement procedure and its quantification is therefore a crucial step. In this paper, we introduce three quantitative measures of model risk when choosing a particular reference model within a given class: the absolute measure of model risk, the relative measure of model risk and the local measure of model risk. Each of the measures has a specific purpose and so allows for flexibility. We illustrate the various notions by studying ...
17. Geologic Framework Model Analysis Model Report
International Nuclear Information System (INIS)
The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M and O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure l), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). 
The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the repository design. These downstream models include the hydrologic flow models and the radionuclide transport models. All the models and the repository design, in turn, will be incorporated into the Total System Performance Assessment (TSPA) of the potential radioactive waste repository block and vicinity to determine the suitability of Yucca Mountain as a host for the repository. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 2
18. Multilevel modeling using R
CERN Document Server
Finch, W Holmes; Kelley, Ken
2014-01-01
A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
19. Simplicity, Complexity and Modelling
CERN Document Server
Christie, Mike; Dawid, Philip; Senn, Stephen S
2011-01-01
Several points of disagreement exist between different modelling traditions as to whether complex models are always better than simpler models, as to how to combine results from different models and how to propagate model uncertainty into forecasts. This book represents the result of collaboration between scientists from many disciplines to show how these conflicts can be resolved. Key Features: Introduces important concepts in modelling, outlining different traditions in the use of simple and complex modelling in statistics. Provides numerous case studies on complex modelling, such as clima
20. Modelling Food Webs
CERN Document Server
Drossel, B
2002-01-01
We review theoretical approaches to the understanding of food webs. After an overview of the available food web data, we discuss three different classes of models. The first class comprises static models, which assign links between species according to some simple rule. The second class comprises dynamical models, which include the population dynamics of several interacting species. We focus on the question of the stability of such webs. The third class comprises species assembly models and evolutionary models, which build webs starting from a few species by adding new species through a process of "invasion" (assembly models) or "speciation" (evolutionary models). Evolutionary models are found to be capable of building large stable webs.
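As an illustration of the first class (static models that assign links by a simple rule), here is a minimal sketch of one well-known example, Cohen's cascade model; the function name and this particular parameterization are our illustrative assumptions, not details taken from the review:

```python
import random

# Sketch of a "static" food-web model: species are ordered by rank,
# and each species preys on any lower-ranked species with a fixed
# probability p (the cascade model). Illustrative only.
def cascade_web(n_species, p, seed=0):
    rng = random.Random(seed)
    links = set()
    for predator in range(n_species):
        for prey in range(predator):  # only lower-ranked species
            if rng.random() < p:
                links.add((predator, prey))
    return links
```

Because every link points from a higher-ranked to a lower-ranked species, the resulting web is acyclic by construction, which is one reason such simple rules are analytically tractable.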
1. Comparative Protein Structure Modeling Using MODELLER.
Science.gov (United States)
Webb, Benjamin; Sali, Andrej
2014-01-01
Functional characterization of a protein sequence is one of the most frequent problems in biology. This task is usually facilitated by accurate three-dimensional (3-D) structure of the studied protein. In the absence of an experimentally determined structure, comparative or homology modeling can sometimes provide a useful 3-D model for a protein that is related to at least one known protein structure. Comparative modeling predicts the 3-D structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. Curr. Protoc. Bioinform. 47:5.6.1-5.6.32. © 2014 by John Wiley & Sons, Inc. PMID:25199792
2. Tailoring the interactions between self-propelled bodies
OpenAIRE
Caussin, Jean-Baptiste; Bartolo, Denis
2014-01-01
We classify the interactions between self-propelled particles moving at a constant speed from symmetry considerations. We establish a systematic expansion for the two-body forces in the spirit of a multipolar expansion. This formulation makes it possible to rationalize most of the models introduced so far within a common framework. We distinguish between three classes of physical interactions: (i) potential forces, (ii) inelastic collisions and (iii) non-reciprocal interacti...
3. On electromagnetic transitions between highly excited states of deformed odd-A nuclei
International Nuclear Information System (INIS)
The strength function method developed earlier is generalized to the calculation of transition probabilities between states of deformed nuclei lying at excitations of the order of the nucleon binding energy and lower. The structure of these states is described within the quasiparticle-phonon model. The phonon operator used contains components of different multipolarities of electric and magnetic type for the projection K of its momentum onto the nucleus symmetry axis. The results of methodical calculations are discussed. 14 refs., 1 fig., 1 tab
4. Transition probabilities between the first excited state and the ground state in the N = 81 nuclei 139Ce, 141Nd, 143Sm and 145Gd
International Nuclear Information System (INIS)
The half-life of the 108 keV level in 143Sm (T1/2 = 800 ± 50 ps) has been measured by delayed e⁻–γ coincidences and the multipolarity of the deexciting transition has been determined (E2/M1 139Ce, 141Nd and 145Gd from the literature. These systematics are interpreted in terms of intermediate-coupling model calculations. (orig.)
5. JINR rapid communications
International Nuclear Information System (INIS)
The present collection of rapid communications from JINR, Dubna, contains five separate records on momentum reconstruction procedure for a nonfocusing spectrometer with wide-aperture analyzing magnet and nonuniform field, analysis of data from 4? experiments on relativistic nuclei beams at Dubna synchrophasotron based on automodelity, production of the cumulative particles in the FRITIOF model, decay of 152Tb and transitions in 152Gd with E0 multipolarities and first experiment on relativistic deuteron extraction from the Nuclotron with a bent crystal
6. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT
Energy Technology Data Exchange (ETDEWEB)
Clinton Lum
2002-02-04
The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, "3-D Rock Properties Modeling for FY 1998" (SNL 1997, WA-0358), "3-D Rock Properties Modeling for FY 1999" (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, "Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01" (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction.
The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4) Generation of derivative property models via linear coregionalization with porosity; (5) Post-processing of the simulated models to impart desired secondary geologic attributes and to create summary and uncertainty models; and (6) Conversion of the models into real-world coordinates. The conversion to real world coordinates is performed as part of the integration of the RPM into the Integrated Site Model (ISM) 3.1; this activity is not part of the current analysis. The ISM provides a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site and consists of three components: (1) Geologic Framework Model (GFM); (2) RPM, which is the subject of this AMR; and (3) Mineralogic Model. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 1. Figure 2 shows the geographic boundaries of the RPM and other component models of the ISM.
7. MARS software model for modeling modular manipulators
Science.gov (United States)
McKee, Gerard T.; Fryer, J. A.; Schenker, Paul S.
2001-10-01
In this paper we describe the application of the MARS model, for modelling and reasoning about modular robot systems, to modular manipulators. The MARS model provides a mechanism for describing robotic components and a method for reasoning about the interaction of these components in modular manipulator configurations. It specifically aims to articulate functionality that is a property of the whole manipulator, but which is not represented in any one component. This functionality arises, in particular, through the capacity for modules to inherit functionality from each other. The paper also uses the case of modular manipulators to illustrate a number of features of the MARS model, including the use of abstract and concrete module classes, and to identify some current limitations of the model. The latter provide the basis for ongoing development of the model.
8. Integrated Site Model Process Model Report
International Nuclear Information System (INIS)
The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM
9. A future of the model organism model
OpenAIRE
Rine, Jasper
2014-01-01
Changes in technology are fundamentally reframing our concept of what constitutes a model organism. Nevertheless, research advances in the more traditional model organisms have enabled fresh and exciting opportunities for young scientists to establish new careers and offer the hope of comprehensive understanding of fundamental processes in life. New advances in translational research can be expected to heighten the importance of basic research in model organisms and expand opportunities. Howe...
10. [Microcirculation stream modeling: hydromechanic and acoustic models].
Science.gov (United States)
Kovaleva, A A; Skedina, M A; Pichulin, V S
2009-01-01
The objective was to attempt mathematical modeling of an ultrasonically scanned tissue section in order to discern the signals from erythrocytes and leucocytes that are displayed as Doppler images. Hydromechanic and acoustic microcirculation models have been constructed for a 20 MHz ultrasonic sensor. Results of the modeling showed that ultrasonic blood cell differentiation will require complex analysis of the amplitude and frequency parameters of the echoed signal. PMID:20169740
11. Model Selection for Gaussian Mixture Models
OpenAIRE
Huang, Tao; Peng, Heng; Zhang, Kun
2013-01-01
This paper is concerned with an important issue in finite mixture modelling, the selection of the number of mixing components. We propose a new penalized likelihood method for model selection of finite multivariate Gaussian mixture models. The proposed method is shown to be statistically consistent in determining the number of components. A modified EM algorithm is developed to simultaneously select the number of components and to estimate the mixing weights, i.e. the mix...
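The selection mechanics can be sketched with the classical BIC penalty. Note the hedge: the paper proposes its own, consistent penalized-likelihood criterion; BIC is used below only as a familiar stand-in, and the parameter count assumes full covariance matrices:

```python
import math

# Order selection by penalized likelihood: pick the number of mixture
# components K minimizing BIC = -2*loglik + p(K)*log(n). Illustrative
# stand-in for the paper's own penalty.
def n_params(k, dim):
    # weights (k-1) + means (k*dim) + full covariances (k*dim*(dim+1)/2)
    return (k - 1) + k * dim + k * dim * (dim + 1) // 2

def select_k(logliks, n, dim):
    """logliks: dict mapping K -> maximized log-likelihood for K components."""
    bic = {k: -2.0 * ll + n_params(k, dim) * math.log(n)
           for k, ll in logliks.items()}
    return min(bic, key=bic.get), bic
```

The penalty grows with both the number of components and the dimension, so adding a component is only accepted when it buys a sufficiently large likelihood gain.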
12. Models of mechanics
CERN Document Server
Klarbring, Anders
2006-01-01
A new universal approach to modelling in mechanics. A larger scope than existing texts on continuum mechanics. Gives a step-by-step approach to modelling. Gives a platform for deriving new models of applied use. Novel treatments of classical models of, e.g., pipe flow and beams.
13. Multivariate GARCH models
DEFF Research Database (Denmark)
Silvennoinen, Annastiina; Teräsvirta, Timo
2008-01-01
This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example in which several multivariate GARCH models are fitted to the same data set and the results compared.
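For readers unfamiliar with the family being reviewed, the building block of every model discussed is the univariate GARCH(1,1) variance recursion. The sketch below (with illustrative parameter values) shows that recursion only, not any of the multivariate specifications the article covers:

```python
# Minimal univariate GARCH(1,1) sketch. Parameter values illustrative.
def garch11_variances(returns, omega=0.1, alpha=0.1, beta=0.8):
    """Return the conditional variance series h_t for a return series.

    h_t = omega + alpha * r_{t-1}**2 + beta * h_{t-1}
    """
    # Start from the unconditional variance omega / (1 - alpha - beta).
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h
```

Multivariate GARCH models generalize this recursion so that the full conditional covariance matrix, not just a scalar variance, evolves with past shocks, which is where the specification and estimation difficulties reviewed in the article arise.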
14. Fire Model Matrix
Science.gov (United States)
COMET
2008-02-05
The Fire Model Matrix is an on-line resource that presents four fire community models in a matrix that facilitates the exploration of the characteristics of each model. As part of the Advanced Fire Weather Forecasters Course, this matrix is meant to sensitize forecasters to the use of weather data in these fire models to forecast potential fire activity.
15. Wastewater treatment models
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2011-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical-chemical processes, hydraulics and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2D/3D dynamic numerical models. Plant-wide modeling is set to advance further the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise. Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically activated sludge models – are introduced since these define a boundary of validity for WWTP model applications.
16. Wastewater Treatment Models
DEFF Research Database (Denmark)
Gernaey, Krist; Sin, Gürkan
2008-01-01
The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical–chemical processes, hydraulics, and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2-D/3-D dynamic numerical models. Plant-wide modeling is set to advance further the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise. Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically, activated sludge models – are introduced since these define a boundary of validity for WWTP model applications.
17. The Rater Bundle Model.
Science.gov (United States)
Wilson, Mark; Hoskens, Machteld
2001-01-01
Introduces the Rater Bundle Model, an item response model for repeated ratings of student work. Applies the model to real and simulated data to illustrate the approach, which was motivated by the observation that when repeated ratings occur, the assumption of conditional independence is violated, and current item response models can then…
18. Objective Bayes model selection in probit models.
Science.gov (United States)
Leon-Novelo, Luis; Moreno, Elías; Casella, George
2012-02-20
We describe a new variable selection procedure for categorical responses where the candidate models are all probit regression models. The procedure uses objective intrinsic priors for the model parameters, which do not depend on tuning parameters, and ranks the models for the different subsets of covariates according to their model posterior probabilities. When the number of covariates is moderate or large, the number of potential models can be very large, and for those cases, we derive a new stochastic search algorithm that explores the potential sets of models driven by their model posterior probabilities. The algorithm allows the user to control the dimension of the candidate models and thus can handle situations when the number of covariates exceeds the number of observations. We assess, through simulations, the performance of the procedure and apply the variable selector to a gene expression data set, where the response is whether a patient exhibits pneumonia. Software needed to run the procedures is available in the R package varselectIP. PMID:22162041
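The candidate models being ranked are probit regressions. As a reminder of what one such model looks like, here is a single-covariate probit log-likelihood sketch; the intrinsic priors and the stochastic search of the paper are not shown, and the actual procedure is provided by the varselectIP package:

```python
import math

# Probit regression links a binary response y to a linear predictor
# through the standard normal CDF. Single covariate, no intercept,
# purely for illustration.
def probit_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def probit_loglik(beta, xs, ys):
    ll = 0.0
    for x, y in zip(xs, ys):
        p = probit_cdf(beta * x)
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll
```

Model selection over subsets of covariates then amounts to comparing such likelihoods across candidate models, weighted by the objective priors described in the abstract.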
19. Modelling Holocene climate trends: A model intercomparison
Science.gov (United States)
Lohmann, Gerrit
2013-04-01
For the paleomodel intercomparison, we compared the results from scenarios with identical forcing for the mid-to-late Holocene period: varying Earth's orbital parameters, a fixed level of greenhouse gas concentrations, and a fixed land-sea mask and orography. Eighteen paleoclimate modelling groups are involved in this initiative, working on transient Holocene simulations. One major issue on both the modelling and reconstruction sides was the quantification of uncertainties and the evaluation of trend and variability patterns beyond a single proxy and beyond a single model simulation. The goal is to obtain robust results for trend patterns, seasonality changes, and transitions on a regional scale. The major objective is to investigate the spatio-temporal pattern of temperature and precipitation changes during the Holocene as derived from integrations with a set of comprehensive global climate models (GCMs), Earth system models of intermediate complexity (EMICs), and conceptual-statistical models. In the conceptual-statistical model by Laepple and Lohmann (2009), a rigorously simple concept is proposed: the temperature response on astronomical timescales has the same form as the response to seasonal insolation variations. The general pattern of surface temperatures in the models shows a high-latitude cooling and a low-latitude warming. Our analysis shows common patterns of temperature changes, especially for the respective summer seasons; this is a common feature of all models considered. Due to strong differences in atmospheric dynamics and sea ice, we find significant differences in the winter patterns. The precipitation trends show a clear difference between GCMs and EMICs, mainly because of the treatment of the hydrological cycle in the tropics. Most models show a southward movement of the ITCZ.
Using statistical analysis of the model variability modes and their amplitudes during the Holocene, we reveal strong heterogeneity in the temperature and precipitation patterns and no common response in trend and variability, although a tendency towards NAO- and SOI- (El Niño-like) states is detected. Our approach is to obtain, through ensemble runs of climate model output, a range of solutions that can then be compared and evaluated for their consistency with the range of uncertainty given by the palaeoclimate proxies. This approach allows a much more congruent comparison between proxy data and model results because both investigations provide a range of possible climate change in which the errors in the estimates are accounted for. We compare the ocean temperature evolution of the Holocene as simulated by climate models and reconstructed from marine temperature proxies. Independently of the choice of climate model, we observe significant mismatches between modelled and reconstructed amplitudes in the trends for the last 6000 years.
20. A Model for Conversation
DEFF Research Database (Denmark)
Ayres, Phil
2012-01-01
This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity of design. Three distinctions are drawn through which to develop this discussion of models in an architectural context. An examination of these distinctions serves to nuance particular characteristics and roles of models, the modelling activity itself and those engaged in it.
1. Conceptual Model for Communication
CERN Document Server
2009-01-01
A variety of idealized models of communication systems exist, and all may have something in common. Starting with Shannon's communication model and ending with the OSI model, this paper presents progressively more advanced forms of modeling of communication systems by tying communication models together based on the notion of flow. The basic communication process is divided into different spheres (sources, channels, and destinations), each with its own five interior stages: receiving, processing, creating, releasing, and transferring of information. The flow of information is ontologically distinguished from the flow of physical signals; accordingly, Shannon's model, network-based OSI models, and TCP/IP are redesigned.
2. Validation of HEDR models
International Nuclear Information System (INIS)
The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provide a level of confidence that the HEDR models are valid
3. Modeling Epidemic Network Failures
DEFF Research Database (Denmark)
Ruepp, Sarah Renée; Fagertun, Anna Manolova
2013-01-01
This paper presents the implementation of a failure propagation model for transport networks when multiple failures occur, resulting in an epidemic. We model the Susceptible Infected Disabled (SID) epidemic model and validate it by comparing it to analytical solutions. Furthermore, we evaluate the SID model's behavior and impact on the network performance, as well as the severity of the infection spreading. The simulations are carried out in OPNET Modeler. The model provides an important input to epidemic connection recovery mechanisms and can, due to its flexibility and versatility, be used to evaluate multiple epidemic scenarios in various network types.
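The compartment flow of an SID model can be sketched as a mean-field recurrence. The rate names (`beta`, `delta`) and the update rule below are our illustrative assumptions, not the paper's OPNET Modeler implementation:

```python
# Mean-field sketch of Susceptible -> Infected -> Disabled dynamics.
# s, i, d are population fractions; beta is the infection rate and
# delta the disabling rate (both illustrative values).
def sid_step(s, i, d, beta=0.3, delta=0.1):
    new_infections = beta * s * i
    newly_disabled = delta * i
    return (s - new_infections,
            i + new_infections - newly_disabled,
            d + newly_disabled)

def simulate(steps=100, s0=0.99, i0=0.01, d0=0.0):
    state = (s0, i0, d0)
    history = [state]
    for _ in range(steps):
        state = sid_step(*state)
        history.append(state)
    return history
```

Because disabled nodes never recover in this sketch, the disabled fraction grows monotonically, which is what makes the model useful as an input to the connection recovery mechanisms mentioned above.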
4. Meta-model Pruning
OpenAIRE
Sen, Sagar; Moha, Naouel; Baudry, Benoit; Jézéquel, Jean-Marc
2009-01-01
Large and complex meta-models such as those of UML and its profiles are growing due to the modelling and inter-operability needs of numerous stakeholders. The complexity of such meta-models has led to the coining of the term meta-muddle. Individual users often exercise only a small view of a meta-muddle for tasks ranging from model creation to the construction of model transformations. What is the effective meta-model that represents this view? We present a flexible meta-model pruning algorithm and tool ...
5. Calibrated Properties Model
International Nuclear Information System (INIS)
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the "AMR Development Plan for U0035 Calibrated Properties Model REV00". These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
6. Calibrated Properties Model
International Nuclear Information System (INIS)
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions
7. Protein Models Comparator
CERN Document Server
Widera, Paweł
2011-01-01
The process of comparison of computer generated protein structural models is an important element of protein structure prediction. It has many uses including model quality evaluation, selection of the final models from a large set of candidates or optimisation of parameters of energy functions used in template free modelling and refinement. Although many protein comparison methods are available online on numerous web servers, their ability to handle a large scale model comparison is often very limited. Most of the servers offer only a single pairwise structural comparison, and they usually do not provide a model-specific comparison with a fixed alignment between the models. To bridge the gap between the protein and model structure comparison we have developed the Protein Models Comparator (pm-cmp). To be able to deliver the scalability on demand and handle large comparison experiments the pm-cmp was implemented "in the cloud". Protein Models Comparator is a scalable web application for a fast distributed comp...
8. Lumped Thermal Household Model
DEFF Research Database (Denmark)
Biegel, Benjamin; Andersen, Palle
2013-01-01
In this paper we discuss two different approaches to model the flexible power consumption of heat pump heated households: individual household modeling and lumped modeling. We illustrate that a benefit of individual modeling is that we can overview and optimize the complete flexibility of a heat pump portfolio. Next, we illustrate two disadvantages of individual models, namely that it requires much computational effort to optimize over a large portfolio, and that it is difficult to accurately model the houses in certain time periods due to local disturbances. Finally, we propose a lumped model approach as an alternative to the individual models. In the lumped model, the portfolio is seen as baseline consumption superimposed with an ideal storage of limited power and energy capacity. The benefit of such a lumped model is that the computational effort of flexibility optimization is significantly reduced. Further, the individual disturbances will smooth out as the number of houses in the portfolio increases.
9. Geoadditive survival models
OpenAIRE
Hennerfeind, Andrea; Brezger, Andreas; Fahrmeir, Ludwig
2003-01-01
Survival data often contain small-area geographical or spatial information, such as the residence of individuals. In many cases the impact of such spatial effects on hazard rates is of considerable substantive interest. Therefore, extensions of known survival or hazard rate models to spatial models have been suggested recently. Mostly, a spatial component is added to the usual linear predictor of the Cox model. We propose flexible continuous time geoadditive models, extending the Cox model w...
10. Measuring model risk
OpenAIRE
Sibbertsen, Philipp; Stahl, Gerhard; Luedtke, Corinna
2008-01-01
Model risk as part of the operational risk is a serious problem for financial institutions. As the pricing of derivatives as well as the computation of the market or credit risk of an institution depend on statistical models, the application of a wrong model can lead to a serious over- or underestimation of the institution's risk. Because the underlying data generating process is unknown in practice, evaluating the model risk is a challenge. So far, definitions of model risk are either applicat...
11. QSMSR QUALITATIVE MODEL
OpenAIRE
Tahir Abdullah; Shahbaz Nazeer
2012-01-01
Software architecture design and requirement engineering are core, independent areas of engineering. A great deal of research, education and practice is devoted to requirement elicitation and its refinement, yet it remains a major engineering issue. The QSMSR model acts as a bridge between requirements and design, where there is a huge gap between these two areas of software architecture and requirement engineering. The QSMSR model is divided into two sub-models, a qualitative model and a principal model; in this resear...
12. Modelling: Nature and Use
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
Engineering of products and processes is increasingly “model-centric”. Models in their multitudinous forms are ubiquitous, being heavily used for a range of decision-making activities across all life cycle phases. This chapter gives an overview of what a model is, the principal activities in the formation of a model for a specific purpose and the wide range of problem types that characterise the application areas of those models. In particular, a strong systems and life cycle perspective is presented which emphasises the development and application of models within each of the life cycle phases. The modelling goal is emphasised and discussed in terms of a triplet of: the model, a model application and the type of system under study. The much wider length and time scale phenomena now being addressed through modelling are discussed. This change has broadened modelling practice from a dominance of mesoscale phenomena towards higher and lower scales. This breadth in scale-spread of the partial models being developed presents significant challenges around multiscale modelling and the integration frameworks for such complex system modelling. A number of these frameworks are given in the chapter and are discussed. Throughout the chapter a number of taxonomies around model types and forms help summarise the current modelling situation within much of product and process applications.
13. Bubble models, data acquisition and model applicability.
Czech Academy of Sciences Publication Activity Database
Jebavá, Marcela; Kloužek, Jaroslav; Němec, Lubomír
Vsetín : Glass Service, Inc., 2005, s. 182-191. ISBN 80-239-4687-0. [International Seminar on Mathematical Modeling and Advanced Numerical Methods in Furnace Design and Operation /8./. Velké Karlovice (CZ), 19.05.2005-20.05.2005] Institutional research plan: CEZ:AV0Z40320502 Keywords: bubble models Subject RIV: CA - Inorganic Chemistry
14. Modeling survival data extending the cox model
CERN Document Server
Therneau, Terry M
2000-01-01
Extending the Cox Model is aimed at researchers, practitioners, and graduate students who have some exposure to traditional methods of survival analysis. The emphasis is on semiparametric methods based on the proportional hazards model. The inclusion of examples with SAS and S-PLUS code will make the book accessible to most working statisticians.
15. Model Checking of Boolean Process Models
CERN Document Server
Schneider, Christoph
2011-01-01
In the field of Business Process Management formal models for the control flow of business processes have been designed since more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low tokens to model explicitly states with a subsequent skipping of activations and arbitrary logical rules of type AND, XOR, OR etc. to model the split and join of the control flow. We apply model checking as a verification method for the safeness and liveness of Boolean systems. Model checking of Boolean systems uses the elementary theory of propositional logic, no modal operators are needed. Our verification builds on a finite complete prefix of a certain T-system attached to the Boolean system. It splits the processes of the Boolean sy...
16. Damping rates of surface plasmons for particles of size from nano- to micrometers; reduction of the nonradiative decay
CERN Document Server
Kolwas, Krystyna
2012-01-01
Damping rates of multipolar, localized surface plasmons (SP) of gold and silver nanospheres of radii up to 1000 nm were found with the tools of classical electrodynamics. The significant increase in damping rates followed by noteworthy decrease for larger particles takes place along with substantial red-shift of plasmon resonance frequencies as a function of particle size. We also introduced interface damping into our modeling, which substantially modifies the plasmon damping rates of smaller particles. We demonstrate unexpected reduction of the multipolar SP damping rates in certain size ranges. This effect can be explained by the suppression of the nonradiative decay channel as a result of the lost competition with the radiative channel. We show that experimental dipole damping rates [H. Baida, et al., Nano Lett. 9(10) (2009) 3463, and C. Sönnichsen, et al., Phys. Rev. Lett. 88 (2002) 077402], and the resulting resonance quality factors can be described in a consistent and straightforward way within our ...
17. Approximate Waveforms for Extreme-Mass-Ratio Inspirals: The Chimera Scheme
CERN Document Server
Sopuerta, Carlos F
2012-01-01
We describe a new kludge scheme to model the dynamics of generic extreme-mass-ratio inspirals (EMRIs; stellar compact objects spiraling into a spinning supermassive black hole) and their gravitational-wave emission. The Chimera scheme is a hybrid method that combines tools from different approximation techniques in General Relativity: (i) A multipolar, post-Minkowskian expansion for the far-zone metric perturbation (the gravitational waveforms) and for the local prescription of the self-force; (ii) a post-Newtonian expansion for the computation of the multipole moments in terms of the trajectories; and (iii) a BH perturbation theory expansion when treating the trajectories as a sequence of self-adjusting Kerr geodesics. The EMRI trajectory is made out of Kerr geodesic fragments joined via the method of osculating elements as dictated by the multipolar post-Minkowskian radiation-reaction prescription. We implemented the proper coordinate mapping between Boyer-Lindquist coordinates, associated with the Kerr geo...
18. Model Validation Status Review
Energy Technology Data Exchange (ETDEWEB)
E.L. Hardin
2001-11-28
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. 
The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.
19. Model Validation Status Review
International Nuclear Information System (INIS)
The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. 
The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and engineered barriers, plus the TSPA model itself. Description of the model areas is provided in Section 3, and the documents reviewed are described in Section 4. The responsible manager for the Model Validation Status Review was the Chief Science Officer (CSO) for Bechtel-SAIC Co. (BSC). The team lead was assigned by the CSO. A total of 32 technical specialists were engaged to evaluate model validation status in the 21 model areas. The technical specialists were generally independent of the work reviewed, meeting technical qualifications as discussed in Section 5.
20. Models for Dynamic Applications
DEFF Research Database (Denmark)
Sales-Cruz, Mauricio; Morales Rodriguez, Ricardo
2011-01-01
This chapter covers aspects of the dynamic modelling and simulation of several complex operations that include a controlled blending tank, a direct methanol fuel cell that incorporates a multiscale model, a fluidised bed reactor, a standard chemical reactor and finally a polymerisation reactor. These models help illustrate aspects of model formulation, the generation of the underlying assumptions about the systems, the degrees of freedom analysis and finally the solution and simulation of the models subject to changes in a variety of inputs. It is shown how an integrated system such as ICAS-MoT can be applied to formulate, analyse and solve these dynamic problems and how in the case of the fuel cell problem the model consists of coupledmeso and micro scale models. It is shown how data flows are handled between the models and how the solution is obtained within the modelling environment.
1. Operational risk modeling analytics
CERN Document Server
Panjer, Harry H
2006-01-01
Discover how to optimize business strategies from both qualitative and quantitative points of viewOperational Risk: Modeling Analytics is organized around the principle that the analysis of operational risk consists, in part, of the collection of data and the building of mathematical models to describe risk. This book is designed to provide risk analysts with a framework of the mathematical models and methods used in the measurement and modeling of operational risk in both the banking and insurance sectors.Beginning with a foundation for operational risk modeling and a focus on the modeling process, the book flows logically to discussion of probabilistic tools for operational risk modeling and statistical methods for calibrating models of operational risk. Exercises are included in chapters involving numerical computations for students'' practice and reinforcement of concepts.Written by Harry Panjer, one of the foremost authorities in the world on risk modeling and its effects in business management, this is ...
2. Calibrated Properties Model
International Nuclear Information System (INIS)
The purpose of this Model Report is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Office of Repository Development (ORD). The UZ contains the unsaturated rock layers overlying the repository and host unit, which constitute a natural barrier to flow, and the unsaturated rock layers below the repository, which constitute a natural barrier to flow and transport. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Performance Assessment Unsaturated Zone'' (BSC 2002 [160819], Section 1.10.8 [under Work Package (WP) AUZM06, Climate Infiltration and Flow], and Section I-1-1 [in Attachment I, Model Validation Plans]). In Section 4.2, four acceptance criteria (ACs) are identified for acceptance of this Model Report; only one of these (Section 4.2.1.3.6.3, AC 3) was identified in the TWP (BSC 2002 [160819], Table 3-1). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, and drift-scale and mountain-scale coupled-process models from the UZ Flow, Transport and Coupled Processes Department in the Natural Systems Subproject of the Performance Assessment (PA) Project. The Calibrated Properties Model output will also be used by the Engineered Barrier System Department in the Engineering Systems Subproject. The Calibrated Properties Model provides input through the UZ Model and other process models of natural and engineered systems to the Total System Performance Assessment (TSPA) models, in accord with the PA Strategy and Scope in the PA Project of the Bechtel SAIC Company, LLC (BSC). The UZ process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
UZ flow is a TSPA model component
3. Five models of capitalism
Directory of Open Access Journals (Sweden)
Luiz Carlos Bresser-Pereira
2012-03-01
Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish the classification that has a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.
4. Marine Wave Model Matrix
Science.gov (United States)
COMET
2006-05-16
The Marine Wave Model Matrix provides information on the formulation of wave models developed by the National Centers for Environmental Prediction (NCEP) and other modeling centers, including how these models forecast the generation, propagation, and dissipation of ocean waves using NWP model forecasts for winds and near-surface temperature and stability. Additionally, information is provided on data assimilation, post-processing of data, and verification of wave models currently in operation. Within the post-processing pages are links to forecast output both in graphical and raw form, including links for data downloads. Links to COMET training on wave processes are also provided.
5. Modeling worldwide highway networks
International Nuclear Information System (INIS)
This Letter addresses the problem of modeling the highway systems of different countries by using complex networks formalism. More specifically, we compare two traditional geographical models with a modified geometrical network model where paths, rather than edges, are incorporated at each step between the origin and the destination vertices. Optimal configurations of parameters are obtained for each model and used for the comparison. The highway networks of Australia, Brazil, India, and Romania are considered and shown to be properly modeled by the modified geographical model.
6. The Dgp Model Revisited
Science.gov (United States)
Ng, Kah Fee; Ng, Shao Chin Cindy
2014-04-01
In this paper, we study the model proposed by Dvali, Gabadadze and Porrati (the DGP model), which produces solutions with cosmic acceleration even in the absence of a cosmological constant. The model is fitted to the recent SNLS data using the minimum χ² test, and an analytical method is used to marginalize over the nuisance parameters h and M. The result suggests that the DGP model does not fit the SNLS data much better than the ΛCDM model, and further observations are needed to better distinguish the two models.
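The minimum-χ² fitting mentioned in this abstract can be illustrated with a generic sketch. The data points, model form, and parameter grid below are invented for illustration and are not taken from the cited paper:

```python
# Generic minimum-chi-squared fit of a one-parameter model to mock data.
# The observations and the linear model form are illustrative only.

def chi_squared(model, xs, ys, sigmas, theta):
    """Sum of squared, error-weighted residuals for parameter theta."""
    return sum(((y - model(x, theta)) / s) ** 2
               for x, y, s in zip(xs, ys, sigmas))

def fit_min_chi2(model, xs, ys, sigmas, thetas):
    """Grid search: return the theta with the smallest chi-squared."""
    return min(thetas, key=lambda t: chi_squared(model, xs, ys, sigmas, t))

# Mock observations roughly following y = 2x, with equal errors.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
sigmas = [0.5] * 4

linear = lambda x, theta: theta * x
thetas = [i / 100 for i in range(100, 301)]  # scan theta over [1.0, 3.0]
best = fit_min_chi2(linear, xs, ys, sigmas, thetas)
print(round(best, 2))  # → 1.99
```

In practice the marginalization over nuisance parameters described in the abstract replaces this brute-force scan with an analytic integral, but the χ² statistic being minimized has the same form.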
7. Energy-consumption modelling
Energy Technology Data Exchange (ETDEWEB)
Reiter, E.R.
1980-01-01
A highly sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models, and specifically weather-sensitive models, composite models, and space-heating models, is discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computational learning approach, is described. Results of modeling energy consumption by the cities of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
8. Modeling Worldwide Highway Networks
CERN Document Server
Boas, Paulino Ribeiro Villas; Costa, Luciano da Fontoura
2009-01-01
This letter addresses the problem of modeling the highway systems of different countries by using complex networks formalism. More specifically, we compare two traditional geographical models with a modified geometrical network model where paths, rather than edges, are incorporated at each step between the origin and destination nodes. Optimal configurations of parameters are obtained for each model and used in the comparison. The highway networks of Brazil, the US and England are considered and shown to be properly modeled by the modified geographical model. The Brazilian highway network yielded small deviations that are potentially accountable by specific developing and sociogeographic features of that country.
9. Spectral Modeling with APEC
Science.gov (United States)
Brickhouse, Nancy S.; Smith, Randall K.
2005-01-01
The Astrophysical Plasma Emission Code (APEC) collaboration now provides public models for X-ray spectra of collisional equilibrium plasmas. These models facilitate the diagnosis of temperature, density, elemental abundance, charge state, and optical depth. We report benchmarking studies of the APEC models from the Emission Line Project, a project to test these models using high-quality stellar coronal spectra. We discuss the implications of the benchmarked atomic data for non-equilibrium collisional models as well. Finally we discuss the extension of APED to other applications such as opacity models for AGN.
10. Antibody modeling assessment II. Structures and models.
Science.gov (United States)
Teplyakov, Alexey; Luo, Jinquan; Obmolova, Galina; Malia, Thomas J; Sweet, Raymond; Stanfield, Robyn L; Kodangattil, Sreekumar; Almagro, Juan Carlos; Gilliland, Gary L
2014-08-01
To assess the state-of-the-art in antibody structure modeling, a blinded study was conducted. Eleven unpublished Fab crystal structures were used as a benchmark to compare Fv models generated by seven structure prediction methodologies. In the first round, each participant submitted three non-ranked complete Fv models for each target. In the second round, CDR-H3 modeling was performed in the context of the correct environment provided by the crystal structures with CDR-H3 removed. In this report we describe the reference structures and present our assessment of the models. Some of the essential sources of errors in the predictions were traced to the selection of the structure template, both in terms of the CDR canonical structures and VL/VH packing. On top of this, the errors present in the Protein Data Bank structures were sometimes propagated into the current models, which emphasized the need for a curated structural database devoid of errors. Modeling non-canonical structures, including CDR-H3, remains the biggest challenge for antibody structure prediction. PMID:24633955
11. Financial modeling using Gaussian process models.
Czech Academy of Sciences Publication Activity Database
Petelin, D.; Šindelář, Jan; Přikryl, Jan; Kocijan, J.
Piscataway : IEEE, 2011, s. 672-677. ISBN 978-1-4577-1424-5. [6th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications. Prague (CZ), 15.09.2011-17.09.2011] R&D Projects: GA MŠk 1M0572; GA TA ČR TA01030603; GA ČR GA102/08/0567; GA MŠk(CZ) MEB091015 Institutional research plan: CEZ:AV0Z10750506 Keywords: gaussian process models * autoregression * financial * efficient markets Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/sindelar-financial modeling using gaussian process models.pdf
12. Empirical Model Building Data, Models, and Reality
CERN Document Server
Thompson, James R
2011-01-01
Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m
13. Modeling Complex Time Limits
Directory of Open Access Journals (Sweden)
Oleg Svatos
2013-01-01
In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. Analysis shows that contemporary process modeling languages support capturing of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfying results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, the PSD process modeling language is presented, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.
14. Forest growth models
International Nuclear Information System (INIS)
This paper discusses projection models fitted to three data set structures formed from a real growth series. The series is offered. Results of comparisons of model forms fitted with the tree data structures are described
15. Modelling of Corrosion Cracks
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
2003-01-01
Modelling of corrosion cracking of reinforced concrete structures is complicated, as a great number of uncertain factors are involved. To get reliable modelling, a physical and mechanical understanding of the process behind corrosion is needed.
16. Bounding Species Distribution Models
Science.gov (United States)
Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
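The bounding ("clamping") approach summarized in this abstract — restricting model extrapolations to the range of predictor values seen during model development — can be sketched generically. The predictor names, training values, and projection cell below are illustrative assumptions, not data from the cited study:

```python
# Clamp each environmental predictor to the min/max observed in the
# training data before feeding it to a fitted distribution model.
# Predictor names and ranges here are invented for illustration.

def training_bounds(training_rows):
    """Per-predictor (min, max) over the training data."""
    keys = training_rows[0].keys()
    return {k: (min(r[k] for r in training_rows),
                max(r[k] for r in training_rows)) for k in keys}

def clamp_row(row, bounds):
    """Bound each predictor to its training range."""
    return {k: min(max(v, bounds[k][0]), bounds[k][1])
            for k, v in row.items()}

training = [
    {"temp_c": 12.0, "precip_mm": 200.0},
    {"temp_c": 28.0, "precip_mm": 900.0},
]
bounds = training_bounds(training)

# A projection cell outside the training envelope gets clamped:
cell = {"temp_c": 35.0, "precip_mm": 50.0}
print(clamp_row(cell, bounds))  # temp capped at 28.0, precip raised to 200.0
```

Whether to clamp to the strict min/max or to a looser percentile band is a modeling choice; the abstract's point is that the most conservative bounding gives the most realistic suitability maps.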
17. Modelling of Corrosion Cracks
OpenAIRE
Thoft-christensen, Palle
2010-01-01
Modelling of corrosion cracking of reinforced concrete structures is complicated, as a great number of uncertain factors are involved. To get reliable modelling, a physical and mechanical understanding of the process behind corrosion is needed.
18. The cloudy bag model
International Nuclear Information System (INIS)
Recent developments are reviewed in the bag model, in which the constraints of chiral symmetry are explicitly included. The model has significant implications for nuclear, medium energy and high energy physics. (author)
19. Melanoma Risk Prediction Models
Science.gov (United States)
The following risk prediction models are intended primarily for research use and have been peer-reviewed, meaning the methodology and results of these models have been evaluated by qualified scientists and clinicians and published in scientific and medical journals.
20. Protein solubility modeling.
Science.gov (United States)
Agena, S M; Pusey, M L; Bogle, I D
1999-07-20
A thermodynamic framework (UNIQUAC model with temperature dependent parameters) is applied to model the salt-induced protein crystallization equilibrium, i.e., protein solubility. The framework introduces a term for the solubility product describing protein transfer between the liquid and solid phase and a term for the solution behavior describing deviation from ideal solution. Protein solubility is modeled as a function of salt concentration and temperature for a four-component system consisting of a protein, pseudo solvent (water and buffer), cation, and anion (salt). Two different systems, lysozyme with sodium chloride and concanavalin A with ammonium sulfate, are investigated. Comparison of the modeled and experimental protein solubility data results in an average root mean square deviation of 5.8%, demonstrating that the model closely follows the experimental behavior. Model calculations and model parameters are reviewed to examine the model and protein crystallization process. PMID:10397850
1. Supersymmetry and model building
International Nuclear Information System (INIS)
An introductory review of supersymmetry and supersymmetric model building is presented. The topics discussed include a brief introduction to the formalism of supersymmetry, the gauge hierarchy problem, the minimal supersymmetric standard model and supersymmetric grand unified theories.
2. PARTICIPANT MODELING IN STUTTERING
OpenAIRE
Bhargava, S. C.
1988-01-01
Participant modeling was tried in twenty-five stutterers; auditory feedback of modelled speech and guided exposure were also carried out alongside. The patients were able to achieve fluent, stuttering-free speech in most situations.
3. Modeling Infectious Diseases
Science.gov (United States)
Modeling Infectious Diseases Fact Sheet. Using computers to ... Predicting the potential spread of an infectious disease requires much more than simply connecting cities ...
4. Bounding species distribution models
Directory of Open Access Journals (Sweden)
Thomas J. STOHLGREN, Catherine S. JARNEVICH, Wayne E. ESAIAS, Jeffrey T. MORISETTE
2011-10-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for “clamping” model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642–647, 2011].
5. The ATLAS Analysis Model
CERN Document Server
Amir Farbin
The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model: The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...
6. TMDL RUSLE MODEL
Science.gov (United States)
We developed a simplified spreadsheet modeling approach for characterizing and prioritizing sources of sediment loadings from watersheds in the United States. A simplified modeling approach was developed to evaluate sediment loadings from watersheds and selected land segments. ...
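The "RUSLE" in the entry title presumably refers to the Revised Universal Soil Loss Equation, which predicts average annual soil loss as a product of five factors, A = R·K·LS·C·P. A minimal sketch of that product form follows; the factor values are hypothetical, chosen only for illustration, and nothing here comes from the cited spreadsheet tool:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Revised Universal Soil Loss Equation: average annual soil loss A as the
    product of rainfall erosivity R, soil erodibility K, slope length-steepness
    LS, cover-management C, and support-practice P factors."""
    return R * K * LS * C * P

# Hypothetical factor values for illustration only.
A = rusle_soil_loss(R=120.0, K=0.3, LS=1.5, C=0.2, P=0.5)
```

Because the equation is a plain product, a spreadsheet row per land segment with one column per factor reproduces the whole model, which is consistent with the simplified spreadsheet approach the abstract describes.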
7. Introduction to Modeling (SAM)
Science.gov (United States)
In this activity, by the Concord Consortium's Molecular Literacy project, students are introduced to "key characteristics of 2D and 3D models as they are created and used in the Molecular Workbench Software. It ranges from 2D modeling of a superball to roving through 3-D molecules." The activity itself is a java-based interactive resource built upon the free, open source Molecular Workbench software. In the activity, students are allowed to explore at their own pace in a digital environment full of demonstrations, illustrations, and models they can manipulate. The content of the module is divided into seven pages: Designing a computer model, Running and visualizing the model, Extended visualization, Annotation and sharing, Using the model to do experiments, Modeling an atom, and 3D static models. In addition to the activity, visitors will find an overview of the activity and details of the central concepts.
8. Monte Carlo Modeling
Science.gov (United States)
David Joiner
Monte Carlo modeling refers to the solution of mathematical problems with the use of random numbers. This can include both function integration and the modeling of stochastic phenomena using random processes.
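As a minimal illustration of the function-integration case mentioned above, here is a Monte Carlo estimate of a one-dimensional integral (a generic sketch, not code from the cited resource; the integrand and sample size are arbitrary choices):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at uniform random
    sample points and scaling by the interval length."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Estimate of the integral of x^2 over [0, 1], whose exact value is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

The error of such an estimate shrinks like O(1/sqrt(n)), which is why Monte Carlo methods become attractive mainly in high dimensions, where grid-based quadrature is infeasible.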
9. Osteoporotic fracture models.
Science.gov (United States)
Simpson, A Hamish; Murray, Iain R
2015-02-01
Animal models are widely used to investigate the pathogenesis of osteoporosis and for the clinical testing of anti-resorptive drugs. However, osteoporotic fracture models designed to investigate novel ways to treat fractures of osteoporotic bone must fulfil requirements distinct from those of pharmacological testing. Bone strength and toughness, implant fixation and osteointegration, and fracture repair are of particular interest. Osteoporotic models should reflect the underlying clinical scenario, be it primary type 1 (post-menopausal) osteoporosis, primary type 2 (senile) osteoporosis or secondary osteoporosis. In each scenario, small and large animal models have been developed. While rodent models facilitate the study of fractures in strains specifically established to facilitate understanding of the pathologic basis of disease, concerns remain about the relevance of small animal fracture models to the human situation. There is currently no all-encompassing model, and the choice of species and model must be individualized to the scientific question being addressed. PMID:25388154
10. Ginocchio model with isospin
International Nuclear Information System (INIS)
We study the sp(8) subgroup of the isospin-invariant Ginocchio model. The allowed quantum numbers are determined in terms of Young's diagrams. Using these results, we discuss the excitation energy of a model Hamiltonian. (orig.)
11. Exponential Random Energy Model
OpenAIRE
Jana, Nabin Kumar
2006-01-01
In this paper the Random Energy Model (REM) under an exponential-type environment is considered, which includes the double exponential and Gaussian cases. The limiting free energy is evaluated in these models. The limiting Gibbs distribution is evaluated in the double exponential case.
12. On Model Typing
OpenAIRE
Steel, Jim; Jézéquel, Jean-Marc
2007-01-01
Where object-oriented languages deal with objects as described by classes, model-driven development uses models, as graphs of interconnected objects, described by metamodels. A number of new languages have been and continue to be developed for this model-based paradigm, both for model transformation and for general programming using models. Many of these use single-object approaches to typing, derived from solutions found in object-oriented systems, while others use metamodels as model types, b...
13. Nonuniform Markov models
OpenAIRE
Ristad, Eric Sven; Thomas, Robert G.
1996-01-01
A statistical language model assigns probability to strings of arbitrary length. Unfortunately, it is not possible to gather reliable statistics on strings of arbitrary length from a finite corpus. Therefore, a statistical language model must decide that each symbol in a string depends on at most a small, finite number of other symbols in the string. In this report we propose a new way to model conditional independence in Markov models. The central feature of our nonuniform ...
14. Parameterization of connectionist models.
OpenAIRE
Bogacz, R.; Cohen, J. D.
2004-01-01
We present a method for estimating parameters of connectionist models that allows the model's output to fit as closely as possible to empirical data. The method minimizes a cost function that measures the difference between statistics computed from the model's output and statistics computed from the subjects' performance. An optimization algorithm finds the values of the parameters that minimize the value of this cost function. The cost function also indicates whether the model's statistics a...
15. Generic Market Models
OpenAIRE
Pietersz, R.; Regenmortel, M.
2005-01-01
Currently, there are two market models for valuation and risk management of interest rate derivatives, the LIBOR and swap market models. In this paper, we introduce arbitrage-free constant maturity swap (CMS) market models and generic market models featuring forward rates that span periods other than the classical LIBOR and swap periods. We develop generic expressions for the drift terms occurring in the stochastic differential equation driving the forward rates under a single pricing meas...
16. Accretion Disk Models
OpenAIRE
Beloborodov, Andrei M.
1999-01-01
Models of black hole accretion disks are reviewed, with an emphasis on the theory of hard X-ray production. The following models are considered: i) standard, ii) super-critical, iii) two-temperature, and iv) disk+corona. New developments have recently been made in hydrodynamical models of accretion and in phenomenological radiative models fitting the observed X-ray spectra. Attempts to unify the two approaches are only partly successful.
17. Future of groundwater modeling
Science.gov (United States)
Langevin, Christian D.; Panday, Sorab
2012-01-01
With an increasing need to better manage water resources, the future of groundwater modeling is bright and exciting. However, while the past can be described and the present is known, the future of groundwater modeling, just like a groundwater model result, is highly uncertain and any prediction is probably not going to be entirely representative. Thus we acknowledge this as we present our vision of where groundwater modeling may be headed.
18. Bicycles, motorcycles, and models
OpenAIRE
Limebeer, D. J. N.; Sharp, R. S.
2006-01-01
Single-track vehicles are multibody systems which include bicycles, motorcycles and motor scooters. Whipple's model of the bicycle consists of two frames, the rear frame and the front frame, which are hinged together along an inclined steering-head assembly. The nonslipping road wheels in this model are modeled by holonomic constraints in the normal direction and by nonholonomic constraints in the longitudinal and lateral directions. The bicycle model has three degrees of freedom such as...
19. Curricula Modeling and Checking
OpenAIRE
Baldoni, Matteo; Baroglio, Cristina; Marengo, Elisa
2007-01-01
In this work, we present a constraint-based representation for specifying the goals of “course design”, which we call a curricula model, and introduce a graphical language, grounded in Linear Time Logic, to design curricula models which include knowledge of proficiency levels. Based on this representation, we show how model checking techniques can be used to verify that the user’s learning goal is supplied by a curriculum, that a curriculum is compliant to a curricula model, and that co...
20. Modeling Design Process
OpenAIRE
Takeda, Hideaki; Veerkamp, Paul; Yoshikawa, Hiroyuki
1990-01-01
This article discusses building a computable design process model, which is a prerequisite for realizing intelligent computer-aided design systems. First, we introduce general design theory, from which a descriptive model of design processes is derived. In this model, the concept of metamodels plays a crucial role in describing the evolutionary nature of design. Second, we show a cognitive design process model obtained by observing design processes using a protocol analysis method. We then di...
1. Mathematical circulatory system model
Science.gov (United States)
Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)
2010-01-01
A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.
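The logistic representation mentioned above keeps a regulatory parameter smoothly bounded between a minimum and a maximum value. A generic sketch of such a bounded parameter follows; the bounds, gain, and midpoint names are illustrative assumptions, not taken from the patent:

```python
import math

def logistic_parameter(x, lo, hi, k, x0):
    """A regulatory parameter bounded between lo and hi, varying smoothly with
    the input x (e.g. a pressure deviation) via a logistic curve with gain k
    and midpoint x0."""
    return lo + (hi - lo) / (1.0 + math.exp(-k * (x - x0)))

# At the midpoint the parameter sits halfway between its bounds; far from the
# midpoint it saturates at lo or hi, so the regulation can never run unbounded.
mid = logistic_parameter(0.0, lo=0.2, hi=1.0, k=2.0, x0=0.0)
```

This is the usual reason for choosing a logistic form in a lumped-parameter model: the regulation term stays differentiable for the ODE solver while remaining physiologically bounded.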
2. Toric models of graphs
CERN Document Server
Buczyńska, Weronika
2010-01-01
We define the toric projective model of a trivalent graph as a generalization of the binary symmetric model of a trivalent phylogenetic tree. Generators of the projective coordinate ring of the models of graphs with one cycle are explicitly described. The models of graphs with the same topological invariants are deformation equivalent and share the same Hilbert function. We also provide an algorithm to compute the Hilbert function.
3. Model averaging in economics
OpenAIRE
Moral-benito, Enrique
2010-01-01
Fragility of regression analysis to arbitrary assumptions and decisions about choice of control variables is an important concern for applied econometricians (e.g. Leamer (1983)). Sensitivity analysis in the form of model averaging represents an (agnostic) approach that formally addresses this problem of model uncertainty. This paper presents an overview of model averaging methods with emphasis on recent developments in the combination of model averaging with IV and panel data settings.
4. Monoidal model categories
OpenAIRE
Hovey, Mark
1998-01-01
A monoidal model category is a model category with a compatible closed monoidal structure. Such things abound in nature; simplicial sets and chain complexes of abelian groups are examples. Given a monoidal model category, one can consider monoids and modules over a given monoid. We would like to be able to study the homotopy theory of these monoids and modules. This question was first addressed by Stefan Schwede and Brooke Shipley in "Algebras and modules in monoidal model c...
5. A Quasar Wind Model
OpenAIRE
Ruff, Andrea
2008-01-01
A quasar wind model is proposed to describe the spatial and velocity structure of the broad line region. This model requires detailed photoionization and magnetohydrodynamic simulation, as the broad line region is too small for direct spatial resolution. The emission lines are Doppler broadened, since the gas is moving at high velocity. The high velocity is attained by the gas from a combination of radiative and magnetic driving forces. Once this model is complete, the model...
6. Modeling of ultrasound transducers
OpenAIRE
Bæk, David
2011-01-01
This Ph.D. dissertation addresses ultrasound transducer modeling for medical ultrasound imaging and combines the modeling with the ultrasound simulation program Field II. The project firstly presents two new models for spatial impulse responses (SIR)s to a rectangular elevation focused transducer (REFT) and to a convex rectangular elevation focused transducer (CREFT). These models are solvable on an analog time scale and give exact smooth solutions to the Rayleigh integral. ...
7. Hierarchical Bass model
Science.gov (United States)
Tashiro, Tohru
2014-03-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.
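For context, the classic Bass model that this hierarchical variant extends describes the cumulative adoption fraction F through dF/dt = (p + q·F)·(1 − F), with innovation coefficient p and imitation coefficient q. A simple Euler-step simulation follows; the parameter values are illustrative, not fitted to the iPod data, and the memory effect of the hierarchical model is deliberately not included:

```python
def bass_adoption(p, q, steps, dt=1.0):
    """Simulate the cumulative adoption fraction F under the classic Bass model
    dF/dt = (p + q*F) * (1 - F) using forward Euler steps of size dt."""
    F = 0.0
    path = [F]
    for _ in range(steps):
        F += (p + q * F) * (1.0 - F) * dt
        path.append(F)
    return path

# Illustrative coefficients; adoption follows an S-curve approaching 1.
path = bass_adoption(p=0.03, q=0.38, steps=40)
```

The hierarchical model above alters the imitation term so that it depends on each non-adopter's remembered contacts, rather than on the aggregate F alone.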
8. Hierarchical Bass model
OpenAIRE
Tashiro, Tohru
2013-01-01
We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and better agreement is obtained than with the Bass model.
9. Modelling agricultural production
OpenAIRE
Wit, C. T.
1986-01-01
In modelling in general and biological modelling in particular two approaches may be distinguished: a descriptive and an explanatory approach. In descriptive models the system and its behaviour are described at the same level at which the observations about it are made. A good example are the chilling unit models that were discussed in this symposium and are used to calculate at what time the temperature demand is met to break the dormancy of buds. Another example is the statistical analysis...
10. Raman generator modeling
International Nuclear Information System (INIS)
Stokes seed production in a Raman generator has been modeled for a pump of long duration and with many longitudinal modes. The model uses three-dimensional wave optics and includes pump depletion. The modeling is similar to that of others in its use of stochastic c-number equations. The model simulates the spontaneous Stokes source with effective Stokes fields supplied in a prescribed manner. The analysis to support this simulation and examples of computer results are presented
11. Chemisorption on a model transition model
International Nuclear Information System (INIS)
The adsorption both of a single atom and a monolayer of atoms on the (001) surface of the model transition metal is investigated using the Green's function formalism and the phase shift technique. The electronic structure of the surface is obtained by the Kalkstein-Soven method. For comparison, both the single- and two-peaked models of the surface density of states (DOS) are used. The adatom charge, heat of adsorption, and the change in the DOS due to chemisorption are calculated within the Newns-Anderson model and are compared with the available experimental results as well as with those of the previous chemisorption calculations. It is shown that the two-peaked substrate DOS model can qualitatively account for the strong coverage dependence of the photoemission spectra observed in systems such as H/W(100). The present theory is also extended to the chemisorption system with general coverages. (author)
12. Solid Waste Projection Model: Model user's guide
International Nuclear Information System (INIS)
The Solid Waste Projection Model (SWPM) system is an analytical tool developed by Pacific Northwest Laboratory (PNL) for Westinghouse Hanford Company (WHC) specifically to address solid waste management issues at the Hanford Central Waste Complex (HCWC). This document, one of six documents supporting the SWPM system, contains a description of the system and instructions for preparing to use SWPM and operating Version 1 of the model. 4 figs., 1 tab
Directory of Open Access Journals (Sweden)
Jurgita Vabinskaitė
2011-04-01
The study deals with the theoretical models of business internationalisation: the “Uppsala” Internationalisation Model, the modified “Uppsala” model, the Eclectic Paradigm and analysis of transactional costs, the Industrial Network approach, and the Advantage Package and the Advantage Cycle. Article in Lithuanian.
14. Superstatistical turbulence models
OpenAIRE
Beck, Christian
2005-01-01
Recently there has been some progress in modeling the statistical properties of turbulent flows using simple superstatistical models. Here we briefly review the concept of superstatistics in turbulence. In particular, we discuss a superstatistical extension of the Sawford model and compare with experimental data.
15. Independent Mathematical Modeling.
Science.gov (United States)
Smith, D. N.
1997-01-01
Argues that a major difficulty in learning how to do mathematical modeling is in the first independent run through the modeling cycle. Reviews a case study (N=12) on mathematical modeling and presents the conclusions in three sections: (1) the choice of task; (2) the presentation of the task; and (3) tutor intervention and support. (ASK)
16. Retention or Attrition Models.
Science.gov (United States)
Guttman, Irwin; Olkin, Ingram
1989-01-01
A model for student retention and attrition is presented. Focus is on alternative models for the "dampening" in attrition rates as educational programs progress. Maximum likelihood estimates for the underlying parameters in each model and a Bayesian analysis are provided. (TJH)
17. Fuzzy Bag Models
CERN Document Server
Forkel, H
1999-01-01
We show how hadronic bag models can be generalized to implement effects of a smooth and extended boundary. Our approach is based on fuzzy set theory and can be straightforwardly applied to any type of bag model. We illustrate the underlying ideas by calculating static nucleon properties in a fuzzy chiral bag model.
18. Fuzzy Bag Models
OpenAIRE
Forkel, Hilmar
1998-01-01
We show how hadronic bag models can be generalized to implement effects of a smooth and extended boundary. Our approach is based on fuzzy set theory and can be straightforwardly applied to any type of bag model. We illustrate the underlying ideas by calculating static nucleon properties in a fuzzy chiral bag model.
19. The Model Neuron
Science.gov (United States)
2012-06-26
In this activity, learners create a model of a neuron by using colored clay or play dough. Learners use diagrams to build the model and then label the parts on a piece of paper. This resource guide includes extension ideas like using fruit or candy instead of clay. See the "Modeling the Nervous System" page for a recipe for play dough.
20. Boundary representation modelling techniques
CERN Document Server
Stroud, I
2006-01-01
Boundary representation is the principal solid modelling method used in modern CAD/CAM systems. This book includes: data structures, algorithms and other related techniques, including non-manifold modelling, product modelling, graphics, disc files and data exchange, and some application-related topics.
http://math.stackexchange.com/questions/72984/what-is-a-relatively-bound-variable

# What is a relatively bound variable?
edit: Interestingly, the authors also state at one point that the choice of introduction rule is determined by the structure of the previous goal and the list of introduction rules; but at another point, that the choice of introduction rule is determined by the structure of the current goal and the list of introduction rules. I believe the latter is correct, but I'll have to be more careful implementing the paper's algorithm than I previously thought.
In any case, I believe the "no variable can relatively bind itself" restriction intends to preclude abstraction on a variable using that variable, but I could be wrong. If anything rests on whether or not this is correct, I'll attempt to contact the authors.
original: In a paper on natural deduction proof search, the authors write that the free variables $\gamma_1,\ldots,\gamma_n$ occurring in the formulae $\forall \alpha.\phi(\alpha,\gamma_1,\ldots,\gamma_n)$ and $\exists \alpha . \phi(\alpha,\gamma_1,\ldots,\gamma_n)$ in the conclusions of the quantifier introduction rules $$\frac{\phi(\alpha/\beta,\gamma_1,\ldots,\gamma_n)}{\forall \alpha . \phi(\alpha,\gamma_1,\ldots,\gamma_n)} \quad (\forall\text{-in})$$ and $$\frac{\phi(\alpha/t)}{\exists \alpha . \phi (\alpha,\gamma_1,\ldots,\gamma_n)}\quad(\exists\text{-in})$$ are relatively flagged or bound. Presumably, then, a variable is relatively bound if and only if it occurs free in the conclusion of a quantifier introduction rule. However, in defining a natural deduction derivation and presenting algorithms, they remark that no variable may relatively bind itself. For instance, they require that unification not produce substitutions causing variables to relatively bind themselves.
I searched through several logic books I had lying around, especially ones focusing on automated reasoning, but couldn't find a definition of relatively bound variables. Has anyone encountered this term before? If so, what does it mean?
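A possibly related notion from automated reasoning: in first-order unification, the standard occurs check rejects any substitution binding a variable to a term that contains that same variable, which sounds analogous to the paper's requirement that no variable may relatively bind itself. A generic sketch of the check (terms as nested tuples; this is my own illustration, not code from the paper):

```python
def occurs(var, term):
    """Return True if variable `var` occurs anywhere inside `term`.
    Variables are strings; compound terms are (functor, arg, ...) tuples."""
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg) for arg in term[1:])
    return False

# Binding x to f(x) fails the occurs check: x would contain itself.
assert occurs("x", ("f", "x"))
assert not occurs("x", ("f", "y", ("g", "z")))
```

If the paper's restriction is indeed the analogue of this check at the level of its relative-binding relation, then "unification must not cause a variable to relatively bind itself" would just be the usual soundness condition on substitutions.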
I apologize if this seems too obvious of a thing to do, but... since they provided it, have you e-mailed any of the authors with this question? – Doug Spoonwood Oct 17 '11 at 1:43
If anything ends up resting on the definition of the term, I'll contact the authors. I assume, however, that spending more time with the paper should allow me to deduce its meaning. Thanks for the suggestion, however. – danportin Oct 18 '11 at 3:17
https://www.zbmath.org/authors/?q=ai%3Awang.yanyan
# zbMATH — the first resource for mathematics
## Wang, Yanyan
Author ID: wang.yanyan. Published as: Wang, Y.; Wang, Y. Y.; Wang, Yan-yan; Wang, Yanyan.
Documents Indexed: 134 Publications since 1975
#### Co-Authors
5 single-authored; 10 Liu, Wei; 4 Li, Yingsong; 3 Wang, Yanning; 2 Cao, Jinde; 2 Tong, Yanchun; 2 Wang, Zhiming; 2 Zhang, Xuguang; 2 Zhou, Jianping; 1 Albu, Felix; 1 Ba, Na; 1 Cao, Xiwang; 1 Chen, Huaxiong; 1 Chen, Qun; 1 Feng, Gang; 1 Fu, Rui; 1 Gao, Hongya; 1 Gao, Ran; 1 Gu, Cong; 1 Huo, Hai-Feng; 1 Jiang, Jingshan; 1 Kong, Qingkai; 1 Lei, Youming; 1 Li, Hong; 1 Li, Jianhua; 1 Li, Jina; 1 Li, Xuezhi; 1 Li, Zhanhuai; 1 Liu, Guangjun; 1 Ma, Fengli; 1 Ni, Mingkang; 1 Sun, Laijun; 1 Wang, Chenmu; 1 Wang, Zhen; 1 Wu, Yaohua; 1 Xia, Delong; 1 Xiang, Hong; 1 Xu, Jianzhong; 1 Xue, Chunshan; 1 Yang, Rui; 1 Yang, Yong; 1 Zhai, Ying; 1 Zhang, Shengdan; 1 Zheng, Lie; 1 Zhong, Ping; 1 Zou, Xia; 1 Zuo, Suli
#### Serials
3 Symmetry; 2 Applied Mathematics. Series B (English Edition); 2 Mathematical Problems in Engineering; 2 Nonlinear Dynamics; 2 Frontiers of Mathematics in China; 2 Journal of Xinyang Normal University. Natural Science Edition; 1 Information Processing Letters; 1 Journal of the Franklin Institute; 1 Mathematical Methods in the Applied Sciences; 1 Annales Polonici Mathematici; 1 Circuits, Systems, and Signal Processing; 1 Mathematics in Practice and Theory; 1 Chinese Annals of Mathematics. Series B; 1 Physica D; 1 Acta Mathematicae Applicatae Sinica. English Series; 1 Journal of Vibration and Control; 1 Abstract and Applied Analysis; 1 Chinese Quarterly Journal of Mathematics; 1 Wuhan University Journal of Natural Sciences (WUJNS); 1 Physica Scripta; 1 Journal of Shenzhen University. Science & Engineering; 1 Journal of Henan Normal University. Natural Science; 1 Journal of Applied Mathematics; 1 Acta Mathematica Scientia. Series A. (Chinese Edition); 1 Journal of Software; 1 Journal of Beijing University of Technology; 1 Advances in Difference Equations; 1 Chinese Journal of Engineering Mathematics; 1 Asian Journal of Control; 1 Journal of Applied Analysis and Computation
#### Fields
18 Systems theory; control (93-XX)
7 Partial differential equations (35-XX)
5 Ordinary differential equations (34-XX)
5 Information and communication theory, circuits (94-XX)
4 Combinatorics (05-XX)
4 Computer science (68-XX)
4 Biology and other natural sciences (92-XX)
2 Operator theory (47-XX)
2 Operations research, mathematical programming (90-XX)
2 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
1 Number theory (11-XX)
1 Several complex variables and analytic spaces (32-XX)
1 Statistical mechanics, structure of matter (82-XX)
#### Citations contained in zbMATH Open
71 publications have been cited 613 times in 517 documents.
A linearly conforming point interpolation method (LC-PIM) for 2D solid mechanics problems. Zbl 1137.74303
Liu, G. R.; Zhang, G. Y.; Dai, K. Y.; Wang, Y. Y.; Zhong, Z. H.; Li, G. Y.; Han, X.
2005
A linearly conforming point interpolation method (LC-PIM) for three-dimensional elasticity problems. Zbl 1194.74543
Zhang, G. Y.; Liu, G. R.; Wang, Y. Y.; Huang, H. T.; Zhong, Z. H.; Li, G. Y.; Han, X.
2007
A meshfree radial point interpolation method (RPIM) for three-dimensional solids. Zbl 1138.74420
Liu, G. R.; Zhang, G. Y.; Gu, Y. T.; Wang, Y. Y.
2005
Multistability and multiperiodicity of delayed Cohen-Grossberg neural networks with a general class of activation functions. Zbl 1161.34044
Cao, Jinde; Feng, Gang; Wang, Yanyan
2008
A nodal integration technique for meshfree radial point interpolation method (NI-RPIM). Zbl 1135.74050
Liu, G. R.; Zhang, G. Y.; Wang, Y. Y.; Zhong, Z. H.; Li, G. Y.; Han, X.
2007
A reformulation of the strong ellipticity conditions for unconstrained hyperelastic media. Zbl 0876.73030
Wang, Y.; Aron, M.
1996
Characteristic boundary conditions for direct simulations of turbulent counterflow flames. Zbl 1086.80006
Yoo, C. S.; Wang, Y.; Trouvé, A.; Im, H. G.
2005
A note on the numerical solution of high-order differential equations. Zbl 1031.65087
Wang, Y.; Zhao, Y. B.; Wei, G. W.
2003
Free vibration analysis of circular plates using generalized differential quadrature rule. Zbl 1083.74549
Wu, T. Y.; Wang, Y. Y.; Liu, G. R.
2002
Implicit-explicit finite-difference lattice Boltzmann method for compressible flows. Zbl 1151.82405
Wang, Y.; He, Y. L.; Zhao, T. S.; Tang, G. H.; Tao, W. Q.
2007
Synchronization in complex dynamical networks with interval time-varying coupling delays. Zbl 1268.34138
Zhou, Jianping; Wang, Zhen; Wang, Yanyan; Kong, Qingkai
2013
Rapid inverse parameter estimation using reduced-basis approximation with asymptotic error estimation. Zbl 1194.74102
Liu, G. R.; Zaw, Khin; Wang, Y. Y.
2008
A novel reduced-basis method with upper and lower bounds for real-time computation of linear elasticity problems. Zbl 1194.74434
Liu, G. R.; Zaw, Khin; Wang, Y. Y.; Deng, B.
2008
Bi-periodicity evoked by periodic external inputs in delayed Cohen-Grossberg-type bidirectional associative memory networks. Zbl 1192.82060
Cao, Jinde; Wang, Yanyan
2010
Statistical analysis for stochastic systems including fractional derivatives. Zbl 1183.70062
Huang, Z. L.; Jin, X. L.; Lim, C. W.; Wang, Y.
2010
A strip element method for the transient analysis of symmetric laminated plates. Zbl 0996.74082
Wang, Y. Y.; Lam, K. Y.; Liu, G. R.
2001
Three-dimensional non-free-parameter lattice-Boltzmann model and its application to inviscid compressible flows. Zbl 1229.76090
Li, Q.; He, Y. L.; Wang, Y.; Tang, G. H.
2009
Application of the generalized differential quadrature rule to initial-boundary-value problems. Zbl 1236.74303
Wu, T. Y.; Liu, G. R.; Wang, Y. Y.
2003
Scheduling projects with labor constraints. Zbl 0984.90012
Cavalcante, C. C. B.; de Souza, C. Carvalho; Savelsbergh, M. W. P.; Wang, Y.; Wolsey, L. A.
2001
Mechanics of corrugated surfaces. Zbl 1200.74008
Wang, Y.; Weissmüller, J.; Duan, H. L.
2010
Co-ordinated control design of generator excitation and SVC for transient stability and voltage regulation enhancement of multi-machine power systems. Zbl 1057.93038
Cong, L.; Wang, Y.; Hill, D. J.
2004
Probability-one homotopy algorithms for solving the coupled Lyapunov equations arising in reduced-order $$H^2/H^\infty$$ modeling, estimation, and control. Zbl 1028.93011
Wang, Y.; Bernstein, D. S.; Watson, L. T.
2001
Sparse adaptive channel estimation based on mixed controlled $$l_2$$ and $$l_p$$-norm error criterion. Zbl 1373.93333
Wang, Yanyan; Li, Yingsong; Yang, Rui
2017
Lyapunov function construction for nonlinear stochastic dynamical systems. Zbl 1284.93207
Ling, Quan; Jin, Xiao Ling; Wang, Y.; Li, H. F.; Huang, Zhi Long
2013
Instabilities of core-shell heterostructured cylinders due to diffusions and epitaxy: Spheroidization and blossom of nanowires. Zbl 1162.74317
Duan, H. L.; Weissmüller, J.; Wang, Y.
2008
A generalized differential quadrature rule for bending analysis of cylindrical barrel shells. Zbl 1050.74058
Wu, T. Y.; Wang, Y. Y.; Liu, G. R.
2003
Impingement of filler dropelts and weld pool dynamics during gas metal arc welding process. Zbl 1064.76624
Wang, Y.; Tsai, Hai Lung
2001
Stochastic vibration model of gear transmission systems considering speed-dependent random errors. Zbl 0946.70505
Wang, Y.; Zhang, W. J.
1998
On deformations with constant modified stretches describing the bending of rectangular blocks. Zbl 0830.73010
Aron, M.; Wang, Y.
1995
Numerical solutions comparison for interval linear programming problems based on coverage and validity rates. Zbl 1427.65094
Lu, H. W.; Cao, M. F.; Wang, Y.; Fan, X.; He, L.
2014
Weak $${\mathcal WT}_2$$-class of differential forms and weakly $${\mathcal A}$$-harmonic tensors. Zbl 1240.35163
Gao, Hongya; Wang, Yanyan
2010
Robustness of non-linear stochastic optimal control for quasi-Hamiltonian systems with parametric uncertainty. Zbl 1287.93110
Wang, Y.; Ying, Z. G.; Zhu, W. Q.
2009
Numerical simulations of gas resonant oscillations in a closed tube using lattice Boltzmann method. Zbl 1143.80332
Wang, Y.; He, Y. L.; Li, Q.; Tang, G. H.
2008
Two-mode ILC with pseudo-downsampled learning in high frequency range. Zbl 1117.93045
Zhang, B.; Wang, D.; Ye, Y.; Wang, Y.; Zhou, K.
2007
Time-dependent Poiseuille flows of visco-elasto-plastic fluids. Zbl 1106.76011
Wang, Y.
2006
On a straightening of compressible, nonlinearly elastic, annular cylindrical sectors. Zbl 1001.74531
Aron, M.; Christopher, C.; Wang, Y.
1998
Remarks concerning the flexure of a compressible nonlinearly elastic rectangular block. Zbl 0835.73012
Aron, M.; Wang, Y.
1995
Numerical inverse isoparametric mapping in 3D FEM. Zbl 0638.73037
Murti, V.; Wang, Y.; Valliappan, S.
1988
Theoretical analysis for blow-up behaviors of differential equations with piecewise constant arguments. Zbl 1410.34184
Zhou, Y. C.; Yang, Z. W.; Zhang, H. Y.; Wang, Y.
2016
Some conclusion on unique $$k$$-list colorable complete multipartite graphs. Zbl 1397.05068
Wang, Yanning; Wang, Yanyan; Zhang, Xuguang
2013
Chaos and Hopf bifurcation of a finance system with distributed time delay. Zbl 1217.91225
Wang, Y.; Zhai, Y. H.; Wang, J.
2010
Nearly $$s$$-normal subgroups of a finite group. Zbl 1170.20015
Guo, W.; Wang, Y.; Shi, L.
2008
Mass modified outlet boundary for a fully developed flow in the lattice Boltzmann equation. Zbl 1200.76151
Tong, C. Q.; He, Y. L.; Tang, G. H.; Wang, Y.; Liu, Y. W.
2007
Simulation of two-dimensional oscillating flow using the lattice Boltzmann method. Zbl 1107.82370
Wang, Y.; He, Y. L.; Tang, G. H.; Tao, W. Q.
2006
A finite element approach to dynamic modeling of flexible spatial compound bar-gear systems. Zbl 1140.70447
Wang, Y.; Zhang, W. J.; Cheung, H. M. E.
2001
3D dynamic modelling of spatial geared systems. Zbl 1015.70004
Wang, Y.; Cheung, H. M. E.; Zhang, W. J.
2001
Transient characterization of flat plate heat pipes during startup and shutdown operations. Zbl 1065.80500
Wang, Y.; Vafai, K.
2000
The influence of the compiler on the cost of mathematical software - in particular on the cost of triangular factorization. Zbl 0312.68017
Parlett, B. N.; Wang, Y.
1975
Norm penalized joint-optimization NLMS algorithms for broadband sparse adaptive channel estimation. Zbl 1423.94026
Wang, Yanyan; Li, Yingsong
2017
A general zero attraction proportionate normalized maximum correntropy criterion algorithm for sparse system identification. Zbl 1423.93094
Li, Yingsong; Wang, Yanyan; Albu, Felix; Jiang, Jingshan
2017
Adaptive feedback synchronization of fractional-order complex dynamic networks. Zbl 1387.93100
Lei, Youming; Yang, Yong; Fu, Rui; Wang, Yanyan
2017
H $$\infty$$ observer-based sliding mode control for singularly perturbed systems with input nonlinearity. Zbl 1349.93138
Liu, Wei; Wang, Yanyan; Wang, Zhiming
2016
Fluctuating hydrodynamics methods for dynamic coarse-grained implicit-solvent simulations in LAMMPS. Zbl 1364.65172
Wang, Y.; Sigurdsson, J. K.; Atzberger, P. J.
2016
Multiplicity and uniqueness of positive solutions for nonhomogeneous semilinear elliptic equation with critical exponent. Zbl 1337.35054
Ba, Na; Wang, Yan-yan; Zheng, Lie
2016
Fixed point characterisation for exact and amenable action. Zbl 1436.43001
Dong, Z.; Wang, Y. Y.
2015
Computationally efficient banding of large covariance matrices for ordered data and connections to banding the inverse Cholesky factor. Zbl 1292.62082
Wang, Y.; Daniels, M. J.
2014
Strict plurisubharmonicity of Bergman kernels on generalized annuli. Zbl 1310.32006
Wang, Yanyan
2014
Bayesian modeling of the dependence in longitudinal data via partial autocorrelations and marginal variances. Zbl 1277.62088
Wang, Y.; Daniels, M. J.
2013
Global convergence for Cohen-Grossberg neural networks with discontinuous activation functions. Zbl 1261.34058
Wang, Yanyan; Zhou, Jianping
2012
Eikonal equation-based front propagation for arbitrary complex configurations. Zbl 1167.74051
Wang, Y.; Guibault, F.; Camarero, R.
2008
Robust adaptive control of synchronous generators with SMES unit via Hamiltonian function method. Zbl 1117.93040
Li, Shujuan; Wang, Y.
2007
Superlinearly convergent trust-region method without the assumption of positive-definite Hessian. Zbl 1139.90032
Zhang, J. L.; Wang, Y.; Zhang, X. S.
2006
Axisymmetric buckling of transversely isotropic circular and annular plates. Zbl 1097.74025
Xu, R. Q.; Wang, Y.; Chen, W. Q.
2005
Detrended fluctuation analysis of human brain electroencephalogram. Zbl 1209.92032
Pan, C. P.; Zheng, B.; Wu, Y. Z.; Wang, Y.; Tang, X. W.
2004
Responses of near-optimal, continuous horizontally curved beams to transit loads. Zbl 1235.74201
Wilson, J. F.; Wang, Y.; Threlfall, I.
1999
Subsea vehicle path planning using nonlinear programming and constructive solid geometry. Zbl 0875.93293
Wang, Y.; Lane, D. M.
1997
Radial deformations of cylindrical and spherical shells composed of a generalized Blatz-Ko material. Zbl 0925.73564
Wang, Y.; Aron, Marian
1996
Dynamic crack growth in TDCB specimens. Zbl 0857.73066
Wang, Y.; Williams, J. G.
1996
Uncertainty, variability, and sensitivity analysis in physiological pharmacokinetic models. Zbl 0907.62127
Krewski, D.; Wang, Y.; Bartlett, S.; Krishnan, K.
1995
Parallelizing Strassen’s method for matrix multiplication on distributed-memory MIMD architectures. Zbl 0839.68093
Chou, C.-C.; Deng, Y.-F.; Li, G.; Wang, Y.
1995
Variable structure, bilinear and intelligent control with power system application. Zbl 0774.93017
Mohler, R. R.; Wang, Y.; Zakrzewski, R. R.; Vedam, Rajkumar
1992
#### Cited by 1,058 Authors
79 Liu, Gui-Rong 27 Zhang, GuiYong 15 Nguyen-Thoi, Trung 14 Nguyen-Xuan, Hung 11 He, Zhicheng 11 Qi, Liqun 11 Xu, Xu 10 Han, Xu 10 Zhong, Zhihua 9 Cao, Jinde 9 Cui, Xiangyang 9 Li, Guangyao 7 Civalek, Ömer 7 Duan, Qinglin 7 Gu, Yuantong 7 Zheng, Hong 6 Chen, Lei 6 Li, Eric 6 Tang, Qian 6 Wang, Bingbing 6 Yang, Yongtao 5 Li, Xikui 5 Li, Yangyang 5 Molabahrami, Ahmad 5 Nie, Xiaobing 5 Rakkiyappan, Rajan 4 Belinha, Jorge 4 Cheng, Ai Guo 4 He, Yaling 4 Hu, Dean 4 Huang, Chuangxia 4 Huang, Zhenkun 4 Ling, Chen 4 Natal Jorge, Renato Manuel 4 Thai, Chien Hoang 4 Wang, Xinwei 4 Zhang, Hongwu 4 Zhao, Jianxing 3 Aron, Miles 3 Belytschko, Ted Bohdan 3 Bu, Changjiang 3 Chai, Yingbin 3 Dai, Kaiyu 3 Dinis, Lúcia M. J. S. 3 Ferreira, António Joaquim Mendes 3 Gao, Xin 3 Ghehsareh, Hadi Roohani 3 Guo, Lixin 3 Huang, Yujiao 3 Javili, Ali 3 Kaslik, Eva 3 Lal, Roshan 3 Lam, Khin-Yong 3 Li, Wei 3 Li, Xiaodi 3 Mohamad, Sannay 3 Nourbakhshnia, N. 3 Sakthivel, N. K. 3 Sang, Caili 3 Shidfar, Abdollah 3 Sivasundaram, Seenith 3 Tan, Manchun 3 Vervisch, Luc 3 von Estorff, Otto 3 Wang, Haotian 3 Wang, Jingyue 3 Wang, Yanyan 3 Wang, Yigang 3 Wang, Zhanshan 3 Wei, Guowei 3 Yao, Hongmei 3 Yao, Linquan 3 Yi, Shichao 3 Yoo, Chun Sang 3 Zeng, Kaiyang 3 Zhang, Huaguang 3 Zhang, Xinzhen 3 Zhang, Zhiqian 3 Zhu, Weiqiu 2 Abd-Elhameed, Waleed Mohamed 2 Ahlawat, Neha 2 Alipour, Mir Mohammad 2 Andakhshideh, A. 2 Babaei, Afshin 2 Bucher, Christian G. 2 Cao, Yang 2 Cen, Kefa 2 Char, Ming-I 2 Chen, Rongqian 2 Chi, Sheng-Wei 2 Coussement, Axel 2 Cui, Hao 2 Cui, Xiaoyue 2 Dai, Hui-Hui 2 Degrez, Gérard 2 Domingo, Pascale 2 Duong, Pham Luu Trung 2 Ersoy, Hakan 2 Fang, Chung 2 Fei, Shumin ...and 958 more Authors
#### Cited in 126 Serials
52 Engineering Analysis with Boundary Elements 45 International Journal of Computational Methods 25 International Journal for Numerical Methods in Engineering 23 Computer Methods in Applied Mechanics and Engineering 22 Applied Mathematics and Computation 18 Journal of Computational Physics 18 Computational Mechanics 13 Applied Mathematical Modelling 12 Computers and Fluids 12 Nonlinear Dynamics 10 International Journal of Heat and Mass Transfer 8 Computers & Mathematics with Applications 8 Neural Networks 8 Communications in Nonlinear Science and Numerical Simulation 7 Journal of Elasticity 7 Mathematics and Mechanics of Solids 6 Acta Mechanica 6 International Journal for Numerical Methods in Fluids 6 Journal of the Franklin Institute 6 Mathematical Problems in Engineering 5 Journal of Computational and Applied Mathematics 5 International Journal of Computer Mathematics 5 European Journal of Mechanics. A. Solids 5 Nonlinear Analysis. Real World Applications 5 Acta Mechanica Sinica 5 Frontiers of Mathematics in China 4 International Journal of Engineering Science 4 Journal of Fluid Mechanics 4 Chaos, Solitons and Fractals 4 Meccanica 4 Circuits, Systems, and Signal Processing 4 Journal of Scientific Computing 4 Archive of Applied Mechanics 4 International Journal of Robust and Nonlinear Control 4 Communications in Numerical Methods in Engineering 4 Abstract and Applied Analysis 4 Journal of Inequalities and Applications 4 Journal of Applied Mathematics 4 International Journal for Numerical Methods in Biomedical Engineering 3 Physica A 3 Continuum Mechanics and Thermodynamics 3 Chaos 3 Discrete Dynamics in Nature and Society 3 Engineering Computations 3 International Journal of Modern Physics C 2 International Journal of Systems Science 2 Journal of Engineering Mathematics 2 Journal of the Mechanics and Physics of Solids 2 Linear and Multilinear Algebra 2 Physics Letters. A 2 Automatica 2 Information Sciences 2 Mathematics and Computers in Simulation 2 Mechanics Research Communications 2 Applied Mathematics Letters 2 Journal of Global Optimization 2 European Journal of Operational Research 2 Linear Algebra and its Applications 2 Flow, Turbulence and Combustion 2 International Journal of Nonlinear Sciences and Numerical Simulation 2 Nonlinear Analysis. Modelling and Control 2 Journal of Computational Acoustics 2 Journal of Industrial and Management Optimization 2 Mathematical Modelling of Natural Phenomena 2 Symmetry 2 Arabian Journal for Science and Engineering 1 International Journal of Modern Physics B 1 International Journal of Control 1 International Journal for Numerical and Analytical Methods in Geomechanics 1 International Journal of Solids and Structures 1 International Journal of Theoretical Physics 1 Information Processing Letters 1 Journal of Applied Mathematics and Mechanics 1 Journal of Mathematical Analysis and Applications 1 Journal of Statistical Physics 1 Mathematical Biosciences 1 Mathematical Methods in the Applied Sciences 1 Algebra and Logic 1 Annales Polonici Mathematici 1 Calcolo 1 Collectanea Mathematica 1 International Journal of Circuit Theory and Applications 1 Journal of Optimization Theory and Applications 1 Mathematical Programming 1 Siberian Mathematical Journal 1 Cybernetics and Systems 1 Operations Research Letters 1 Applied Mathematics and Mechanics. (English Edition) 1 Graphs and Combinatorics 1 Computers & Operations Research 1 Numerical Methods for Partial Differential Equations 1 Neural Computation 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 SIAM Review 1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 1 Advances in Engineering Software 1 Archives of Control Sciences 1 Computational Optimization and Applications 1 SIAM Journal on Scientific Computing 1 Numerical Linear Algebra with Applications ...and 26 more Serials
#### Cited in 40 Fields
223 Mechanics of deformable solids (74-XX) 169 Numerical analysis (65-XX) 83 Fluid mechanics (76-XX) 54 Biology and other natural sciences (92-XX) 51 Ordinary differential equations (34-XX) 48 Systems theory; control (93-XX) 36 Partial differential equations (35-XX) 25 Classical thermodynamics, heat transfer (80-XX) 23 Operations research, mathematical programming (90-XX) 21 Computer science (68-XX) 20 Linear and multilinear algebra; matrix theory (15-XX) 17 Probability theory and stochastic processes (60-XX) 13 Dynamical systems and ergodic theory (37-XX) 11 Mechanics of particles and systems (70-XX) 9 Optics, electromagnetic theory (78-XX) 9 Statistical mechanics, structure of matter (82-XX) 5 Combinatorics (05-XX) 5 Approximations and expansions (41-XX) 5 Operator theory (47-XX) 4 Calculus of variations and optimal control; optimization (49-XX) 4 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 3 Difference and functional equations (39-XX) 3 Integral transforms, operational calculus (44-XX) 3 Integral equations (45-XX) 3 Global analysis, analysis on manifolds (58-XX) 3 Information and communication theory, circuits (94-XX) 2 Group theory and generalizations (20-XX) 2 Measure and integration (28-XX) 2 Statistics (62-XX) 2 Quantum theory (81-XX) 1 General and overarching topics; collections (00-XX) 1 Real functions (26-XX) 1 Functions of a complex variable (30-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Special functions (33-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Abstract harmonic analysis (43-XX) 1 Functional analysis (46-XX) 1 Differential geometry (53-XX) 1 Geophysics (86-XX)
## Linear Congruential Generator in R
Part 1 of 3 in the series Random Number Generation
A linear congruential generator (LCG) is a class of pseudorandom number generator (PRNG) algorithms used for generating sequences of random-like numbers. The generation of random numbers plays a large role in many applications, ranging from cryptography to Monte Carlo methods. Linear congruential generators are one of the oldest and most well-known methods for generating random numbers, primarily due to their speed, ease of implementation, and modest memory requirements. Other methods, such as the Mersenne Twister, are much more common in practical use today.
Linear congruential generators are defined by a recurrence relation:
$$\large{X_{i+1} = (aX_i + c) \space \text{mod} \space m}$$
There are many choices for the parameters $m$, the modulus, $a$, the multiplier, and $c$ the increment. Wikipedia has a seemingly comprehensive list of the parameters currently in use in common programs.
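Not every choice of parameters yields a long period, however. A classical result known as the Hull–Dobell theorem states that an LCG with $c \neq 0$ achieves the maximal period $m$ for every choice of seed if and only if:

$$\large{\gcd(c, m) = 1, \qquad a \equiv 1 \ (\text{mod} \ p) \ \text{for each prime} \ p \mid m, \qquad a \equiv 1 \ (\text{mod} \ 4) \ \text{if} \ 4 \mid m}$$

The ANSI C parameters used below satisfy all three conditions ($c = 12345$ is odd, and $a - 1 = 1103515244$ is divisible by 4), so that generator visits all $2^{32}$ states before repeating.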
#### Aside: ‘Pseudorandom’ and Selecting a Seed Number
Random number generators such as LCGs are known as 'pseudorandom' because they produce their output deterministically from a seed number: given the same seed, the generator yields the same sequence every time, so the values only appear random and the generator is not truly 'random.' The theory and optimal selection of a seed number are beyond the scope of this post; however, a common choice suitable for our application is the current system time in microseconds.
## A Linear Congruential Generator Implementation in R
The parameters we will use for our implementation of the linear congruential generator are the same as those of the ANSI C implementation (Saucier, 2000).
$$\large{m = 2^{32} \qquad a = 1103515245 \qquad c = 12345}$$
The following function is an implementation of a linear congruential generator with the given parameters above.
```r
lcg.rand <- function(n = 10) {
  rng <- vector(length = n)

  # Parameters of the ANSI C implementation
  m <- 2 ** 32      # modulus
  a <- 1103515245   # multiplier
  c <- 12345        # increment

  # Set the seed using the current system time in microseconds
  d <- as.numeric(Sys.time()) * 1000

  for (i in 1:n) {
    d <- (a * d + c) %% m
    rng[i] <- d / m
  }
  return(rng)
}
```
We can use the function to generate random numbers $U(0, 1)$.
```r
# Print 10 random numbers on the half-open interval [0, 1)
lcg.rand()
## [1] 0.4605103 0.6643705 0.6922703 0.4603930 0.1842995 0.6804419 0.8561535
## [8] 0.2435846 0.8236771 0.9643965
```
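Because the recurrence is fully deterministic, the same seed always reproduces the same sequence — useful for debugging and for reproducible simulations. The following minimal variant, which accepts an explicit `seed` argument (an addition for illustration; the original function always seeds from the system clock), makes this easy to verify:

```r
# Same LCG as above, but with a caller-supplied seed for reproducibility
lcg.rand.seeded <- function(n = 10, seed = 42) {
  rng <- vector(length = n)
  m <- 2 ** 32
  a <- 1103515245
  c <- 12345
  d <- seed
  for (i in 1:n) {
    d <- (a * d + c) %% m
    rng[i] <- d / m
  }
  return(rng)
}

# Identical seeds produce identical 'random' sequences
identical(lcg.rand.seeded(5, seed = 42), lcg.rand.seeded(5, seed = 42))
## [1] TRUE
```

One caveat worth noting: with $m = 2^{32}$, the product $aX_i$ can exceed $2^{53}$, the largest integer R's doubles represent exactly, so this implementation (like the one above) only approximates the exact integer recurrence. The output remains deterministic, but it is not bit-identical to a true integer LCG.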
We can also demonstrate how apparently 'random' the LCG is by plotting a sample generation in 3 dimensions. To do this, we generate three random vectors $x, y, z$ using our LCG above and plot them. The plot3D package is used to create the scatterplot, and the animation package is used to animate each scatterplot as the length of the random vectors, $n$, increases.
```r
library(plot3D)
library(animation)

n <- c(3, 10, 20, 100, 500, 1000, 2000, 5000, 10000, 20000)

saveGIF({
  for (i in 1:length(n)) {
    x <- lcg.rand(n[i])
    y <- lcg.rand(n[i])
    z <- lcg.rand(n[i])
    scatter3D(x, y, z, colvar = NULL, pch = 20, cex = 0.5, theta = 20, main = paste('n = ', n[i]))
  }
}, movie.name = 'lcg.gif')
```
As $n$ increases, the LCG appears to be ‘random’ enough as demonstrated by the cloud of points.
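Visual inspection can be supplemented with a quick statistical sanity check. For example, a one-sample Kolmogorov–Smirnov test compares the empirical distribution of the generated values against the uniform distribution (this is only a single check, not a substitute for a full randomness test battery such as Diehard or TestU01):

```r
# Test the generated sample against the uniform distribution on [0, 1]
u <- lcg.rand(10000)
ks.test(u, 'punif')
```

A large p-value indicates no evidence against uniformity; it says nothing about sequential independence, which is where poorly parameterized LCGs fail, as we will see next.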
## Linear Congruential Generators with Poor Parameters
The values chosen for the parameters $m$, $a$, and $c$ are critical in determining how 'random' the values produced by a linear congruential generator appear. This is one reason so many different parameter sets are in use today: there has not yet been a clear consensus on the 'best' parameters.
We can demonstrate how a poor choice of parameters leads to noticeably non-random output by creating a new LCG function with weak parameters.
lcg.poor <- function(n=10) {
  rng <- vector(length = n)
  # Parameters taken from https://www.mimuw.edu.pl/~apalczew/CFP_lecture3.pdf
  m <- 2048
  a <- 1229
  c <- 1
  d <- as.numeric(Sys.time()) * 1000
  for (i in 1:n) {
    d <- (a * d + c) %% m
    rng[i] <- d / m
  }
  return(rng)
}
Generating successively longer vectors using the ‘poor’ LCG and plotting as we did previously, we see the generated points are very sequentially correlated, and there doesn’t appear to be any ‘randomness’ at all as $n$ increases.
n <- c(3, 10, 20, 100, 500, 1000, 2000, 5000, 10000, 20000)
saveGIF({
  for (i in 1:length(n)) {
    x <- lcg.poor(n[i])
    y <- lcg.poor(n[i])
    z <- lcg.poor(n[i])
    scatter3D(x, y, z, colvar = NULL, pch = 20, cex = 0.5, theta = 20, main = paste('n = ', n[i]))
  }
}, movie.name = 'lcg_poor.gif')
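One way to quantify the weakness without plotting: with m = 2048 the generator has at most 2048 distinct states, so any sample larger than that simply replays the same short, correlated cycle. A hypothetical Python sketch confirms the cycle length is exactly m — these parameters actually satisfy the Hull–Dobell full-period conditions, so the problem is the tiny modulus and resulting structure, not a short period:

```python
def lcg_period(m=2048, a=1229, c=1, seed=0):
    """Count iterations of x <- (a*x + c) mod m until the state returns to the seed."""
    x = (a * seed + c) % m
    steps = 1
    while x != seed:
        x = (a * x + c) % m
        steps += 1
    return steps
```

With n = 20000 points, each coordinate therefore cycles through the same 2048 values nearly ten times, which is why the scatterplots look so structured.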
## References
Saucier, R. (2000). Computer Generation of Statistical Distributions (1st ed.). Aberdeen, MD. Army Research Lab.
• #### Miles McBain
August 24, 2017 at 5:54 pm
Thanks for this! This made the topic quite accessible. I got asked yesterday about facility in R for parallel LCGs that are guaranteed to be independent. I had no idea. I’m guessing the application was parallel MCMC or something similar. If you have any insight into this, you might consider putting it into the series.
• #### Aaron Schlegel
August 26, 2017 at 7:33 am
Thank you very much for your comment, Miles! I’m glad you liked the post! That is a great suggestion, I have seen the topic of LCGs in parallel come up a few times, but I haven’t spent much time digging into it so unfortunately I don’t have any insight on the topic at this point. It does look like an interesting topic and I think it would be a good challenge to code a parallel LCG so hopefully I can add that to the series in the near future =). Thanks again! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 11, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.555209755897522, "perplexity": 1370.5294326719215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676588961.14/warc/CC-MAIN-20180715183800-20180715203800-00513.warc.gz"} |
https://www.gamedev.net/forums/topic/455824-fixed-interesting-corruption-of-html-form-values/ |
# Fixed (interesting) Corruption of HTML form values.
## Recommended Posts
Original post:
Quote:
Hello. I've been testing some basic PHP with this form:
Quote:
When I submit the form, $_POST['key1'] is value \"1\" and $_POST['key2'] is v&l>u<e2. The " has been read by the browser as a literal " which is fine, sensible, and expected. What is NOT fine is that the " has become an escaped \" in the $_POST variable. It is part of the HTML standard that &amp; becomes & and &quot; becomes ", but AFAIK nothing says that PHP should start interpreting escape characters in the literal strings which arrive as its input. Apparently I'm wrong (unless the browser is doing it). I can start searching for and replacing \" with " but then I have to worry about occurrences of \\ or other escape characters, so is there a function that will do it all reliably? Also, given that the browser is reading the value of key1 as value "1", what is responsible for the transformation to value \"1\"?
It turns out this is an ineffective security measure intended to defeat injection attacks, it doesn't change the need to use mysql_real_escape_string or some corresponding function. It's called "magic quotes" and can be fixed by setting magic_quotes_gpc=Off in php.ini and restarting apache (there might be a .htaccess way to do this if you don't have access to php.ini) [Edited by - spraff on July 16, 2007 6:12:55 PM] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4495803117752075, "perplexity": 2850.562737390256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647885.78/warc/CC-MAIN-20180322131741-20180322151741-00582.warc.gz"} |
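For anyone unfamiliar with the feature: magic quotes effectively ran PHP's addslashes() over every GET/POST/cookie value. A rough Python imitation (hypothetical helper name) reproduces the corruption described above:

```python
def add_slashes(s):
    """Roughly mimic PHP addslashes(): backslash-escape ', ", backslash, and NUL."""
    out = []
    for ch in s:
        if ch in ("'", '"', "\\"):
            out.append("\\" + ch)
        elif ch == "\x00":
            out.append("\\0")
        else:
            out.append(ch)
    return "".join(out)

print(add_slashes('value "1"'))  # value \"1\"
```

Note that the characters in key2 pass through untouched — only quotes, backslashes, and NULs are escaped, which is exactly why this is inadequate as SQL escaping and mysql_real_escape_string is still required.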
https://www.physicsforums.com/threads/find-acceleration-of-the-mass-immidiately-after-rope-is-cut.258657/ | # Find acceleration of the mass immediately after rope is cut
1. Sep 23, 2008
### veronicak5678
1. The problem statement, all variables and given/known data
A 20 kg mass is suspended by two ropes. Rope 1 goes to the wall horizontally from the left side of the mass. Rope 2 goes from the top right corner of the mass to the wall at an angle of 30 degrees from the horizontal top of the mass.
a- If the mass is at rest, what is the tension force in rope 2?
b- determine the tension in rope 1.
c- Find acceleration of the mass immediately after rope 1 is cut.
2. Relevant equations
EF = ma
3. The attempt at a solution
1) T2y = w
T = w / sin 30 = mg / sin 30
= 392 N
2) ?
If I have done part 1 correctly, it seems like the tension in rope 1 would be 0?
2. Sep 23, 2008
### LowlyPion
If I have the picture correctly there is a rope on each side pulling correct?
While the net horizontal forces may be 0, there is Tension in each.
3. Sep 23, 2008
### veronicak5678
Yes, there is a rope on each side, but rope 1 is halfway down the left side and pulled straight out, horizontally, to the wall. Rope 2 is coming from the right top corner of the mass and pulling to the wall diagonally. The angle of 30 is measured from the horizontal level of the top of the mass to the rope 2. I hope that makes sense. Wish I had a scanner!
Last edited: Sep 23, 2008
4. Sep 23, 2008
### LowlyPion
I think I have it.
If you look at the Tension of the angled rope it has 2 components x and y.
You know the y component must be m*g because that's all there is holding it up. The Tension, then, obtained by dividing by sin 30, is 2*m*g.
So the horizontal component is 2*m*g*cos30 = (.866)*2*m*g
and the vertical of course is (1/2)*2*m*g = m*g.
So the horizontal Tension of one must equal the horizontal Tension of the other.
5. Sep 23, 2008
### veronicak5678
Why is the horizontal 2*m*g*cos30?
I get as far as T2y = 392 N. Isn't that independent of T2x?
6. Sep 23, 2008
### LowlyPion
You know that because there is no other vertical force component, ALL of the weight of the block MUST be carried by the vertical component.
Doesn't that translate at 30 degrees to the Horizontal the Tension in the rope as being m*g/.5 = 2*m*g?
The Tension vector is the Vector sum of the 2 components.
7. Sep 23, 2008
### veronicak5678
OK. So how do I state the total tension in rope 2? Add the components together?
8. Sep 23, 2008
### LowlyPion
Wait a minute. You already know the Total Tension. (You gave it already in your first post.) And you know the vertical component of the tension (Ty = m*g). So to find the horizontal component, that's cosθ * T2 = cosθ * (2mg) = Tx
9. Sep 23, 2008
### veronicak5678
I thought the 392N was the tension in just the y component of rope 2. Is that wrong?
10. Sep 23, 2008
### LowlyPion
That looks like the total Tension.
M*g = 20 *9.8 = 196N is the Ty
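Pulling the thread's numbers together in a quick numerical check (hypothetical variable names; the part (c) step — that a mass momentarily at rest on an inextensible rope can only accelerate perpendicular to the rope — is the standard textbook argument, not something stated in the thread):

```python
import math

m, g = 20.0, 9.8
theta = math.radians(30)       # rope 2's angle above the horizontal

T2 = m * g / math.sin(theta)   # part (a): the vertical component alone carries the weight
T1 = T2 * math.cos(theta)      # part (b): horizontal components of the two ropes balance

# part (c): just after rope 1 is cut the velocity is zero, so the acceleration
# is the component of g perpendicular to rope 2
a = g * math.cos(theta)

print(T2, T1, a)   # ≈ 392.0 N, 339.5 N, 8.49 m/s^2
```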
11. Sep 23, 2008
### veronicak5678
Oh! OK, I see what I was doing. My notes for this class are a mess because the professor rushes through everything. You've explained more to me today than he has all week. Thanks again. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9319627285003662, "perplexity": 1155.4709682371429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647498.68/warc/CC-MAIN-20180320150533-20180320170533-00004.warc.gz"} |
https://math.stackexchange.com/questions/409245/calculating-interest-rate-of-car-financing | # Calculating interest rate of car financing
I want a new car which costs $\$26{,}000$. But there's an offer to finance the car: immediate prepayment of $25\%$ of the original price. The amount left is financed with a loan: duration $5$ years, with an installment of $\$400$ at the end of every month.
So I need to calculate the rate of interest of this loan. Do I need Excel for this exercise? Or which formula could I use for this exercise?
You could use Excel (see below) or you could solve the equation $(2)$ below numerically, e.g. using the secant method.
We have a so called uniform series of $n=60$ constant installments $m=400$.
Let $i$ be the nominal annual interest rate. The interest is compounded monthly, which means that the number of compounding periods per year is $12$. Consequently, the monthly installments $m$ are compounded at the interest rate per month $i/12$. The value of $m$ in the month $k$ is equivalent to the present value $m/(1+i/12)^{k}$. Summing in $k$, from $1$ to $n$, we get a sum that should be equal to $$P=26000-\frac{26000}{4}=19500.$$ This sum is the sum of a geometric progression of $n$ terms, with ratio $1+i/12$ and first term $m/(1+i/12)$. So
$$\begin{equation*} P=\sum_{k=1}^{n}\frac{m}{\left( 1+\frac{i}{12}\right) ^{k}}=\frac{m}{1+\frac{ i}{12}}\frac{\left( \frac{1}{1+i/12}\right) ^{n}-1}{\frac{1}{1+i/12}-1}=m \frac{\left( 1+\frac{i}{12}\right) ^{n}-1}{\frac{i}{12}\left( 1+\frac{i}{12} \right) ^{n}}.\tag{1} \end{equation*}$$
The ratio $P/m$ is called the series present-worth factor (uniform series)$^1$.
For $P=19500$, $m=400$ and $n=5\times 12=60$ we have:
$$\begin{equation*} 19500=400 \frac{\left( 1+\frac{i}{12}\right) ^{60}-1}{\frac{i}{12}\left( 1+\frac{i}{12} \right) ^{60}}.\tag{2} \end{equation*}$$
I solved numerically $(2)$ for $i$ using SWP and got $$\begin{equation*} i\approx 0.084923\approx 8.49\%.\tag{3} \end{equation*}$$
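If you don't have SWP (or Excel) to hand, equation $(2)$ can be solved with a few lines of bisection — the present value on the right-hand side is strictly decreasing in $i$, so any bracketing interval works. A hypothetical Python sketch:

```python
def present_value(i, m=400.0, n=60, periods=12):
    """Right-hand side of equation (2): PV of n level payments at nominal annual rate i."""
    r = i / periods
    return m * ((1 + r) ** n - 1) / (r * (1 + r) ** n)

def solve_rate(P=19500.0, lo=1e-4, hi=1.0, tol=1e-10):
    """Bisect on i: PV decreases as i grows, so keep the half that still brackets P."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if present_value(mid) > P:
            lo = mid   # rate too low -> PV too high
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_rate())   # ≈ 0.084923, i.e. about 8.49%
```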
ADDED. Computation in Excel for the principal $P=19500$ and interest rate $i=0.084923$ computed above. I used a Portuguese version, that's why the decimal values show a comma instead of the decimal point.
• The Column $k$ is the month ($1\le k\le 60$).
• The 2nd. column is the amount $P_k$ still to be payed at the beginning of month $k$.
• The 3rd. column is the interest $P_ki/12$ due to month $k$.
• The 4th. column is the sum $P_k+P_ki/12$.
• The 5th column is the installment payed at the end of month $k$.
The amount $P_k$ satisfies $$P_{k+1}=P_k+P_ki/12-m.$$ We see that at the end of month $k=60$, $P_{60}+P_{60}i/12=400=m$. The last installment $m=400$ at the end of month $k=60$ balances entirely the remaining debt, which is also $400$. We could find $i$ by trial and error. Start with $i=0.01$ and let the spreadsheet compute the table values, until we have in the last row exactly $P_{60}+P_{60}i/12=400$.
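The same recursion is a three-line loop in code, which makes the trial-and-error procedure unnecessary; plugging in the rate from $(3)$, the balance after the 60th payment comes out at essentially zero (hypothetical sketch):

```python
def amortize(P=19500.0, i=0.084923, m=400.0, months=60):
    """Replay the spreadsheet recursion P_{k+1} = P_k + P_k*i/12 - m."""
    for _ in range(months):
        P = P + P * i / 12 - m
    return P

print(amortize())   # ≈ 0 (within rounding of the quoted rate)
```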
--
$^1$ James Riggs, David Bedworwth and Sabah Randdhava, Engineering Economics,McGraw-Hill, 4th. ed., 1996, p.43.
An approximate solution can be obtained by using continuously-compounded (rather than monthly-compounded) interest.
Let
• $i$ = the nominal annual interest rate
• $P$ = the principal of the loan
• $m$ = the monthly payment amount
• $N$ = the term of the loan, in years
Let $B(t)$ = the remaining balance of the loan after $t$ years. Then $B'(t)$ = (annualized interest) - (annualized payments) = $i \cdot B(t) - 12m$. Furthermore, we have the initial condition $B(0) = P$, and the payoff condition $B(N) = 0$.
Solving the differential equation $B'(t) = i \cdot B(t) - 12m$ gives $B(t) = Ce^{it} + \frac{12m}{i}$. The initial condition $B(0) = P$ gives $C = P - \frac{12m}{i}$. Solving $B(N) = 0$ for $m$ gives the continuous-interest amortization formula:
$m = \frac{Pi}{12 (1 - e^{-iN})}$
Plugging in $P = 19500$, $m = 400$, and $N = 5$ gives you the equation:
$(19500 i - 4800) e^{5i} = -4800$
which can't be solved algebraically, but solving it numerically gives $i \approx 8.61\%$.
Edit: An algebraic approximation for the solution can be obtained by using the Taylor series $e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$
With the first-degree approximation $e^{-iN} \approx 1 - iN$, the $i$'s cancel out and give $m = \frac{P}{12N}$. This gives you the monthly payment if there were no interest, but it's not very useful for finding the interest rate.
With the second-degree approximation $e^{-iN} \approx 1 - iN + \frac{(iN)^2}{2}$, you get $i \approx \frac{12mN-P}{6mN^2}$. In your specific problem, that gives $i \approx 7.50\%$.
With the third-degree approximation $e^{-iN} \approx 1 - iN + \frac{(iN)^2}{2} - \frac{(iN)^3}{6}$, you get the quadratic equation $(2mN^3)i^2+(-6mN^2)i+(12mN-P) = 0$. Use the quadratic formula. In your problem, you get the two solutions $i \approx 8.79\%$ or $i \approx 51.21\%$. The first one is much more accurate.
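These approximations are easy to sanity-check numerically. The sketch below (hypothetical, mirroring the formulas above) reproduces the 7.50% and 8.79%/51.21% figures, and also verifies that $i \approx 8.61\%$ nearly zeroes the continuous-interest equation:

```python
import math

P, m, N = 19500.0, 400.0, 5

# second-degree Taylor approximation: i ~ (12mN - P) / (6mN^2)
i2 = (12 * m * N - P) / (6 * m * N ** 2)

# third-degree approximation: (2mN^3) i^2 + (-6mN^2) i + (12mN - P) = 0
a, b, c = 2 * m * N ** 3, -6 * m * N ** 2, 12 * m * N - P
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# residual of (19500 i - 4800) e^{5i} + 4800 at the quoted solution i ~ 8.61%
resid = (P * 0.0861 - 12 * m) * math.exp(5 * 0.0861) + 12 * m

print(i2, roots, resid)   # ≈ 0.075, [0.0879, 0.5121], ~0
```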
• +1 In this blog post Timothy Gowers discussed the following problem: Suppose for simplicity that the interest rate for an interest-only mortgage would be 5% and that this rate never changes. If I take out a repayment mortgage of £50,000 and pay £500 a month, then roughly how long will it take me to pay off the mortgage? where the discrete problem (payments once a month) is replaced by a continuous one (money leaking out of my bank account at a constant rate), as in your answer. – Américo Tavares Jun 5 '13 at 17:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7884907722473145, "perplexity": 7182.260454428933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00348.warc.gz"} |
https://research.tue.nl/en/publications/redox-states-of-well-defined-pi-conjugated-oligothiophenes-functi | Redox states of well-defined pi-conjugated oligothiophenes functionalized with poly(benzyl ether) dendrons
J.J. Apperloo, R.A.J. Janssen, P.R.L. Malenfant, L. Groenedaal, J.M.J. Fréchet
93 Citations (Scopus)
Abstract
The redox states of a series of well-defined hybrid dendrimers based on oligothiophene cores and poly(benzyl ether) dendrons have been studied using cyclic voltammetry and variable-temperature UV/visible/near-IR spectroscopy. The oxidation potentials and the electronic transitions of the neutral, singly oxidized, and doubly oxidized states of these novel hybrid materials have been determined as a function of oligothiophene conjugation length varying between 4 and 17 repeat units. The attachment of poly(benzyl ether) dendritic wedges at the termini of these lengthy oligothiophenes considerably enhances their solubility, thus enabling the first detailed investigation of the electronic structure of oligothiophenes having 11 and 17 repeat units with minimal β-substitution. In the case of the undecamer and heptadecamer, we find that the dicationic state consists of two individual polarons, rather than a single bipolaron. The effect of the dendritic poly(benzyl ether) solubilizers on the properties of the redox states varies with the oligothiophene length and dendron size. More specifically, we observe a kinetic limit to the electrochemical oxidation of the oligothiophene core when the dendron is large compared to the electrophore. Finally, we have observed the first example of self-complexation of cation radicals via π-dimerization leading to the formation of dendritic supramolecular assemblies.
Original language English 7042-7051 9 Journal of the American Chemical Society 122 29 https://doi.org/10.1021/ja994259x Published - 2000 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8467152714729309, "perplexity": 7387.5539400816815}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00501.warc.gz"} |
http://www.chegg.com/homework-help/questions-and-answers/circuit-input-voltage-source-x-t-12u-t-u-t-unit-step-function-time-z1-1-ohm-z2-1-henry11-w-q717136 | ## RLC Circuits
In circuit A below, the input voltage source is x(t) = 12u(-t) where u(t) is the unit step function of time.
Also Z1 = 1 Ohm and Z2 = 1 Henry.
1.1 Write a first-order differential equation for the output y(t).
1.2 Derive y(t) for t < 0.
1.3 Derive y(t) for t > 0.
1.4 Write an expression for y(t) for all time. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924688994884491, "perplexity": 5704.20085954481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00032-ip-10-60-113-184.ec2.internal.warc.gz"} |
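The circuit diagram isn't reproduced here, so assume the common textbook reading: a series R-L loop driven by x(t), with y(t) taken across the inductor. Under that assumption the inductor current is continuous, so at t = 0 the output jumps to −12 V and then decays as y(t) = −12e^(−t)u(t). A hypothetical forward-Euler simulation checks this:

```python
import math

R, L, dt = 1.0, 1.0, 1e-4   # Z1 = 1 ohm, Z2 = 1 henry (assumed series topology)

def x(t):
    return 12.0 if t < 0 else 0.0   # x(t) = 12 u(-t)

# For t < 0 the circuit is in DC steady state: di/dt = 0, so i = x/R = 12 A and y = 0.
i = 12.0

# Integrate forward from t = 0; the output is y(t) = L di/dt = x(t) - R*i.
ys = []
for step in range(10001):            # simulate 0 <= t <= 1 s
    t = step * dt
    ys.append(x(t) - R * i)
    i += dt * (x(t) - R * i) / L     # di/dt = (x - R*i)/L

print(ys[0], ys[-1])   # -12.0 and ≈ -12/e ≈ -4.41
```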
https://quant.stackexchange.com/questions?tab=newest&pagesize=30 | # All Questions
13,534 questions
11 views
30 views
### Using Python Quantlib's FittedBondDiscountCurve as Evaluator of Parametric Curve - Errors
I am using Quantlib's FittedBondDiscountCurve in Python 3.7 and setting MaxIterations to 0, and giving a guess_solution, which then turns the routine into an evaluator for the parametric form I choose,...
38 views
### SDE Parameter Estimation
Have a question about "How to estimate parameters for SDE with multiple Brownian Motions ?" Let's say $X_t$ follows the process: $dX_t=\mu dt+\sigma_1 dW_t^1 + \sigma_2 dW_t^2$ I think I've checked ...
61 views
### Stochastic Vol Mathematical derivation [on hold]
I want to understand the mathematical steps done. Can someone please simplify the derivation of d(pi) from Pi? Thanks in advance.
22 views
### Stochastic Volatility and Sticky Delta
"Stochastic volatility models can be thought of as sticky delta model. And Local volatility model as sticky Strike." Please help me understand how the author has reached this conclusion.
19 views
### Implied vol vs Realised vol on Event Days
How implied vol varies vis-a-vis realised vol on an event days?
75 views
### Calculation cross-currency basis
I am trying to calculate cross-currency basis on the 3-month horizon for a certain set of currencies. The formula should be $ccb = F/S (1+y_{\text{foreign currency}}) - (1+y_{\text{USD}})$ where $y_{\text{USD}}$ is Libor ...
18 views
### ATM Curve Construction: Short-Dates
Please explain example 1 and example 2. In example 1 how does 6 appear in the square root function. And in example 2 it is written:" Assume it is known that spot will be completely static during the ...
34 views
It's written in a book by Giles Hewitt : " The bid-offer spread quoted on a Strangle in volatility terms will usually be wider than the ATM spread to the same maturity because strikes away from the ...
16 views
### Where to find the components of an index and how to replicate it by subset selection?
I am interested in replicating the performance of the eurostoxx 50 index using different statistical methods. That's what ETFs do, right? How to replicate an index using subset selection? I think I ...
35 views
### LP for max stress test
I'm trying to find a solution to the following problem: Assume a portfolio of $n$ zero coupon bonds mapped in risk by their respective DV01. Assume that the ZC portfolio created cannot exceed max ...
58 views
### Algorithmic Trading Competition at MIT
Has anyone participated in MIT trading competition before(traders@mit)? Wondering what type of data are used-tick data or bar data-and are participants connected to a web socket? Are we allowed to ...
24 views
67 views
### Bloomberg Ticker mapping with Reuters RIC
I am trying to map Bloomberg ticker into Reuters one. For example this one: EDZ3C 96.625 COMDT Few years ago aforementioned BBG ticker would be mapped to Reuters ...
42 views
### Aggregating quotes data for different time frames
I need to aggregate data for a higher time frame. I have data for 1 min time frame as follows ...
87 views
Are all FX trades ( RR, BF, ATM)quoted in implied vol term delta neutral trades? If trades are not delta neutral at the initiation does that mean it is speculative trading? Why/ why not?
123 views
### Finance: Portfolio - Long Short Portfolio construction
I am trying to construct a Long / Short portfolio in R. Say I have two portfolios Tech and Mature and I want to go long on the <...
48 views
48 views
### How to compute return series for a German government bond with a 0% coupon?
Recently, the German government issued a long-dated bond with a 0% coupon. I'm trying to implement a historical VaR model and would like to know the best way to model the historical returns of this ...
37 views
### Correlation coefficient without cash flows?
I'm an intern at a company and one of our tasks is to calculate the the probability of default of both participants of a Swap(a Client and a Bank), for which we first need the correlation coefficient ...
26 views
...
59 views
### Frankfurt stock exchange companies
I can't seem to find all the symbols for companies traded at Frankfurt stock exchange, presented as csv (or any downloadable format). Could you help me?
158 views
### Determining if a time series is random
I originally posted this in the Data Science Stack Exchange. Another poster suggested I post it here. The idea would be to identify "orderly" segments within a market time series and use them to ...
49 views
Are all trades quoted in implied vol terms delta neutral trades? If trades are not delta neutral at the initiation does that mean it is speculative trading? Why/ why not?
61 views
### What is the difference between Cost of Currency Hedging and the Price of a Currency Pair Forward?
I am looking at Reuters Datastream and all they seem to provide is the settlement price of the CME EURGBP contract (which more or less equals current spot). But what does it actually cost me to ...
39 views
### About Dual Delta of FX option in the paper: FX volatility smile construction by Dimitri Reiswich & Uwe Wystup
In the paper: FX volatility smile construction by Dimitri Reiswich & Uwe Wystup. It mentions the computation of premium-adjusted spot delta as follows (Page 6): As a beginner of FX option, I ...
32 views
### How can I determine whether a UK company trades internationally
Can anyone think of ways to determine whether a UK company trades internationally? I have seen that possibly if the have 'GB' at the start of their VAT number. Can any financial ratios signal this?
41 views
### Searching for two papers of H.Leland with regards to capital structure
I am searching for two papers of H. Leland which I assume previously were online, as many published papers have cited them. The first work (Lecture notes) extends the Leland(1994a) model by ...
66 views
### Relationship between ROE and IRR
In the textbook I read the following: We can increase the present value of a share of common stock with a new investment only if $ROE > r$ , where $r$ is a discount rate (capitalization rate). ...
39 views
### Compound Plus Simple Interest Rates: Convert one expression to a sum of expressions
Suppose you have a fixed compound interest rate Ci (say 1%), a fixed simple interest rate Si (say 2%), and a total of N months (say 24). So, if you have a start value V (say 100 dollars), at the end ...
75 views
I'm setting up and following a pair trading operation by the method of summing the distances squared (SSD). After determining the best pairs, I have to track the spread between the normalized prices. ...
I am new to the pricing of bonds: Suppose that I would like to price a floating-rate bond with par value \$100, with maturity at$T$years from now, paying coupons semi-annually. Suppose that$r_{n-... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5220891833305359, "perplexity": 3677.4702272223367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660829.5/warc/CC-MAIN-20191015231925-20191016015425-00512.warc.gz"} |
https://ccssmathanswers.com/eureka-math-geometry-module-3-lesson-2/ | Eureka Math Geometry Module 3 Lesson 2 Answer Key
Engage NY Eureka Math Geometry Module 3 Lesson 2 Answer Key
Eureka Math Geometry Module 3 Lesson 2 Exploratory Challenge/Exercise Answer Key
Exercise 1.
Two congruent triangles are shown below.
a. Calculate the area of each triangle.
$$\frac{1}{2}$$(12.6)(8.4) = 52.92
b. Circle the transformations that, if applied to the first triangle, would always result in a new triangle with the same area:
Two congruent figures have equal area. If two figures are congruent, it means that there exists a transformation that maps one figure onto the other completely. For this reason, it makes sense that they would have equal area because the figures would cover the exact same region.
Exercise 2.
a. Calculate the area of the shaded figure below.
2$$\left(\frac{1}{2}\right)$$(3)(3) = 9
7(3) = 21
Area = 9 + 21 = 30
The area of the shaded figure is 30.
b. Explain how you determined the area of the figure.
First, I realized that the two shapes at the ends of the figure were triangles with a base of 3 and a height of 3 and that the shape in the middle was a rectangle with dimensions 3 × 7. To find the area of the shaded figure, I found the sum of all three shapes.
Exercise 3.
Two triangles ∆ ABC and ∆ DEF are shown below. The two triangles overlap forming ∆ DGC.
a. The base of figure ABGEF is composed of segments of the following lengths: AD = 4, DC = 3, and CF = 2. Calculate the area of the figure ABGEF.
The area of ∆ ABC: $$\frac{1}{2}$$(4)(7) = 14
The area of ∆ DEF: $$\frac{1}{2}$$(2)(5) = 5
The area of ∆ DGC: $$\frac{1}{2}$$(0.9)(3) = 1.35
The area of figure ABGEF is 14 + 5 – 1.35, or 17.65.
b. Explain how you determined the area of the figure.
Since the area of ∆ DGC is counted twice, It needs to be subtracted from the sum of the overlapping triangles.
Exercise 4.
A rectangle with dimensions 21. 6 × 12 has a right triangle with a base 9. 6 and a height of 7. 2 cut out of the rectangle.
a. Find the area of the shaded region.
The area of the rectangle: (12)(21.6) = 259.2
The area of the triangle: $$\frac{1}{2}$$(7. 2)(9. 6) = 34.56
The area of the shaded region is 259.2 – 34.56, or 224.64.
b. Explain how you determined the area of the shaded region.
I subtracted the area of the triangle from the area of the rectangle to determine the shaded region.
Eureka Math Geometry Module 3 Lesson 2 Problem Set Answer Key
Question 1.
Two squares with side length 5 meet at a vertex and together with segment AB form a triangle with base 6 as shown. Find the area of the shaded region.
The altitude of the isosceles triangle splits it into two right triangles, each having a base of 3 units in length and hypotenuse of 5 units in length. By the Pythagorean theorem, the height of the triangles must be 4 units in length. The area of the isosceles triangle is 12 square units. Since the squares and the triangle share sides only, the sum of their areas is the area of the total figure. The areas of the square regions are each 25 square units, making the total area of the shaded region 62 square units.
Question 2.
If two 2 × 2 square regions S1 and S2 meet at midpoints of sides as shown, find the area of the region S1 ∪ S2.
The area of S1 ∩ S2 = 1 because it is a 1 × 1 square region.
Area(S1) = Area(S2) = 4
By Property 3, the area of S1 ∪ S2 = 4 + 4 – 1 = 7.
Question 3.
The figure shown is composed of a semicircle and a non-overlapping equilateral triangle, and contains a hole that is also composed of a semicircle and a non-overlapping equilateral triangle. If the radius of the larger semicircle is 8, and the radius of the smaller semicircle is $$\frac{1}{3}$$ that of the larger semicircle, find the area of the figure.
The area of the large semicircle: Area = $$\frac{1}{2} \pi \cdot 8^{2}$$ = 32π
The area of the smaller semicircle: Area = $$\frac{1}{2}$$ π $$\left(\frac{8}{3}\right)^{2}$$ = $$\frac{32}{9}$$ π
The area of the large equilateral triangle: Area = $$\frac{1}{2} \cdot 16 \cdot 8 \sqrt{3}=64 \sqrt{3}$$
The area of the smaller equilateral triangle: Area = $$\frac{1}{2} \cdot \frac{16}{3} \cdot \frac{8}{3} \sqrt{3}=\frac{64}{9} \sqrt{3}$$
Total Area:
Total area = 32π – $$\frac{32}{9}$$π + 64√3 – $$\frac{64}{9}$$√3
Total area = $$\frac{256}{9} \pi+\frac{512}{9} \sqrt{3}$$ ≈ 188
The area of the figure is approximately 188.
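The closed-form total can be sanity-checked numerically. A short Python sketch using the given radii (and the fact that an equilateral triangle built on a diameter of length 2r has area r²√3):

```python
import math

R_large = 8
R_small = R_large / 3  # given as one third of the larger radius

# Semicircle areas: (1/2) * pi * r^2
semi_large = 0.5 * math.pi * R_large**2   # 32*pi
semi_small = 0.5 * math.pi * R_small**2   # (32/9)*pi

# Equilateral triangle on a diameter 2r: side 2r, height r*sqrt(3),
# so its area is (1/2) * (2r) * (r*sqrt(3)) = r^2 * sqrt(3)
tri_large = R_large**2 * math.sqrt(3)     # 64*sqrt(3)
tri_small = R_small**2 * math.sqrt(3)     # (64/9)*sqrt(3)

total = (semi_large - semi_small) + (tri_large - tri_small)
print(round(total))  # 188
```

The numeric value agrees with the exact answer $$\frac{256}{9}\pi + \frac{512}{9}\sqrt{3}$$.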
Question 4.
Two square regions A and B each have Area(8). One vertex of square B is the center point of square A. Can you find the area of A ∪ B and A ∩ B without any further information? What are the possible areas?
Rotating the shaded area about the center point of square A by a quarter turn three times gives four congruent non-overlapping regions. Each region must have area one-fourth the area of the square. So, the shaded region has Area(2).
Area(A ∪ B) = 8 + 8 – 2 = 14
Area(A ∩ B) = 2
Question 5.
Four congruent right triangles with leg lengths a and b and hypotenuse length c are used to enclose the green region in Figure 1 with a square and then are rearranged inside the square leaving the green region in Figure 2.
a. Use Property 4 to explain why the green region in Figure 1 has the same area as the green region in Figure 2.
The white polygonal regions in each figure have the same area, so the green region (difference of the big square and the four triangles) has the same area in each figure.
b. Show that the green region in Figure 1 is a square, and compute its area.
Each vertex of the green region is adjacent to the two acute angles in the congruent right triangles whose sum is 90°. The adjacent angles also lie along a line (segment), so their sum must be 180°. By addition, it follows that each vertex of the green region in Figure 1 has a degree measure of 90°. This shows that the green region is at least a rectangle.
The green region was given as having side lengths of c, so together with having four right angles, the green region must be a square.
c. Show that the green region in Figure 2 is the union of two non-overlapping squares, and compute its area.
The congruent right triangles are rearranged such that their acute angles are adjacent, forming a right angle. The angles are adjacent to an angle at a vertex of an a × a green region, and since the angles are all adjacent along a line (segment), the angle in the green region must then be 90°. If the green region has four sides of length a, and one angle is 90°, the remaining angles must also be 90°, and the region a square.
A similar argument shows that the green b × b region is also a square. Therefore, the green region in Figure 2 is the union of two non-overlapping squares. The area of the green region is then $$a^{2}+b^{2}$$.
d. How does this prove the Pythagorean theorem?
Because we showed the green regions in Figures 1 and 2 to be equal in area, the sum of the areas in Figure 2, $$a^{2}+b^{2}$$, must therefore be equal to the area of the green square region in Figure 1, $$c^{2}$$. The lengths a, b, and c were given as the two legs and hypotenuse of a right triangle, respectively, so the above line of questions shows that the sum of the squares of the legs of a right triangle is equal to the square of the hypotenuse.
Eureka Math Geometry Module 3 Lesson 2 Exit Ticket Answer Key
Question 1.
Wooden pieces in the following shapes and sizes are nailed together to create a sign in the shape of an arrow. The pieces are nailed together so that the rectangular piece overlaps with the triangular piece by 4 in. What is the area of the region in the shape of the arrow?
Area(Arrow) = Area(Rectangle) + Area(Triangle) – Area(Overlap)
Area(Arrow) = 144 + 104 – 24
Area(Arrow) = 224
The area of the region in the shape of the arrow is 224 in².
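The inclusion–exclusion step can be written out directly; a trivial Python check using the given areas:

```python
# Inclusion-exclusion for the arrow sign: the overlap is counted in both
# pieces, so it must be subtracted once.
rect = 144     # rectangular piece, in^2
tri = 104      # triangular piece, in^2
overlap = 24   # region where the two pieces overlap, in^2

arrow = rect + tri - overlap
print(arrow)  # 224
```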
Question 2.
A quadrilateral Q is the union of two triangles T1 and T2 that meet along a common side as shown in the diagram. Explain why Area(Q) = Area(T1) + Area(T2).
Q = T1 ∪ T2, so Area(Q) = Area(T1) + Area(T2) – Area(T1 ∩ T2)
Since T1 ∩ T2 is a line segment, the area of T1 ∩ T2 is 0.
Area(T1) + Area(T2) – Area(T1 ∩ T2) = Area(T1) + Area(T2) – 0
Area(T1) + Area(T2) – Area(T1 ∩ T2) = Area(T1) + Area(T2)
https://www.chocolatesparalucia.com/tag/area-function/ | ## Calculus of resonances in an uniform acoustic tube
We assume that the glottal end is closed, but the mouth is open. This is the configuration we are referring to:
The acoustic tube is uniform, and its length is L. The glottis, located at x=-L, is closed (infinite impedance) and the mouth, located at x=0, is open (impedance zero). Now, pressure variation p(x) along this uniform acoustic tube is expressed as:
$latex \frac{d^2p}{dx^2} + \left(\frac{2\pi f}{c}\right)^2p = 0 ~~(I)$
where f represents frequency in Hz, and c is the speed of sound: $latex 3.53 \times 10^4 cm/s$ at 37° C.
According to the boundary conditions (the impedances at both ends), the solution is:
$latex p(x) = P_m \sin{\frac{2\pi f}{c}x} ~~(II)$
where $latex P_m$ is the peak in sound pressure. On the other hand, we have a relation between pressure and volume velocity
$latex \frac{dp}{dx} = -\frac{j2\pi f \rho}{A}U ~~(III)$
$latex A$ is a constant representing the tube’s area. Now, volume velocity can be expressed as
$latex U(x) = jP_m \frac{A}{\rho c} \cos{\frac{2\pi f}{c}x} ~~(IV)$
where $latex \rho$ equals the average atmospheric density ($latex 1.14 \times 10^{-3}\, gm/cm^3$ at 37°C).
As U(−L) = 0, resonances Fn of the acoustic tube are
$latex F_n = \frac{2n - 1}{4}\frac{c}{L} ~~(V)$
where n=1, 2, 3… And that’s it. We can see that the area function does not affect the location of resonances. Finally, remember that, in average, the male oral tract has a length of 16.9 cm, and the female tract has an average length of 14.1 cm. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9543006420135498, "perplexity": 969.7965485823664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00020.warc.gz"} |
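As a quick sanity check of equation (V), the short Python sketch below computes the first three resonances for the two average tract lengths quoted above, using the given c = 3.53 × 10⁴ cm/s:

```python
# Resonances of a uniform tube closed at one end and open at the other:
# Fn = (2n - 1) * c / (4 * L)
c = 3.53e4  # speed of sound in cm/s at 37 deg C (value quoted in the text)

def resonance(n, L):
    """n-th resonance (Hz) of a tube of length L (cm), closed-open."""
    return (2 * n - 1) * c / (4 * L)

for L, label in [(16.9, "male"), (14.1, "female")]:
    f1, f2, f3 = (resonance(n, L) for n in (1, 2, 3))
    print(f"{label}: F1={f1:.0f} Hz, F2={f2:.0f} Hz, F3={f3:.0f} Hz")
```

Note that, as equation (V) shows, the resonances of a closed-open tube fall at odd multiples of the fundamental.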
http://tex.stackexchange.com/questions/101232/table-alignment | # Table alignment
Consider the following example:
Code
\documentclass{article}
\usepackage{siunitx}
\usepackage{booktabs,dcolumn,array}
\begin{document}
\begin{table}
\centering
\toprule
\midrule
& \si{\hour} & \si{\minute} & \si{\s} & \ensuremath{^\circ} & \ensuremath{\prime} & \ensuremath{\prime\prime} \\
\midrule
11{:}40{:}28 & 00 & 03 & 45.48 & 18 & 40 & 03.78 \\
\bottomrule
\end{tabular}
\end{table}
\end{document}
Why is the s not centered above 45.48 and the \prime\prime not centered above 03.78, and how do I achieve this?
-
I'd use siunitx' S-columns and maybe \sisetup{table-parse-only}. BTW: there's also \si{\arcminute} and \si{\arcsecond}. – cgnieder Mar 6 '13 at 22:41
I will try to do this. Thank you for the hints. – Svend Tveskæg Mar 6 '13 at 22:52
@cgnieder Could you maybe show me how to use the S-columns in this case, since I am not exactly sure how to use it? (My tries give a wrong alignment.) – Svend Tveskæg Mar 6 '13 at 22:59
dcolumn columns line up entries without a . as if they are integers, so your headings are right aligned to the position of the decimal point in the column. You can use \multicolumn{1} to supply different formatting for the cell:
\documentclass{article}
\usepackage{siunitx}
\usepackage{booktabs,dcolumn,array}
\newcolumntype{d}[1]{D{.}{.}{#1}}
\begin{document}
\begin{table}
\centering
\caption{Something.}
\label{tbl:1}
\toprule
\midrule
& \si{\hour} & \si{\minute} & \multicolumn{1}{c !{\quad}}{\si{\s}} & \ensuremath{^\circ} & \ensuremath{\prime} & \multicolumn{1}{c}{\ensuremath{\prime\prime} }\\
\midrule
11{:}40{:}28 & 00 & 03 & 45.48 & 18 & 40 & 03.78 \\
\bottomrule
\end{tabular}
\end{table}
This is Table~\ref{tbl:1}.
\end{document}
As an aside since you are loading siunitx anyway you may want to use its S column rather than my dcolumn code.
-
Very nice, indeed! – Svend Tveskæg Mar 6 '13 at 22:48
@SvendMortensen by the way it would look nicer if you used just \circ rather than ^\circ or use $'$ and $''$ so that all three were textsize or all three superscript, currently circ is small and raised and the primes are full size on the baseline – David Carlisle Mar 6 '13 at 22:54
Good point, David! I will do that. – Svend Tveskæg Mar 6 '13 at 22:59
Here's an implementation with siunitx features:
\documentclass{article}
\usepackage{siunitx}
\usepackage{booktabs,array}
\begin{document}
\begin{table}
\centering
\begin{tabular}{
c
S[table-format=2.0,minimum-integer-digits=2]
S[table-format=2.0,minimum-integer-digits=2]
S[table-format=2.2,minimum-integer-digits=2]
S[table-format=2.0,minimum-integer-digits=2]
S[table-format=2.0,minimum-integer-digits=2]
S[table-format=2.2,minimum-integer-digits=2]
}
\toprule
Times & \multicolumn{3}{c}{RAAN} & \multicolumn{3}{c}{Dec}\\
\midrule
& \si{\hour} & \si{\minute} & \si{\s} & \si{\degree} & \si{\arcminute} & \si{\arcsecond} \\
\midrule
11{:}40{:}28 & 00 & 03 & 45.48 & 18 & 40 & 03.78 \\
\bottomrule
\end{tabular}
\end{table}
\end{document}
Instead of adding minimum-integer-digits=2 to each column specifier, a \sisetup{minimum-integer-digits=2} could be made inside the table environment, so it would disappear after it. I just wrote a specifier and copied it five other times, so this is not a big nuisance when typing.
The difference is that if one column doesn't need the setting, the option should be specified. If all S columns share a setting, it may be convenient to use \sisetup.
-
Wouldn't it be easier to add \sisetup{minimum-integer-digits=2} to the table instead of adding it to every column? Except of course one wants to be able to easily change it for a single column later on... – cgnieder Mar 6 '13 at 23:13
Great answer, I must say. – Svend Tveskæg Mar 6 '13 at 23:13
@cgnieder I had the same thought. :) – Svend Tveskæg Mar 6 '13 at 23:14
@cgnieder I did it in the first version, but this might not what is really desired. I'll add a note about it, thanks. – egreg Mar 6 '13 at 23:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7743546366691589, "perplexity": 2388.5118291283316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400379083.43/warc/CC-MAIN-20141119123259-00119-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-2-multiplying-and-dividing-fractions-2-4-writing-a-fraction-in-lowest-terms-2-4-exercises-page-137/56 | ## Basic College Mathematics (10th Edition)
To write a fraction in its lowest terms, divide both the numerator and the denominator by the greatest common factor. $\frac{570}{95}$ = $\frac{570 \div 95}{95 \div 95}$ = $\frac{6}{1}$ = 6 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9749013781547546, "perplexity": 152.21600101778856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00471.warc.gz"} |
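The same reduction can be done programmatically with the greatest common factor; a minimal Python sketch:

```python
from math import gcd

def lowest_terms(num, den):
    """Divide numerator and denominator by their greatest common factor."""
    g = gcd(num, den)
    return num // g, den // g

print(lowest_terms(570, 95))  # (6, 1)
```

Here gcd(570, 95) = 95, so the fraction reduces to 6/1 = 6, as in the worked answer.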
http://mathhelpforum.com/latex-help/184577-latex-blogger.html | 1. ## LaTeX in Blogger
For some time my preferred blogging site has been WordPress because it supports LaTeX, well now there is an effective means of including LaTeX on Blogger (or any site that allows HTML and JavaScript).
The trick is to include some java-script to call the LaTeX system provided by MathJax
Include the code:
<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">
</script>
At the top of a post in the HTML editor. LaTeX can then be included between the delimiters \( and \) for in-line LaTeX and between \[ and \] for display LaTeX.
CB
2. ## Re: LaTeX in Blogger
Originally Posted by CaptainBlack
For some time my preferred blogging site has been WordPress because it supports LaTeX, well now there is an effective means of including LaTeX on Blogger (or any site that allows HTML and JavaScript).
The trick is to include some java-script to call the LaTeX system provided by MathJax
Include the code:
<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript">
</script>
At the top of a post in the HTML editor. LaTeX can then be included between the delimiters \( and \) for in-line LaTeX and between \[ and \] for display LaTeX.
CB
Maybe I am misunderstanding your question, but why can't you apply this to wordpress? It allows you to use HTML, no?
3. ## Re: LaTeX in Blogger
Originally Posted by Drexel28
Maybe I am misunderstanding your question, but why can't you apply this to wordpress? It allows you to use HTML, no?
You can, though I have not tried it. But why would you when wordpress has LaTeX already, and it must be better not to go outside for LaTeX rendering (especially since MathJax does this at every view as it does not create images which are stored).
CB | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8282065987586975, "perplexity": 2917.111608460902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00143-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://phys.libretexts.org/Bookshelves/Electricity_and_Magnetism/Book%3A_Electricity_and_Magnetism_(Tatum)/03%3A_Dipole_and_Quadrupole_Moments/3.08%3A_Quadrupole_Moment | $$\require{cancel}$$
Consider the system of charges shown in Figure $$III$$.13. It has no net charge and no net dipole moment. Unlike a dipole, it will experience neither a net force nor a net torque in any uniform field. It may or may not experience a net force in an external nonuniform field. For example, if we think of the quadrupole as two dipoles, each dipole will experience a force proportional to the local field gradient in which it finds itself. If the field gradients at the location of each dipole are equal, the forces on each dipole will be equal but opposite, and there will be no net force on the quadrupole. If, however, the field gradients at the positions of the two dipoles are unequal, the forces on the two dipoles will be unequal, and there will be a net force on the quadrupole. Thus there will be a net force if there is a non-zero gradient of the field gradient. Stated another way, there will be no net force on the quadrupole if the mixed second partial derivatives of the field components (the third derivatives of the potential!) are zero. Further, if the quadrupole is in a nonuniform field, increasing, say, to the right, the upper pair will experience a force to the right and the lower pair will experience a force to the left; thus the system will experience a net torque in an inhomogeneous field, though there will be no net force unless the field gradients on the two pairs are unequal.
$$\text{FIGURE III.13}$$
The system possesses what is known as a quadrupole moment. While a single charge is a scalar quantity, and a dipole moment is a vector quantity, the quadrupole moment is a second order symmetric tensor.
The dipole moment of a system of charges is a vector with three components given by
$p_x=\sum Q_i x_i , \, p_y=\sum Q_iy_i,\,p_z=\sum Q_i z_i .$
The quadrupole moment $$\textbf{q}$$ has nine components (of which six are distinct) defined by
$q_{xx}=\sum Q_ix_i^2 \nonumber$
$q_{xy}=\sum Q_ix_iy_i \nonumber$
etc., and its matrix representation is
$\textbf{q}=\begin{pmatrix} q_{xx} & q_{xy} & q_{xz} \\ q_{xy} & q_{yy} & q_{yz} \\ q_{xz} & q_{yz} & q_{zz} \\ \end{pmatrix}\label{3.8.1}$
For a continuous charge distribution with charge density $$ρ$$ coulombs per cubic metre, the components will be given by $$q_{xx}=\int \rho x^2 d\tau$$, etc., where $$d\tau$$ is a volume element, given in rectangular coordinates by $$dx\,dy\,dz$$ and in spherical coordinates by $$r^2\sin θ\,dr\,dθ\,dφ$$. The SI unit of quadrupole moment is C m², and the dimensions are L²Q. By suitable rotation of axes, in the usual way (see for example section 2.17 of Classical Mechanics), the matrix can be diagonalized, and the diagonal elements are then the eigenvalues of the quadrupole moment, and the trace of the matrix is unaltered by the rotation.
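As an illustration of the discrete definition $$q_{xy}=\sum Q_ix_iy_i$$, the sketch below builds the quadrupole matrix for a hypothetical four-charge arrangement (alternating ±Q at the corners of a square, in the spirit of Figure III.13 — illustrative values, not taken from the text) and checks that the trace equals the sum of the eigenvalues, as the diagonalization argument requires:

```python
import numpy as np

# Hypothetical point charges: (Q, x, y, z), alternating signs on a square
charges = [
    (+1.0,  1.0,  1.0, 0.0),
    (-1.0, -1.0,  1.0, 0.0),
    (+1.0, -1.0, -1.0, 0.0),
    (-1.0,  1.0, -1.0, 0.0),
]

q = np.zeros((3, 3))
for Q, *r in charges:
    r = np.array(r)
    q += Q * np.outer(r, r)  # q_ij = sum_k Q_k (x_i)_k (x_j)_k

# q is symmetric, so it can be diagonalized by a rotation of axes;
# the eigenvalues are real and the trace is invariant under the rotation.
eigenvalues = np.linalg.eigvalsh(q)
print(q)
print(eigenvalues, q.trace())
```

For this arrangement the net charge and dipole moment vanish, the diagonal entries of q are zero, and the quadrupole moment is carried entirely by the off-diagonal xy component.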
https://math.berkeley.edu/courses/summer-2012-math-104-001-lec | Summer 2012 MATH 104 001 LEC
Introduction to Analysis
Schedule:
Section: 001 LEC | Days/Time: MTuWTh 10-11A | Location: 70 EVANS | Instructor: HENING, A | CCN: 58790
Units/Credit: 4 | Session Dates: 06/18-08/10/12 | Enrollment: Limit: 40, Enrolled: 33, Waitlist: 1, Avail Seats: 7 [on 08/10/12]
Summer Fees: UC Undergraduate $1,624.00, UC Graduate $2,040.00, Visiting $1,660.00
Note: Also: REZAKHANLOU, F; MW 11-12P, 70 EVANS
Discussions:
Section: 101 DIS | Days/Time: TuTh 11-12P | Location: 70 EVANS | Instructor: HENING, A | CCN: 58795
Prerequisites: 53 and 54.
Syllabus: The real number system. Sequences, limits, and continuous functions in R and R^n. The concept of a metric space. Uniform convergence, interchange of limit operations. Infinite series. Mean value theorem and applications. The Riemann integral.
https://www.physicsforums.com/threads/additional-group-theory-issues.300140/ | 1. Mar 16, 2009
### ZTV
I really don't get this group theory stuff at all. These should be simple questions, but alas not...
1. The problem statement, all variables and given/known data
Assume that * is an associative operation on S and that a is an element of S.
Let C(a) = {x: x is an element of S and a*x = x*a}
Prove that C(a) is closed with respect to *
2. Relevant equations
Unsure
3. The attempt at a solution
To be perfectly honest, I don't understand the notation or anything. I have no clue where to start. I know that to prove closure I have to show that when g, h are elements of G that g*h is also an element of G. Does this mean I have to show that a*x is an element of S?
~~~~
1. The problem statement, all variables and given/known data
Prove that Aut(Z3) is isomorphic to Z2 [Z3 and Z2 are the groups of modulo classes, e.g. Z3: {[0][1][2]}]
2. Relevant equations
Once I've found Aut(Z3), which I think I've done, I need to show that Aut(Z3) is one-to-one, onto and operation preserving (homomorphic)
I found Aut(Z3) to be {[0][1][2] , [0][2][1]}
3. The attempt at a solution
With Aut(Z3) = {[0][1][2] , [0][2][1]}
and
Z2 = {[0][1]}
I can see how they are supposed to be isomorphic already.
I consider f: Aut(Z3) -> Z2
I'm thinking that showing that they are one-to-one and onto may be trivial, but I'm not sure.
I also looked at f ([0][1][2]) = f ([0][2][1]) and proving [0]=[1] for one-to-one but this doesn't make sense because [0]=/=[1] ... hmmm... perhaps I'm being silly.
2. Mar 16, 2009
### Focus
For the first question take two elements of the C(a), say x and y, and show that x*y is in C(a), i.e. (x*y)*a=a*(x*y)
Second question, you need to find the Automorphisms on Z3 which you have. Aut is a group under composition. Label the two automorphisms you have and make an isomorphism (which is pretty obvious) to Z2. The problem you are having is that you got the wrong impression what Aut is. It is a set of functions, so Aut(Z3) = {[0][1][2] , [0][2][1]} is not true. It consists of two functions, one of which is the identity, and the other one swaps [1] and [2].
3. Mar 16, 2009
### HallsofIvy
Staff Emeritus
4. Mar 17, 2009
### ZTV
So what are the elements of Aut(Z3) then?
Functions f that map Z3->Z3 such that fZ3 = Z3 ?
If this is the case...
I now have aut(Z3) = {fa fb} and Z2 = {[0][1]}
I'm trying to show that they are homomorpic/operation preserving.
I have to define a function y that maps
y: Aut(Z3) -> Z2 is this correct? If so, how do I show it is operation preserving?
Once I have that it is homomorphic, the fact that it is one-to-one and onto is trivial because fa -> [0] if it is homomorphic, so fb must go to [1], hence one-to-one and onto. Is this correct also?
Thanks
5. Mar 17, 2009
### Focus
Yes, mind you fa is the identity map. If you have shown it to be a homomorphism then it preserves the operation (which is what homomorphisms are). The bijection is clear, homomorphism shouldn't be too hard to prove.
As a general rule of thumb you want to specify how y maps, its the map as you said that takes fa to [0] and fb to [1].
Think of homomorphism as structure preserving, if f is a homomorphism then f(ab)=f(a)f(b), so they practically have the same operation. Isomorphism tells you that you just essentially relabeled your elements, because it is bijective and operation preserving.
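The thread's conclusion can be checked by brute force. The sketch below (an illustrative verification, not part of the original thread) enumerates the automorphisms of (Z3, +) and confirms that their composition table matches addition in Z2:

```python
from itertools import permutations

n = 3
Zn = list(range(n))

# An automorphism of (Z3, +) is a bijection f with f(a+b) = f(a) + f(b) mod 3.
# Represent a candidate f as a tuple: f[x] is the image of x.
def is_automorphism(f):
    return all(f[(a + b) % n] == (f[a] + f[b]) % n for a in Zn for b in Zn)

autos = [p for p in permutations(Zn) if is_automorphism(p)]
print(autos)  # [(0, 1, 2), (0, 2, 1)] -- the identity and the swap of [1],[2]

# Map the identity to [0] in Z2 and the swap to [1]; check composition
# corresponds to addition mod 2, i.e. the map is a homomorphism.
idx = {f: i for i, f in enumerate(autos)}
for f in autos:
    for g in autos:
        fg = tuple(f[g[x]] for x in Zn)          # composition f o g
        assert idx[fg] == (idx[f] + idx[g]) % 2  # matches addition in Z2
print("Aut(Z3) is isomorphic to Z2")
```

Since the map is a bijection between two-element sets and preserves the operation, it is an isomorphism.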
https://www.physicsforums.com/threads/combination-3-boys-with-7-chairs.688124/ | # Combination: 3 boys with 7 chairs
1. Apr 26, 2013
### Michael_Light
1. The problem statement, all variables and given/known data
Suppose there are 7 chairs arranged in a straight line, and each of the 3 boys will sit randomly on one of the chairs. In how many ways can the boys be seated if the 3 boys cannot sit next to each other? Assume that the boys are indistinguishable.
I listed out all the possible outcomes (which is 10), but i believe there is a generalized way to find the answer. Can any one enlighten me?
2. Relevant equations
3. The attempt at a solution
Let O represent seat occupied by the boys and X is empty seat.
Possible outcomes:
XOXOXOX
XOXOXXO
XOXXOXO
XXOXOXO
OXXOXOX
OXOXXOX
OXOXOXX
OXXOXXO
OXOXXXO
OXXXOXO
2. Apr 26, 2013
### Ray Vickson
Using 'b' for 'boy' and 'e' for 'empty', start with bebeb and just figure out how many ways to add the two remaining 'e's.
3. Apr 27, 2013
### haruspex
For a generalized approach, suppose C chairs and B boys, same restriction. Each occupied chair, except the rightmost, must have a vacant chair on its right. To handle that exception, introduce an extra chair on the right, guaranteed vacant. So we can pair up each occupied chair with that adjacent vacant chair, making B such pairs and C+1-2B other vacant chairs. Can you proceed from there?
4. Apr 27, 2013
### Michael_Light
Got it. Your hint is very useful. Thanks. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.837172269821167, "perplexity": 1417.849414437324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648404.94/warc/CC-MAIN-20180323161421-20180323181421-00625.warc.gz"} |
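Haruspex's pairing argument gives C + 1 − B objects (B occupied/vacant pairs plus C + 1 − 2B leftover vacant chairs), of which B are chosen, so the count is C(C − B + 1, B); for C = 7 and B = 3 that is C(5, 3) = 10, matching the list in the question. A brute-force check in Python (illustrative, not from the thread):

```python
from itertools import combinations
from math import comb

def nonadjacent_seatings(chairs, boys):
    """Count ways to choose seats so no two occupied chairs are adjacent."""
    brute = sum(
        1 for seats in combinations(range(chairs), boys)
        if all(b - a >= 2 for a, b in zip(seats, seats[1:]))
    )
    formula = comb(chairs - boys + 1, boys)  # pairing / stars-and-bars count
    assert brute == formula
    return formula

print(nonadjacent_seatings(7, 3))  # 10
```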
https://tex.stackexchange.com/questions/346165/what-is-this-math-font/346416 | # What is this math font?
I designated "D" on the image.
What is this math font called?
• It's just sans-serif. Try \mathsf{D}? – Au101 Dec 29 '16 at 4:14
• Thanks! Then how about making it bold? I tried \mathbf{\mathsf{D}}, but it does not work. – Jay Lee Dec 29 '16 at 4:23
• Computer Modern does not have a bold sans-serif face, you will have to select a font that does have such a face – Au101 Dec 29 '16 at 4:24
• Also unlike text font commands \mathxx fonts do not combine so you would need to declare a new math alphabet for bold sans not use \mathbf{\mathsf – David Carlisle Dec 29 '16 at 17:18
You can either define your own math alphabet (here \mathbfsf) for bold sans-serif
\documentclass{article}
\DeclareMathAlphabet{\mathbfsf}{OT1}{cmss}{bx}{n}
\begin{document}
$\mathsf{D}$ $\mathbfsf{D}$
\end{document}
or you can use the bm package
\documentclass{article}
\usepackage{bm}
\begin{document}
$\mathsf{D}$ $\bm{\mathsf{D}}$
\end{document}
The output is the same in both cases.
http://nextbigfuture.com/2007/06/boron-nanotubes-provide-radiation.html | ## June 29, 2007
### Boron nanotubes provide radiation shielding and more
Boron nanotubes can provide strong, light weight, cost effective radiation shielding for space and fusion reactors
Compared to CNTs, boron nanotubes have some better properties such as high chemical stability, high resistance to oxidation at high temperatures and are a stable wide band-gap semiconductor. Because of these properties, they can be used for applications at high temperatures or in corrosive environments such as batteries, fuel cells, super capacitors, high-speed machines as solid lubricant."
Space radiation is qualitatively different from the radiation humans encounter on Earth. Once astronauts leave the Earth's protective magnetic field and atmosphere, they become exposed to ionizing radiation in the form of charged atomic particles traveling at close to the speed of light. Highly charged, high-energy particles known as HZE particles pose the greatest risk to humans in space. A long-term exposure to this radiation can lead to DNA damage and cancer. One of the shielding materials under study is boron 10. Scientists have known about the ability of boron 10 to capture neutrons since the 1930s and use it as a radiation shield in Geiger counters as well as a shielding layer in nuclear reactors.
http://math.stackexchange.com/questions/61552/unscramble-images-without-trying-all-permutations | # Unscramble images without trying all permutations
I try to write an algorithm that unscrambles images that were before scrambled by mixing up small blocks:
My idea is that in the bottom image there are more "sharp" corners compared to the image above. Therefore I try to minimize the functional:
energy(image)
{
score = 0;
for(pixels inside the image)
score = score + abs(pixel sum of surrounding 4 pixels - 4 × value of internal pixel)
}
The energy should therefore be high if there are many hard edges and low for the original image. In a test I found that the original image has the score 845,812 whereas the scrambled one has the score 1,085,521, so the approach might be worth a try.
My question is whether there is an efficient method to minimize the energy function now without trying all $n!$ permutations (too many). Or a different approach I didn't come up with.
-
Why are there 3 and not 4 surrounding pixels? And don't you have to take an absolute value or a square somewhere? Otherwise the interior contributions will cancel out and your function will only describe the boundary. – joriki Sep 3 '11 at 9:49
You are right: it's 4 pixels and absolute value. I did it correctly in the original formula but got it wrong in the pseudocode. – Listing Sep 3 '11 at 10:23
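For reference, a runnable version of the energy functional (plain Python). The 4-neighbour expression is a discrete Laplacian, so any linear intensity ramp scores exactly zero, while swapping blocks introduces seams that raise the score — the effect the question relies on:

```python
def energy(img):
    """Sum of |sum of the 4 neighbours - 4 * centre| over interior pixels."""
    h, w = len(img), len(img[0])
    score = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            score += abs(lap)
    return score

smooth = [[x + y for x in range(8)] for y in range(8)]  # linear ramp: Laplacian is 0
scrambled = [row[:] for row in smooth]
for y in range(4):                                      # swap two 4x4 blocks
    for x in range(4):
        scrambled[y][x], scrambled[y + 4][x + 4] = scrambled[y + 4][x + 4], scrambled[y][x]

print(energy(smooth), energy(scrambled))  # 0 versus a strictly positive score
```

In practice such an energy is minimized with local search over block swaps (e.g. greedy improvement or simulated annealing) rather than by enumerating all n! permutations.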
http://slideplayer.com/slide/772356/ | # Simplify, Add, Subtract, Multiply and Divide
## Presentation on theme: "Simplify, Add, Subtract, Multiply and Divide"— Presentation transcript:
Simplify, Add, Subtract, Multiply and Divide
Square Roots
Simplifying Square Roots
Square root is in simplest form when….. The radicand of a square root does not have a perfect-square factor greater than 1. The radicand of a square root is not a fraction. The square root is not the denominator of a fraction.
Simplifying Square Roots
The radicand of a square root does not have a perfect-square factor greater than 1.
Simplifying Square Roots
The radicand of a square root is not a fraction.
Simplifying Square Roots
The square root is not the denominator of a fraction.
Adding and Subtracting with Square Roots
The sums and differences of square roots with similar radicands can be simplified based on the distributive property (addition).
Subtracting with Square Roots
The sums and differences of square roots with similar radicands can be simplified based on the distributive property (subtraction).
Multiplying with Square Roots
Product Property of Square Roots: For all positive real numbers a and b, √(ab) = √a · √b.
Dividing with Square Roots
Quotient Property of Square Roots: For all positive real numbers a and b, √(a/b) = √a / √b.
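The product property is also what the usual simplification routine uses: factor the largest perfect square out of the radicand. A small sketch in Python (the function name simplify_sqrt is ours):

```python
def simplify_sqrt(n):
    """Write sqrt(n) as a*sqrt(b): returns (a, b) with b square-free (n a positive integer)."""
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:   # pull the perfect-square factor f*f out of the radicand
            a *= f
            b //= f * f
        f += 1
    return a, b

print(simplify_sqrt(72))  # (6, 2), i.e. sqrt(72) = 6*sqrt(2)
```

The quotient property handles fractional radicands the same way after rationalizing the denominator, e.g. √(2/3) = √6 / 3.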
http://beamimaging.com/product/model-450-decelerator/ | ## Product Overview
Beam Imaging Solutions presents the new model 450 ion beam decelerator for generating high current low energy ion beams. This decelerator was designed to increase low energy ion beam current by minimizing ion beam scattering with larger lens elements and also allowing for much higher initial ion beam energies before deceleration.
Model 450-L Lens System
Model 450 Decelerator
Available in vacuum housing with two rotatable 8″ conflat flanges and two 2.75″ conflat ports for the electrical connections. The housing is 6.750″ (171.45mm) long.
The performance curve above was generated by measuring the Ar+ beam current with a 10mm diameter collector approximately 2.5cm from the exit of the decelerator. The performance was measured up to 5keV initial Ar+ ion energy but the decelerator is designed to be used with beams up to 10keV. For these measurements, the decelerator was mounted 12″ (30cm) from the exit of a Beam Imaging Solutions model G-2 ion gun system with RFIS-100 ion source. The target/collector was floated to the ion beam retarding potential. Final ion beam energy was calculated by measuring the difference between the initial ion energy (accelerating potential) and retarding potential. In cases where the target must be kept at ground potential, the ion gun electrodes can be floated to a negative accelerating potential with maximum floating potential -1kV. The beam performance curve for a primary ion beam with 1 keV is shown below.
The decelerator is available with deceleration lens system (Model 450-L)
SIMION geometry files and simulations are available upon request. Please see our contact page for contact information.
## Product Specifications
Lens Elements:
• 304 Stainless Steel
Lens Element Spacers:
• Alumina/ceramic
Lens Element Voltage rating:
• Max. recommended 10kV
Feedthrough Type:
• Two, 3-terminal 5kV MHV feedthroughs, standard (two, 3-terminal 10kV SHV-10 optional)
UHV Compatible
• Vacuum housing with 6″ OD tube and 8″ rotatable conflats, 6.750″ (171.45mm) long. Two, 2.75″ conflat ports included for electrical connections.
Weight with vacuum housing:
• 20.5 pounds ~9kg | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.834081768989563, "perplexity": 13766.47967441027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585270.40/warc/CC-MAIN-20211019140046-20211019170046-00249.warc.gz"} |
http://physics.stackexchange.com/questions/12625/how-does-hubbles-constant-affect-the-earths-orbit/22191 | # How does Hubble's constant affect the Earth's orbit
If Hubble's constant is $2.33 \times 10^{-18} \text{ s}^{-1}$ and the earth orbits the sun with average distance of 150 million kilometers; Does that mean the earth's orbital radius increases approximately $11\text{ m}/\text{year}$? Does the earth's angular momentum change? If so, where does the torque come from? If the angular momentum doesn't change, does the earth's orbital velocity (length of a year) change? If so, where does the lost kinetic energy go?
Aside: the 11 meters per year figure comes from Hubble expansion of space the distance of the earth's orbital radius integrated over an entire year.
$$(2.33 \times 10^{-18}\text{ s}^{-1}) (1.5 \times 10^{11} \text{ m}) (3.15 \times 10^7 \text{ s}/\text{year}) = 11 \text{ m}/\text{year}$$
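The aside's arithmetic does check out (a one-line sanity check; units are s⁻¹ · m · s/yr = m/yr):

```python
H = 2.33e-18       # Hubble constant, 1/s
r = 1.5e11         # Earth-Sun distance, m
yr = 3.15e7        # seconds per year
print(H * r * yr)  # ~11.0 metres per year
```

Whether that product is physically meaningful for a bound orbit is exactly what the answers below dispute.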
-
BTW-- You'll note that Henry and I have made use of the MathJax formatting utility that is active on the site---using LaTeX syntax to typeset mathematics. – dmckee Jul 22 '11 at 23:28
No. Hubble's constant roughly says how the distance between two objects at rest with the universe grows. It does not say that the distance between everything is growing - the size of the hydrogen atom is not increasing. (My size is increasing, but from dietary rather than cosmological sources.) The sizes of objects and orbits are maintained by a balance of forces (classically). To whatever extent one can think of the expansion of the universe as pushing the Earth and Sun apart, it is already taken into account in setting the Earth's orbit.
The change in the Hubble constant can affect the orbit, see the paper linked by Ben Crowell. But just taking the Hubble constant and multiplying it by the Earth's radius, as I believe you have done, does not give you anything sensible.
-
looks like you don't believe in the Big Rip. Regardless, there does appear to be some evidence for hubble expansion of the moon's orbit. Though it's a bit more difficult to laser measure the distance from the earth to the sun. – rae Jul 22 '11 at 21:37
Actually a system like the solar system is predicted to expand due to cosmological expansion, but the effect is calculated to be incredibly small, much too small to detect: arxiv.org/abs/astro-ph/9803097v1 The size of a hydrogen atom doesn't change at all, because it's fixed by fundamental constants. Since the solar system does expand by some tiny amount, there is predicted to be a small violation of conservation of energy. That's OK, because energy isn't conserved in general relativity. – Ben Crowell Jul 22 '11 at 21:44
@rae: The paper by Dumin was never published, and looks just plain wrong to me. It contradicts the Cooperstock paper that I referenced above, which was published in a peer-reviewed journal. There is not a scrap of GR anywhere in the Dumin paper; to my knowledge, no competent relativist has ever suggested that GR leads to an effect of the order of magnitude of the discrepancy that Dumin attributes to cosmological effects. The Big Rip is not really relevant. We don't know if the laws of physics are such as would cause a Big Rip, and the OP is not asking about the remote future. – Ben Crowell Jul 22 '11 at 21:54
@Ben Crowell: I agree that a changing Hubble constant affects the orbit which is what that paper derives (see Eqn. 4.2 depends only on the second derivative of the scale factor). I'm dealing with only the effects of a constant Hubble 'constant', since I believe that is what the original poster was asking about. His number 11m/year comes I believe from multiplying the earth's orbital radius by the Hubble constant which is certainly not correct. – BebopButUnsteady Jul 22 '11 at 22:30
@rae -- The problem is that there's no clearly-defined meaning to be attached to the phrase "the same point in its orbit the following year." If you use comoving coordinates (i.e., coordinates that expand with the Universe), then "the same point" will be further out than before. If you use local Minkowski coordinates, it won't. And of course there are infinitely many other choices. The common mistake people make is to think that comoving coordinates are what space is "really" doing, but the central idea of relativity is that coordinate systems are just conveniences, not "Truth." – Ted Bunn Jul 23 '11 at 17:43
For objects smaller than cosmic scale, such as atoms, planets and solar systems, the electromagnetic and gravitational forces that hold them together are not changing (as far as we know) and so those objects do not change size.
Between galaxies, so widely separated, there's just gravity, and that tends to average out due to every galaxy being surrounded by other galaxies in all directions. On a cosmic scale, galaxies are like a gas, with galaxies being the "molecules" and described by the ideal gas equation. To account for gravity and finite size of the galaxies, we might use the Van der Waals equation or some other variation, but that's beside the point, useful only for increasing accuracy.
Hubble's constant describes the rate at which the "container" of the galactic gas is expanding, the way the density of galaxies decreases over time. In an ordinary gas such as air, when in an expanding chamber, certainly the molecules are not expanding. Likewise, neither are the galaxies changing their sizes, at least not for Hubble-related reasons.
-
You're oversimplifying by treating atoms and solar systems as being the same. GR does predict that solar systems will expand, just not by very much: arxiv.org/abs/astro-ph/9803097v1 The size of a hydrogen atom is set by fundamental constants. The size of a solar system is not. – Ben Crowell Jul 25 '11 at 0:39
The reason the universe expands is gravitation, as described by Einstein's field equation. The evolution of the universe is governed by gravitation, as described by Einstein's field equation. Over cosmological scale, the universe can be seen as homogeneous and isotropic, with very small density of matter and radiation. The density of matter and radiation is too small to counteract the expansion, an effect of initial condition. In local areas, however, the density is many magnitudes higher, and the effect of expansion is all but counteracted by the binding gravitational attraction.
-
Everything you've said is true, but it fails to answer the OP's question. One way to see that it doesn't answer it is that although you mention the Einstein field equation, everything you say is equally valid in a Newtonian expanding universe. The fractional rate of change in size due to cosmological expansion is 0 for a hydrogen atom, $\sim 10^{-41}\ \text{s}^{-1}$ for the earth-sun system (arxiv.org/abs/astro-ph/9803097v1 ), and $\sim H_o$ for a photon. I don't see how you would get that from "tends to confine." – Ben Crowell Mar 11 '12 at 1:44
I also wouldn't agree that "the reason the universe expands is gravitation." The reason it has been expanding, ever since the Big Bang, is inertia, and this would be just as true in a Newtonian model as in one based on GR. – Ben Crowell Mar 11 '12 at 1:49
@BenCrowell: To your second comment: By inertia alone the expansion would slow down, whereas in fact the expansion is speeding up, currently modeled by a non-zero cosmological constant, an effect of gravitation. – C.R. Mar 11 '12 at 1:58
@BenCrowell: To your first comment: I read your paper, and all I see is that according to the authors themselves, this topic, the effect of cosmological expansion on local systems, is highly contentious. I doubt your paper has settled the problem and become the consensus. – C.R. Mar 11 '12 at 2:04
@BenCrowell: Regardless, it is well known that gravitation binds the solar system. Even if cosmological expansion has an effect, it is infinitesimally small. I don't see how that invalidates my phrase "tends to confine" at local scale. – C.R. Mar 11 '12 at 2:08
https://www.physicsforums.com/threads/light-moves-at-c-from-all-frames-of-references.216786/ | # Light moves at C from all frames of references?
1. Feb 20, 2008
### Bigman
there are a few things i don't get when it comes to light moving at c from all frames of reference... i mean it makes sense to me in some cases: like if an observer on the earth sees a missile going one way at half the speed of light, and a spaceship going the other way at the speed of light, the spaceship won't observe the missile's speed as the speed of light, since time goes faster on the spaceship than it does to the observer on the earth (at least that's my understanding so far from what i've read... just to double check, is everything i said in that example accurate?)
but i can think of a few examples where it doesn't work out as easily (some of them are harder to explain than others). here's one: you have a space station floating out in the middle of nowhere in space (this is our initial reference point... i would have used earth, but i wanted to avoid all the gravity and orbits and rotation and stuff) and a ship takes off from the station, and ends up doing about half the speed of light (from the station's frame of reference). since the ship has sped up, the clock on board the ship is now going faster than the clock on board the space station (right?). now, let's say you eject two escape pods from the ship: one out the front, and one out the back (the ship is still facing directly away from the station), and they each shoot out with a velocity which, from the ship's frame of reference, has a magnitude equal to the velocity of the space station (which is less than half the speed of light, because time is moving faster on the ship than it is on the space station... right?). what confuses me is, how fast are the clocks on board each of the escape pods going in relation to the spaceship, the space station, and each other?
Last edited: Feb 20, 2008
2. Feb 20, 2008
### chroot
Staff Emeritus
Clocks do not magically change their rate simply because they are moving. If you're on-board a star ship, you will look down at your wristwatch and see it behaving perfectly normally, no matter how fast the star ship is going relative to anything else in the universe.
You must have two frames of reference in order to see any effects of time dilation. If a ship leaves a space station at half the speed of light, it will appear to observers in the space station that clocks aboard the ship are running slowly. Similarly, it will appear to observers on the ship that the clocks aboard the space station are running slowly.
Velocities do not add as simply in special relativity as you are accustomed. If an escape pod leaves the ship at 0.5c wrt the ship, and the ship is moving at 0.5c wrt the space station, the two velocities add like this:
$v = \frac{ 0.5c + 0.5c }{ 1 + \frac{ 0.5c \cdot 0.5c } { c^2 } } = 0.8c$
Observers aboard the space station will measure the escape pod as moving away with a velocity of 0.8c.
- Warren
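Warren's composition law is easy to encode, with speeds expressed as fractions of c (a minimal sketch):

```python
def add_velocities(u, v):
    """Relativistic velocity addition; u and v are fractions of c."""
    return (u + v) / (1.0 + u * v)

print(add_velocities(0.5, 0.5))  # 0.8
print(add_velocities(0.9, 0.9))  # ~0.9945 -- still below c
```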
3. Feb 20, 2008
### yuiop
No material object (anything with greater than zero rest mass) can go at the speed of light relative to any observer.
Generally the clocks on any object moving relative to an observer are measured as "ticking" slower by that observer.
slower
Use the relativistic velocity addition equation to figure out the speed of the pods and then the Lorentz transformations for time to answer this question.
See http://math.ucr.edu/home/baez/physics/Relativity/SR/velocity.html
4. Feb 20, 2008
### Bigman
woops, i meant to say that the ship was moving half the speed of light in the first example. in the second example, i was interested mostly in the time dilation, though i think that makes more sense to me now, after reading warren's post in another thread... so now i have a question that has more to do with light itself: let's say you have a ship flying by a space station at .5c, both the ship and the space station have big bright light bulbs protruding from them, and the moment that the ship flies by the space station, both lights flash for an instant (so basically, you have light coming from two sources, which are in virtually the same spot in space but have different velocities). i'm wondering, if you freeze frame everything a moment later, will the two spheres of light be overlapping each other? and if so, where will the center of these spheres be located, at the position of the ship or the station (or will the center's position be somehow dependent on the frame of reference)?
5. Feb 20, 2008
### Janus
Staff Emeritus
The center of the expanding spheres will depend on the frame of reference.
6. Feb 20, 2008
### Bigman
that can't be possible, can it? let's say you had two sensors attached to the spaceship, one out a mile in front of the ship and one out a mile in the back (imagine long mic booms sticking off the front and back), and you had two more sensors attached to the station in a similar fashion, and when the ship and station pass each other, the two sensors in the front are next to each other, as are the two sensors in the back. would the ship record that both its sensors went off at the same time? if so, would an observer on the ship say that the light hit the station's front sensor before hitting the station's back sensor, since the ship's sensors and the station's sensors are no longer next to each other by the time the light reaches them?
7. Feb 20, 2008
### yuiop
Imagine that the light sources are so close together at the passing point that we can treat them as one light source for practical purposes. From the point of view of the space station there is ring of light centred on the space station. From the point of view of the ship (some moments later) there is a ring of light centered on the ship.
The ship sees itself as stationary and from that point of view it initially sees the space station approaching. At the moment the space station was alongside it sees a flash and then it sees a ring of light spreading out evenly in all directions and the space station moving away.
The spacestation sees itself as stationary and from that point of view it initially sees the ship approaching. At the moment the ship was alongside it sees a flash and then it sees a ring of light spreading out evenly in all directions and the ship moving away.
See the symmetry?
This is what would really be observed, even if there is only one light source like a spark flashing across a small gap between the two craft at the moment they are closest to each other.
8. Feb 21, 2008
### Janus
Staff Emeritus
The ship will say the light hit its sensors simultaneously, while hitting the sensors of the station at different times. Conversely, the station will say that the light hit its sensors simultaneously, while hitting the ship's sensors at different times.
Welcome to "The Relativity of Simultaneity".
9. Feb 21, 2008
### Bigman
wow... so if someone in the ship were somehow able to instantaneously observe light, they would observe that the light hit the station's front sensor first, then the ship's two sensors, then the station's back sensor (and someone in the station would observe everything i just said, with the words "station" and "ship" switched)?
10. Feb 21, 2008
### Janus
Staff Emeritus
Yes.
Also consider this:
We put clocks at these sensors, all reading a time of zero and designed to start ticking when the sensor next to it is tripped by the light. Then according to the ship, the clocks next to its sensors start at the same time and are synchronized so that they show the same time at all times. The station's clocks, however, will not start at the same time and thus will not show the same time after they are running (once running they both tick at the same rate, but one clock will lag behind the other.) According to the station, the reverse is true. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5834047794342041, "perplexity": 552.0663223300559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719397.0/warc/CC-MAIN-20161020183839-00055-ip-10-171-6-4.ec2.internal.warc.gz"} |
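Janus's clock scenario can be made quantitative with a Lorentz boost: two events that are simultaneous in the station frame (t = 0) get different time coordinates in the ship frame. Units with c = 1; the sensor positions ±1 are illustrative:

```python
import math

def boosted_time(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at speed v (units c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

v = 0.5                              # ship speed relative to the station
front = boosted_time(0.0, +1.0, v)   # light reaches the front sensor, station time 0
back = boosted_time(0.0, -1.0, v)    # light reaches the back sensor, station time 0
print(front, back)                   # opposite signs: not simultaneous for the ship
```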
http://math.stackexchange.com/questions/227577/solve-differential-equations-using-laplace-transform | # Solve differential equations using Laplace transform..
Solve each of the following differential equations with initial values using the Laplace Transform.
$(b)\space y''-4y'+4y=0$ Where $y(0)=0$ and $y'(0)=3$
What I have so far:
$p^2L[y]-3-4pL[y]+4L[y]=0$
$L[y]=\frac{3}{p^2-4p+4}=\frac{3}{(p-2)^2}$ I'm not sure where to go from here..
$(c)\space y''+2y'+2y=2$ Where $y(0)=0$ and $y'(0)=1$
What I have so far:
$p^2L[y]-1+2pL[y]+2L[y]=L[2]$
$L[y](p^2+2p+2)=\frac{2+p}{p}$
$L[y]=\frac{2+p}{p((p+1)^2+1)}$ From here I tried using partial fractions:
$\frac{A}{p}+\frac{B}{(p+1)^2+1}$ I found A=1 and B=-1. I'm fairly sure that is correct, but I'm not sure where to go from here.
$(d)\space y''+y'=3x^2$ Where $y(0)=0$ and $y'(0)=1$
What I have so far:
$p^2L[y]-1+pL[y]=L[3x^2]=\frac{6}{p^3}$
$L[y]=\frac{p^3+6}{p^4(p+1)}$ (keeping the 1 that comes from $y'(0)$)
$(e)\space y''+2y'+5y=3e^{-x}\sin(x)$ Where $y(0)=0$ and $y'(0)=3$
What I have so far:
$p^2L[y]-3+2pL[y]+5L[y]=\frac{3}{(p+1)^2+1}$
-
• You need to go one step further: find the inverse Laplace! – kiss my armpit Nov 2 '12 at 15:01
• I have a table that gives me standard Laplace transforms, such as $\sin(ax)$, but I'm not sure how to implement this.. Could you finish $(c)$ or $(d)$ for me? So I can try solving the others? Thanks – Dmitri.Mendeleev Nov 2 '12 at 15:04
## For the question c:
Completing the square, $p^2+2p+2=(p+1)^2+1$, and the partial fraction of $\frac{2+p}{p((p+1)^2+1)}$ is
$$L\{y\}=\frac{1}{p}-\frac{p+1}{(p+1)^2+1}$$
The inverse Laplace becomes,
$$y(x)=1-e^{-x}\cos(x)$$
(Check: $y(0)=0$ and $y'(0)=1$.) Explanation:
$$L^{-1}\{\frac{1}{p}\}=1$$
$$L^{-1}\{\frac{p}{p^2+1^2}\}=\cos(x)$$
$$L^{-1}\{\frac{p+1}{(p-(-1))^2+1^2}\}=e^{-x}\cos(x)$$
## For the question e:
$$\frac{L\{y\}}{3}=\frac{1}{(p+1)^2+2^2}+\frac{1}{(p+1)^2+2^2}\frac{1}{(p+1)^2+1^2}$$
Since $L^{-1}\{\frac{1}{(p+1)^2+2^2}\}=\frac{1}{2}e^{-x}\sin(2x)$ (note the factor $\frac{1}{2}$),
$$\frac{y(x)}{3}=\frac{1}{2}e^{-x}\sin(2x)+\left(\frac{1}{2}e^{-x}\sin(2x)\right)*\left(e^{-x}\sin(x)\right)$$
$$\frac{y(x)}{3}=\frac{1}{2}e^{-x}\sin(2x)+\frac{1}{2}\int_0^x e^{-\lambda}\sin(2\lambda)\, e^{-(x-\lambda)}\sin(x-\lambda)\,\textrm{d}\lambda$$
Evaluating the convolution integral gives
$$y(x)=e^{-x}\left(\sin x+\sin(2x)\right)$$
-
• So it is a rule that for $-\frac{1}{(p+1)^2+1}$ the inverse Laplace is $-e^{-x}\sin(x)$? – Dmitri.Mendeleev Nov 2 '12 at 15:13
I got them all now, except for $(e)$. Could you help me finish that one? Thanks – Dmitri.Mendeleev Nov 2 '12 at 15:53
The final answer for question number e must be checked again. I feel there is a tiny mistake there. – kiss my armpit Nov 2 '12 at 18:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9717110991477966, "perplexity": 269.19832020373605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162648.4/warc/CC-MAIN-20160205193922-00108-ip-10-236-182-209.ec2.internal.warc.gz"} |
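All four parts can be cross-checked mechanically. A sketch using sympy's `dsolve` (my addition, not part of the thread; it solves the same initial-value problems directly rather than via Laplace transforms):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

def solve_ivp(lhs_minus_rhs, y0, dy0):
    """Solve the ODE (written as expression == 0) with y(0)=y0, y'(0)=dy0."""
    sol = sp.dsolve(lhs_minus_rhs, y(x),
                    ics={y(0): y0, y(x).diff(x).subs(x, 0): dy0})
    return sp.simplify(sol.rhs)

d = y(x).diff
# (b) y'' - 4y' + 4y = 0,          y(0)=0, y'(0)=1?  no: y'(0)=3  ->  3*x*exp(2*x)
yb = solve_ivp(d(x, 2) - 4*d(x) + 4*y(x), 0, 3)
# (c) y'' + 2y' + 2y = 2,          y(0)=0, y'(0)=1   ->  1 - exp(-x)*cos(x)
yc = solve_ivp(d(x, 2) + 2*d(x) + 2*y(x) - 2, 0, 1)
# (d) y'' + y' = 3x^2,             y(0)=0, y'(0)=1   ->  x^3 - 3x^2 + 6x - 5 + 5exp(-x)
yd = solve_ivp(d(x, 2) + d(x) - 3*x**2, 0, 1)
# (e) y'' + 2y' + 5y = 3e^{-x}sin(x), y(0)=0, y'(0)=3 -> exp(-x)*(sin(x) + sin(2x))
ye = solve_ivp(d(x, 2) + 2*d(x) + 5*y(x) - 3*sp.exp(-x)*sp.sin(x), 0, 3)

print(yb, yc, yd, ye, sep="\n")
```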
https://worldwidescience.org/topicpages/h/high+resolution+scintillator.html

#### Sample records for high resolution scintillator
1. Multi element high resolution scintillator structure
International Nuclear Information System (INIS)
Cusano, D.A.
1980-01-01
A gamma camera scintillator structure, suitable for detecting high energy gamma photons which, in a single scintillator camera, would require a comparatively thick scintillator crystal, thus resulting in unacceptable dispersion of light photons, comprises a collimator array of a high-Z material with elongated, parallel-wall channels, with the scintillator material disposed in one end of the channels so as to form an integrated collimator/scintillator structure. The collimator channel walls are preferably coated with light-reflective material, and further light-reflective surfaces, being translucent to gamma photons, may be provided in each channel. The scintillators may be single crystals or, preferably, comprise a phosphor dispersed in a thermosetting translucent matrix as disclosed in GB2012800A. The light detectors of the assembled camera may be photomultiplier tubes, charge-coupled devices, or charge-injection devices. (author)
2. Liquid Scintillation High Resolution Spectral Analysis
Energy Technology Data Exchange (ETDEWEB)
Grau Carles, A.; Grau Malonda, A.
2010-08-06
The CIEMAT/NIST and the TDCR methods in liquid scintillation counting are based on the determination of the efficiency for total counting. This paper tries to expand these methods by analysing the pulse-height spectrum of radionuclides. To reach this objective we have to generalize the equations used in the model and to analyse the influence of ionization and chemical quench on both spectra and counting efficiency. We present equations to study the influence of different photomultiplier responses in systems with one, two or three photomultipliers. We study the effect of the electronic noise discriminator level on both spectra and counting efficiency. The described method permits one to study problems that up to now were not possible to approach, such as the high uncertainty in the standardization of pure beta-ray emitters with low energy when we apply the TDCR method, or the discrepancies in the standardization of some electron capture radionuclides when the CIEMAT/NIST method is applied. (Author) 107 refs.
3. Liquid Scintillation High Resolution Spectral Analysis
International Nuclear Information System (INIS)
Grau Carles, A.; Grau Malonda, A.
2010-01-01
The CIEMAT/NIST and the TDCR methods in liquid scintillation counting are based on the determination of the efficiency for total counting. This paper tries to expand these methods by analysing the pulse-height spectrum of radionuclides. To reach this objective we have to generalize the equations used in the model and to analyse the influence of ionization and chemical quench on both spectra and counting efficiency. We present equations to study the influence of different photomultiplier responses in systems with one, two or three photomultipliers. We study the effect of the electronic noise discriminator level on both spectra and counting efficiency. The described method permits one to study problems that up to now were not possible to approach, such as the high uncertainty in the standardization of pure beta-ray emitters with low energy when we apply the TDCR method, or the discrepancies in the standardization of some electron capture radionuclides when the CIEMAT/NIST method is applied. (Author) 107 refs.
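The TDCR method referred to above is usually analysed with a simple free-parameter model: if each photomultiplier sees a Poisson-distributed number of photoelectrons with mean m, a tube fires with probability p = 1 − e⁻ᵐ, the triple-coincidence efficiency is p³, and the logical sum of doubles is 3p² − 2p³. A sketch of inverting a measured triple-to-double ratio for the counting efficiency (illustrative only; the paper's full treatment also models spectra and quench):

```python
import math

def tdcr_model(m):
    """Triple- and double-coincidence efficiencies of a 3-PMT counter,
    assuming Poisson statistics with mean m photoelectrons per PMT."""
    p = 1.0 - math.exp(-m)            # probability that a given PMT fires
    triple = p ** 3                   # all three tubes fire
    doubles = 3 * p**2 - 2 * p**3     # at least two tubes fire
    return triple, doubles

def efficiency_from_tdcr(ratio, lo=1e-6, hi=50.0):
    """Find m such that triple/doubles equals the measured TDCR (bisection;
    the ratio p/(3-2p) is monotone in m), then return the double efficiency."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        t, d2 = tdcr_model(mid)
        if t / d2 < ratio:
            lo = mid
        else:
            hi = mid
    return tdcr_model(0.5 * (lo + hi))[1]

# e.g. a measured TDCR of 0.977 implies a very high double-coincidence efficiency
print(round(efficiency_from_tdcr(0.977), 4))
```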
4. Development of High-Resolution Scintillator Systems
International Nuclear Information System (INIS)
Larry A. Franks; Warnick J. Kernan
2007-01-01
Mercuric iodide (HgI2) is a well known material for the direct detection of gamma-rays; however, the largest volume achievable is limited by the thickness of the detector which needs to be a small fraction of the average trapping length for electrons. We report results of using HgI2 crystals to fabricate photocells used in the readout of scintillators. The optical spectral response and efficiency of these photocells were measured and will be reported. Nuclear response from an HgI2 photocell that was optically matched to a cerium-activated scintillator is presented and discussed. Further improvements can be expected by optimizing the transparent contact technology
5. High-resolution x-ray imaging using a structured scintillator
Energy Technology Data Exchange (ETDEWEB)
Hormozan, Yashar, E-mail: hormozan@kth.se; Sychugov, Ilya; Linnros, Jan [Materials and Nano Physics, School of Information and Communication Technology, KTH Royal Institute of Technology, Electrum 229, Kista, Stockholm SE-16440 (Sweden)
2016-02-15
Purpose: In this study, the authors introduce a new generation of finely structured scintillators with a very high spatial resolution (a few micrometers) compared to conventional scintillators, yet maintaining a thick absorbing layer for improved detectivity. Methods: Their concept is based on a 2D array of high-aspect-ratio pores, fabricated by ICP etching of silicon with spacings (pitches) of a few micrometers, followed by oxidation of the pore walls. The pores were subsequently filled by melting of powdered CsI(Tl), as the scintillating agent. In order to couple the secondary emitted photons at the back of the scintillator array to a CCD device having a larger pixel size than the pore pitch, an open optical microscope with adjustable magnification was designed and implemented. By imaging a sharp edge, the authors were able to calculate the modulation transfer function (MTF) of this finely structured scintillator. Results: The x-ray images of individually resolved pores suggest that they have been almost uniformly filled, and the MTF measurements show the feasibility of imaging with a spatial resolution of a few microns, as set by the scintillator pore size. Compared to existing techniques utilizing CsI needles as a structured scintillator, their results imply an almost sevenfold improvement in resolution. Finally, high resolution images, taken by their detector, are presented. Conclusions: The presented work successfully shows the functionality of their detector concept for high resolution imaging, and further fabrication developments are most likely to result in higher quantum efficiencies.
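The edge method mentioned here follows a standard recipe: the imaged edge gives an edge-spread function (ESF), its derivative is the line-spread function (LSF), and the magnitude of the LSF's Fourier transform, normalized at zero frequency, is the MTF. A generic numpy sketch (not the authors' code; the synthetic Gaussian edge is only for illustration):

```python
import numpy as np
from math import erf, sqrt

def mtf_from_edge(esf, pixel_um):
    """MTF from a 1-D edge-spread function sampled at pixel_um pitch.
    Returns (spatial frequency in cycles/mm, normalized MTF)."""
    lsf = np.gradient(esf)                  # LSF = d(ESF)/dx (noisy edges need smoothing)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                           # normalize to 1 at zero frequency
    freq = np.fft.rfftfreq(lsf.size, d=pixel_um * 1e-3)  # pitch converted to mm
    return freq, mtf

# Synthetic edge blurred by a Gaussian of sigma = 5 um, sampled every 1 um;
# for such a blur the analytic MTF is exp(-2*(pi*f*sigma)^2).
x_um = np.arange(-128, 128)
sigma_um = 5.0
esf = np.array([0.5 * (1 + erf(x / (sigma_um * sqrt(2)))) for x in x_um])
freq, mtf = mtf_from_edge(esf, pixel_um=1.0)
```

In practice the ISO/IEC slanted-edge variants oversample the ESF across many rows before differentiating, which this minimal 1-D version skips.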
6. Gas scintillation glass GEM detector for high-resolution X-ray imaging and CT
Energy Technology Data Exchange (ETDEWEB)
Fujiwara, T., E-mail: fujiwara-t@aist.go.jp [Research Institute for Measurement and Analytical Instrumentation, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Mitsuya, Y. [Nuclear Professional School, The University of Tokyo, Tokai, Naka, Ibaraki 319-1188 (Japan); Fushie, T. [Radiment Lab. Inc., Setagaya, Tokyo 156-0044 (Japan); Murata, K.; Kawamura, A.; Koishikawa, A. [XIT Co., Naruse, Machida, Tokyo 194-0045 (Japan); Toyokawa, H. [Research Institute for Measurement and Analytical Instrumentation, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, H. [Institute of Engineering Innovation, School of Engineering, The University of Tokyo, Bunkyo, Tokyo 113-8654 (Japan)
2017-04-01
A high-spatial-resolution X-ray-imaging gaseous detector has been developed with a single high-gas-gain glass gas electron multiplier (G-GEM), scintillation gas, and an optical camera. High-resolution X-ray imaging of soft elements is performed with a spatial resolution of 281 µm rms and an effective area of 100×100 mm². In addition, high-resolution X-ray 3D computed tomography (CT) is successfully demonstrated with the gaseous detector. It shows high sensitivity to low-energy X-rays, which results in high-contrast radiographs of objects containing elements with low atomic numbers. Moreover, the high yield of scintillation light enables fast X-ray imaging, which is an advantage for constructing CT images with low-energy X-rays.
7. Porous silicon phantoms for high-resolution scintillation imaging
Energy Technology Data Exchange (ETDEWEB)
Di Francia, G. [Portici Research Centre, ENEA, Via Vecchio Macello, 80055 Portici, Naples (Italy); Scafe, R. [Casaccia Research Centre, ENEA, 00060 S.Maria di Galeria, Rome (Italy)]. E-mail: scafe@casaccia.enea.it; De Vincentis, G. [Department of Radiological Sciences, University of Rome 'La Sapienza', V.le Regina Elena, 324, 00161 Rome (Italy); La Ferrara, V. [Portici Research Centre, ENEA, Via Vecchio Macello, 80055 Portici, Naples (Italy); Iurlaro, G. [Casaccia Research Centre, ENEA, 00060 S.Maria di Galeria, Rome (Italy); Nasti, I. [Portici Research Centre, ENEA, Via Vecchio Macello, 80055 Portici, Naples (Italy); Montani, L. [Casaccia Research Centre, ENEA, 00060 S.Maria di Galeria, Rome (Italy); Pellegrini, R. [Department of Experimental Medicine, University of Rome 'La Sapienza', V.le Regina Elena, 324, 00161 Rome (Italy); Betti, M. [Department of Experimental Medicine, University of Rome 'La Sapienza', V.le Regina Elena, 324, 00161 Rome (Italy); Martucciello, N. [Portici Research Centre, ENEA, Via Vecchio Macello, 80055 Portici, Naples (Italy); Pani, R. [Department of Experimental Medicine, University of Rome 'La Sapienza', V.le Regina Elena, 324, 00161 Rome (Italy)
2006-12-20
High-resolution radionuclide imaging requires phantoms with precise geometries and known activities, whether using Anger cameras equipped with pinhole collimators or dedicated small-animal devices. Porous silicon samples, having areas of different shape and size, can be made and loaded with a radioactive material, obtaining: (a) precise radio-emitting figures corresponding to the geometry of the porous areas, (b) a radioactivity of each figure depending on the pores' specifications, and (c) the same emission energy as used in real examinations. To this aim a sample with porous circular areas has been made and loaded with a ⁹⁹ᵐTcO₄⁻ solution. Imaging has been obtained using both general-purpose and pinhole collimators. This first sample shows some defects that are analyzed and discussed.
8. Time resolution research in liquid scintillating detection
International Nuclear Information System (INIS)
He Hongkun; Shi Haoshan
2006-01-01
Signal-processing design methods are introduced into the design of liquid-scintillation detection systems. Analysis of the liquid-scintillation detection signal shows that improving the time resolution is beneficial for upgrading the detection efficiency. A realization scheme and satisfactory experimental data are presented. The same approach applies to other types of liquid-scintillation detection, provided faster signal-processing techniques and components are used. (authors)
9. High-resolution tracking using large capillary bundles filled with liquid scintillator
CERN Document Server
Annis, P; Benussi, L; Bruski, N; Buontempo, S; Currat, C; D'Ambrosio, N; Van Dantzig, R; Dupraz, J P; Ereditato, A; Fabre, Jean-Paul; Fanti, V; Feyt, J; Frekers, D; Frenkel, A; Galeazzi, F; Garufi, F; Goldberg, J; Golovkin, S V; Gorin, A M; Grégoire, G; Harrison, K; Höpfner, K; Holtz, K; Konijn, J; Kozarenko, E N; Kreslo, I E; Kushnirenko, A E; Liberti, B; Martellotti, G; Medvedkov, A M; Michel, L; Migliozzi, P; Mommaert, C; Mondardini, M R; Panman, J; Penso, G; Petukhov, Yu P; Rondeshagen, D; Siegmund, W P; Tyukov, V E; Van Beek, G; Vasilchenko, V G; Vilain, P; Visschers, J L; Wilquet, G; Winter, Klaus; Wolff, T; Wörtche, H J; Wong, H; Zimyn, K V
2000-01-01
We have developed large high-resolution tracking detectors based on glass capillaries filled with organic liquid scintillator of high refractive index. These liquid-core scintillating optical fibres act simultaneously as detectors of charged particles and as image guides. Track images projected onto the readout end of a capillary bundle are visualized by an optoelectronic chain consisting of a set of image-intensifier tubes followed by a photosensitive CCD or by an EBCCD camera. Two prototype detectors, each composed of ≈10⁶ capillaries with 20–25 μm diameter and 0.9–1.8 m length, have been tested, and a spatial resolution of the order of 20–40 μm has been attained. A high scintillation efficiency and a large light-attenuation length, in excess of 3 m, was achieved through special purification of the liquid scintillator. Along the tracks of minimum-ionizing particles, the hit densities obtained were ~8 hits/mm at the readout window, and ~3 ...
10. A high resolution scintillating fibre (SCIFI) tracking device with CCD readout
International Nuclear Information System (INIS)
Atkinson, M.N.; Crennell, D.J.; Fisher, C.M.; Hughes, P.T.; Kirkby, J.; Fent, J.; Freund, P.; Osthoff, A.; Pretzl, K.
1987-06-01
The authors present initial test-beam measurements of a high-resolution scintillating fibre detector with charge-coupled device readout. The analysis procedure is discussed and the performance of the detector and its readout assembly is evaluated. A detected photon density of 2.0 mm⁻¹ is found along minimum-ionising tracks, with a straight-line RMS residual of 19.3 ± 2.9 μm, giving rise to a track impact-parameter precision of 8.8 ± 2.0 μm. The two-track resolution is found to be 52 μm. (author)
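The quoted residual and impact-parameter figures come from straight-line least-squares fits to the detected photons. A toy illustration of how such numbers arise (the track length, slope, and seed are made up; only the photon density and hit smearing are loosely taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
density = 2.0        # detected photons per mm of track (as quoted)
length = 30.0        # mm of track in the fibre stack (assumed geometry)
sigma = 0.019        # single-hit smearing, mm (~the quoted 19.3 um residual)

n_hits = rng.poisson(density * length)
z = rng.uniform(0.0, length, n_hits)             # hit positions along the track
x = 0.05 * z + rng.normal(0.0, sigma, n_hits)    # true slope 0.05 + smearing

slope, intercept = np.polyfit(z, x, 1)
residuals = x - (slope * z + intercept)

# impact-parameter (intercept) error from the usual least-squares formulas
s_zz = ((z - z.mean()) ** 2).sum()
b_err_mm = sigma * np.sqrt(1.0 / n_hits + z.mean() ** 2 / s_zz)
print(f"{n_hits} hits, rms residual {residuals.std()*1e3:.1f} um, "
      f"impact-parameter error {b_err_mm*1e3:.1f} um")
```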
11. Characterization of scintillator-based detectors for few-ten-keV high-spatial-resolution x-ray imaging
Energy Technology Data Exchange (ETDEWEB)
Larsson, Jakob C., E-mail: jakob.larsson@biox.kth.se; Lundström, Ulf; Hertz, Hans M. [Biomedical and X-ray Physics, Department of Applied Physics, KTH Royal Institute of Technology/Albanova, Stockholm 10691 (Sweden)
2016-06-15
Purpose: High-spatial-resolution x-ray imaging in the few-ten-keV range is becoming increasingly important in several applications, such as small-animal imaging and phase-contrast imaging. The detector properties critically influence the quality of such imaging. Here the authors present a quantitative comparison of scintillator-based detectors for this energy range and at high spatial frequencies. Methods: The authors determine the modulation transfer function, noise power spectrum (NPS), and detective quantum efficiency for Gadox, needle CsI, and structured CsI scintillators of different thicknesses and at different photon energies. An extended analysis of the NPS allows for direct measurements of the scintillator effective absorption efficiency and effective light yield as well as providing an alternative method to assess the underlying factors behind the detector properties. Results: There is a substantial difference in performance between the scintillators depending on the imaging task but in general, the CsI based scintillators perform better than the Gadox scintillators. At low energies (16 keV), a thin needle CsI scintillator has the best performance at all frequencies. At higher energies (28–38 keV), the thicker needle CsI scintillators and the structured CsI scintillator all have very good performance. The needle CsI scintillators have higher absorption efficiencies but the structured CsI scintillator has higher resolution. Conclusions: The choice of scintillator is greatly dependent on the imaging task. The presented comparison and methodology will assist the imaging scientist in optimizing their high-resolution few-ten-keV imaging system for best performance.
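The three quantities measured above combine via the standard relation DQE(f) = S̄²·MTF²(f) / (q̄·NPS(f)), with S̄ the mean signal and q̄ the incident photon fluence. A schematic helper (the formula is the textbook one; the arrays and numbers below are invented for illustration):

```python
import numpy as np

def dqe(mean_signal, mtf, nps, fluence):
    """Detective quantum efficiency from the standard relation
    DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)).
    mean_signal: mean detector output per unit area
    mtf, nps: MTF and noise power spectrum on the same frequency grid
    fluence: incident photons per unit area"""
    return (mean_signal ** 2) * np.asarray(mtf) ** 2 / (fluence * np.asarray(nps))

# illustrative numbers only
f = np.linspace(0, 10, 6)          # cycles/mm
mtf = np.exp(-0.15 * f)            # a plausible falling MTF
nps = np.full_like(f, 4.0e-6)      # white noise power
q = 1.0e5                          # photons / mm^2
out = dqe(mean_signal=0.5, mtf=mtf, nps=nps, fluence=q)
print(out)
```

With a white NPS, the DQE simply falls off as MTF², which is why resolution and absorption trade off so directly in the scintillator comparison above.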
12. High Resolution Tracking Devices Based on Capillaries Filled with Liquid Scintillator
CERN Multimedia
Bonekamper, D; Vassiltchenko, V; Wolff, T
2002-01-01
RD46. The aim of the project is to develop high resolution tracking devices based on thin glass capillary arrays filled with liquid scintillator. This technique provides high hit densities and a position resolution better than 20 μm. Further, their radiation hardness makes them superior to other types of tracking devices with comparable performance. Therefore, the technique is attractive for inner tracking in collider experiments, microvertex devices, or active targets for short-lived particle detection. High integration levels in the read-out, based on the use of multi-pixel photon detectors and the possibility of optical multiplexing, allow a considerable reduction in the number of output channels and, thus, in the cost of the detector. New optoelectronic devices have been developed and tested: the megapixel Electron Bombarded CCD (EBCCD), a high resolution image-detector having an outstanding capability of single photo-electron detection; the Vacuum Image Pipeline (VIP), a high-speed gateable pi...
13. Design and test of a high resolution plastic scintillating fiber detector with intensified CCD readout
International Nuclear Information System (INIS)
Rebourgeard, P.
1991-01-01
We present the design of a particle detector involving a coherent array of 100 000 plastic scintillating microfibers, with an individual core diameter around 50 micrometers, and an intensified two-dimensional CCD array. We investigate both theoretically and experimentally the use of polystyrene-based scintillators in optical multimode fibers. The isotropic excitation of modes and the characteristics of energy transfer between the polystyrene matrix and the added fluorescent dyes are of particular interest. An experimental approach is proposed and applied to the development of a new binary scintillator. In order to study the transmission of the signal from the interaction area to the output face, we quantify the loss factors, the resolution and the signal-to-noise ratio within the fiber array. The low light level at the output face of the detector leads us to use image intensifiers in photon-counting mode. This requires a detailed analysis of the concepts of resolution, gain, noise and detectivity. We propose to describe these strongly correlated notions by the moment-generating formalism. Thus, a prior modelling of the photoelectronic devices allows us to evaluate the performance of the readout chain. A complete detector has been assembled and tested in a high-energy hadron beam; the measurements are in good agreement with the model. (in French)
14. Search for new scintillators for high-energy resolution electromagnetic calorimeters
International Nuclear Information System (INIS)
Britvich, G.I.; Britvich, I.G.; Vasil'chenko, V.G.; Lishin, V.A.; Obraztsov, V.F.; Polyakov, V.A.; Solovjev, A.S.
1999-01-01
Some opportunities for the creation of radiation-resistant heterogeneous electromagnetic calorimeters with an energy resolution of about σ/E ≅ 4–5%/√E are given in this article. Investigation results on the scintillation and radiation characteristics of thin molded plates and new heavy scintillators based on polystyrene and containing metalloorganic additives are presented. The radiation resistance of thin molded scintillator plates about 1.1 mm thick containing 2% pTP + 0.05% POPOP has reached a level of about 15–20 kGy
15. High resolution time-of-flight measurements in small and large scintillation counters
International Nuclear Information System (INIS)
D'Agostini, G.; Marini, G.; Martellotti, G.; Massa, F.; Rambaldi, A.; Sciubba, A.
1981-01-01
In a test run, the experimental time-of-flight resolution was measured for several different scintillation counters of small (10 × 5 cm²) and large (100 × 15 cm² and 75 × 25 cm²) area. The design characteristics were decided on the basis of theoretical Monte Carlo calculations. We report results using twisted, fish-tail, and rectangular light-guides and different types of scintillator (NE 114 and PILOT U). Time resolutions up to ≈130–150 ps FWHM for the small counters and up to ≈280–300 ps FWHM for the large counters were obtained. The spatial resolution from time measurements in the large counters is also reported. The results of Monte Carlo calculations on the type of scintillator, the shape and dimensions of the light-guides, and the nature of the external wrapping surfaces - to be used in order to optimize the time resolution - are also summarized. (orig.)
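The spatial resolution obtained from time measurements in a long counter read out at both ends follows from x = v_eff·(t₁ − t₂)/2, so σ_x = v_eff·σ_Δt/2. A back-of-envelope sketch (the effective light speed of ~15 cm/ns in plastic scintillator is an assumed typical value, and the quoted FWHM is taken here as the spread of the time difference — neither number is a claim about this particular paper):

```python
FWHM_TO_SIGMA = 1 / 2.355          # Gaussian FWHM -> standard deviation

def position_resolution_mm(dt_fwhm_ps, v_eff_cm_per_ns=15.0):
    """Spatial resolution (mm) along a bar read out at both ends:
    x = v_eff*(t1 - t2)/2, so sigma_x = v_eff * sigma(t1 - t2) / 2."""
    sigma_dt_ns = dt_fwhm_ps * 1e-3 * FWHM_TO_SIGMA
    return v_eff_cm_per_ns * sigma_dt_ns / 2.0 * 10.0   # cm -> mm

# ~300 ps FWHM, as measured for the large counters
print(round(position_resolution_mm(300.0), 1))
```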
16. Clinical dosimetry with plastic scintillators - Almost energy independent, direct absorbed dose reading with high resolution
Energy Technology Data Exchange (ETDEWEB)
Quast, U; Fluehs, D [Department of Radiotherapy, Essen (Germany). Div. of Clinical Radiation Physics; Fluehs, D; Kolanoski, H [Dortmund Univ. (Germany). Inst. fuer Physik
1996-08-01
Clinical dosimetry is still far behind the goal of measuring any spatial or temporal distribution of absorbed dose quickly and precisely without disturbing the physical situation by the dosimetry procedure. NE 102A plastic scintillators overcome this barrier. These tissue-substituting dosemeter probes open a wide range of new clinical applications of dosimetry. This versatile new dosimetry system enables fast measurement of the absorbed dose to water, in water, also in regions with a steep dose gradient, close to interfaces, or in partly shielded regions. It allows direct-reading dosimetry in the energy range of all clinically used external photon and electron beams, or around all brachytherapy sources. Thin detector arrays permit fast and high-resolution measurements in quality assurance, such as in-vivo dosimetry or even afterloading dose monitoring. A main field of application is dosimetric treatment planning, i.e. the individual optimization of brachytherapy applicators. Thus, plastic scintillator dosemeters cover optimally all difficult fields of clinical dosimetry. An overview of their characteristics and applications is given here. 20 refs, 1 fig.
17. Energy resolution of scintillation detectors
Energy Technology Data Exchange (ETDEWEB)
Moszyński, M., E-mail: M.Moszynski@ncbj.gov.pl; Syntfeld-Każuch, A.; Swiderski, L.; Grodzicka, M.; Iwanowska, J.; Sibczyński, P.; Szczęśniak, T.
2016-01-01
According to current knowledge, the non-proportionality of the light yield of scintillators appears to be a fundamental limitation of energy resolution. A good energy resolution is of great importance for most applications of scintillation detectors. Thus, its limitations are discussed below; they arise from the non-proportional response of scintillators to gamma rays and electrons, which is of crucial importance to the intrinsic energy resolution of crystals. The important influence of Landau fluctuations and the scattering of secondary electrons (δ-rays) on intrinsic resolution is pointed out here. The study on undoped NaI and CsI at liquid nitrogen temperature with a light readout by avalanche photodiodes strongly suggests that the non-proportionality of many crystals is not their intrinsic property and may be improved by selective co-doping. Finally, several observations that have been collected in the last 15 years on the influence of the slow components of light pulses on energy resolution suggest that more complex processes are taking place in the scintillators. This was observed with CsI(Tl), CsI(Na), ZnSe(Te), and undoped NaI at liquid nitrogen temperature and, finally, for NaI(Tl) at temperatures reduced below 0 °C. A common conclusion of these observations is that the highest energy resolution, and particularly intrinsic resolution measured with the scintillators, characterized by two or more components of the light pulse decay, is obtainable when the spectrometry equipment integrates the whole light of the components. In contrast, the slow components observed in many other crystals degrade the intrinsic resolution. In the limiting case, afterglow could also be considered as a very slow component that spoils the energy resolution. The aim of this work is to summarize all of the above observations by looking for their origin.
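The contributions discussed here are conventionally combined in quadrature: the measured fractional FWHM splits into an intrinsic term, a transfer term, and a photostatistics term ≈ 2.355·√((1 + v(M))/N_phe), where v(M) is the PMT gain variance. A small numeric sketch of unfolding the intrinsic resolution (the NaI(Tl)-like numbers are illustrative, not taken from the paper):

```python
import math

def statistical_resolution(n_phe, enf=1.0):
    """Photostatistics contribution (fractional FWHM) for n_phe
    photoelectrons and PMT excess-noise factor enf = 1 + v(M)."""
    return 2.355 * math.sqrt(enf / n_phe)

def intrinsic_resolution(r_measured, n_phe, enf=1.0, r_transfer=0.0):
    """Unfold the intrinsic term: R_i^2 = R^2 - R_st^2 - R_tr^2."""
    r_st = statistical_resolution(n_phe, enf)
    return math.sqrt(r_measured**2 - r_st**2 - r_transfer**2)

# e.g. ~7% FWHM measured at 662 keV, ~6000 photoelectrons, v(M) = 0.25
print(round(100 * intrinsic_resolution(0.07, 6000, enf=1.25), 2), "%")
```

With these numbers the statistical term is ~3.4%, so most of the measured width is intrinsic — the point the abstract is making.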
18. The development of a high-resolution scintillating fiber tracker with silicon photomultiplier readout
International Nuclear Information System (INIS)
Roper Yearwood, Gregorio
2013-01-01
In this work I present the design and test results for a novel, modular tracking detector from scintillating fibers which are read out by silicon photomultiplier (SiPM) arrays. The detector modules consist of 0.25 mm thin scintillating fibers which are closely packed in five-layer ribbons. Two ribbons are fixed to both sides of a carbon-fiber composite structure. Custom-made SiPM arrays with a photo-detection efficiency of about 50% read out the fibers. Several 860 mm long and 32 mm wide tracker modules were tested in a secondary 12 GeV/c beam at the PS facilities, CERN in November of 2009. During this test a spatial resolution better than 0.05 mm at an average light yield of about 20 photons for a minimum ionizing particle was determined. This work details the characterization of scintillating fibers and silicon photomultipliers of different make and model. It gives an overview of the production of scintillating fiber modules. The behavior of detector modules during the test-beam is analyzed in detail and different options for the front-end electronics are compared. Furthermore, the implementation of the proposed tracking detector from scintillating fibers within the scope of the PERDaix experiment is discussed. The PERDaix detector is a permanent magnet spectrometer with a weight of 40 kg. It consists of 8 tracking detector layers from scintillating fibers, a time-of-flight detector from plastic scintillator bars with silicon photomultiplier readout and a transition radiation detector from an irregular fleece radiator and Xe/CO₂-filled proportional counting tubes. The PERDaix detector was launched with a helium balloon within the scope of the "Balloon-Experiments for University Students" (BEXUS) program from Kiruna, Sweden in November 2010. For a few hours PERDaix reached an altitude of 33 km and measured cosmic rays. In May 2011, the PERDaix detector was characterized during a test-beam at the PS facilities at CERN. This work introduces methods for event
19. A high-resolution tracking hodoscope based on capillary layers filled with liquid scintillator
CERN Document Server
Bay, A; Bruski, N; Buontempo, S; Currat, C; D'Ambrosio, N; Ekimov, A V; Ereditato, A; Fabre, Jean-Paul; Fanti, V; Frekers, D; Frenkel, A; Golovkin, S V; Govorun, V N; Harrison, K; Koppenburg, P; Kozarenko, E N; Kreslo, I E; Liberti, B; Martellotti, G; Medvedkov, A M; Mondardini, M R; Penso, G; Siegmund, W P; Vasilchenko, V G; Vilain, P; Wilquet, G; Winter, Klaus; Wörtche, H J
2001-01-01
Results are given on tests of a high-resolution tracking hodoscope based on layers of 26-μm-bore glass capillaries filled with organic liquid scintillator (1-methylnaphthalene doped with R39). The detector prototype consisted of three 2-mm-thick parallel layers, with surface areas of 2.1 × 21 cm². The layers had a centre-to-centre spacing of 6 mm, and were read by an optoelectronic chain comprising two electrostatically focused image intensifiers and an Electron-Bombarded Charge-Coupled Device (EBCCD). Tracks of cosmic-ray particles were recorded and analysed. The observed hit density was 6.6 hits/mm for particles crossing the layers perpendicularly, at a distance of 1 cm from the capillaries' readout end, and 4.2 hits/mm for particles at a distance of 20 cm. A track segment reconstructed in a single layer had an rms residual of ~20 μm, and allowed determination of the track position in a neighbouring layer with a precision of ~170 μm. This latter value corresponded to...
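The drop in hit density with distance from the readout end (6.6 hits/mm at 1 cm versus 4.2 hits/mm at 20 cm) lets one back out an effective attenuation length under a single-exponential model N(d) = N₀·e^(−d/λ). This is my illustrative computation for this one prototype, not a figure from the paper (and distinct from the >3 m bulk attenuation length quoted for other capillary detectors above):

```python
import math

def attenuation_length_cm(n1, d1_cm, n2, d2_cm):
    """Effective attenuation length from hit densities n1, n2 measured at
    distances d1, d2 from the readout end, assuming N(d) = N0*exp(-d/lam)."""
    return (d2_cm - d1_cm) / math.log(n1 / n2)

lam = attenuation_length_cm(6.6, 1.0, 4.2, 20.0)
print(f"effective attenuation length ~ {lam:.0f} cm")
```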
20. Design, construction and beam tests of the high resolution uranium scintillator calorimeter for ZEUS
International Nuclear Information System (INIS)
Straver, J.A.
1991-01-01
HERA will collide protons and electrons with energies up to 820 GeV and 30 GeV respectively. It therefore allows measurements at momentum transfers (Q) which greatly surpass the investigations carried out so far. This extended range in Q will allow investigation of the interactions between the quarks and leptons at a distance scale of the order of 10⁻¹⁸ cm. Two detectors are foreseen at HERA: H1 and ZEUS. The design of the ZEUS detector is optimized for the study of neutral and charged current interactions. A calorimeter is a detector which absorbs the total incident energy of a particle while generating a signal proportional to this energy. The ZEUS calorimeter is built of alternating layers of dense absorber plates (²³⁸U) and active layers of scintillator material with a fast readout system via wavelength shifters, light guides and photomultipliers. The main subject of this thesis is the description of this calorimeter and its performance. After a short introduction to HERA and the physics topics, the importance of the quality of a calorimeter is pointed out and a brief overview of the ZEUS detector is given. In ch. 3 the principles of high resolution hadron calorimetry and the studies which led to the design of the ZEUS calorimeter are discussed. Ch. 4 describes the mechanical design of the ZEUS forward calorimeter, the mechanical finite element calculations, and the production of the calorimeter modules at NIKHEF. Finally, chs. 5 and 6 show the results of beam tests of the ZEUS forward calorimeter prototypes and the final full-size forward calorimeter modules. (author). 59 refs.; 115 figs.; 29 tabs
21. High spatial resolution radiation detectors based on hydrogenated amorphous silicon and scintillator
International Nuclear Information System (INIS)
Jing, T.; Lawrence Berkeley Lab., CA
1995-05-01
Hydrogenated amorphous silicon (a-Si:H), as a large-area thin-film semiconductor with ease of doping and low-cost fabrication capability, has given a new impetus to the field of imaging sensors; its high radiation resistance also makes it a good material for radiation detectors. In addition, large-area microelectronics based on a-Si:H or polysilicon can be made with full integration of peripheral circuits, including readout switches and shift registers, on the same substrate. Thin a-Si:H p-i-n photodiodes coupled to suitable scintillators are shown to be suitable for detecting charged particles, electrons, and X-rays. The response speed of CsI/a-Si:H diode combinations to individual particulate radiation is limited by the scintillation light decay, since the charge collection time of the diode is very short (< 10 ns). The reverse current of the detector is analyzed in terms of contact injection, thermal generation, field-enhanced emission (Poole-Frenkel effect), and edge leakage. A good collection efficiency for a diode is obtained by optimizing the thickness and composition of the diode's p layer. The CsI(Tl) scintillator coupled to an a-Si:H photodiode detector shows a capability for detecting minimum ionizing particles with S/N ∼20. In such an arrangement a p-i-n diode is operated in a photovoltaic mode (reverse bias). In addition, a p-i-n diode can also work as a photoconductor under forward bias and produces a gain of 3–8 for shaping times of 1 μs. The mechanism of the formation of structured CsI scintillator layers is analyzed. Initial nucleation in the deposited layer is sensitive to the type of substrate medium, with imperfections generally catalyzing nucleation. Therefore, the microgeometry of a patterned substrate has a significant effect on the structure of the CsI growth.
3. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging
Energy Technology Data Exchange (ETDEWEB)
Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)
2017-02-11
The Compton camera, which images gamma-ray distributions by exploiting the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energies. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd₃Al₂Ga₃O₁₂ (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that, at 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a ¹³⁷Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneously measured energy sources (²²Na [511 keV], ¹³⁷Cs [662 keV], and ⁵⁴Mn [834 keV]).
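The scattering-angle computation underlying Compton-camera imaging can be sketched as follows. This is a minimal illustration of standard Compton kinematics, not code from the paper; the function name is illustrative:

```python
import math

ME_C2 = 511.0  # electron rest energy in keV

def compton_angle(e_scatter, e_absorb):
    """Scattering angle (degrees) from the energy deposited in the
    scatterer (e_scatter) and absorber (e_absorb), using
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E0), with E0 = e_scatter + e_absorb."""
    e0 = e_scatter + e_absorb            # incident photon energy
    cos_t = 1.0 - ME_C2 * (1.0 / e_absorb - 1.0 / e0)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("kinematically forbidden event")
    return math.degrees(math.acos(cos_t))
```

For a 662 keV photon scattering at 90°, the scattered photon carries about 288 keV and the scatterer about 374 keV, and the formula recovers the 90° angle.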
4. Development and Studies of Novel Microfabricated Radiation Hard Scintillation Detectors With High Spatial Resolution
CERN Document Server
Mapelli, A; Haguenauer, M; Jiguet, S; Renaud, P; Vico Triviño, N
2011-01-01
A new type of scintillation detector is being developed with standard microfabrication techniques. It consists of a dense array of scintillating waveguides obtained by coupling microfluidic channels filled with a liquid scintillator to photodetectors. Easy manipulation of liquid scintillators inside microfluidic devices allows their flushing, renewal, and exchange, making the active medium intrinsically radiation hard. Prototype detectors have been fabricated by photostructuration of a radiation-hard epoxy resin (SU-8) deposited on silicon wafers and coupled to a multi-anode photomultiplier tube (MAPMT) to read out the scintillation light. They have been characterized by exciting the liquid scintillator in the 200 μm thick microchannels with electrons from a 90Sr source, yielding approximately 1 photoelectron per impinging minimum ionizing particle (MIP). These promising results demonstrate the concept of microfluidic scintillating detection and are very encouraging for future developments.
5. Coincidence resolution time of two small scintillators coupled to high quantum-efficiency photomultipliers in a PET-like system
Science.gov (United States)
Galetta, G.; De Leo, R.; Garibaldi, F.; Grodzicka, M.; Lagamba, L.; Loddo, F.; Masiello, G.; Nappi, E.; Perrino, R.; Ranieri, A.; Szczęśniak, T.
2014-03-01
The lower limit of the time resolution for a positron emission tomography (PET) system has been measured for two scintillator types, LYSO:Ce and LuAG:Pr. Small crystals and ultra-bialkali phototubes have been used in order to increase the number of detected scintillation photons. Good timing resolutions of 118 ps and 223 ps FWHM have been obtained for pairs of LYSO and LuAG crystals, respectively, exposed to a 22Na source.
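For two identical detectors in coincidence, timing uncertainties add in quadrature, so each detector contributes the measured coincidence value divided by √2. A small sketch (function names are illustrative):

```python
import math

def single_detector_fwhm(coincidence_fwhm_ps):
    """For two identical detectors, each contributes FWHM_coinc / sqrt(2)."""
    return coincidence_fwhm_ps / math.sqrt(2)

def coincidence_fwhm(fwhm_a_ps, fwhm_b_ps):
    """Coincidence resolving time of two (possibly different) detectors,
    obtained by quadrature addition of the single-detector values."""
    return math.hypot(fwhm_a_ps, fwhm_b_ps)
```

The quoted 118 ps FWHM coincidence value thus corresponds to roughly 83 ps per LYSO detector.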
6. Scintillation camera with second order resolution
International Nuclear Information System (INIS)
Muehllehner, G.
1976-01-01
A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved
7. Measurements of energy resolution with hemispheric scintillators
International Nuclear Information System (INIS)
Mendonca, A.C.S.; Binns, D.A.C.; Tauhata, L.; Poledna, R.
1980-01-01
The hemispheric configuration is used for NE 102-type plastic scintillators with the aim of optimizing light collection. Scintillators in this configuration, with radii of 3.81 cm and 2.54 cm, show an improvement of about 16-17% in energy resolution over cylindrical scintillators of the same volume, for gamma rays of 511-1275 keV. (E.G.) [pt]
8. Development of large-volume, high-resolution tracking detectors based on capillaries filled with liquid scintillator
International Nuclear Information System (INIS)
Buontempo, S.; Fabre, J.P.; Frenkel, A.; Gregoire, G.; Hoepfner, K.; Konijn, J.; Kozarenko, E.; Kreslo, I.; Kushnirenko, A.; Martellotti, G.; Michel, L.; Mondardini, M.R.; Penso, G.; Siegmund, W.P.; Strack, R.; Tyukov, V.; Vasilchenko, V.; Vilain, P.; Wilquet, G.; Winter, K.; Wong, H.; Zymin, K.
1995-01-01
Searches for the decay of short-lived particles require real-time, high-resolution tracking in active targets, which in the case of neutrino physics should be of large volume. The possibility of achieving this by using glass capillaries filled with organic liquid scintillator is being investigated in the framework of the CHORUS experiment at CERN. In this paper, after outlining the application foreseen, advances in the tracking technique are discussed and results from tests are reported. An active target of dimensions 180×2×2 cm³ has been assembled from capillaries with 20 μm diameter pores. The readout scheme currently in operation allows the reading of ∼5×10⁵ channels using a single chain of image intensifiers with a resolution of σ ∼ 20 μm. Following the development of new liquid scintillators and purification methods, an attenuation length of ∼3 m has been obtained. This translates into a hit density of 3.5 per mm for a minimum-ionizing particle that crosses the active target at a distance of 1 m from the readout end. (orig.)
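The quoted hit density follows from exponential light attenuation along the capillary. A sketch under the stated ~3 m attenuation length; the density at the readout end (`n0_per_mm`) is back-calculated from the abstract's numbers and is therefore an assumption:

```python
import math

def hit_density(distance_m, n0_per_mm=4.9, att_len_m=3.0):
    """Hits per mm for a MIP crossing at distance_m from the readout end,
    assuming exponential attenuation N(x) = N0 * exp(-x / lambda).
    n0_per_mm is a hypothetical readout-end value chosen so that
    N(1 m) matches the quoted 3.5 hits/mm."""
    return n0_per_mm * math.exp(-distance_m / att_len_m)
```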
9. Construction techniques of the high resolution lead / scintillating fibre electromagnetic calorimeter for the KLOE experiment
International Nuclear Information System (INIS)
Anelli, M.; Bisogni, G.; Ceccarelli, A.
1997-07-01
The electromagnetic calorimeter of the KLOE experiment is a lead/scintillating-fibre sampling device. This calorimeter is arranged as a 'barrel', closed at each end with an 'end-cap'. The barrel consists of 24 modules defining a cylinder 4.3 m long with a 4 m inner diameter. Each end-cap consists of 32 modules running vertically along the chords of the circle inscribed in the barrel. In this paper the calorimeter construction techniques are described.
12. Experimental measurement of a high resolution CMOS detector coupled to CsI scintillators under X-ray radiation
International Nuclear Information System (INIS)
Michail, C.; Valais, I.; Seferis, I.; Kalyvas, N.; Fountos, G.; Kandarakis, I.
2015-01-01
The purpose of the present study was to assess the information content of structured CsI:Tl scintillating screens, specially treated to be compatible with a CMOS digital imaging optical sensor, in terms of information capacity (IC), based on Shannon's mathematical theory of communication. IC was assessed after experimental determination of the Modulation Transfer Function (MTF) and the Normalized Noise Power Spectrum (NNPS) in the mammography and general radiography energy ranges. The CMOS sensor was coupled to three columnar CsI:Tl scintillator screens obtained from the same manufacturer, with thicknesses of 130, 140 and 170 μm respectively, which were placed in direct contact with the optical sensor. The MTF was measured using the slanted-edge method, while the NNPS was determined by 2-D Fourier transformation of uniformly exposed images. Both parameters were assessed by irradiation under the mammographic W/Rh (130, 140 and 170 μm CsI screens) and the RQA-5 (140 and 170 μm CsI screens) (IEC 62220-1) beam qualities. The detector response function was linear over the exposure range under investigation. At 70 kVp, under RQA-5 conditions, IC values were found to range between 2229 and 2340 bits/mm². At 28 kVp the corresponding IC values ranged between 2262 and 2968 bits/mm². The information content of the CsI:Tl scintillating screens in combination with the high-resolution CMOS sensor investigated in the present study was found to be optimized for use in digital mammography imaging systems. - Highlights: • Three structured CsI:Tl screens (130, 140 & 170 μm) were coupled to a CMOS sensor. • MTF of the CsI/CMOS was higher than GOS:Tb and CsI based digital imaging systems. • IC of CsI:Tl/CMOS was found optimized for use in digital mammography systems
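A Shannon-type information capacity can be sketched as a discrete frequency-domain sum over sampled MTF and NNPS. This is a simplified illustration only; the paper's exact integrand and normalization may differ, and all parameter names are ours:

```python
import numpy as np

def information_capacity(mtf, nnps, signal_power, df):
    """Shannon-style information capacity (bits per unit area) from an
    MTF and NNPS sampled on a 2-D frequency grid with spacing df
    (e.g. cycles/mm). signal_power is the large-area signal level;
    a white input spectrum is assumed."""
    snr2 = signal_power * mtf**2 / nnps   # per-frequency SNR^2
    return np.sum(np.log2(1.0 + snr2)) * df * df
```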
13. Optimization of the detector and associated electronics used for high-resolution liquid-scintillation alpha spectroscopy
International Nuclear Information System (INIS)
Thorngate, J.H.; Christian, D.J.
1977-01-01
The performance of various reflector geometries, light-coupling liquids, photomultiplier tubes, preamplifiers and linear amplifiers was compared, and the configuration found that optimized the combination of pulse-height resolution and pulse-shape discrimination. The best combination used a hemispherical reflector, filled with distilled water, coupled to an 8575 photomultiplier tube, the output of which was conditioned by a special integrating preamplifier and a double-delay-line linear amplifier. Careful choice of the scintillator, sample preparation procedures, and electronic apparatus can produce liquid-scintillation alpha spectroscopy with a pulse-height resolution of 300 keV or less and, by using pulse-shape discrimination, background levels as low as 0.01 counts/min. (author)
14. Hadronic energy resolution of a highly granular scintillator-steel hadron calorimeter using software compensation techniques
CERN Document Server
Adloff, C.; Blaising, J.J.; Drancourt, C.; Espargiliere, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S.T.; Sosebee, M.; White, A.P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N.K.; Goto, T.; Mavromanolakis, G.; Thomson, M.A.; Ward, D.R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Benyamna, M.; Carloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G.C.; Dyshkant, A.; Lima, J.G.R.; Zutshi, V.; Hostachy, J.Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Gottlicher, P.; Gunter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.Ch; Shen, W.; Stamen, R.; Tadday, A.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G.W.; Kawagoe, K.; Dauncey, P.D.; Magnan, A.M.; Wing, M.; Salvatore, F.; Calvo Alamillo, E.; Fouz, M.C.; Puerta-Pelayo, J.; Balagura, V.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Dolgoshein, B.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Smirnov, S.; Kiesling, C.; Pfau, S.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Bonis, J.; Bouquet, B.; Callier, S.; Cornebise, P.; Doublet, Ph; Dulucq, F.; Faucci Giannelli, M.; Fleury, J.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch.; Poschl, R.; Raux, L.; Seguin-Moreau, N.; Wicek, F.; Anduze, M.; Boudry, V.; Brient, J.C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; 
Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Sauer, J.; Weber, S.; Zeitnitz, C.
2012-01-01
SPS. The energy resolution for single hadrons is determined to be approximately 58%/√(E/GeV). This resolution is improved to approximately 45%/√(E/GeV) with software compensation techniques. These techniques take advantage of the event-by-event information about the substructure of hadronic showers which is provided by the imaging capabilities of the calorimeter. The energy reconstruction is improved either with corrections based on the local energy density or by applying a single correction factor to the event energy sum derived from a global measure of the shower energy density. The application of the compensation algorithms to GEANT4 simulations yields resolution improvements comparable to those observed for real data.
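The local-energy-density weighting idea behind software compensation can be sketched as follows. The density bins and weights are hypothetical calibration inputs, not values from the paper:

```python
import bisect

def compensated_energy(hits, weights, bin_edges):
    """Software-compensation sketch: each hit energy is rescaled by a
    weight chosen from its local energy-density bin, so that dense
    (electromagnetic-like) deposits are weighted differently from
    sparse (hadronic-like) ones. `weights` has one entry per bin;
    `bin_edges` are the interior bin boundaries."""
    total = 0.0
    for energy, density in hits:
        i = min(bisect.bisect_right(bin_edges, density), len(weights) - 1)
        total += weights[i] * energy
    return total
```

A single global correction factor, the alternative the abstract mentions, would instead be applied once to the event energy sum.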
15. High efficiency scintillation detectors
International Nuclear Information System (INIS)
Noakes, J.E.
1976-01-01
A scintillation counter consisting of a scintillation detector, usually a crystal scintillator optically coupled to a photomultiplier tube that converts photons to electrical pulses, is described. The photomultiplier pulses are measured to provide information on impinging radiation. In inorganic crystal scintillation detectors, to achieve maximum density, optical transparency and uniform activation, it has heretofore been necessary to prepare the scintillator as a single crystal. Crystal pieces fail to give a single composite response. Means are provided herein for obtaining such a response with crystal pieces, such means comprising the combination of crystal pieces and liquid or solid organic scintillator matrices having a cyclic molecular structure favorable to fluorescence. 8 claims, 6 drawing figures
16. Hadronic energy resolution of a highly granular scintillator-steel hadron calorimeter using software compensation techniques
Czech Academy of Sciences Publication Activity Database
Adloff, C.; Blaha, J.; Blaising, J.J.; Cvach, Jaroslav; Gallus, Petr; Havránek, Miroslav; Janata, Milan; Kvasnička, Jiří; Lednický, Denis; Marčišovský, Michal; Polák, Ivo; Popule, Jiří; Tomášek, Lukáš; Tomášek, Michal; Růžička, Pavel; Šícho, Petr; Smolík, Jan; Vrba, Václav; Zálešák, Jaroslav
2012-01-01
Roč. 7, SEP (2012), 1-23 ISSN 1748-0221 R&D Projects: GA MŠk LA09042; GA MŠk LC527; GA ČR GA202/05/0653 Institutional research plan: CEZ:AV0Z10100502 Keywords : hadronic calorimetry * imaging calorimetry * software compensation Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.869, year: 2011
17. High Resolution Gamma Ray Spectroscopy at MHz Counting Rates With LaBr3 Scintillators for Fusion Plasma Applications
Science.gov (United States)
Nocente, M.; Tardocchi, M.; Olariu, A.; Olariu, S.; Pereira, R. C.; Chugunov, I. N.; Fernandes, A.; Gin, D. B.; Grosso, G.; Kiptily, V. G.; Neto, A.; Shevelev, A. E.; Silva, M.; Sousa, J.; Gorini, G.
2013-04-01
High resolution γ-ray spectroscopy measurements at MHz counting rates were carried out at nuclear accelerators, combining a LaBr₃(Ce) detector with dedicated hardware and software solutions based on digitization and off-line analysis. Spectra were measured at counting rates up to 4 MHz, with little or no degradation of the energy resolution, by adopting a pile-up rejection algorithm. The reported results represent a step forward towards the final goal of high resolution γ-ray spectroscopy measurements on a burning plasma device.
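A template-based pile-up rejection step of the kind made possible by digitization and off-line analysis can be sketched as below. The template-comparison scheme, function name and tolerance are illustrative assumptions, not the algorithm from the paper:

```python
def reject_pileup(samples, template, tol=0.05):
    """Pile-up rejection sketch: scale a reference single-pulse template
    (normalized to unit peak) to the event's peak amplitude, and flag
    the event as piled-up if any sample deviates from the scaled
    template by more than `tol` of the peak. Returns True if the event
    is accepted as a clean single pulse."""
    peak = max(samples)
    for s, t in zip(samples, template):
        if abs(s - peak * t) > tol * peak:
            return False  # shape inconsistent with a single pulse
    return True
```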
18. Investigation of high resolution compact gamma camera module based on a continuous scintillation crystal using a novel charge division readout method
International Nuclear Information System (INIS)
Dai Qiusheng; Zhao Cuilan; Qi Yujin; Zhang Hualin
2010-01-01
The objective of this study is to investigate a high-performance, lower-cost compact gamma camera module for a multi-head small-animal SPECT system. A compact camera module was developed using a thin lutetium oxyorthosilicate (LSO) scintillation crystal slice coupled to a Hamamatsu H8500 position-sensitive photomultiplier tube (PSPMT). A two-stage charge division readout board, based on a novel subtractive resistive readout with a truncated center-of-gravity (TCOG) positioning method, was developed for the camera. The performance of the camera was evaluated using a flood 99mTc source with a four-quadrant bar-mask phantom. The preliminary experimental results show that the image shrinkage problem associated with the conventional resistive readout can be effectively overcome by the novel subtractive resistive readout with an appropriate fraction subtraction factor. The response output area (ROA) of the camera shown in the flood image was improved by up to 34%, and an intrinsic spatial resolution of the detector better than 2 mm was achieved. In conclusion, the utilization of a continuous scintillation crystal and a flat-panel PSPMT equipped with a novel subtractive resistive readout is a feasible approach for developing a high-performance, lower-cost compact gamma camera. (authors)
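The truncated center-of-gravity idea can be sketched as follows. Subtracting a fixed fraction of the peak charge before the centroid is one plausible reading of the method; the paper's exact subtraction factor and channel arithmetic are not reproduced here:

```python
def truncated_cog(charges, positions, fraction=0.3):
    """Truncated centre-of-gravity sketch: subtract a fixed fraction of
    the peak charge from every channel, clip negative values to zero,
    then take the charge-weighted mean position. `fraction` is a
    hypothetical tuning parameter."""
    thresh = fraction * max(charges)
    clipped = [max(q - thresh, 0.0) for q in charges]
    total = sum(clipped)
    if total == 0.0:
        raise ValueError("all channels below threshold")
    return sum(q * x for q, x in zip(clipped, positions)) / total
```

Truncation suppresses the low-charge tails that pull a plain centroid toward the detector centre, which is the shrinkage effect the abstract describes.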
19. Energy resolution measurements of LaBr3:Ce scintillating crystals with an ultra-high quantum efficiency photomultiplier tube
International Nuclear Information System (INIS)
Pani, R.; Cinti, M.N.; Scafe, R.; Pellegrini, R.; Vittorini, F.; Bennati, P.; Ridolfi, S.; Lo Meo, S.; Mattioli, M.; Baldazzi, G.; Pisacane, F.; Navarria, F.; Moschini, G.; Boccaccio, P.; Orsolini Cencelli, V.; Sacco, D.
2009-01-01
The performance of the new prototype of high quantum efficiency PMT (43% at 380 nm), Hamamatsu R7600U-200, was studied coupled to a LaBr₃:Ce crystal of size ⌀12.5 mm × 12.5 mm. The energy resolution results were compared with those from two PMTs, Hamamatsu R7600U and R6231MOD, with 22% and 30% quantum efficiency (QE), respectively. Moreover, the photodetectors were equipped with tapered and un-tapered voltage dividers to study the non-linearity effects on the pulse-height distribution, due to the very high peak currents induced in the PMT by the fast and intense light pulse of LaBr₃:Ce. The results show an energy resolution improvement with the UBA PMT of about 20%, in the energy range of 80-662 keV, with respect to the BA one.
20. Semiconductor high-energy radiation scintillation detector
International Nuclear Information System (INIS)
Kastalsky, A.; Luryi, S.; Spivak, B.
2006-01-01
We propose a new scintillation-type detector in which high-energy radiation generates electron-hole pairs in a direct-gap semiconductor material that subsequently recombine producing infrared light to be registered by a photo-detector. The key issue is how to make the semiconductor essentially transparent to its own infrared light, so that photons generated deep inside the semiconductor could reach its surface without tangible attenuation. We discuss two ways to accomplish this, one based on doping the semiconductor with shallow impurities of one polarity type, preferably donors, the other by heterostructure bandgap engineering. The proposed semiconductor scintillator combines the best properties of currently existing radiation detectors and can be used for both simple radiation monitoring, like a Geiger counter, and for high-resolution spectrography of the high-energy radiation. An important advantage of the proposed detector is its fast response time, about 1 ns, essentially limited only by the recombination time of minority carriers. Notably, the fast response comes without any degradation in brightness. When the scintillator is implemented in a qualified semiconductor material (such as InP or GaAs), the photo-detector and associated circuits can be epitaxially integrated on the scintillator slab and the structure can be stacked-up to achieve virtually any desired absorption capability
1. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers
International Nuclear Information System (INIS)
Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak
2008-01-01
To establish methods for sinogram formation and correction in order to appropriately apply the filtered backprojection (FBP) reconstruction algorithm to data acquired using a PET scanner with multiple scintillation crystal layers. The format for raw PET data storage and the conversion from list-mode data to histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling-efficiency correction method were investigated. Gap compensation methods unique to this system were also investigated. All sinogram data were reconstructed using a 2-D filtered backprojection algorithm and compared to estimate the improvements from the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling-efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling-efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. A conversion method from histogram to sinogram was investigated for FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2-D reconstruction of multiple-crystal-layer PET data
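The list-mode-to-sinogram conversion that precedes FBP can be sketched as below. Binning granularity, FOV radius and the event representation are illustrative assumptions, not the paper's data format:

```python
import math

def bin_listmode(events, n_radial, n_angles, fov_radius):
    """Histogram list-mode coincidence events, given as LOR endpoints
    ((x1, y1), (x2, y2)), into an (angle x radial offset) sinogram.
    Each LOR is parameterized by its angle phi in [0, pi) and its
    signed perpendicular offset s from the origin."""
    sino = [[0] * n_radial for _ in range(n_angles)]
    dr = 2.0 * fov_radius / n_radial                  # radial bin width
    for (x1, y1), (x2, y2) in events:
        phi = math.atan2(y2 - y1, x2 - x1) % math.pi  # LOR angle in [0, pi)
        s = x1 * math.sin(phi) - y1 * math.cos(phi)   # signed offset
        a = min(int(phi / math.pi * n_angles), n_angles - 1)
        i = int((s + fov_radius) / dr)
        if 0 <= i < n_radial:
            sino[a][i] += 1
    return sino
```

The gap-compensation and sampling-efficiency corrections the abstract describes would then be applied to this histogram before FBP.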
2. How Photonic Crystals Can Improve the Timing Resolution of Scintillators
CERN Document Server
Lecoq, P; Knapitsch, A
2013-01-01
Photonic crystals (PhCs) and quantum optics phenomena open interesting perspectives to enhance the light extraction from scintillating media with high refractive indices, as demonstrated by our previous work. By doing so, they also influence the timing resolution of scintillators by improving the photostatistics. The present contribution will demonstrate that they actually do much more. Indeed, photonic crystals, if properly designed, allow the extraction of fast light propagation modes in the crystal with higher efficiency, therefore contributing to increasing the density of photons in the early phase of the light pulse. This is of particular interest to tag events at future high-energy physics colliders, such as CLIC, with a bunch-crossing rate of 2 GHz, as well as for a new generation of time-of-flight positron emission tomographs (TOF-PET) aiming at a coincidence timing resolution of 100 ps FWHM. At this level of precision, good control of the light propagation modes is crucial if we consid...
3. Experimental study of high-energy resolution lead/scintillating fiber calorimetry in the 600-1200 MeV energy region
International Nuclear Information System (INIS)
Bellini, V.; Bianco, S.; Capogni, M.; Casano, L.; D'Angelo, A.; Fabbri, F.L.; Ghio, F.; Giardoni, M.; Girolami, B.; Hu, L.; Levi Sandri, P.; Moricciani, D.; Nobili, G.; Passamonti, L.; Russo, V.; Sarwar, S.; Schaerf, C.
1997-01-01
An experimental investigation has been carried out on the properties of electromagnetic shower detectors, composed of a uniform array of plastic scintillating fibers and lead (50:35 by volume ratio), for photons in the energy range 600-1200 MeV. When the photon's angle of incidence to the fiber axis is within ±2°, an energy resolution of σ_E/E(%) = 5.12/√(E[GeV]) + 1.71 has been observed. (orig.)
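The fitted resolution function can be evaluated directly; a trivial helper, with the parameter values taken from the abstract:

```python
import math

def energy_resolution_pct(e_gev, stoch=5.12, const=1.71):
    """Fitted resolution sigma_E/E (%) = stoch/sqrt(E[GeV]) + const,
    i.e. a stochastic term added linearly to a constant term."""
    return stoch / math.sqrt(e_gev) + const
```

At 1 GeV this gives 6.83%, and at 0.64 GeV it gives 8.11%.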
4. High Efficiency, Low Cost Scintillators for PET
International Nuclear Information System (INIS)
Kanai Shah
2007-01-01
Inorganic scintillation detectors coupled to PMTs are an important element of medical imaging applications such as positron emission tomography (PET). Performance as well as cost of these systems is limited by the properties of the scintillation detectors available at present. The Phase I project was aimed at demonstrating the feasibility of producing high performance scintillators using a low cost fabrication approach. Samples of these scintillators were produced and their performance was evaluated. Overall, the Phase I effort was very successful. The Phase II project will be aimed at advancing the new scintillation technology for PET. Large samples of the new scintillators will be produced and their performance will be evaluated. PET modules based on the new scintillators will also be built and characterized
5. Taheri-Saramad x-ray detector (TSXD): a novel high spatial resolution x-ray imager based on ZnO nano scintillator wires in polycarbonate membrane.
Science.gov (United States)
Taheri, A; Saramad, S; Ghalenoei, S; Setayeshi, S
2014-01-01
A novel x-ray imager based on ZnO nanowires is designed and fabricated. The proposed architecture is based on the scintillation properties of ZnO nanostructures in a polycarbonate track-etched membrane. Because of the higher refractive index of the ZnO nanowires compared to the membrane, each nanowire acts as an optical fiber that prevents the generated optical photons from spreading inside the detector. This effect improves the spatial resolution of the imager. The detection quantum efficiency and spatial resolution of the fabricated imager are 11% and <6.8 μm, respectively.
7. WE-H-207A-01: Computational Evaluation of High-Resolution 18F Positron Imaging Using Radioluminescence Microscopy with Lu2O3: Eu Thin-Film Scintillator
Energy Technology Data Exchange (ETDEWEB)
Wang, Q; Sengupta, D; Pratx, G [Stanford University, Palo Alto, CA (United States)
2016-06-15
Purpose: Radioluminescence microscopy, an emerging and powerful tool for high-resolution beta imaging, has been applied to molecular imaging of cellular metabolism to understand tumor biology. A novel thin-film (10 µm thickness) scintillator made of Lu₂O₃:Eu has been developed to enhance the system performance. However, the advantages of radioluminescence imaging with a Lu₂O₃ scintillator over imaging with a conventional scintillator have not been explored theoretically to date. To validate the advantages of the thin-film scintillator, this study uses a novel computational simulation framework to evaluate the performance of radioluminescence microscopy using both conventional and thin-film scintillators. Methods: Numerical models for the different stages of positron imaging are established. Positrons from ¹⁸F passing through the scintillator and its neighboring structures are modeled by Monte-Carlo simulation using Geant4. The propagation and focusing of photons by the microscope are modeled by convolution with a depth-varying point spread function generated by the Gibson-Lanni model. Photons focused on the detector plane are then captured and converted into electronic signals by an electron-multiplication (EM) CCD camera, which is described by a photosensor model considering various noise sources and charge amplification. Results: The performance metrics of radioluminescence imaging with a thin-film Lu₂O₃ and a conventional CdWO₄ scintillator are compared, including spatial resolution, sensitivity, positron track area and intensity. The spatial resolution of the Lu₂O₃ system can reach 10 µm, a 12 µm improvement over that obtained with the CdWO₄ system. Meanwhile, the system with the Lu₂O₃ scintillator provides a higher mean sensitivity: 40% compared with 21.5% for the CdWO₄ system. Moreover, the simulation results are in good agreement with previous experimental measurements.
8. Scintillators
International Nuclear Information System (INIS)
Cusano, D.A.; Holub, F.F.; Prochazka, S.
1979-01-01
Scintillator bodies comprising phosphor materials and having high optical translucency with low light absorption, and methods of making the scintillator bodies, are described. Fabrication methods include (a) a hot-pressing process, (b) cold-pressing followed by sintering, (c) controlled cooling from a melt, and (d) hot-forging. The scintillator bodies that result are easily machined to desired shapes and sizes. Suitable phosphors include BaFCl:Eu, LaOBr:Tb, CsI:Tl, CaWO₄ and CdWO₄. (U.K.)
9. Ultrahigh resolution radiation imaging system using an optical fiber structure scintillator plate.
Science.gov (United States)
Yamamoto, Seiichi; Kamada, Kei; Yoshikawa, Akira
2018-02-16
High resolution imaging of radiation is required for such radioisotope distribution measurements as alpha particle detection in nuclear facilities or high energy physics experiments. For this purpose, we developed an ultrahigh resolution radiation imaging system using an optical fiber structure scintillator plate. We used a ~1 μm diameter fiber-structured GdAlO₃:Ce (GAP)/α-Al₂O₃ scintillator plate to reduce the light spread. The fiber-structured scintillator plate was optically coupled to a tapered optical fiber plate to magnify the image and combined with a lens-based high sensitivity CCD camera. We observed the images of alpha particles with a spatial resolution of ~25 μm. For the beta particles, the images had various shapes, and the trajectories of the electrons were clearly observed in the images. For the gamma photons, the images also had various shapes, and the trajectories of the secondary electrons were observed in some of the images. These results show that combining an optical fiber structure scintillator plate with a tapered optical fiber plate and a high sensitivity CCD camera achieves ultrahigh resolution and is a promising method to observe the images of the interactions of radiation in a scintillator.
10. Recipe for attaining optimal energy resolution in inorganic scintillators
Energy Technology Data Exchange (ETDEWEB)
Singh, Jai; Koblov, Alexander [School of Engineering and IT, B-purple-12, Faculty of EHSE, Charles Darwin University, Darwin, NT 0909 (Australia)
2012-12-15
Using an approximate form of the density of excitation created within the track initiated by an incident γ-photon on a scintillator, the light yield is derived as a function of linear, bimolecular and Auger radiative and quenching recombination rates. The non-proportionality in the yield is analysed as a function of the bimolecular and Auger quenching rates, and its dependence on the track radius is also studied. An optimal combination of these quenching rates and track radius is presented to obtain a recipe for inventing a scintillator material with optimal energy resolution. The importance of the mobility of charge carriers in minimising the non-proportionality in a scintillator is also discussed (copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
11. Recipe for attaining optimal energy resolution in inorganic scintillators
International Nuclear Information System (INIS)
Singh, Jai; Koblov, Alexander
2012-01-01
Using an approximate form of the density of excitation created within the track initiated by an incident γ-photon on a scintillator, the light yield is derived as a function of linear, bimolecular and Auger radiative and quenching recombination rates. The non-proportionality in the yield is analysed as a function of the bimolecular and Auger quenching rates, and its dependence on the track radius is also studied. An optimal combination of these quenching rates and track radius is presented to obtain a recipe for inventing a scintillator material with optimal energy resolution. The importance of the mobility of charge carriers in minimising the non-proportionality in a scintillator is also discussed (copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
12. A high-resolution detector based on liquid-core scintillating fibres with readout via an electron-bombarded charge-coupled device
International Nuclear Information System (INIS)
Cianfarani, C.; Duane, A.; Fabre, J.P.; Frenkel, A.; Golovkin, S.V.; Gorin, A.M.; Harrison, K.; Kozarenko, E.N.; Kushnirenko, A.E.; Ladygin, E.A.; Martellotti, G.; Medvedkov, A.M.; Nass, P.A.; Obudovski, V.P.; Penso, G.; Petukhov, Yu.P.; Siegmund, W.P.; Tyukov, V.E.; Vasilchenko, V.G.
1994-01-01
This paper is a presentation of results from tests in a 5 GeV/c hadron beam of detectors based on liquid-core scintillating fibres, each fibre consisting of a glass capillary filled with organic liquid scintillator. Fibre readout was performed via an Electron-Bombarded Charge-Coupled Device (EBCCD) image tube, a novel instrument that combines the functions of a high-gain, gated image intensifier and a Charge-Coupled Device. Using 1-methylnaphthalene doped with 3 g/l of R45 as liquid scintillator, the attenuation lengths obtained for light propagation over distances greater than 16 cm were 1.5 m in fibres of 20 μm core and 1.0 m in fibres of 16 μm core. For particles that crossed the fibres of 20 μm core at distances of ∼1.8 cm and ∼95 cm from the fibres' readout ends, the recorded hit densities were 5.3 mm⁻¹ and 2.5 mm⁻¹ respectively. Using 1-methylnaphthalene doped with 3.6 g/l of R39 as liquid scintillator and fibres of 75 μm core, the hit density obtained for particles that crossed the fibres at a distance of ∼1.8 cm from their readout ends was 8.5 mm⁻¹. With a specially designed bundle of tapered fibres, having core diameters that smoothly increase from 16 μm to 75 μm, a spatial precision of 6 μm was measured. (orig.)
13. Energy resolution limitations in a gas scintillation proportional counter
International Nuclear Information System (INIS)
Simons, D.G.; de Korte, P.A.J.; Peacock, A.; Bleeker, J.A.M.
1985-01-01
An investigation is made of the factors limiting the energy resolution of a gas scintillation proportional counter (GSPC). Several of these limitations originate in the drift region of such a counter, and data are presented giving a quantitative description of those effects. Data are also presented for a GSPC without a drift region, which therefore circumvents most of those degrading factors. The results obtained so far indicate that in that detector the limitation to the resolution is most probably due to the cleanliness of the gas. Further research is underway in order to assess quantitatively the limiting factors in such a driftless GSPC.
14. High-efficiency organic glass scintillators
Science.gov (United States)
Feng, Patrick L.; Carlson, Joseph S.
2017-12-19
A new family of neutron/gamma discriminating scintillators is disclosed that comprises stable organic glasses that may be melt-cast into transparent monoliths. These materials have been shown to provide light yields greater than those of solution-grown trans-stilbene crystals, and efficient PSD capabilities when combined with 0.01 to 0.05% by weight of the total composition of a wavelength-shifting fluorophore. Photoluminescence measurements reveal fluorescence quantum yields that are 2 to 5 times greater than those of conventional plastic or liquid scintillator matrices, which accounts for the superior light yield of these glasses. The unique combination of high scintillation light yields, efficient neutron/gamma PSD, and straightforward scale-up via melt-casting distinguishes the developed organic glasses from existing scintillators.
15. Time resolution in scintillator based detectors for positron emission tomography
International Nuclear Information System (INIS)
Gundacker, S.
2014-01-01
In the domain of medical photon detectors L(Y)SO scintillators are used for positron emission tomography (PET). The interest in time of flight (TOF) for PET is increasing since measurements have shown that new crystals like L(Y)SO coupled to state-of-the-art photodetectors, e.g. silicon photomultipliers (SiPMs), can reach coincidence time resolutions (CTRs) of far below 500 ps FWHM. To achieve these goals it is important to study the processes in the whole detection chain, i.e. the high energy particle or gamma interaction in the crystal, the scintillation process itself, the light propagation in the crystal with the light transfer to the photodetector, and the electronic readout. In this thesis time resolution measurements for a PET-like system are performed in a coincidence setup utilizing the ultra-fast amplifier discriminator NINO. We found that the time-over-threshold energy information provided by NINO shows a degradation in energy resolution for higher SiPM bias voltages. This is a consequence of the increasing dark count rate (DCR) of the SiPM at higher bias voltages together with the exponential decay of the signal. To overcome this problem and to operate the SiPM at its optimum voltage in terms of timing, we developed a new electronic board that employs NINO only as a low-noise leading edge discriminator together with an analog amplifier which delivers the energy information. With this new electronic board we indeed improved the measured CTR by about 15%. To study the limits of time resolution in more depth we measured the CTR with 2×2×3 mm³ LSO:Ce crystals codoped with 0.4% Ca coupled to commercially available SiPMs (Hamamatsu S10931-50P MPPC) and achieved a CTR of 108 ± 5 ps FWHM at an energy of 511 keV. We determined the influence of the data acquisition system and the electronics on the CTR to be 27 ± 2 ps FWHM and thus negligible. To quantitatively understand the measured values, we developed a Monte Carlo simulation tool in MATLAB that incorporates the timing
16. Energy resolution of a lead scintillating fiber electromagnetic calorimeter
International Nuclear Information System (INIS)
Budagov, Yu.; Chirikov-Zorin, I.; Glagolev, V.
1993-01-01
A calorimeter module was fabricated using profiled lead plates and scintillating fibers with a diameter of 1 mm and an attenuation length of about 80 cm. The absorber-to-fiber volume ratio was 1.17 and the module's average radiation length X₀ = 1.05 cm. The energy resolution of the module was investigated using the electron beams of U-70 at Serpukhov and of the SPS at CERN in the energy range 5-70 GeV. The energy resolution at θ = 3° (the angle between the fiber axis and the beam direction) may be expressed by the formula σ/E(%) = 13.1/√E ± 1.7. The energy resolution was also simulated by Monte Carlo and good agreement with the experiment has been achieved. 12 refs.; 13 figs.; 4 tabs
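The quoted parametrization can be evaluated at sample beam energies. Note one assumption: the "±" between the stochastic and constant terms is read here as addition in quadrature, the common convention for calorimeter resolution formulas, though the abstract does not say so explicitly.

```python
import math

def energy_resolution_percent(E_GeV, stochastic=13.1, constant=1.7):
    """Fractional energy resolution sigma/E in percent for the quoted
    parametrization, assuming the stochastic term 13.1%/sqrt(E) and the
    constant term 1.7% add in quadrature (an assumption on our part)."""
    return math.hypot(stochastic / math.sqrt(E_GeV), constant)

# Resolution over the measured 5-70 GeV range:
for E in (5, 10, 30, 70):
    print(f"{E:3d} GeV: sigma/E = {energy_resolution_percent(E):.2f} %")
```

Under this reading, the stochastic term dominates at 5 GeV (about 6.1% total) while the constant term takes over toward 70 GeV (about 2.3% total).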
17. A high-spatial-resolution three-dimensional detector array for 30-200 keV X-rays based on structured scintillators
DEFF Research Database (Denmark)
Olsen, Ulrik Lund; Schmidt, Søren; Poulsen, Henning Friis
2008-01-01
A three-dimensional X-ray detector for imaging 30-200 keV photons is described. It comprises a set of semi-transparent structured scintillators, where each scintillator is a regular array of waveguides in silicon, and with pores filled with CsI. The performance of the detector is described...
18. Can Transient Phenomena Help Improving Time Resolution in Scintillators?
CERN Document Server
Lecoq, P; Vasiliev, A
2014-01-01
The time resolution of a scintillator-based detector is directly driven by the density of photoelectrons generated in the photodetector at the detection threshold. At the scintillator level it is related to the intrinsic light yield, the pulse shape (rise time and decay time) and the light transport from the gamma-ray conversion point to the photodetector. When aiming at 10 ps time resolution, fluctuations in the thermalization and relaxation time of hot electrons and holes generated by the interaction of ionization radiation with the crystal become important. These processes last for up to a few tens of ps and are followed by a complex trapping-detrapping process, Poole-Frenkel effect, Auger ionization of traps and electron-hole recombination, which can last for a few ns with very large fluctuations. This paper will review the different processes at work and evaluate if some of the transient phenomena taking place during the fast thermalization phase can be exploited to extract a time tag with a precision in...
19. Microfluidic Scintillation Detectors for High Energy Physics
CERN Document Server
Maoddi, Pietro; Mapelli, Alessandro
This thesis deals with the development and study of microfluidic scintillation detectors, a technology of recent introduction for the detection of high energy particles. Most of the interest for such devices comes from the use of a liquid scintillator, which entails the possibility of changing the active material in the detector, leading to increased radiation resistance. A first part of the thesis focuses on the work performed in terms of design and modelling studies of novel prototype devices, hinting to new possibilities and applications. In this framework, the simulations performed to validate selected designs and the main technological choices made in view of their fabrication are addressed. The second part of this thesis deals with the microfabrication of several prototype devices. Two different materials were studied for the manufacturing of microfluidic scintillation detectors, namely the SU-8 photosensitive epoxy and monocrystalline silicon. For what concerns the former, an original fabrication appro...
20. Two-dimensional diced scintillator array for innovative, fine-resolution gamma camera
International Nuclear Information System (INIS)
Fujita, T.; Kataoka, J.; Nishiyama, T.; Ohsuka, S.; Nakamura, S.; Yamamoto, S.
2014-01-01
We are developing a technique to fabricate fine spatial resolution (FWHM < 0.5 mm) and cost-effective photon counting detectors, by using silicon photomultipliers (SiPMs) coupled with a finely pixelated scintillator plate. Unlike traditional X-ray imagers that use a micro-columnar CsI(Tl) plate, we can pixelate various scintillation crystal plates more than 1 mm thick, and easily develop large-area, fine-pitch scintillator arrays with high precision. Coupling a fine-pitch scintillator array with a SiPM array results in a compact, fast-response detector that is ideal for X-ray, gamma-ray, and charged particle detection as used in autoradiography, gamma cameras, and photon counting CTs. As a first step, we fabricated a 2-D, cerium-doped Gd₃Al₂Ga₃O₁₂ (Ce:GAGG) scintillator array of 0.25 mm pitch, by using a dicing saw to cut micro-grooves 50 μm wide into a 1.0 mm thick Ce:GAGG plate. The scintillator plate is optically coupled to a 4×4 SiPM array with 3.0×3.0 mm pixels and read out via a resistive charge-division network. Even when using this simple system as a gamma camera, we obtained an excellent spatial resolution of 0.48 mm (FWHM) for 122 keV gamma-rays. We will present our plans to further improve the signal-to-noise ratio in the image, and also discuss a variety of possible applications in the near future
1. Time resolution measurements with an improved discriminator and conical scintillators
International Nuclear Information System (INIS)
McGervey, J.D.; Vogel, J.; Sen, P.; Knox, C.
1977-01-01
A new constant fraction discriminator with improved stability and walk characteristics is described. The discriminator was used with RCA C31024 photomultiplier tubes to test scintillators of conical and cylindrical shapes. Conical scintillators of 2.54 cm base diameter, 1.0 cm top diameter, and 2.54 cm height gave a FWHM of 155 ps for ⁶⁰Co gamma rays; larger conical scintillators gave an improvement of 10-15% in FWHM over cylindrical scintillators of equal volume. (Auth.)
2. High-pressure 3He gas scintillation neutron spectrometer
International Nuclear Information System (INIS)
Derzon, M.S.; Slaughter, D.R.; Prussin, S.G.
1985-10-01
A high-pressure ³He-Xe gas scintillation spectrometer has been developed for neutron spectroscopy on D-D fusion plasmas. The spectrometer exhibits an energy resolution of (121 ± 20) keV (FWHM) at 2.5 MeV and an efficiency of (1.9 ± 0.4) × 10⁻³ (n/cm²)⁻¹. The contribution to the resolution (FWHM) from counting statistics is only (22 ± 3) keV and the remainder is due predominantly to the variation of light collection efficiency with the location of neutron events within the active volume of the detector
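Because independent broadening contributions add in quadrature, the non-statistical part of the quoted 121 keV FWHM can be recovered from the 22 keV counting-statistics term:

```python
import math

# FWHM contributions are assumed to add in quadrature, the standard
# treatment for independent broadening terms.
total_fwhm = 121.0   # keV, measured resolution at 2.5 MeV
stat_fwhm = 22.0     # keV, counting-statistics contribution
other_fwhm = math.sqrt(total_fwhm**2 - stat_fwhm**2)
# other_fwhm comes out to roughly 119 keV, i.e. nearly the whole budget,
# consistent with the abstract's statement that light-collection
# variation dominates the resolution.
```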
3. Factors Influencing Time Resolution of Scintillators and Ways to Improve Them
CERN Document Server
Lecoq, P; Brunner, S; Meyer, T; Auffray, E; Knapitsch, A; Jarron, P
2010-01-01
The renewal of interest in Time of Flight Positron Emission Tomography (TOF-PET), as well as the necessity to precisely tag events in high energy physics (HEP) experiments at future colliders are pushing for an optimization of all factors affecting the time resolution of the whole acquisition chain comprising the crystal, the photo detector, and the electronics. The time resolution of a scintillator-based detection system is determined by the rate of photo electrons at the detection threshold, which depends on the time distribution of photons being converted in the photo detector. The possibility to achieve time resolution of about 100 ps Full Width at Half Maximum (FWHM) requires an optimization of the light production in the scintillator, the light transport and its transfer from the scintillator to the photo detector. In order to maximize the light yield, and in particular the density of photons in the first nanosecond, while minimizing the rise time and decay time, particular attention must be paid to the...
4. Non-Proportionality of Electron Response and Energy Resolution of Compton Electrons in Scintillators
Science.gov (United States)
Swiderski, L.; Marcinkowski, R.; Szawlowski, M.; Moszynski, M.; Czarnacki, W.; Syntfeld-Kazuch, A.; Szczesniak, T.; Pausch, G.; Plettner, C.; Roemer, K.
2012-02-01
Non-proportionality of light yield and energy resolution of Compton electrons in three scintillators (LaBr₃:Ce, LYSO:Ce and CsI:Tl) were studied in a wide energy range from 10 keV up to 1 MeV. The experimental setup comprised a High Purity Germanium detector and the tested scintillators coupled to a photomultiplier. Probing the non-proportionality and energy resolution curves at different energies was achieved by changing the position of various radioactive sources with respect to both detectors. The distance between the detectors and the source was kept small to make use of the Wide Angle Compton Coincidence (WACC) technique, which allowed us to scan a large range of scattering angles simultaneously and obtain a relatively high coincidence rate of 100 cps using weak sources of about 10 μCi activity. The results are compared with those obtained by direct irradiation of the tested scintillators with gamma-ray sources and fitting the full-energy peaks.
5. Recent advances in the use of ASEDRA in post processing scintillator spectra for resolution enhancement
International Nuclear Information System (INIS)
Sjoden, G.E.
2012-01-01
The ASEDRA (Advanced Synthetically Enhanced Detector Resolution Algorithm, patent pending) has been successfully applied as a post-processing algorithm to both sodium iodide (NaI(Tl)) and cesium iodide (CsI(Na)) scintillator detectors to synthetically enhance their realized spectral resolution by as much as a factor of three; the raw, unprocessed spectra from these detectors are traditionally of poor resolution. ASEDRA uses noise reduction and built-in high-resolution Monte Carlo radiation-transport-based detector response functions (DRFs) to rapidly post-process a spectrum in a few seconds on a standard laptop; gamma lines are extracted with an accuracy that makes scintillator detectors competitive with higher resolution, higher material cost detectors. ASEDRA differs from other tools in the field, such as Sandia's GADRAS software, in that ASEDRA performs a differential spectrum attribution and cumulative extraction from the sample spectrum, rather than an integral-based approach, as in GADRAS. Previous publications have highlighted the successful application of ASEDRA to samples with plutonium and various isotopes. A new SmartID nuclide identification package to accompany ASEDRA has recently been implemented for test and evaluation of sample attribution; in addition, ASEDRA+SmartID has been applied successfully in long-dwell cargo monitoring and SNM detection applications, enabling new protocols for HEU detection. Overall, this paper presents recent developments and results along with a discussion of follow-on steps in the development of ASEDRA as an effective field gamma spectrum analysis tool for low-cost scintillators. (author)
6. Structured scintillators for X-ray imaging with micrometre resolution
DEFF Research Database (Denmark)
Olsen, Ulrik Lund; Schmidt, Søren; Poulsen, Henning Friis
2009-01-01
A 3D X-ray detector for imaging of 30–200 keV photons is described. It comprises a stack of semitransparent structured scintillators, where each scintillator is a regular array of waveguides in silicon, and with pores filled with CsI. The performance of the detector is described theoretically...
7. Role of excitons in the energy resolution of scintillators used for medical imaging
Energy Technology Data Exchange (ETDEWEB)
Singh, Jai [School of Engineering and IT, B-purple-12, Faculty of EHS, Charles Darwin University, Darwin NT 0909 (Australia)
2010-11-01
Theoretical investigations suggest that the nonproportionality in a scintillator is caused by the high excitation density created within the track of an X-ray or γ-ray photon entering a scintillating crystal. In this paper an analytical expression for the scintillator yield is derived. For the case of a BaF₂ scintillator, the role of excitons created within the γ-ray track in the scintillator yield is studied. By comparing the results of two theories, an analytical expression is also derived for an energy parameter which could otherwise only be determined by fitting the theoretical yield to the experimental data.
8. Role of excitons in the energy resolution of scintillators used for medical imaging
International Nuclear Information System (INIS)
Singh, Jai
2010-01-01
Theoretical investigations suggest that the nonproportionality in a scintillator is caused by the high excitation density created within the track of an X-ray or γ-ray photon entering a scintillating crystal. In this paper an analytical expression for the scintillator yield is derived. For the case of a BaF₂ scintillator, the role of excitons created within the γ-ray track in the scintillator yield is studied. By comparing the results of two theories, an analytical expression is also derived for an energy parameter which could otherwise only be determined by fitting the theoretical yield to the experimental data.
9. High effective atomic number polymer scintillators for gamma ray spectroscopy
Science.gov (United States)
Cherepy, Nerine Jane; Sanner, Robert Dean; Payne, Stephen Anthony; Rupert, Benjamin Lee; Sturm, Benjamin Walter
2014-04-15
A scintillator material according to one embodiment includes a bismuth-loaded aromatic polymer having an energy resolution at 662 keV of less than about 10%. A scintillator material according to another embodiment includes a bismuth-loaded aromatic polymer having a fluor incorporated therewith and an energy resolution at 662 keV of less than about 10%. Additional systems and methods are also presented.
10. Measurement of the time resolution of small SiPM-based scintillation counters
Science.gov (United States)
Kravchenko, E. A.; Porosev, V. V.; Savinov, G. A.
2017-12-01
In this research, we evaluated the timing resolution of a SiPM-based scintillation detector on a 1-GeV electron beam "extracted" from VEPP-4M. We tested small scintillation crystals of pure CsI, YAP, LYSO, and LFS-3 with HAMAMATSU S10362-33-025C and S13360-3050CS photodetectors. The CsI scintillator together with the HAMAMATSU S13360-3050CS demonstrated the best results. Nevertheless, the achieved time resolution of ~80 ps (RMS) is limited mainly by the photodetector itself. This makes the silicon photomultiplier an attractive candidate to replace other devices in applications where sub-nanosecond accuracy is required.
11. High-symmetry organic scintillator systems
Energy Technology Data Exchange (ETDEWEB)
Feng, Patrick L.
2018-03-13
An ionizing radiation detector or scintillator system includes a scintillating material comprising an organic crystalline compound selected to generate photons in response to the passage of ionizing radiation. The organic compound has a crystalline symmetry of higher order than monoclinic, for example an orthorhombic, trigonal, tetragonal, hexagonal, or cubic symmetry. A photodetector is optically coupled to the scintillating material, and configured to generate electronic signals having pulse shapes based on the photons generated in the scintillating material. A discriminator is coupled to the photon detector, and configured to discriminate between neutrons and gamma rays in the ionizing radiation based on the pulse shapes of the output signals.
12. High-symmetry organic scintillator systems
Science.gov (United States)
Feng, Patrick L.
2017-07-18
An ionizing radiation detector or scintillator system includes a scintillating material comprising an organic crystalline compound selected to generate photons in response to the passage of ionizing radiation. The organic compound has a crystalline symmetry of higher order than monoclinic, for example an orthorhombic, trigonal, tetragonal, hexagonal, or cubic symmetry. A photodetector is optically coupled to the scintillating material, and configured to generate electronic signals having pulse shapes based on the photons generated in the scintillating material. A discriminator is coupled to the photon detector, and configured to discriminate between neutrons and gamma rays in the ionizing radiation based on the pulse shapes of the output signals.
13. High-symmetry organic scintillator systems
Science.gov (United States)
Feng, Patrick L.
2018-02-06
An ionizing radiation detector or scintillator system includes a scintillating material comprising an organic crystalline compound selected to generate photons in response to the passage of ionizing radiation. The organic compound has a crystalline symmetry of higher order than monoclinic, for example an orthorhombic, trigonal, tetragonal, hexagonal, or cubic symmetry. A photodetector is optically coupled to the scintillating material, and configured to generate electronic signals having pulse shapes based on the photons generated in the scintillating material. A discriminator is coupled to the photon detector, and configured to discriminate between neutrons and gamma rays in the ionizing radiation based on the pulse shapes of the output signals.
14. High-Resolution PET Detector. Final report
International Nuclear Information System (INIS)
Karp, Joel
2014-01-01
The objective of this project was to develop an understanding of the limits of performance for a high resolution PET detector using an approach based on continuous scintillation crystals rather than pixelated crystals. The overall goal was to design a high-resolution detector, which requires both high spatial resolution and high sensitivity for 511 keV gammas. Continuous scintillation detectors (Anger cameras) have been used extensively for both single-photon and PET scanners; however, these instruments were based on NaI(Tl) scintillators using relatively large, individual photomultipliers. In this project we investigated the potential of this type of detector technology to achieve higher spatial resolution through the use of improved scintillator materials and photo-sensors, and modification of the detector surface to optimize the light response function. We achieved an average spatial resolution of 3 mm for a 25-mm thick, continuous LYSO detector using a maximum-likelihood position algorithm and shallow slots cut into the entrance surface.
15. High-Z organic-scintillation solution
International Nuclear Information System (INIS)
Berlman, I.B.; Fluornoy, J.M.; Ashford, C.B.; Lyons, P.B.
1983-01-01
In the present experiment, an attempt is made to raise the average Z of a scintillation solution with as little attendant quenching as possible. Since high-Z atoms quench by means of a close encounter, such encounters are minimized by the use of alkyl groups substituted on the solvent, solute, and heavy atoms. The aromatic compound 1,2,4-trimethylbenzene (pseudocumene) is used as the solvent; 4,4''-di(5-tridecyl)-p-terphenyl (SC-180) as the solute; and tetrabutyltin as the high-Z material. To establish the validity of our ideas, various experiments have been performed with less protected solvents and heavy atoms. These include benzene, toluene, p-terphenyl, bromobutane, and bromobenzene.
16. A new timing model for calculating the intrinsic timing resolution of a scintillator detector
International Nuclear Information System (INIS)
Shao Yiping
2007-01-01
The coincidence timing resolution is a critical parameter which to a large extent determines the system performance of positron emission tomography (PET). This is particularly true for time-of-flight (TOF) PET that requires an excellent coincidence timing resolution (<<1 ns) in order to significantly improve the image quality. The intrinsic timing resolution is conventionally calculated with a single-exponential timing model that includes two parameters of a scintillator detector: scintillation decay time and total photoelectron yield from the photon-electron conversion. However, this calculation has led to significant errors when the coincidence timing resolution reaches 1 ns or less. In this paper, a bi-exponential timing model is derived and evaluated. The new timing model includes an additional parameter of a scintillator detector: scintillation rise time. The effect of rise time on the timing resolution has been investigated analytically, and the results reveal that the rise time can significantly change the timing resolution of fast scintillators that have short decay time constants. Compared with measured data, the calculations have shown that the new timing model significantly improves the accuracy in the calculation of timing resolutions
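The single- and bi-exponential pulse models contrasted in this abstract can be compared numerically. The sketch below uses the standard two-exponential pulse shape f(t) ∝ exp(−t/τd) − exp(−t/τr) and asks when the expected photoelectron count first reaches 1, a common proxy for the timing threshold; the rise time, decay time and photoelectron yield are illustrative LSO-like numbers, not values from the paper.

```python
import math

def cdf_single(t, tau_d):
    """Fraction of scintillation light emitted by time t (ns),
    single-exponential decay model."""
    return 1.0 - math.exp(-t / tau_d)

def cdf_biexp(t, tau_r, tau_d):
    """Same fraction for the bi-exponential pulse
    f(t) proportional to exp(-t/tau_d) - exp(-t/tau_r)."""
    return 1.0 - (tau_d * math.exp(-t / tau_d)
                  - tau_r * math.exp(-t / tau_r)) / (tau_d - tau_r)

def time_to_first_photoelectron(cdf, n_pe, t_max=200.0):
    """Bisection for the time at which the expected photoelectron
    count n_pe * F(t) first reaches 1."""
    lo, hi = 0.0, t_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if n_pe * cdf(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative (hypothetical) numbers: rise time 70 ps, decay time 40 ns,
# 4000 photoelectrons per 511 keV event.
t1_single = time_to_first_photoelectron(lambda t: cdf_single(t, 40.0), 4000)
t1_biexp = time_to_first_photoelectron(lambda t: cdf_biexp(t, 0.07, 40.0), 4000)
```

With these numbers the single-exponential model predicts the first photoelectron at about 10 ps, while the finite rise time pushes it out severalfold, illustrating the abstract's point that the rise time significantly changes the calculated timing resolution of fast scintillators.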
17. Study of resolution and linearity in LaBr3: Ce scintillator through digital-pulse processing
International Nuclear Information System (INIS)
Abhinav Kumar; Mishra, Gaurav; Ramachandran, K.
2014-01-01
The advent of digital pulse processing has led to a paradigm shift in pulse processing techniques by replacing the analog electronics processing chain with equivalent algorithms acting on pulse profiles digitized at high sampling rates. In this paper, we have carried out offline digital pulse processing of cerium-doped lanthanum bromide scintillator (LaBr₃:Ce) detector pulses, acquired using a CAEN V1742 VME digitizer module. Algorithms have been written to approximate the functioning of a peak-sensing analog-to-digital converter (ADC) and a charge-to-digital converter (QDC). The energy dependence of the resolution and the energy linearity of the LaBr₃:Ce scintillator detector have been studied by utilizing the aforesaid algorithms
18. A primary scintillation gated high pressure position sensitive gas scintillation proportional counter (HPGSPC) for applications to x-ray astronomy
International Nuclear Information System (INIS)
Giarrusso, S.; Manzo, G.; Re, S.
1985-01-01
The authors describe a new instrument for X-ray astronomy. The instrument, based on a high-pressure (5 atm), xenon-filled, position-sensitive gas scintillation proportional counter (HPGSPC), is expected to feature an energy resolution better than 4% at 60 keV, an angular resolution of approximately 20 arc-minutes over the full energy range (4 to 100 keV) and a field of view (FOV) of up to 30×30 degrees. A prototype flight unit of the gas cell on which the instrument is based is presently under technological development in the framework of the SAX project
19. New detector developments for high resolution positron emission tomography
International Nuclear Information System (INIS)
Ziegler, S.I.; Pichler, B.; Lorenz, E.
1998-01-01
The strength of quantitative, functional imaging using positron emission tomography, especially in small animals, is limited by the spatial resolution. Therefore, various tomograph designs employing new scintillators, light sensors, or coincidence electronics are investigated to improve resolution without losses in sensitivity. Bright scintillators with short light decay times in combination with novel readout schemes using photomultipliers or semiconductor detectors are currently tested by several groups and are implemented in tomographs for small animals. This review summarises the state of development in high resolution positron emission tomography with a detailed description of a system incorporating avalanche photodiode arrays and small scintillation crystals. (orig.)
20. High pressure gas scintillation drift chambers with wave-shifter fiber readout
International Nuclear Information System (INIS)
Parsons, A.; Edberg, T.K.; Sadoulet, B.; Weiss, S.; Wilkerson, J.; Hurley, K.; Lin, R.P.
1990-01-01
The authors present results from a prototype high pressure xenon gas scintillation drift chamber using a novel wave-shifter fiber readout scheme. They have measured the primary scintillation light yield to be one photon per 76 ± 12 eV of deposited energy. They present initial results from the chamber for the two-interaction separation (< 4 mm in the drift direction, ∼ 7 mm orthogonal to the drift), for the position resolution (< 400 μm rms in the plane orthogonal to the drift direction), and for the energy resolution (ΔE/E < 6% FWHM at 122 keV)
1. Plastic scintillator
International Nuclear Information System (INIS)
Andreeshchev, E.A.; Kilin, S.F.; Kavyrzina, K.A.
1978-01-01
A plastic scintillator for ionizing radiation detectors with high time resolution is suggested. To decrease the scintillation pulse width while maintaining a high light yield, 4¹,4⁵-dibromo-2¹,2⁵,5¹,5⁵-tetramethyl-n-quinquiphenyl (Br2Me4Ph) in combination with n-terphenyl (Ph3) or 2,5-diphenyloxadiazole-1,3,4 (PPD) is used as a luminescent additive. Taking into consideration the results of a special study, it is shown that the following ratio of ingredients is the optimum one: 3-4 mass% Ph3 or 4-7 mass% PPD + 2-5 mass% Br2Me4Ph + polymeric base. The suggested scintillator on a polystyrene base has a light yield of 0.23-0.26 arbitrary units and a scintillation pulse duration at half-height of 0.74-0.84 ns
2. Scintillation camera for high activity sources
International Nuclear Information System (INIS)
Arseneau, R.E.
1978-01-01
The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)
3. Scintillation camera for high activity sources
International Nuclear Information System (INIS)
Arseneau, R.E.
1976-01-01
A scintillation camera is provided with electrical components which expand the intrinsic maximum rate of acceptance for processing of pulses emanating from detected radioactive events. Buffer storage is provided to accommodate temporary increases in the level of radioactivity. An early provisional determination of acceptability of pulses allows many unacceptable pulses to be discarded at an early stage
4. The frequency analysis particle resolution technique of 6LiI(Eu) scintillation detector
International Nuclear Information System (INIS)
Duan Shaojie
1995-01-01
To measure the distribution and rate of tritium production by neutrons in a 6LiD sphere, a 6LiI(Eu) scintillation detector was used. In the measurement, the frequency-analysis particle-resolution technique was applied. The experiment was completed successfully
5. Performance of a highly segmented scintillating fibres electromagnetic calorimeter
International Nuclear Information System (INIS)
Asmone, A.; Bertino, M.; Bini, C.; De Zorzi, G.; Diambrini Palazzi, G.; Di Cosimo, G.; Di Domenico, A.; Garufi, F.; Gauzzi, P.; Zanello, D.
1993-01-01
A prototype scintillating fibre electromagnetic calorimeter has been constructed and tested with 2, 4 and 8 GeV electron beams at the CERN PS. The calorimeter modules consist of a Bi-Pb-Sn alloy and scintillating fibres. The fibres are parallel to the modules' longer axis, and nearly parallel to the incident electrons' direction. The calorimeter has two different segmentation regions of 24×24 mm² and 8×24 mm² cross-sectional area, respectively. Results on energy and impact-point space resolution are obtained and compared for the two different granularities. (orig.)
6. Scintillator Evaluation for High-Energy X-Ray Diagnostics
International Nuclear Information System (INIS)
Lutz, S. S.; Baker, S. A.
2001-01-01
This report presents results derived from a digital radiography study performed using x-rays from a 2.3 MeV rod-pinch diode. Detailed is a parameter study of cerium-doped lutetium ortho-silicate (LSO) scintillator thickness, as it relates to system resolution and detection quantum efficiency (DQE). Additionally, the detection statistics of LSO were compared with those of CsI(Tl). As a result of this study we found the LSO scintillator with a thickness of 3 mm to yield the highest system DQE over the range of spatial frequencies from 0.75 to 2.5 mm⁻¹
7. PTR, PCR and Energy Resolution Study of GAGG:Ce Scintillator
Science.gov (United States)
Limkitjaroenporn, Pruittipol; Hongtong, Wiraporn; Kim, Hong Joo; Kaewkhao, Jakrapong
2018-03-01
In this paper, the peak-to-total ratio (PTR), the peak-to-Compton ratio (PCR) and the energy resolution of a cerium-doped gadolinium aluminium gallium garnet (GAGG:Ce) scintillator are measured over the energy range from 511 keV to 1332 keV using the radioactive sources Na-22, Cs-137 and Co-60. The crystal is coupled to the PMT model R1306 and analyzed with nuclear instrument module (NIM) electronics. The results show that the PTR and PCR of the GAGG:Ce scintillator decrease with increasing energy. The energy resolution also decreases with increasing energy, i.e. the resolution is better at higher energy. Moreover, the energy resolution was found to be linear with.
8. Simulation of the Position Resolution of a Scintillation Detector
CERN Document Server
Templ, Sebastian; Sauerzopf, Clemens
In the Standard Model of particle physics, CPT symmetry is regarded as invariant. In order to test this prediction, the ASACUSA collaboration ("Atomic Spectroscopy And Collisions Using Slow Antiprotons") aims to make a very precise measurement of the hyperfine structure of antihydrogen with a Rabi-like experiment. The comparison of the experimentally obtained antihydrogen transition frequencies with those of hydrogen allows for a direct test of CPT symmetry. The spectrometer line of the ASACUSA HBAR-GSHFS ("Antihydrogen ground state hyperfine splitting") experiment consists of a particle source, a spin-flip-inducing microwave cavity, a spin-analyzing sextupole magnet, and a detector. In the course of the work for this thesis, a single scintillation detector as used in the hodoscopes of the detector at the end of the spectrometer line was simulated using the particle physics toolkit Geant4. Subsequent analysis of the simulation data allows for an estimate of the minimal uncertainty in determining t...
9. An instrument for the high-statistics measurement of plastic scintillating fibers
International Nuclear Information System (INIS)
Buontempo, S.; Ereditato, A.; Marchetti-Stasi, F.; Riccardi, F.; Strolin, P.
1994-01-01
There is today widespread use of plastic scintillating fibers in particle physics, mainly for calorimetric and tracking applications. In the case of calorimeters, we have to cope with very massive detectors and a large quantity of scintillating fibers. The CHORUS Collaboration has built a new detector to search for νμ-ντ oscillations in the CERN neutrino beam. A crucial role in the detector is played by the high-energy-resolution calorimeter. For its construction more than 400 000 scintillating plastic fibers have been used. In this paper we report on the design and performance of a new instrument for the high-statistics measurement of the fiber properties, in terms of light yield and light attenuation length. The instrument has been successfully used to test about 3% of the total number of fibers before the construction of the calorimeter. (orig.)
10. ANL high resolution injector
International Nuclear Information System (INIS)
Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.
1985-01-01
The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs
11. Time-of-flight resolution of scintillating counters with Burle 85001 microchannel plate photomultipliers in comparison with Hamamatsu R2083
Energy Technology Data Exchange (ETDEWEB)
Baturin, V. [Department of Physics, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Burkert, V. [Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606 (United States); Kim, W. [Department of Physics, Kyungpook National University, Daegu 702-701 (Korea, Republic of)]. E-mail: wooyoung@jlab.org; Majewsky, S. [Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606 (United States); Park, K. [Department of Physics, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Popov, V. [Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606 (United States); Smith, E.S. [Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606 (United States); Son, D. [Department of Physics, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Stepanyan, S.S. [Department of Physics, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Zorn, C. [Thomas Jefferson National Accelerator Facility, Newport News, Virginia 23606 (United States)
2006-06-15
Improvements in the time resolution of the CEBAF Large Acceptance Spectrometer (CLAS) below ≈50 ps will be required for experiments using the planned upgraded accelerator facility at Jefferson Lab. The improved time resolution will allow particle identification using time-of-flight techniques to be used effectively up to the proposed operating energy of 12 GeV. The challenge of achieving this time resolution over a relatively large area is compounded because the photomultipliers (PM) in the CLAS 'time-zero' scintillating counters must operate in very high magnetic fields. Therefore, we have studied the resolution of 'time-zero' prototypes with microchannel plate PMs 85001-501 from Burle. For reference and comparison, measurements were also made using the standard PMs R2083 from Hamamatsu, using two timing methods. The cosmic ray method, which utilizes three identical scintillating counters (Bicron BC-408, 2×3×50 cm³) with PMs at the ends, yields σ(R2083) = 59.1 ± 0.7 ps. The location method, using particles from a radioactive source with known coordinates, has been used to compare the timing resolutions of the R2083 and the 85001-501. This method yields σ(R2083) = 59.5 ± 0.7 ps, and it also provides an estimate of the number of primary photoelectrons. For the microchannel plate PM from Burle the method yields σ(85001) = 130 ± 4 ps, due to the lower number of primary photoelectrons.
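The three-counter cosmic-ray method mentioned in this abstract can be unfolded analytically: each measured pairwise time-difference variance is the sum of the two counters' variances, σ²_ij = σ²_i + σ²_j, giving three equations in three unknowns. A minimal sketch with illustrative numbers (not data from the paper):

```python
import math

def counter_sigmas(s12, s13, s23):
    """Unfold individual counter resolutions from pairwise spreads,
    solving sigma_ij**2 = sigma_i**2 + sigma_j**2 for the three counters."""
    v12, v13, v23 = s12**2, s13**2, s23**2
    return (math.sqrt((v12 + v13 - v23) / 2),
            math.sqrt((v12 + v23 - v13) / 2),
            math.sqrt((v13 + v23 - v12) / 2))

# Three identical counters: each pairwise spread is sqrt(2) * 59.1 ps,
# so the unfolded per-counter resolution comes back as 59.1 ps.
pair = math.sqrt(2) * 59.1
s1, s2, s3 = counter_sigmas(pair, pair, pair)
print(round(s1, 1))  # 59.1
```

The same algebra works for non-identical counters, as long as all three pairwise spreads are measured.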
12. Liquid Scintillation Detectors for High Energy Neutrinos
International Nuclear Information System (INIS)
Smith, Stefanie N.; Learned, John G.
2010-01-01
Large open volume (not segmented) liquid scintillation detectors have been generally dedicated to low energy neutrino measurements, in the MeV energy region. We describe the potential employment of large detectors (>1 kiloton) for studies of higher energy neutrino interactions, such as cosmic rays and long-baseline experiments. When considering the physics potential of new large instruments the possibility of doing useful measurements with higher energy neutrino interactions has been overlooked. Here we take into account Fermat's principle, which states that the first light to reach each PMT will follow the shortest path between that PMT and the point of origin. We describe the geometry of this process, and the resulting wavefront, which we are calling the 'Fermat surface', and discuss methods of using this surface to extract directional track information and particle identification. This capability may be demonstrated in the new long-baseline neutrino beam from Jaeri accelerator to the KamLAND detector in Japan. Other exciting applications include the use of Hanohano as a movable long-baseline detector in this same beam, and LENA in Europe for future long-baseline neutrino beams from CERN. Also, this methodology opens up the question as to whether a large liquid scintillator detector should be given consideration for use in a future long-baseline experiment from Fermilab to the DUSEL underground laboratory at Homestake.
13. Ionosphere Scintillation at Low and High Latitudes (Modelling vs Measurement)
Science.gov (United States)
Béniguel, Yannick
2016-04-01
This paper will address the problem of scintillations characteristics, focusing on the parameters of interest for a navigation system. Those parameters are the probabilities of occurrence of simultaneous fading, the bubbles surface at IPP level, the cycle slips and the fades duration statistics. The scintillation characteristics obtained at low and high latitudes will be compared. These results correspond to the data analysis performed after the ESA Monitor ionosphere measurement campaign [1], [2]. A second aspect of the presentation will be the modelling aspect. It has been observed that the phase scintillation dominates at high latitudes while the intensity scintillation dominates at low latitudes. The way it can be reproduced and implemented in a propagation model (e.g. GISM model [3]) will be presented. Comparisons of measurements with results obtained by modelling will be presented on some typical scenarios. References [1] R. Prieto Cerdeira, Y. Beniguel, "The MONITOR project: architecture, data and products", Ionospheric Effects Symposium, Alexandria (Va), May 2011 [2] Y. Béniguel, R Orus-Perez , R. Prieto-Cerdeira , S. Schlueter , S. Scortan, A. Grosu "MONITOR 2: ionospheric monitoring network in support to SBAS and other GNSS and scientific purposes", IES Conference, Alexandria (Va), May 2015-05-22 [3] Y. Béniguel, P. Hamel, "A Global Ionosphere Scintillation Propagation Model for Equatorial Regions", Journal of Space Weather Space Climate, 1, (2011), doi: 10.1051/swsc/2011004
14. Time resolution of the plastic scintillator strips with matrix photomultiplier readout for J-PET tomograph
Science.gov (United States)
Moskal, P.; Rundel, O.; Alfs, D.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Giergiel, K.; Gorgol, M.; Jasińska, B.; Kamińska, D.; Kapłon, Ł.; Korcyl, G.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz; Pałka, M.; Raczyński, L.; Rudy, Z.; Sharma, N. G.; Słomski, A.; Silarski, M.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Witkowski, P.; Zieliński, M.; Zoń, N.
2016-03-01
Recent tests of a single module of the Jagiellonian Positron Emission Tomography system (J-PET), consisting of 30 cm long plastic scintillator strips, have proven its applicability for the detection of annihilation quanta (0.511 MeV) with a coincidence resolving time (CRT) of 0.266 ns. The achieved resolution is better by almost a factor of two with respect to current TOF-PET detectors, and it can still be improved since, as shown in this article, the intrinsic limit of the time resolution for determining the time of interaction of 0.511 MeV gamma quanta in plastic scintillators is much lower. As the major point of the article, a method is introduced that allows timestamps of several photons to be recorded at the two ends of the scintillator strip by means of a matrix of silicon photomultipliers (SiPM). As a result of simulations, conducted with the number of SiPMs varying from 4 to 42, it is shown that the improvement in timing resolution saturates with a growing number of photomultipliers, and that the 2×5 configuration at the two ends, allowing twenty timestamps to be read, constitutes an optimal solution. The conducted simulations accounted for the emission time distribution, photon transport and absorption inside the scintillator, as well as the quantum efficiency and transit time spread of the photosensors, and were checked against experimental results. Application of the 2×5 matrix of SiPMs allows a coincidence resolving time in positron emission tomography of ≈0.170 ns for a 15 cm axial field-of-view (AFOV) and ≈0.365 ns for a 100 cm AFOV. These results open perspectives for the construction of a cost-effective TOF-PET scanner with significantly better TOF resolution and a larger AFOV with respect to current TOF-PET modalities.
15. Influence of inhomogeneities in scintillating fibre electromagnetic calorimeter on its energy resolution
International Nuclear Information System (INIS)
Stavina, P.; Tokar, S.; Budagov, Yu.A.; Chirikov-Zorin, I.; Pantea, D.
1998-01-01
The specific aspects related to the discrete structure of the scintillating fibre electromagnetic calorimeter are investigated by means of Monte-Carlo simulation. It is shown that the structure inhomogeneity leads to an additional contribution to the systematic term in the energy resolution parametrization formula which weakly depends on energy and to the distortion of the Gaussian form of response distribution. The investigation was carried out for small tilt angles and for the absorber-to-fibre ratio 4:1
16. A comprehensive & systematic study of coincidence time resolution and light yield using scintillators of different size, wrapping and doping
CERN Document Server
Auffray, E.; Geraci, F.; Ghezzi, A.; Gundacker, S.; Hillemanns, H.; Jarron, P.; Meyer, T.; Paganoni, M.; Pauwels, K.; Pizzichemi, M.; Lecoq, P.
2011-01-01
Over the last years, interest in time-of-flight-based positron emission tomography (TOF-PET) systems has significantly increased. High time resolution in such PET systems is a powerful tool to improve the signal-to-noise ratio and therefore to allow smaller exposure rates for patients as well as faster image reconstruction. Improvement in the coincidence time resolution (CTR) of PET systems to the level of 200 ps FWHM requires the optimization of all parameters in the photon detection chain influencing the time resolution: crystal, photodetector and readout electronics. After reviewing the factors influencing the time resolution of scintillators, we present in this paper the light yield and CTR obtained for different scintillator types (LSO:Ce, LYSO:Ce, LGSO:Ce, LSO:Ce:0.4Ca, LuAG:Ce, LuAG:Pr) with different cross-sections, lengths and reflectors. Whereas the light yield measurements were made with a classical PMT, all CTR tests were performed with Hamamatsu MPPCs or SiPMs S10931-050P. The CTR measurements were ...
17. High Resolution Elevation Contours
Data.gov (United States)
Minnesota Department of Natural Resources — This dataset contains contours generated from high resolution data sources such as LiDAR. Generally speaking this data is 2 foot or less contour interval.
18. High-Z Nanoparticle/Polymer Nanocomposites for Gamma-Ray Scintillation Detectors
Science.gov (United States)
Liu, Chao
An affordable and reliable solution for spectroscopic gamma-ray detection has long been sought after due to the needs from research, defense, and medical applications. Scintillators resolve gamma energy by proportionally converting a single high-energy photon into a number of photomultiplier-tube-detectable low-energy photons, which is considered a more affordable solution for general purposes compared to the delicate semiconductor detectors. An ideal scintillator should simultaneously exhibit the following characteristics: 1) high atomic number (Z) for high gamma stopping power and photoelectron production; 2) high light yield since the energy resolution is inversely proportional to the square root of light yield; 3) short emission decay lifetime; and 4) low cost and scalable production. However, commercial scintillators made from either inorganic single crystals or plastics fail to satisfy all requirements due to their intrinsic material properties and fabrication limitations. The concept of adding high-Z constituents into plastic scintillators to harness high Z, low cost, and fast emission in the resulting nanocomposite scintillators is not new in and of itself. Attempts have been made by adding organometallics, quantum dots, and scintillation nanocrystals into the plastic matrix. High-Z organometallics have long been used to improve the Z of plastic scintillators; however, their strong spin-orbit coupling effect entails careful triplet energy matching using expensive triplet emitters to avoid severe quenching of the light yield. On the other hand, reported quantum dot- and nanocrystal-polymer nanocomposites suffer from moderate Z and high optical loss due to aggregation and self-absorption at loadings higher than 10 wt%, limiting their potential for practical application. This dissertation strives to improve the performance of nanoparticle-based nanocomposite scintillators. One focus is to synthesize transparent nanocomposites with higher loadings of high
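The abstract's statement that energy resolution is inversely proportional to the square root of light yield is the usual Poisson counting limit, R_FWHM ≈ 2.355/√N for N detected photoelectrons (2.355 converts a Gaussian σ to FWHM). A minimal numeric illustration of this scaling:

```python
import math

def fwhm_resolution(n_photoelectrons):
    """Poisson statistical limit on FWHM energy resolution: R = 2.355 / sqrt(N)."""
    return 2.355 / math.sqrt(n_photoelectrons)

# Quadrupling the detected light halves the statistical resolution term.
print(math.isclose(fwhm_resolution(4000), 2 * fwhm_resolution(16000)))  # True
```

Real scintillators add intrinsic non-proportionality and transfer terms on top of this statistical floor, so measured resolutions are worse than this limit.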
19. Some rules to improve the energy resolution in alpha liquid scintillation with beta rejection
CERN Document Server
Aupiais, J; Dacheux, N
2003-01-01
Two common scintillating mixtures dedicated to alpha measurements by means of alpha liquid scintillation with pulse shape discrimination were tested: the di-isopropylnaphthalene-based and the toluene-based solvents contained in the commercial cocktails Ultima Gold AB(TM) and Alphaex(TM). We show the possibility of enhancing the resolution by up to 200% by using non-water-miscible cocktails and by reducing the optical path. Under these conditions, a resolution of about 200 keV can be obtained either with the Tri-Carb(TM) or with the Perals(TM) spectrometer. The time responses, i.e. the time required for a complete energy transfer between the initial alpha particle-solvent interaction and the final fluorescence of the organic scintillator, have been compared. Both cocktails present similar behavior. According to the Förster theory, about 6-10 ns are required to complete the energy transfer. For both apparatuses, the detection limits were determined for alpha emitters. The sensitivity of the...
20. Ultra high resolution tomography
Energy Technology Data Exchange (ETDEWEB)
1994-11-15
Recent work and results on ultra high resolution three dimensional imaging with soft x-rays will be presented. This work is aimed at determining microscopic three dimensional structure of biological and material specimens. Three dimensional reconstructed images of a microscopic test object will be presented; the reconstruction has a resolution on the order of 1000 Å in all three dimensions. Preliminary work with biological samples will also be shown, and the experimental and numerical methods used will be discussed.
1. Response function measurement of plastic scintillator for high energy neutrons
International Nuclear Information System (INIS)
Sanami, Toshiya; Ban, Syuichi; Takahashi, Kazutoshi; Takada, Masashi
2003-01-01
The response functions and detection efficiencies of 2″φ × 2″L plastic (PilotU) and NE213 liquid (2″NE213) scintillators, which were used for the measurement of secondary neutrons from high energy electron induced reactions, were measured at the Heavy Ion Medical Accelerator in Chiba (HIMAC). High energy neutrons were produced via 400 MeV/n C beam bombardment of a thick graphite target. The detectors were placed at 15 deg with respect to the C beam axis, 5 m away from the target. As a standard, a 5″φ × 5″L NE213 liquid scintillator (5″NE213) was also placed at the same position. The neutron energy was determined by the time-of-flight method with the beam pickup scintillator in front of the target. In front of the detectors, veto scintillators were placed to remove charged particle events. All detector signals were collected in list mode, event by event. We deduce the neutron spectrum for each detector. The efficiency curves for PilotU and 2″NE213 were determined on the basis of the 5″NE213 neutron spectrum and its efficiency calculated with the CECIL code. (author)
2. High resolution solar observations
International Nuclear Information System (INIS)
Title, A.
1985-01-01
Currently there is a world-wide effort to develop optical technology required for large diffraction limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations it is essential to keep in mind that a diffraction limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter class and above diffraction limited telescopes can be expected to be very unforgiving of inattention to details. Unfortunately, even when an earth based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself
3. High resolution drift chambers
International Nuclear Information System (INIS)
Va'vra, J.
1985-07-01
High precision drift chambers capable of achieving resolutions of ≤50 μm are discussed. In particular, we compare so-called cool and hot gases, various charge collection geometries, and several timing techniques, and we also discuss some systematic problems. We also present what we would consider an "ultimate" design of the vertex chamber. 50 refs., 36 figs., 6 tabs
4. Photocathode non-uniformity contribution to the energy resolution of scintillators
International Nuclear Information System (INIS)
Mottaghian, M.; Koohi-Fayegh, R.; Ghal-Eh, N.; Etaati, G. R.
2010-01-01
This paper introduces the basics of light transport simulation in scintillators and the wavelength dependencies in the process. A non-uniformity measurement of the photocathode surface is undertaken, showing that for the photocathode used in this study the quantum efficiency falls to about 4% of its maximum value, especially in areas far from the centre. The wavelength- and position-dependent quantum efficiency is implemented in the Monte Carlo light transport code, showing that the contribution of the photocathode non-uniformity to the energy resolution is estimated to be around 18% when all position and wavelength dependencies are included. (authors)
5. Influence of inhomogeneities in scintillating fibre electromagnetic calorimeter on its energy resolution
Energy Technology Data Exchange (ETDEWEB)
Stavina, P; Tokar, S [Department of Nuclear Physics, Comenius University, Bratislava (Slovak Republic); Budagov, Yu A [Joint Institute for Nuclear Research, Dubna (Russian Federation); Chirikov-Zorin, I; Pantea, D [Institute of Atomic Physics, Bucharest (Romania)
1998-12-01
The specific aspects related to the discrete structure of the scintillating fibre electromagnetic calorimeter are investigated by means of Monte-Carlo simulation. It is shown that the structure inhomogeneity leads to an additional contribution to the systematic term in the energy resolution parametrization formula which weakly depends on energy, and to a distortion of the Gaussian form of the response distribution. The investigation was carried out for small tilt angles and for an absorber-to-fibre ratio of 4:1. 10 refs., 7 figs., 2 tabs.
6. New generation of efficient high resolution detector for 30-100 keV photons
DEFF Research Database (Denmark)
Olsen, Ulrik Lund
This establishes an inverse correlation between the spatial resolution and the detection efficiency which limits the performance of existing x-ray detectors. The purpose of this Ph.D. project is to explore alternative paths of research, to develop x-ray detectors for the 30-100 keV energy range with single micrometre resolution without compromising efficiency. A number of detector types have been evaluated for this purpose. Structured scintillators are found to exhibit a high potential in terms of performance and also in terms of realizing an actual detector. The structured scintillator consists ... between pores. The potential of the structured scintillator is explored through Monte Carlo simulations. A spatial resolution of 1 µm is obtainable, and for scintillators with a resolution between 1 µm and 8 µm the efficiency could be more than 15 times higher than that of a regular scintillator with corresponding ...
7. High energy gamma ray response of liquid scintillator
International Nuclear Information System (INIS)
Shigyo, N.; Ishibashi, K.; Matsufuji, N.; Nakamoto, T.; Numajiri, M.
1994-01-01
We performed an experiment on the spallation reaction. NE213 organic liquid scintillators were used for measuring neutrons and γ rays. To produce the γ-ray emission cross section, we used response functions calculated with the EGS4 code. The response functions appear uniform above γ-ray energies of 60 MeV. The experimental data for the γ-ray emission cross section differ from the data of the High Energy Transport Code. (author)
8. High resolution data acquisition
Science.gov (United States)
Thornton, Glenn W.; Fuller, Kenneth R.
1993-01-01
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
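The interpolation scheme described in this abstract — a gross clock count combined with a fine time recovered from the triangle wave's amplitude and slope — can be sketched as follows. The symmetric triangle shape and the rising/falling encoding used here are assumptions for illustration, not the patented circuit's actual parameters:

```python
def fine_time(v, rising, amplitude, period):
    """Map one triangle-wave sample (amplitude v plus slope sign) to a
    sub-clock time offset, assuming a symmetric triangle that sweeps from
    -amplitude to +amplitude during the first half of the clock period
    and back down during the second half."""
    quarter = period / (4.0 * amplitude)
    if rising:
        return (v + amplitude) * quarter
    return period / 2.0 + (amplitude - v) * quarter

def event_interval(counts, period, v_start, rising_start, v_stop, rising_stop,
                   amplitude=1.0):
    """Gross counter time plus the difference of the interpolated fine times."""
    return (counts * period
            + fine_time(v_stop, rising_stop, amplitude, period)
            - fine_time(v_start, rising_start, amplitude, period))

# 10 ns clock: 3 whole periods plus a quarter-period fine offset -> 32.5 ns.
print(event_interval(3, 10.0, -1.0, True, 0.0, True))  # 32.5
```

The fine interpolation is what pushes the resolution below one clock period; the counter alone would quantize the interval to multiples of 10 ns in this sketch.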
9. Scintillating plastic optical fiber radiation detectors in high energy particle physics
International Nuclear Information System (INIS)
Bross, A.D.
1991-01-01
We describe the application of scintillating optical fiber in instrumentation for high energy particle physics. The basic physics of the scintillation process in polymers is discussed first and then we outline the fundamentals of scintillating fiber technology. Fiber performance, optimization, and characterization measurements are given. Detector applications in the areas of particle tracking and particle energy determination are then described. 13 refs., 12 figs
10. High tracking resolution detectors. Final Technical Report
International Nuclear Information System (INIS)
Vasile, Stefan; Li, Zheng
2010-01-01
11. High resolution photoelectron spectroscopy
International Nuclear Information System (INIS)
Arko, A.J.
1988-01-01
Photoelectron spectroscopy (PES) covers a very broad range of measurements, disciplines, and interests. As the next-generation light source, the FEL will yield improvements over the undulator that are larger than the undulator's improvements over bending magnets. The combination of high flux and high inherent resolution will result in several orders of magnitude gain in signal to noise over measurements using synchrotron-based undulators. The latter still require monochromators, whose resolution is invariably strongly energy-dependent, so that in the regions of interest for many experiments (hν > 100 eV) they will not have a resolving power much over 1000. In order to study some of the interesting phenomena in actinides (heavy fermions, for example) one would need resolving powers of 10⁴ to 10⁵. These values are only reachable with the FEL
12. High resolution interferometric fiber-optic sensor of vibrations in high-power transformers
Science.gov (United States)
Garcia-Souto, Jose A; Lamela-Rivera, Horacio
2006-10-16
A novel fiber-optic interferometric sensor is presented for vibration measurement and analysis. In this approach, it is applied to the vibrations of electrical structures within power transformers. A main feature of the sensor is that an unambiguous optical phase measurement is performed using direct detection of the interferometer output, without external modulation, for a more compact and stable implementation. High resolution of the interferometric measurement is obtained with this technique, and its application to vibrations within power transformers is also highlighted.
13. Construction and response of a highly granular scintillator-based electromagnetic calorimeter
Science.gov (United States)
Repond, J.; Xia, L.; Eigen, G.; Price, T.; Watson, N. K.; Winter, A.; Thomson, M. A.; Cârloganu, C.; Blazey, G. C.; Dyshkant, A.; Francis, K.; Zutshi, V.; Gadow, K.; Göttlicher, P.; Hartbrich, O.; Kotera, K.; Krivan, F.; Krüger, K.; Lu, S.; Lutz, B.; Reinecke, M.; Sefkow, F.; Sudo, Y.; Tran, H. L.; Kaplan, A.; Schultz-Coulon, H.-Ch.; Bilki, B.; Northacker, D.; Onel, Y.; Wilson, G. W.; Kawagoe, K.; Sekiya, I.; Suehara, T.; Yamashiro, H.; Yoshioka, T.; Alamillo, E. Calvo; Fouz, M. C.; Marin, J.; Navarrete, J.; Pelayo, J. Puerta; Verdugo, A.; Chadeeva, M.; Danilov, M.; Gabriel, M.; Goecke, P.; Graf, C.; Israeli, Y.; Kolk, N. Van Der; Simon, F.; Szalay, M.; Windel, H.; Bilokin, S.; Bonis, J.; Pöschl, R.; Thiebault, A.; Richard, F.; Zerwas, D.; Balagura, V.; Boudry, V.; Brient, J.-C.; Cornat, R.; Cvach, J.; Janata, M.; Kovalcuk, M.; Kvasnicka, J.; Polak, I.; Smolik, J.; Vrba, V.; Zalesak, J.; Zuklin, J.; Choi, W.; Kotera, K.; Nishiyama, M.; Sakuma, T.; Takeshita, T.; Tozuka, S.; Tsubokawa, T.; Uozumi, S.; Jeans, D.; Ootani, W.; Liu, L.; Chang, S.; Khan, A.; Kim, D. H.; Kong, D. J.; Oh, Y. D.; Ikuno, T.; Sudo, Y.; Takahashi, Y.; Götze, M.; Calice Collaboration
2018-04-01
A highly granular electromagnetic calorimeter with scintillator strip readout is being developed for future linear collider experiments. A prototype of 21.5 X₀ depth and 180 × 180 mm² transverse dimensions was constructed, consisting of 2160 individually read out 10 × 45 × 3 mm³ scintillator strips. This prototype was tested using electrons of 2-32 GeV at the Fermilab Test Beam Facility in 2009. Deviations from linear energy response were less than 1.1%, and the intrinsic energy resolution was determined to be (12.5 ± 0.1 (stat.) ± 0.4 (syst.))%/√(E [GeV]) ⊕ (1.2 ± 0.1 (stat.) +0.6/−0.7 (syst.))%, where the uncertainties correspond to statistical and systematic sources, respectively.
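The ⊕ in the quoted resolution fit denotes addition in quadrature of a stochastic and a constant term; a minimal sketch evaluating it with the central values from this abstract (the function name is illustrative):

```python
import math

def energy_resolution(E_GeV, stochastic=0.125, constant=0.012):
    """Fractional energy resolution sigma_E/E for a fit of the form
    stochastic/sqrt(E) (+) constant, combined in quadrature."""
    return math.hypot(stochastic / math.sqrt(E_GeV), constant)

# At 10 GeV the stochastic term still dominates:
print(round(energy_resolution(10.0), 4))  # → 0.0413
```

At high energies the constant term sets the floor, which is why it is quoted separately from the stochastic coefficient.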
14. A user's guide to scintillation
International Nuclear Information System (INIS)
Hewish, A.
1989-01-01
During the past four decades scintillation methods have been used for remote-sensing distant plasmas and for providing high angular resolution in radioastronomy. This brief review illustrates some of the techniques employed and explains the underlying theory in simple physical terms; it is not intended to be a complete survey of all applications of scintillation. (author)
15. High resolution backscattering instruments
International Nuclear Information System (INIS)
Coldea, R.
2001-01-01
The principle of operation of indirect-geometry time-of-flight spectrometers is presented, including IRIS at the ISIS spallation neutron source. The key features that make these spectrometers ideally suited for low-energy spectroscopy are high energy resolution over a wide dynamic range, and simultaneous measurement over a large momentum-transfer range provided by the wide angular detector coverage. To exemplify these features, single-crystal experiments on the spin dynamics in the two-dimensional frustrated quantum magnet Cs₂CuCl₄ are discussed. (R.P.)
16. X-ray radiation detectors of 'scintillator-photoreceiving device type' for industrial digital radiography with improved spatial resolution
International Nuclear Information System (INIS)
Ryzhykov, V.D.; Lysetska, O.K.; Opolonin, O.D.; Kozin, D.N.
2003-01-01
Main types of photoreceivers used in X-ray digital radiography systems are luminescent screens that transfer the optical image onto charge-collection instruments, which require cooling, and semiconductor silicon detectors, which limit the contrast sensitivity. We have developed and produced X-ray radiation detectors of the 'scintillator-photoreceiving device' (S-PRD) type, in which the scintillator is integrally located on the inverse side of the photodiode (PD). The receiving-converting circuit (RCC) is designed for data conversion into digital form and input into a PC. Software is provided for RCC control and image visualization. Main advantages of these detectors are high spatial resolution (3-5 line pairs per mm), detection of features down to 20 μm, controlled sensitivity, low weight and small size, and low object dose (0.1-0.3 mrad) imaging in real time. In this work, the main characteristics of 32-, 64- and 1024-channel detectors of S-PRD type were studied and compared for X-ray sensitivity with S-PD detectors. Images of the tested objects have been obtained. Recommendations are given on the use of different scintillation materials, depending upon the purpose of a digital radiographic system. The detectors operate in a broad energy range of ionizing radiation, hence the size of the controlled object is not limited. The system is sufficiently powerful to ensure frontal (through two walls) observation of pipelines with wall thickness up to 10 cm
17. Scintillating liquid xenon calorimeter for precise electron/photon/jet physics at high energy high luminosity hadron colliders
International Nuclear Information System (INIS)
Chen, M.; Luckey, D.; Pelly, D.; Shotkin, S.; Sumorok, K.; Wadsworth, B.; Yan, X.J.; You, C.; Zhang, X.; Chen, E.G.; Gaudreau, M.P.J.; Montgomery, D.B.; Sullivan, J.D.; Bolozdynya, A.; Chernyshev, V.; Goritchev, P.; Khovansky, V.; Kouchenkov, A.; Kovalenko, A.; Lebedenko, V.; Vinogradov, V.A.; Epstein, V.; Zeldovich, S.; Krasnokutsky, R.; Shuvalov, R.; Aprile, E.; Mukherjee, R.; Suzuki, M.; Moulsen, M.; Sugimoto, S.; Okada, K.; Fujino, T.; Matsuda, T.; Miyajima, M.; Doke, T.; Kikuchi, J.; Hitachi, A.; Kashiwagi, T.; Nagasawa, Y.; Ichinose, H.; Ishida, N.; Nakasugi, T.; Ito, T.; Masuda, K.; Shibamura, E.; Wallraff, W.; Vivargent, M.; Mutterer, M.; Chen, H.S.; Tang, H.W.; Tung, K.L.; Ding, H.L.; Takahashi, T.
1990-01-01
The authors use α as well as e, π, p, d and heavy-ion beams to test prototype scintillating liquid xenon detectors, with large UV photodiodes and fast amplifiers submersed directly in liquid xenon. The data show very large photoelectron yields (10⁷/GeV) and high energy resolution (σ(E)/E 1.6 GeV). The α spectra are stable over the long term and can be used to calibrate the detectors. Full-size liquid xenon detectors have been constructed to study cosmic μ's and heavy ions. The authors report progress on the design and construction of the 5 x 5 and 11 x 11 cell liquid xenon detectors, which will be tested in high-energy beams to determine the e/π ratio. The authors describe the design and the unique properties of the proposed scintillating LXe calorimeter for the SSC
18. Grooved windows for scintillation crystals and light pipes of high refractive index
International Nuclear Information System (INIS)
Swinehart, C.F.
1975-01-01
Scintillation crystals are disclosed which have improved resolution and pulse height. An improved crystal has shallow grooves or spot depressions cut in the window, usually an end surface. Typical grooves are about 1.5 mm wide and about .1 mm deep. The grooves may be either horizontal, generally parallel grooves in spaced apart relationship, or concentric rings in radially spaced apart relationship. A light pipe of high refractive index, such as a crystal of pure sodium iodide, may also be improved with shallow grooves or spot depressions cut in an end surface
19. High Time Resolution Astrophysics
CERN Document Server
Phelan, Don; Shearer, Andrew
2008-01-01
High Time Resolution Astrophysics (HTRA) is an important new window to the universe and a vital tool in understanding a range of phenomena from diverse objects and radiative processes. This importance is demonstrated in this volume with the description of a number of topics in astrophysics, including quantum optics, cataclysmic variables, pulsars, X-ray binaries and stellar pulsations to name a few. Underlining this science foundation, technological developments in both instrumentation and detectors are described. These instruments and detectors combined cover a wide range of timescales and can measure fluxes, spectra and polarisation. These advances make it possible for HTRA to make a big contribution to our understanding of the Universe in the next decade.
20. High resolution ultrasonic densitometer
International Nuclear Information System (INIS)
Dress, W.B.
1983-01-01
The velocity of torsional stress pulses in an ultrasonic waveguide of non-circular cross section is affected by the temperature and density of the surrounding medium. Measurement of the transit times of acoustic echoes from the ends of a sensor section are interpreted as level, density, and temperature of the fluid environment surrounding that section. This paper examines methods of making these measurements to obtain high resolution, temperature-corrected absolute and relative density and level determinations of the fluid. Possible applications include on-line process monitoring, a hand-held density probe for battery charge state indication, and precise inventory control for such diverse fluids as uranium salt solutions in accountability storage and gasoline in service station storage tanks
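The transit-time-to-density interpretation described above can be sketched with a simple calibration model; the linear relation and its coefficient below are illustrative assumptions (a real sensor would be calibrated against reference fluids, and temperature-corrected as the abstract notes).

```python
def fluid_density(transit_time_s, empty_transit_s, k=2.5e-4):
    """Infer the surrounding-fluid density (kg/m^3) from the slowing of
    a torsional pulse in a non-circular waveguide, using the assumed
    linear model (t - t0)/t0 = k * rho. Both the model and the
    coefficient k are placeholders for a measured calibration curve."""
    return (transit_time_s - empty_transit_s) / (empty_transit_s * k)
```

The essential idea survives any particular calibration form: the denser the surrounding medium, the longer the echo transit time through the immersed sensor section.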
1. Improvement on the light yield of a high-Z inorganic scintillator GSO(Ce)
CERN Document Server
Kamae, T; Isobe, N; Kokubun, M; Kubota, A; Osone, S; Takahashi, T; Tsuchida, N; Ishibashi, H
2002-01-01
Cerium-doped gadolinium oxyorthosilicate crystal, GSO(Ce), is a high-Z non-hygroscopic scintillator that gives higher light yield than BGO, and can potentially replace NaI(Tl), CsI(Tl) and BGO in many applications. Its production cost, however, has been substantially higher than any of them, while its energy resolution has been worse than that of NaI(Tl) or CsI(Tl). These advantages did not overcome the deficiencies except in limited applications. We developed a low-background phoswich counter (the well-type phoswich counter) for the Hard X-ray Detector of the Astro-E project based on GSO scintillator. In the development work, we have succeeded in improving the light yield of GSO(Ce) by 40-50%. For energies above 500 keV, a large GSO(Ce) crystal (4.5 cm x 4.5 cm φ) now gives energy resolution comparable to or better than the best NaI(Tl) when read out with a phototube. With a small GSO(Ce) crystal (5 x 5 x 5 mm³) and a photodiode, an energy resolution comparable to or better than the best CsI(Tl) has been obtained...
2. A new approach to film dosimetry for high-energy photon beams using organic plastic scintillators
International Nuclear Information System (INIS)
Yeo, I.J.; Wang, C.-K.C.; Burch, S.E.
1999-01-01
Successful radiotherapy relies on accurate dose measurement. Traditional dosimeters such as ion chambers, TLDs and diodes have disadvantages such as relatively long measurement time and poor spatial resolution. These drawbacks become more serious problems for dynamic beams (i.e. with the use of dynamic wedges or even the intensity modulation technique). X-ray film, an integrating dosimeter, may not be associated with the above disadvantages and problems. However, there are several major issues regarding use of x-ray film for routine dosimetry, including the over-response of the film to low-energy photons, variations in the dose response curve (nonlinearity), lack of reproducibility due to variation in processing, etc. This paper addresses the first problem. That is, x-ray film over-responds to low-energy photons (energies below 400 keV), and thus generates unacceptably inaccurate dosimetric data compared with ion-chamber data. To overcome the over-response problem of x-ray film in a phantom, a scintillation method has been investigated. In this method, a film is sandwiched by two plastic scintillation screens to enhance the film response to upstream electrons, and therefore minimize the over-response caused by low-energy photons. The sandwiched system was tested with a 4 MV linac beam. The result shows that, depending on the uniformity of the scintillation screens, the depth-dose distribution obtained from the sandwich system can be made to agree well with that obtained from ion chambers. However, the required high degree of uniformity remains a challenge for the scintillation screen manufacturers. (author)
3. High resolution positron tomography
International Nuclear Information System (INIS)
Brownell, G.L.; Burnham, C.A.
1982-01-01
The limits of spatial resolution in practical positron tomography are examined. The four factors that limit spatial resolution are: positron range; small angle deviation; detector dimensions and properties; statistics. Of these factors, positron range may be considered the fundamental physical limitation since it is independent of instrument properties. The other factors are to a greater or lesser extent dependent on the design of the tomograph
4. New DOI identification approach for high-resolution PET detectors
International Nuclear Information System (INIS)
Choghadi, Amin; Takahashi, Hiroyuki; Shimazoe, Kenji
2016-01-01
Depth-of-interaction (DOI) identification in positron emission tomography (PET) detectors is gaining importance, as it improves spatial resolution in both conventional and time-of-flight (TOF) PET, and coincidence time resolution (CTR) in TOF-PET. In both cases, spatial resolution is affected by parallax error caused by the length of the scintillator crystals. This long length also contributes substantial timing uncertainty to the time resolution of TOF-PET. Through DOI identification, both the parallax error and the timing uncertainty caused by the crystal length can be resolved. In this work, a novel approach to estimating DOI was investigated, exploiting the overlap of the absorption spectrum of scintillator crystals with their emission spectrum. Because the absorption length is close to zero for the shorter wavelengths of the crystal emission spectrum, the counts in this range of the spectrum depend strongly on DOI; that is, higher counts correspond to deeper interactions. The ratio of counts in this range to the total counts is a good measure for estimating DOI. In order to extract such a ratio, two photodetectors are used for each crystal and an optical filter is mounted on top of only one of them. The ratio of the filtered output to the non-filtered output can be utilized as a DOI estimator. For a 2 x 2 x 20 mm³ GAGG:Ce scintillator, an 8-mm DOI resolution was achieved in our simulations. (author)
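The ratio-based estimator described in this abstract reduces to a one-line calculation once calibrated; the linear calibration constants below are hypothetical placeholders (a real detector would be calibrated with a collimated source scanned along the crystal).

```python
def doi_estimate(filtered_counts, unfiltered_counts, calib=(0.0, 40.0)):
    """Estimate depth of interaction (mm) from the ratio of the
    optically filtered photodetector output to the unfiltered one.
    The linear (offset, gain) calibration is an illustrative
    assumption, not a value from the paper."""
    ratio = filtered_counts / unfiltered_counts
    offset, gain = calib
    return offset + gain * ratio
```

Any monotonic mapping from ratio to depth would serve; the linear form is only the simplest choice consistent with the description.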
5. High resolution tomography using analog coding
International Nuclear Information System (INIS)
Brownell, G.L.; Burnham, C.A.; Chesler, D.A.
1985-01-01
As part of a 30-year program in the development of positron instrumentation, the authors have developed a high resolution bismuth germanate (BGO) ring tomograph (PCR) employing 360 detectors and 90 photomultiplier tubes per plane. The detectors are shaped as trapezoids and are 4 mm wide at the front end. When assembled, they form an essentially continuous cylindrical detector. Light from a scintillation in the detector is viewed through a cylindrical light pipe by the photomultiplier tubes. By use of an analog coding scheme, the crystal emitting light is identified from the phototube signals. In effect, each phototube can identify four crystals. PCR is designed as a static device and does not use interpolative motion. This results in considerable advantage when performing dynamic studies. PCR is the positron tomography analog of the γ-camera widely used in nuclear medicine
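Analog coding of the kind described here identifies the firing crystal from the relative amplitudes of neighboring phototube signals. A simplified two-tube, Anger-style sketch (the equal-width ratio bins are an assumption; a real system derives the bin boundaries from a flood-map calibration):

```python
def decode_crystal(signal_a, signal_b, n_crystals=4):
    """Identify which of n_crystals fired between two phototubes from
    the light-sharing ratio. Equal bin widths in the ratio are a
    simplifying assumption standing in for a measured flood map."""
    ratio = signal_b / (signal_a + signal_b)  # 0 -> nearest tube A, 1 -> nearest tube B
    index = int(ratio * n_crystals)
    return min(index, n_crystals - 1)
```

This is how one phototube can resolve several crystals: position information is carried in the analog signal ratio rather than in one-to-one detector coupling.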
6. Capillary detectors for high resolution tracking
International Nuclear Information System (INIS)
Annis, P.; Bay, A.; Bonekaemper, D.; Buontempo, S.; Ereditato, A.; Fabre, J.P.; Fiorillo, G.; Frekers, D.; Frenkel, A.; Galeazzi, F.; Garufi, F.; Goldberg, J.; Golovkin, S.; Hoepfner, K.; Konijn, J.; Kozarenko, E.; Kreslo, I.; Liberti, B.; Martellotti, G.; Medvedkov, A.; Mommaert, C.; Panman, J.; Penso, G.; Petukhov, Yu.; Rondeshagen, D.; Tyukov, V.; Vasilchenko, V.; Vilain, P.; Vischers, J.L.; Wilquet, G.; Winter, K.; Wolff, T.; Wong, H.
1997-01-01
We present a new tracking device based on glass capillary bundles or layers filled with highly purified liquid scintillator and read out at one end by means of image intensifiers and CCD devices. A large-volume prototype consisting of 5 × 10⁵ capillaries with a diameter of 20 μm and a length of 180 cm, read out by a megapixel CCD, has been tested with muon and neutrino beams at CERN. With this prototype a two-track resolution of 33 μm was achieved with through-going muons. Images of neutrino interactions in a capillary bundle have also been acquired and analysed. Read-out chains based on electron-bombarded CCD (EBCCD) and image pipeline devices are also investigated. Preliminary results obtained with a capillary bundle read out by an EBCCD are presented. (orig.)
7. Capillary detectors for high resolution tracking
CERN Document Server
Annis, P
1997-01-01
We present a new tracking device based on glass capillary bundles or layers filled with highly purified liquid scintillator and read out at one end by means of image intensifiers and CCD devices. A large-volume prototype consisting of 5 × 10⁵ capillaries with a diameter of 20 μm and a length of 180 cm, read out by a megapixel CCD, has been tested with muon and neutrino beams at CERN. With this prototype a two-track resolution of 33 μm was achieved with through-going muons. Images of neutrino interactions in a capillary bundle have also been acquired and analysed. Read-out chains based on Electron Bombarded CCD (EBCCD) and image pipeline devices are also investigated. Preliminary results obtained with a capillary bundle read out by an EBCCD are presented.
8. Effects of detector-source distance and detector bias voltage variations on time resolution of general purpose plastic scintillation detectors.
Science.gov (United States)
Ermis, E E; Celiktas, C
2012-12-01
The effects of source-detector distance and detector bias voltage variations on the time resolution of a general-purpose plastic scintillation detector, BC400, were investigated. (133)Ba and (207)Bi calibration sources, with and without a collimator, were used in the present work. Optimum source-detector distance and bias voltage values were determined for the best time resolution using the leading-edge timing method. The effect of collimator usage on time resolution was also investigated. Copyright © 2012 Elsevier Ltd. All rights reserved.
9. Decay Time Measurement for Different Energy Depositions of Plastic Scintillator Fabricated by High Temperature Polymerization Reaction
Energy Technology Data Exchange (ETDEWEB)
Lee, Cheol Ho; Son, Jaebum; Lee, Sangmin; Kim, Tae Hoon; Kim, Yong-Kyun [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
Plastic scintillators are based on organic fluors. They have many advantages such as fast rise and decay times, high optical transmission, ease of manufacturing, low cost, and large available size. For these reasons they are widely used for particle identification. Also, protection of people against a variety of threats (such as nuclear, radiological, and explosive) represents a true challenge along with the continuing development of science and technology. The plastic scintillator is widely used in various devices serving nuclear, photonics, quantum, and high-energy physics. The plastic scintillator is probably the most widely used organic detector, and polystyrene is one of the most widely used materials in the making of plastic scintillator detectors. Thus, a styrene monomer as a solvent was used to fabricate a plastic scintillator by a high-temperature polymerization reaction, and the emission wavelength and the decay times for different energy depositions were then measured with the fabricated scintillator. A plastic scintillator was fabricated to measure the decay time for different energy depositions using high-temperature polymerization. An emission wavelength of 426.05 nm was measured with a spectrophotometer to confirm the scintillator property. Four gamma-ray sources (Cs-137, Co-60, Na-22, and Ba-133) were used to evaluate the effect of different energy depositions on the decay time. The average decay time of the fabricated plastic scintillator was measured to be approximately 4.72 ns, slightly higher than that of a commercial plastic scintillator. In the future, light output and linearity will be measured to evaluate other properties in comparison with the commercial scintillator.
10. Decay Time Measurement for Different Energy Depositions of Plastic Scintillator Fabricated by High Temperature Polymerization Reaction
International Nuclear Information System (INIS)
Lee, Cheol Ho; Son, Jaebum; Lee, Sangmin; Kim, Tae Hoon; Kim, Yong-Kyun
2016-01-01
Plastic scintillators are based on organic fluors. They have many advantages such as fast rise and decay times, high optical transmission, ease of manufacturing, low cost, and large available size. For these reasons they are widely used for particle identification. Also, protection of people against a variety of threats (such as nuclear, radiological, and explosive) represents a true challenge along with the continuing development of science and technology. The plastic scintillator is widely used in various devices serving nuclear, photonics, quantum, and high-energy physics. The plastic scintillator is probably the most widely used organic detector, and polystyrene is one of the most widely used materials in the making of plastic scintillator detectors. Thus, a styrene monomer as a solvent was used to fabricate a plastic scintillator by a high-temperature polymerization reaction, and the emission wavelength and the decay times for different energy depositions were then measured with the fabricated scintillator. A plastic scintillator was fabricated to measure the decay time for different energy depositions using high-temperature polymerization. An emission wavelength of 426.05 nm was measured with a spectrophotometer to confirm the scintillator property. Four gamma-ray sources (Cs-137, Co-60, Na-22, and Ba-133) were used to evaluate the effect of different energy depositions on the decay time. The average decay time of the fabricated plastic scintillator was measured to be approximately 4.72 ns, slightly higher than that of a commercial plastic scintillator. In the future, light output and linearity will be measured to evaluate other properties in comparison with the commercial scintillator
11. Design and Prototyping of a High Granularity Scintillator Calorimeter
International Nuclear Information System (INIS)
Zutshi, Vishnu
2016-01-01
A novel approach for constructing fine-granularity scintillator calorimeters, based on the concept of an Integrated Readout Layer (IRL) was developed. The IRL consists of a printed circuit board inside the detector which supports the directly-coupled scintillator tiles, connects to the surface-mount SiPMs and carries the necessary front-end electronics and signal/bias traces. Prototype IRLs using this concept were designed, prototyped and successfully exposed to test beams. Concepts and implementations of an IRL carried out with funds associated with this contract promise to result in the next generation of scintillator calorimeters.
12. Design and Prototyping of a High Granularity Scintillator Calorimeter
Energy Technology Data Exchange (ETDEWEB)
Zutshi, Vishnu [Northern Illinois Univ., DeKalb, IL (United States). Dept. of Physics
2016-03-27
A novel approach for constructing fine-granularity scintillator calorimeters, based on the concept of an Integrated Readout Layer (IRL) was developed. The IRL consists of a printed circuit board inside the detector which supports the directly-coupled scintillator tiles, connects to the surface-mount SiPMs and carries the necessary front-end electronics and signal/bias traces. Prototype IRLs using this concept were designed, prototyped and successfully exposed to test beams. Concepts and implementations of an IRL carried out with funds associated with this contract promise to result in the next generation of scintillator calorimeters.
13. Cerenkov counting and Cerenkov-scintillation counting with high refractive index organic liquids using a liquid scintillation counter
Energy Technology Data Exchange (ETDEWEB)
Wiebe, L I; Helus, F; Maier-Borst, W [Deutsches Krebsforschungszentrum, Heidelberg (Germany, F.R.). Inst. fuer Nuklearmedizin
1978-06-01
¹⁸F and ¹⁴C radioactivity was measured in methyl salicylate (MS), a high refractive index hybrid Cherenkov-scintillation generating medium, using a liquid scintillation counter. At concentrations of up to 21.4% in MS, dimethyl sulfoxide (DMSO) quenched ¹⁴C fluorescence, and with a 10-fold excess of DMSO over MS, ¹⁸F count rates were reduced below that for DMSO alone, probably as a result of concentration-independent self-quenching due to 'dark-complex' formation. DMSO in lower concentrations did not reduce the counting efficiency of ¹⁸F in MS. Nitrobenzene was a concentration-dependent quencher for both ¹⁴C and ¹⁸F in MS. Chlorobenzene (CB) and DMSO were both found to be weak Cherenkov generators with ¹⁸F. Counting efficiencies for ¹⁸F in MS, CB, and DMSO were 50.3, 7.8 and 4.3% respectively in the coincidence counting mode, and 58.1, 13.0 and 6.8% in the singles mode. ¹⁴C efficiencies were 14.4 and 22.3% for coincidence and singles respectively, and 15.3 and 42.0% using a modern counter designed for coincidence and single photon counting. The high ¹⁴C and ¹⁸F counting efficiencies in MS are discussed with respect to excitation mechanism, on the basis of the quench and channels-ratio changes observed. It is proposed that MS functions as an efficient Cherenkov-scintillation generator for high-energy beta emitters such as ¹⁸F, and as a low-efficiency scintillator for weak beta-emitting radionuclides such as ¹⁴C.
14. Cerenkov counting and Cerenkov-scintillation counting with high refractive index organic liquids using a liquid scintillation counter
International Nuclear Information System (INIS)
Wiebe, L.I.; Helus, F.; Maier-Borst, W.
1978-01-01
¹⁸F and ¹⁴C radioactivity was measured in methyl salicylate (MS), a high refractive index hybrid Cherenkov-scintillation generating medium, using a liquid scintillation counter. At concentrations of up to 21.4% in MS, dimethyl sulfoxide (DMSO) quenched ¹⁴C fluorescence, and with a 10-fold excess of DMSO over MS, ¹⁸F count rates were reduced below that for DMSO alone, probably as a result of concentration-independent self-quenching due to 'dark-complex' formation. DMSO in lower concentrations did not reduce the counting efficiency of ¹⁸F in MS. Nitrobenzene was a concentration-dependent quencher for both ¹⁴C and ¹⁸F in MS. Chlorobenzene (CB) and DMSO were both found to be weak Cherenkov generators with ¹⁸F. Counting efficiencies for ¹⁸F in MS, CB, and DMSO were 50.3, 7.8 and 4.3% respectively in the coincidence counting mode, and 58.1, 13.0 and 6.8% in the singles mode. ¹⁴C efficiencies were 14.4 and 22.3% for coincidence and singles respectively, and 15.3 and 42.0% using a modern counter designed for coincidence and single photon counting. The high ¹⁴C and ¹⁸F counting efficiencies in MS are discussed with respect to excitation mechanism, on the basis of the quench and channels-ratio changes observed. It is proposed that MS functions as an efficient Cherenkov-scintillation generator for high-energy beta emitters such as ¹⁸F, and as a low-efficiency scintillator for weak beta-emitting radionuclides such as ¹⁴C. (author)
15. On the energy resolution of uranium and other hadron calorimeters
International Nuclear Information System (INIS)
Wigmans, R.
1987-01-01
The components that contribute to the signal of a hadron calorimeter and the factors that affect its performance are discussed, concentrating on two aspects: energy resolution and signal linearity. Both depend decisively on the relative response to the electromagnetic and the non-electromagnetic shower components, the e/h signal ratio, which should be equal to 1.0 for optimal performance. The factors that determine the value of this ratio are examined. The calorimeter performance is crucially determined by its response to the abundant soft neutrons in the shower. The presence of a considerable fraction of hydrogen atoms in the active medium is essential for achieving the best possible results. Firstly, it allows one to tune e/h to the desired value by choosing the appropriate sampling fraction. Secondly, the efficient neutron detection via recoil protons in the readout medium itself considerably reduces the effect of fluctuations in binding-energy losses at the nuclear level, which dominate the intrinsic energy resolution. Signal equalization, or compensation (e/h = 1.0), does not seem to be a property unique to ²³⁸U, but can also be achieved with lead and probably even iron absorbers. 21 refs.; 19 figs
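Why e/h = 1 gives signal linearity can be made concrete: the mean electromagnetic fraction of a hadronic shower grows with energy, so any e/h ≠ 1 makes the response energy-dependent. A sketch using a commonly used power-law parameterization of the electromagnetic fraction (the scale E₀ and exponent k are typical literature values, assumed here, not taken from this abstract):

```python
def pion_response(E_GeV, e_over_h, E0=1.0, k=0.82):
    """Calorimeter response to a pion of energy E, normalized to the
    electron response at the same energy. f_em is the mean
    electromagnetic shower fraction in a commonly used power-law
    parameterization; the response is energy-independent only if
    e/h = 1."""
    f_em = 1.0 - (E_GeV / E0) ** (k - 1.0)
    return f_em + (1.0 - f_em) / e_over_h
```

For e/h > 1 the pion response rises with energy (nonlinearity) and the event-by-event spread of f_em feeds directly into the resolution, which is the motivation for compensation.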
16. A new scintillation counter with very fast resolving time (1961); Nouveau compteur a scintillation a tres faible temps de resolution (1961)
Energy Technology Data Exchange (ETDEWEB)
Koch, L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1961-07-01
The rare gases used as scintillators are characterized by their short time of luminescence and by the linearity of their response as a function of the total energy imparted to the gas by the incident particle. It is possible with these scintillators, when associated with a fast response photomultiplier, to solve certain problems of nuclear physics demanding a linear detector with a very fast resolving time (a few nanoseconds). Two examples of the construction of this apparatus are described. The results obtained and future possibilities are briefly outlined. (author) [French] Les gaz rares utilises comme scintillateurs sont caracterises par leur faible duree de luminescence et par la linearite de leur reponse en fonction de l'energie totale cedee au gaz par la particule incidente. Ces scintillateurs, associes a un photomultiplicateur a une reponse rapide, permettent de resoudre certains problemes de physique nucleaire dans lesquels un detecteur lineaire a tres faible temps de resolution (quelques nanosecondes) se revele indispensable. Deux exemples de realisation sont decrits. Les resultats obtenus et les possibilites futures sont brievement exposes. (auteur)
17. GPS scintillations and total electron content climatology in the southern low, middle and high latitude regions
Directory of Open Access Journals (Sweden)
Luca Spogli
2013-06-01
In recent years, several groups have installed high-frequency sampling receivers in the southern middle- and high-latitude regions to monitor ionospheric scintillation and total electron content (TEC) changes. Taking advantage of the archive of continuous and systematic L-band observations of the ionosphere by means of signals from the Global Positioning System (GPS), we present the first attempt at ionospheric scintillation and TEC mapping from Latin America to Antarctica. The climatology of the area considered is derived through Ground-Based Scintillation Climatology, a method that can identify ionospheric sectors in which scintillations are more likely to occur. This study also introduces a novel ionospheric scintillation 'hot-spot' analysis, which first identifies the crucial areas of the ionosphere in terms of enhanced probability of scintillation occurrence, and then studies the seasonal variation of the main scintillation and TEC-related parameters. The results produced by this analysis give significant indications of the spatial/temporal recurrences of plasma irregularities, which contributes to extending current knowledge of the mechanisms that cause scintillations, and consequently to the development of efficient tools to forecast space-weather-related ionospheric events.
18. Economical stabilized scintillation detector
International Nuclear Information System (INIS)
Anshakov, O.M.; Chudakov, V.A.; Gurinovich, V.I.
1983-01-01
An economical scintillation detector with a stabilization system of the integral type is described. The power consumed by the photomultiplier high-voltage power source is 40 mW, and the energy resolution is not worse than 9%. The detector is used as the reference detector of a digital radioisotope densimeter for light media, which has been operating successfully for several years
19. Microfluidic Scintillation Detectors
CERN Multimedia
Microfluidic scintillation detectors are recently introduced devices for the detection of high-energy particles, developed within the EP-DT group at CERN. Most of the interest in this technology comes from the use of liquid scintillators, which entails the possibility of changing the active material in the detector, leading to increased radiation resistance. This feature, together with the high spatial resolution and low thickness deriving from the microfabrication techniques used to manufacture such devices, is desirable not only in instrumentation for high energy physics experiments but also in medical detectors such as beam monitors for hadron therapy.
20. Effects of detector–source distance and detector bias voltage variations on time resolution of general purpose plastic scintillation detectors
International Nuclear Information System (INIS)
Ermis, E.E.; Celiktas, C.
2012-01-01
Effects of source–detector distance and detector bias voltage variations on the time resolution of a general purpose plastic scintillation detector such as BC400 were investigated. 133Ba and 207Bi calibration sources, with and without a collimator, were used in the present work. Optimum source–detector distance and bias voltage values were determined for the best time resolution using the leading-edge timing method. The effect of collimator usage on time resolution was also investigated. - Highlights: ► Effect of the source–detector distance on time spectra was investigated. ► Effect of detector bias voltage variations on time spectra was examined. ► Optimum detector–source distance was determined for the best time resolution. ► Optimum detector bias voltage was determined for the best time resolution. ► 133Ba and 207Bi radioisotopes were used.
1. Climatology of GPS ionospheric scintillation at high and mid latitudes under different solar activity conditions
International Nuclear Information System (INIS)
Spogli, L.; Alfonsi, L.; De Franceschi, G.; Romano, V.; Aquino, M.H.O.; Dodson, A.
2010-01-01
We analyze data of ionospheric scintillation over North European regions for the same period (October to November) of two different years (2003 and 2008), characterized by different geomagnetic conditions. The work aims to develop a scintillation climatology of the high- and mid-latitude ionosphere, analyzing the behaviour of the scintillation occurrence as a function of the magnetic local time (MLT) and of the altitude adjusted corrected magnetic latitude (M lat), to characterize scintillation scenarios under different solar activity conditions. The results shown herein are obtained merging observations from a network of GISTMs (GPS Ionospheric Scintillation and TEC Monitor) located over a wide range of latitudes in the northern hemisphere. Our findings confirm the associations of the occurrence of the ionospheric irregularities with the expected position of the auroral oval and of the ionospheric trough walls and show the contribution of the polar cap patches even under solar minimum conditions.
2. High Resolution Sensor for Nuclear Waste Characterization
International Nuclear Information System (INIS)
Kanai Shah; William Higgins; Edgar V. Van Loef
2006-01-01
Gamma ray spectrometers are an important tool in the characterization of radioactive waste. Important requirements for gamma ray spectrometers used in this application include good energy resolution, high detection efficiency, compact size, light weight, portability, and low power requirements. None of the available spectrometers satisfy all of these requirements. The goal of the Phase I research was to investigate lanthanum halide and related scintillators for nuclear waste clean-up. LaBr3:Ce remains a very promising scintillator with high light yield and fast response. CeBr3 is attractive because it is very similar to LaBr3:Ce in terms of scintillation properties and also has the advantage of much lower self-radioactivity, which may be important in some applications. CeBr3 also shows slightly higher light yield at higher temperatures than LaBr3 and may be easier to produce with high uniformity in large volume since it does not require any dopants. Among the mixed lanthanum halides, the light yield of LaBrxI3-x:Ce is lower, and the difference in crystal structure of the binaries (LaBr3 and LaI3) makes it difficult to grow high-quality crystals of the ternary as the iodine concentration is increased. On the other hand, LaBrxCl3-x:Ce provides excellent performance. Its light output is high and it provides fast response. The crystal structures of the two binaries (LaBr3 and LaCl3) are very similar. Overall, its scintillation properties are very similar to those of LaBr3:Ce. While the gamma-ray stopping efficiency of LaBrxCl3-x:Ce is lower than that of LaBr3:Ce (primarily because the density of LaCl3 is lower than that of LaBr3), it may be easier to grow large crystals of LaBrxCl3-x:Ce than of LaBr3:Ce since in some instances (for example, CdxZn1-xTe) ternary compounds provide increased flexibility in the crystal lattice. Among the new dopants, Eu2+ and Pr3+, tried in LaBr3 host crystals, the Eu2+ doped samples exhibited
3. Timing resolution improvement using DOI information in a four-layer scintillation detector for TOF-PET
Energy Technology Data Exchange (ETDEWEB)
Shibuya, Kengo [jPET Project Team, Molecular Imaging Center, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage-ku, Chiba 263-0024 (Japan)], E-mail: shibuken@gakushikai.jp; Nishikido, Fumihiko [jPET Project Team, Molecular Imaging Center, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage-ku, Chiba 263-0024 (Japan); Tsuda, Tomoaki [Technology Research Laboratory, Shimadzu Corporation, Hikaridai 3-9-4, Seika-cho, Kyoto 619-0237 (Japan); Kobayashi, Tetsuya [Department of Medical System Engineering, Graduate School of Engineering, Chiba University, Yayoi 1-33, Inage-ku, Chiba 263-8522 (Japan); Lam, Chihfung; Yamaya, Taiga; Yoshida, Eiji; Inadama, Naoko; Murayama, Hideo [jPET Project Team, Molecular Imaging Center, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage-ku, Chiba 263-0024 (Japan)
2008-08-11
Depth-of-interaction (DOI) detectors are considered advantageous for time-of-flight positron emission tomography (TOF-PET) because they can correct timing errors that arise in the scintillation crystals from the propagation speed difference between annihilation radiation and scintillation photons. We experimentally measured this timing error using our four-layer DOI encoding method. The upper layers exhibited larger timing delays due to the longer path lengths after conversion from annihilation radiation into scintillation photons, which traveled along zigzag paths at a speed decreased by a factor of the refractive index (n). The maximum timing delay between the uppermost and lowermost layers was evaluated as 164 ps for n = 1.47. A TOF error correction was demonstrated to improve the timing resolution of the four-layer DOI detector by 10.3%, which would increase the effective sensitivity of the scanner by about 12% in comparison with a non-DOI TOF-PET scanner. This is the first step towards combining these two important fields in PET instrumentation, namely DOI and TOF, for the purpose of achieving higher sensitivity as well as more uniform spatial resolution.
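The layer-dependent delay described above can be sketched numerically: a gamma converting nearer the crystal top trades fast gamma transit at c for slow scintillation-photon transit at c/n over the remaining length. The stack length, layer depths, and straight-line photon path (no zigzag factor) are illustrative assumptions, so the numbers below are not the paper's 164 ps.

```python
C = 299792458.0  # speed of light in vacuum, m/s

def doi_timing_delay(depth_m, crystal_len_m, n=1.47):
    """Transit time for a gamma converting at depth_m from the crystal top:
    the 511 keV photon covers depth_m at c, then scintillation photons cover
    the remaining length to the photodetector at c/n (straight-line
    assumption; real zigzag paths are longer)."""
    return depth_m / C + n * (crystal_len_m - depth_m) / C

L = 0.058  # assumed total four-layer stack length, m (not the paper's geometry)
# Transit times for conversions at the top of each of the four layers:
delays = [doi_timing_delay(z, L) for z in (0.0, L / 4, L / 2, 3 * L / 4)]
# The TOF correction subtracts the layer-dependent part so all layers share
# the lowest layer's time reference:
corrected = [d - delays[-1] for d in delays]
```

Since n > 1, the uppermost layer always shows the largest delay, and the spread between layers scales as L(n − 1)/c.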
4. Study of micro pixel photon counters for a high granularity scintillator-based hadron calorimeter
International Nuclear Information System (INIS)
D'Ascenzo, N.; Eggemann, A.; Garutti, E.
2007-11-01
A new Geiger-mode avalanche photodiode, the Micro Pixel Photon Counter (MPPC), was recently released by Hamamatsu. It has a high photo-detection efficiency in the 420 nm spectral region. This product can represent an elegant candidate for the design of a high-granularity scintillator-based hadron calorimeter for the International Linear Collider. In fact, the direct readout of the blue scintillation photons with an MPPC is a feasible technological solution. The readout of a plastic scintillator by an MPPC, both mediated by a traditional wavelength-shifting fiber and directly coupled, has been systematically studied. (orig.)
5. Memory effect, resolution, and efficiency measurements of an Al2O3 coated plastic scintillator used for radioxenon detection
International Nuclear Information System (INIS)
Bläckberg, L.; Fritioff, T.; Mårtensson, L.; Nielsen, F.; Ringbom, A.; Sjöstrand, H.; Klintenberg, M.
2013-01-01
A cylindrical plastic scintillator cell, used for radioxenon monitoring within the verification regime of the Comprehensive Nuclear-Test-Ban Treaty, has been coated with 425 nm of Al2O3 using low-temperature Atomic Layer Deposition, and its performance has been evaluated. The motivation is to reduce the memory effect caused by radioxenon diffusing into the plastic scintillator material during measurements, resulting in an elevated detection limit. Measurements with the coated detector show both energy resolution and efficiency comparable to uncoated detectors, and a memory effect reduction by a factor of 1000. Provided that the quality of the detector is maintained over a longer period of time, Al2O3 coatings are believed to be a viable solution to the memory effect problem in question.
6. Memory effect, resolution, and efficiency measurements of an Al2O3 coated plastic scintillator used for radioxenon detection
Science.gov (United States)
Bläckberg, L.; Fritioff, T.; Mårtensson, L.; Nielsen, F.; Ringbom, A.; Sjöstrand, H.; Klintenberg, M.
2013-06-01
A cylindrical plastic scintillator cell, used for radioxenon monitoring within the verification regime of the Comprehensive Nuclear-Test-Ban Treaty, has been coated with 425 nm Al2O3 using low temperature Atomic Layer Deposition, and its performance has been evaluated. The motivation is to reduce the memory effect caused by radioxenon diffusing into the plastic scintillator material during measurements, resulting in an elevated detection limit. Measurements with the coated detector show both energy resolution and efficiency comparable to uncoated detectors, and a memory effect reduction of a factor of 1000. Provided that the quality of the detector is maintained for a longer period of time, Al2O3 coatings are believed to be a viable solution to the memory effect problem in question.
7. Properties of high pressure nitrogen-argon and nitrogen-xenon gas scintillators
International Nuclear Information System (INIS)
Tornow, W.; Huck, H.; Koeber, H.J.; Mertens, G.
1976-01-01
Investigations of scintillation light output and energy resolution have been made at pressures up to 90 atm in gaseous mixtures of nitrogen with both argon and xenon by stopping 210Po alpha particles. In the absence of a wavelength shifter, the N2-Ar mixtures gave a maximum pulse height at a ratio of nitrogen to argon partial pressures r(N2/Ar) of approximately 0.2. However, when using the wavelength shifter diphenyl stilbene (DPS), the measured light output was much larger at lower values of r(N2/Ar), whereas for r(N2/Ar) > 0.2 the pulse height and energy resolution of the studied N2-Ar mixtures were roughly identical with and without DPS. The N2-Xe gas mixtures exhibited a similar dependence of pulse height and energy resolution to that of the N2-Ar mixtures employing DPS, but the pulse height was larger by a factor of about 7. A 40 atm 50% N2-50% Xe gas scintillator showed an energy resolution ΔE/E = 0.25, while an 80 atm 75% N2-25% Xe scintillator gave ΔE/E = 0.6. The pulse height from the 80 atm N2-Xe scintillator was smaller by a factor of about 240 than the pulse height from a 20 atm pure Xe gas scintillator, but larger by a factor of about 20 than that from a 75 atm pure N2 gas scintillator. The N2-Xe mixtures showed a remarkable increase in light output as the temperature of the gas was decreased. (Auth.)
8. Testing and simulation of silicon photomultiplier readouts for scintillators in high-energy astronomy and solar physics
Science.gov (United States)
Bloser, P. F.; Legere, J. S.; Bancroft, C. M.; Jablonski, L. F.; Wurtz, J. R.; Ertley, C. D.; McConnell, M. L.; Ryan, J. M.
2014-11-01
Space-based gamma-ray detectors for high-energy astronomy and solar physics face severe constraints on mass, volume, and power, and must endure harsh launch conditions and operating environments. Historically, such instruments have usually been based on scintillator materials due to their relatively low cost, inherent ruggedness, high stopping power, and radiation hardness. New scintillator materials, such as LaBr3:Ce, feature improved energy and timing performance, making them attractive for future astronomy and solar physics space missions in an era of tightly constrained budgets. Despite this promise, the use of scintillators in space remains constrained by the volume, mass, power, and fragility of the associated light readout device, typically a vacuum photomultiplier tube (PMT). In recent years, silicon photomultipliers (SiPMs) have emerged as promising alternative light readout devices that offer gains and quantum efficiencies similar to those of PMTs, but with greatly reduced mass and volume, high ruggedness, low voltage requirements, and no sensitivity to magnetic fields. In order for SiPMs to replace PMTs in space-based instruments, however, it must be shown that they can provide comparable performance, and that their inherent temperature sensitivity can be corrected for. To this end, we have performed extensive testing and modeling of a small gamma-ray spectrometer composed of a 6 mm×6 mm SiPM coupled to a 6 mm×6 mm ×10 mm LaBr3:Ce crystal. A custom readout board monitors the temperature and adjusts the bias voltage to compensate for gain variations. We record an energy resolution of 5.7% (FWHM) at 662 keV at room temperature. We have also performed simulations of the scintillation process and optical light collection using Geant4, and of the SiPM response using the GosSiP package. The simulated energy resolution is in good agreement with the data from 22 keV to 662 keV. Above ~1 MeV, however, the measured energy resolution is systematically worse than
9. Testing and simulation of silicon photomultiplier readouts for scintillators in high-energy astronomy and solar physics
International Nuclear Information System (INIS)
Bloser, P.F.; Legere, J.S.; Bancroft, C.M.; Jablonski, L.F.; Wurtz, J.R.; Ertley, C.D.; McConnell, M.L.; Ryan, J.M.
2014-01-01
Space-based gamma-ray detectors for high-energy astronomy and solar physics face severe constraints on mass, volume, and power, and must endure harsh launch conditions and operating environments. Historically, such instruments have usually been based on scintillator materials due to their relatively low cost, inherent ruggedness, high stopping power, and radiation hardness. New scintillator materials, such as LaBr 3 :Ce, feature improved energy and timing performance, making them attractive for future astronomy and solar physics space missions in an era of tightly constrained budgets. Despite this promise, the use of scintillators in space remains constrained by the volume, mass, power, and fragility of the associated light readout device, typically a vacuum photomultiplier tube (PMT). In recent years, silicon photomultipliers (SiPMs) have emerged as promising alternative light readout devices that offer gains and quantum efficiencies similar to those of PMTs, but with greatly reduced mass and volume, high ruggedness, low voltage requirements, and no sensitivity to magnetic fields. In order for SiPMs to replace PMTs in space-based instruments, however, it must be shown that they can provide comparable performance, and that their inherent temperature sensitivity can be corrected for. To this end, we have performed extensive testing and modeling of a small gamma-ray spectrometer composed of a 6 mm×6 mm SiPM coupled to a 6 mm×6 mm ×10 mm LaBr 3 :Ce crystal. A custom readout board monitors the temperature and adjusts the bias voltage to compensate for gain variations. We record an energy resolution of 5.7% (FWHM) at 662 keV at room temperature. We have also performed simulations of the scintillation process and optical light collection using Geant4, and of the SiPM response using the GosSiP package. The simulated energy resolution is in good agreement with the data from 22 keV to 662 keV. Above ∼1 MeV, however, the measured energy resolution is systematically
10. Testing and simulation of silicon photomultiplier readouts for scintillators in high-energy astronomy and solar physics
Energy Technology Data Exchange (ETDEWEB)
Bloser, P.F., E-mail: Peter.Bloser@unh.edu; Legere, J.S.; Bancroft, C.M.; Jablonski, L.F.; Wurtz, J.R.; Ertley, C.D.; McConnell, M.L.; Ryan, J.M.
2014-11-01
Space-based gamma-ray detectors for high-energy astronomy and solar physics face severe constraints on mass, volume, and power, and must endure harsh launch conditions and operating environments. Historically, such instruments have usually been based on scintillator materials due to their relatively low cost, inherent ruggedness, high stopping power, and radiation hardness. New scintillator materials, such as LaBr3:Ce, feature improved energy and timing performance, making them attractive for future astronomy and solar physics space missions in an era of tightly constrained budgets. Despite this promise, the use of scintillators in space remains constrained by the volume, mass, power, and fragility of the associated light readout device, typically a vacuum photomultiplier tube (PMT). In recent years, silicon photomultipliers (SiPMs) have emerged as promising alternative light readout devices that offer gains and quantum efficiencies similar to those of PMTs, but with greatly reduced mass and volume, high ruggedness, low voltage requirements, and no sensitivity to magnetic fields. In order for SiPMs to replace PMTs in space-based instruments, however, it must be shown that they can provide comparable performance, and that their inherent temperature sensitivity can be corrected for. To this end, we have performed extensive testing and modeling of a small gamma-ray spectrometer composed of a 6 mm × 6 mm SiPM coupled to a 6 mm × 6 mm × 10 mm LaBr3:Ce crystal. A custom readout board monitors the temperature and adjusts the bias voltage to compensate for gain variations. We record an energy resolution of 5.7% (FWHM) at 662 keV at room temperature. We have also performed simulations of the scintillation process and optical light collection using Geant4, and of the SiPM response using the GosSiP package. The simulated energy resolution is in good agreement with the data from 22 keV to 662 keV. Above ∼1 MeV, however, the measured energy resolution is
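The gain-stabilization scheme mentioned in the abstracts above (a readout board adjusting the bias against temperature drift) can be sketched as follows. An SiPM's breakdown voltage rises roughly linearly with temperature, so holding the overvoltage constant holds the gain constant. The breakdown voltage, temperature coefficient, and overvoltage below are assumed illustrative values, not the instrument's.

```python
# Sketch of SiPM bias compensation: keep the overvoltage (bias minus
# breakdown voltage) constant as temperature changes, so the gain stays flat.
V_BD_25C = 24.5      # assumed breakdown voltage at 25 degC, volts
DVDT = 0.021         # assumed temperature coefficient, volts per degC
OVERVOLTAGE = 2.5    # desired constant overvoltage, volts

def bias_for_temperature(temp_c):
    """Bias voltage that keeps the overvoltage (and hence gain) constant."""
    v_bd = V_BD_25C + DVDT * (temp_c - 25.0)
    return v_bd + OVERVOLTAGE

print(bias_for_temperature(25.0))  # 27.0 at the reference temperature
```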
11. Development of a low-cost-high-sensitivity Compton camera using CsI (Tl) scintillators (γI)
Energy Technology Data Exchange (ETDEWEB)
Kagaya, M., E-mail: 13nd401n@vc.ibaraki.ac.jp [College of Science, Ibaraki University, 2-1-1 Bunkyo, Mito City, Ibaraki 310-8512 (Japan); Open-It consortium (Japan); Katagiri, H. [College of Science, Ibaraki University, 2-1-1 Bunkyo, Mito City, Ibaraki 310-8512 (Japan); Open-It consortium (Japan); Enomoto, R. [Institute for Cosmic Ray Research, University of Tokyo, 5-1-5 Kashiwa-no-Ha, Kashiwa City, Chiba 277-8582 (Japan); Open-It consortium (Japan); Hanafusa, R.; Hosokawa, M.; Itoh, Y. [Fuji Electric, 1 Fujimachi, Hino City, Tokyo 191-8502 (Japan); Muraishi, H. [School of Allied Health Science, Kitasato University, 1-15-1 Kitasato, Minami-ku, Sagamihara City, Kanagawa 252-0373 (Japan); Open-It consortium (Japan); Nakayama, K. [College of Science, Ibaraki University, 2-1-1 Bunkyo, Mito City, Ibaraki 310-8512 (Japan); Open-It consortium (Japan); Satoh, K. [Shinsei Corporation, 4-9-1 Nihonbashi-honcho, Chuo-ku, Tokyo 103-0023 (Japan); Takeda, T. [School of Allied Health Science, Kitasato University, 1-15-1 Kitasato, Minami-ku, Sagamihara City, Kanagawa 252-0373 (Japan); Tanaka, M.M.; Uchida, T. [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba City, Ibaraki 305-0801 (Japan); Open-It consortium (Japan); Watanabe, T. [School of Allied Health Science, Kitasato University, 1-15-1 Kitasato, Minami-ku, Sagamihara City, Kanagawa 252-0373 (Japan); Open-It consortium (Japan); Yanagita, S.; Yoshida, T.; Umehara, K. [College of Science, Ibaraki University, 2-1-1 Bunkyo, Mito City, Ibaraki 310-8512 (Japan); Open-It consortium (Japan)
2015-12-21
We have developed a novel low-cost gamma-ray imaging Compton camera γI that has a high detection efficiency. Our motivation for the development of this detector was to measure the arrival directions of gamma rays produced by radioactive nuclides that were released by the Fukushima Daiichi nuclear power plant accident in 2011. The detector comprises two arrays of inorganic scintillation detectors, which act as a scatterer and an absorber. Each array has eight scintillation detectors, each comprising a large CsI (Tl) scintillator cube of side 3.5 cm, which is inexpensive and has a good energy resolution. Energies deposited by the Compton-scattered electrons and subsequent photoelectric absorption, measured by each scintillation counter, are used for image reconstruction. The angular resolution was found to be 3.5° after using an image-sharpening technique. With this angular resolution, we can resolve a 1 m² radiation hot spot located at a distance of 10 m from the detector with a wide field of view of 1 sr. Moreover, the detection efficiency of 0.68 cps/MBq at 1 m for 662 keV (7.6 cps/μSv/h) is sufficient for measuring low-level contamination (i.e., less than 1 μSv/h) corresponding to typical values in large areas of eastern Japan. In addition to the laboratory tests, the imaging capability of our detector was verified in various regions with dose rates less than 1 μSv/h (e.g., Fukushima city).
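The image reconstruction described above relies on standard Compton kinematics: the two deposited energies (in the scatterer and the absorber) fix the scattering angle, which defines a cone of possible arrival directions. A minimal sketch, with illustrative energy values rather than the camera's measured data:

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (degrees) of the Compton cone from the energy deposited
    in the scatterer and the energy absorbed downstream, assuming the photon
    is fully absorbed. Returns None for kinematically forbidden pairs."""
    e0 = e_scatter_kev + e_absorb_kev  # reconstructed incident energy
    cos_theta = 1.0 - ME_C2 * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        return None  # inconsistent energies: reject the event
    return math.degrees(math.acos(cos_theta))

# Example: a 662 keV photon depositing 200 keV in the scatterer
angle = compton_cone_angle(200.0, 462.0)
```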
12. Characterization of the Ionospheric Scintillations at High Latitude using GPS Signal
Science.gov (United States)
Mezaoui, H.; Hamza, A. M.; Jayachandran, P. T.
2013-12-01
Transionospheric radio signals experience both amplitude and phase variations as a result of propagation through a turbulent ionosphere; this phenomenon is known as ionospheric scintillation. As a result of these fluctuations, Global Positioning System (GPS) receivers lose track of signals and consequently suffer position and navigational errors. There is therefore a need to study these scintillations and their causes, not only to resolve the navigational problem but also to develop analytical and numerical radio propagation models. To quantify and qualify these scintillations, we analyze the probability distribution functions (PDFs) of L1 GPS signals at a 50 Hz sampling rate using Canadian High Arctic Ionospheric Network (CHAIN) measurements. The raw GPS signal is detrended using a wavelet-based technique, and the detrended amplitude and phase of the signal are used to construct probability distribution functions (PDFs) of the scintillating signal. The resulting PDFs are non-Gaussian. From the PDF functional fits, the moments are estimated. The results reveal a general non-trivial parabolic relationship between the normalized fourth and third moments for both the phase and amplitude of the signal. The calculated higher-order moments of the amplitude and phase distribution functions will help quantify some of the scintillation characteristics and in the process provide a basis for forecasting, i.e., developing a scintillation climatology model. This statistical analysis, including power spectra, along with a numerical simulation will constitute the backbone of a high-latitude scintillation model.
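The statistics described above can be sketched concretely: from detrended amplitude and phase samples one computes the S4 and σφ scintillation indices and the normalized third/fourth moments (skewness, kurtosis) of the fluctuations. Synthetic Gaussian data stand in here for real 50 Hz CHAIN receiver output, and the fluctuation levels are assumed values.

```python
import math
import random

random.seed(0)
N = 5000
# Detrended signal power and phase (synthetic stand-ins for receiver data):
intensity = [1.0 + 0.2 * random.gauss(0, 1) for _ in range(N)]
phase = [0.1 * random.gauss(0, 1) for _ in range(N)]  # radians

def moments(x):
    """Mean, variance, and normalized 3rd/4th central moments of a sample."""
    m = sum(x) / len(x)
    d = [v - m for v in x]
    var = sum(v * v for v in d) / len(x)
    s = math.sqrt(var)
    skew = sum(v**3 for v in d) / len(x) / s**3
    kurt = sum(v**4 for v in d) / len(x) / s**4
    return m, var, skew, kurt

mi, vi, _, _ = moments(intensity)
s4 = math.sqrt(vi) / mi          # amplitude scintillation index S4
_, vp, skew_p, kurt_p = moments(phase)
sigma_phi = math.sqrt(vp)        # phase scintillation index, rad
```

For Gaussian input the kurtosis sits near 3; the non-Gaussian PDFs reported in the abstract would show the parabolic kurtosis-versus-skewness relationship instead.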
13. Design and test of a scintillation dosimeter for dosimetry measurements of high energy radiotherapy beams
International Nuclear Information System (INIS)
Fontbonne, J.M.
2002-12-01
This work describes the design and evaluation of the performance of a scintillation dosimeter developed for the dosimetry of radiation beams used in radiotherapy. The dosimeter consists of a small plastic scintillator producing light, which is guided by means of a plastic optical fiber towards photodetectors. In addition to scintillation, high-energy ionizing radiation produces Cerenkov light both in the scintillator and in the optical fiber. Based on a wavelength analysis, we have developed a deconvolution technique to measure the scintillation light in the presence of Cerenkov light. We stress the advantages anticipated from plastic scintillators, in particular concerning tissue or water equivalence (mass stopping power, mass attenuation, or mass energy absorption coefficients). We show that detectors based on this material have better characteristics than conventional dosimeters such as ionisation chambers or silicon detectors. The deconvolution technique is presented, as well as the calibration procedure using an ionisation chamber. We have studied the uncertainty of our dosimeter. The electronics noise, the fiber transmission, the deconvolution technique, and the calibration errors give an overall combined experimental uncertainty of about 0.5%. The absolute response of the dosimeter is studied by means of depth-dose measurements. We show that the absolute uncertainty with photon or electron beams with energies ranging from 4 MeV to 25 MeV is less than ±1%. Lastly, unlike other devices, our scintillation dosimeter does not need dose correction with depth. (author)
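The wavelength-based deconvolution mentioned above can be sketched as a two-channel linear unmixing: the signals measured in two spectral bands are linear combinations of the scintillation and Cerenkov contributions, so with mixing coefficients known from calibration a 2×2 solve separates them. The coefficient values here are made up for illustration, not the dosimeter's actual calibration.

```python
# Calibration matrix: rows = spectral channels, cols = (scintillation, Cerenkov).
# Each entry is the response of a channel to a unit amount of that component.
A = [[1.00, 0.35],
     [0.10, 0.80]]

def unmix(ch1, ch2):
    """Recover (scintillation, cerenkov) amplitudes via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    scint = (ch1 * A[1][1] - A[0][1] * ch2) / det
    cer = (A[0][0] * ch2 - ch1 * A[1][0]) / det
    return scint, cer

# Forward-simulate channel readings from known components, then recover them:
true_scint, true_cer = 5.0, 2.0
ch1 = A[0][0] * true_scint + A[0][1] * true_cer
ch2 = A[1][0] * true_scint + A[1][1] * true_cer
scint, cer = unmix(ch1, ch2)
```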
14. Scintillation and ionization yields produced by α-particles in high-density gaseous xenon
International Nuclear Information System (INIS)
Kusano, H.; Ishikawa, T.; Lopes, J.A.M.; Miyajima, M.; Shibamura, E.; Hasebe, N.
2012-01-01
The average numbers of scintillation photons and liberated electrons produced by 5.49-MeV α-particles were measured in high-density gaseous xenon. The density range is 0.12–1.32 g/cm3 for scintillation measurements at zero electric field, and 0.12–1.03 g/cm3 for the scintillation and ionization measurements under various electric fields. The density dependence of the scintillation yield at zero electric field was observed. The Ws-value, which is defined as the average energy expended per photon, increases with density and becomes almost constant in the density range above 1.0 g/cm3. Anti-correlations between the average numbers of scintillation photons and liberated electrons were found to vary with density. It was also found that the total number of scintillation photons and liberated electrons decreases with increasing density. Several possible reasons for the variation in scintillation and ionization yields with density are discussed.
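As a quick numeric illustration of the Ws-value defined above (average energy expended per scintillation photon, Ws = E / N_ph), a hypothetical photon count converts directly; the photon number below is an assumed round figure, not a measured yield from the paper.

```python
# Ws-value sketch: energy of the 5.49 MeV alpha divided by the number of
# scintillation photons it produced.
E_ALPHA_EV = 5.49e6  # 5.49 MeV alpha energy, in eV

def ws_value(n_photons):
    """Average energy expended per scintillation photon, in eV."""
    return E_ALPHA_EV / n_photons

# e.g. an assumed yield of 150,000 photons gives Ws = 36.6 eV per photon;
# a Ws that rises with density means fewer photons per deposited MeV.
ws = ws_value(150_000)
```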
15. Scintillation Counters
Science.gov (United States)
Bell, Zane W.
Scintillators find wide use in radiation detection as the detecting medium for gamma/X-rays and for charged and neutral particles. Since Roentgen's first notice in 1895 of the production of light by X-rays on a barium platinocyanide screen, and Thomas Edison's work over the following two years that resulted in the discovery of calcium tungstate as a superior fluoroscopy screen, much research and experimentation have been undertaken to discover and elucidate the properties of new scintillators. Scintillators with high density and high atomic number are prized for the detection of gamma rays above 1 MeV; lower atomic number, lower-density materials find use for detecting beta particles and heavy charged particles; hydrogenous scintillators find use in fast-neutron detection; and boron-, lithium-, and gadolinium-containing scintillators are used for slow-neutron detection. This chapter provides the practitioner with an overview of the general characteristics of scintillators, including the variation of interaction probability with density and atomic number, the characteristics of the light pulse, a list and characteristics of commonly available scintillators and their approximate cost, and recommendations regarding the choice of material for a few specific applications. This chapter does not pretend to present an exhaustive list of scintillators and applications.
16. The high resolution spaghetti hadron calorimeter
International Nuclear Information System (INIS)
Jenni, P.; Sonderegger, P.; Paar, H.P.; Wigmans, R.
1987-01-01
It is proposed to build a prototype for a hadron calorimeter with scintillating plastic fibres as the active material. The absorber material is lead. Provided that these components are used in the appropriate volume ratio, excellent performance may be expected, e.g. an energy resolution of 30%/√E for jet detection. The proposed design offers additional advantages over classical sandwich calorimeter structures in terms of granularity, hermeticity, uniformity, compactness, readout, radiation resistance, stability and calibration. 22 refs.; 7 figs
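A quick numeric reading of the quoted stochastic resolution: σ/E = 30%/√E means the fractional resolution shrinks as the square root of the jet energy. The scaling law is from the abstract; the evaluation energies are examples.

```python
import math

def frac_resolution(e_gev, stochastic=0.30):
    """Fractional energy resolution sigma/E for a purely stochastic term,
    sigma/E = stochastic / sqrt(E), with E in GeV."""
    return stochastic / math.sqrt(e_gev)

# A 100 GeV jet: sigma/E = 3%, i.e. sigma = 3 GeV.
res_100 = frac_resolution(100.0)
sigma_100 = res_100 * 100.0
```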
17. GPS phase scintillation at high latitudes during the geomagnetic storm of 17-18 March 2015
DEFF Research Database (Denmark)
Prikryl, P.; Ghoddousi-Fard, R.; Weygand, J. M.
2016-01-01
The geomagnetic storm of 17–18 March 2015 was caused by the impacts of a coronal mass ejection and a high-speed plasma stream from a coronal hole. The high-latitude ionosphere dynamics is studied using arrays of ground-based instruments including GPS receivers, HF radars, ionosondes, riometers, and magnetometers. The phase scintillation index is computed for signals sampled at a rate of up to 100 Hz by specialized GPS scintillation receivers supplemented by the phase scintillation proxy index obtained from geodetic-quality GPS data sampled at 1 Hz. In the context of solar wind coupling to the magnetosphere-ionosphere system, it is shown that GPS phase scintillation is primarily enhanced in the cusp, the tongue of ionization that is broken into patches drawn into the polar cap from the dayside storm-enhanced plasma density, and in the auroral oval. In this paper we examine the relation between...
18. Toward the Probabilistic Forecasting of High-latitude GPS Phase Scintillation
Science.gov (United States)
Prikryl, P.; Jayachandran, P.T.; Mushini, S. C.; Richardson, I. G.
2012-01-01
The phase scintillation index was obtained from L1 GPS data collected with the Canadian High Arctic Ionospheric Network (CHAIN) during years of extended solar minimum 2008-2010. Phase scintillation occurs predominantly on the dayside in the cusp and in the nightside auroral oval. We set forth a probabilistic forecast method of phase scintillation in the cusp based on the arrival time of either solar wind corotating interaction regions (CIRs) or interplanetary coronal mass ejections (ICMEs). CIRs on the leading edge of high-speed streams (HSS) from coronal holes are known to cause recurrent geomagnetic and ionospheric disturbances that can be forecast one or several solar rotations in advance. Superposed epoch analysis of phase scintillation occurrence showed a sharp increase in scintillation occurrence just after the arrival of high-speed solar wind and a peak associated with weak to moderate CMEs during the solar minimum. Cumulative probability distribution functions for the phase scintillation occurrence in the cusp are obtained from statistical data for days before and after CIR and ICME arrivals. The probability curves are also specified for low and high (below and above median) values of various solar wind plasma parameters. The initial results are used to demonstrate a forecasting technique on two example periods of CIRs and ICMEs.
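The superposed epoch analysis described above can be sketched as a stack-and-average over windows aligned on event arrival times; the function below is a generic illustration of the technique, not the authors' code:

```python
import numpy as np

def superposed_epoch(series, epoch_indices, before, after):
    """Average a 1-D time series over windows aligned on key times
    (e.g. sample indices of CIR/ICME arrivals), keeping only events
    whose full window lies inside the record."""
    windows = [series[i - before:i + after]
               for i in epoch_indices
               if i - before >= 0 and i + after <= len(series)]
    return np.mean(windows, axis=0)
```

Plotted against epoch time, the averaged curve reveals systematic responses (such as the sharp rise in scintillation occurrence after high-speed stream arrival) that are buried in any single event.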
19. Plastic scintillation dosimetry: Optimal selection of scintillating fibers and scintillators
International Nuclear Information System (INIS)
Archambault, Louis; Arsenault, Jean; Gingras, Luc; Sam Beddar, A.; Roy, Rene; Beaulieu, Luc
2005-01-01
Scintillation dosimetry is a promising avenue for evaluating dose patterns delivered by intensity-modulated radiation therapy plans or for the small fields involved in stereotactic radiosurgery. However, maximizing the collected signal has been the goal of many authors. In this paper, a comparison is made between plastic scintillating fibers and plastic scintillators. The collection of scintillation light was measured experimentally for four commercial models of scintillating fibers (BCF-12, BCF-60, SCSF-78, SCSF-3HF) and two models of plastic scintillators (BC-400, BC-408). The emission spectra of all six scintillators were obtained by using an optical spectrum analyzer and they were compared with theoretical behavior. For scintillation in the blue region, the signal intensity of a singly clad scintillating fiber (BCF-12) was 120% of that of the plastic scintillator (BC-400). For the multiclad fiber (SCSF-78), the signal reached 144% of that of the plastic scintillator. The intensity of the green scintillating fibers was lower than that of the plastic scintillator: 47% for the singly clad fiber (BCF-60) and 77% for the multiclad fiber (SCSF-3HF). The collected light was studied as a function of the scintillator length and radius for a cylindrical probe. We found that symmetric detectors with nearly the same spatial resolution in each direction (2 mm in diameter by 3 mm in length) could be made with a signal equivalent to those of the more commonly used asymmetric scintillators. With improvement of the signal-to-noise ratio in mind, this paper presents a series of comparisons that should provide insight into selection of a scintillator type and volume for development of a medical dosimeter
20. High resolution metric imaging payload
Science.gov (United States)
Delclaud, Y.
2017-11-01
Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of Earth observation systems able to provide military and government users with metric images from space. This leadership has allowed Alcatel to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.
1. Development of high-resolution detector module with depth of interaction identification for positron emission tomography
International Nuclear Information System (INIS)
Niknejad, Tahereh; Pizzichemi, Marco; Stringhini, Gianluca; Auffray, Etiennette; Bugalho, Ricardo; Da Silva, Jose Carlos; Di Francesco, Agostino; Ferramacho, Luis; Lecoq, Paul; Leong, Carlos; Paganoni, Marco; Rolo, Manuel; Silva, Rui; Silveira, Miguel; Tavernier, Stefaan; Varela, Joao; Zorraquino, Carlos
2017-01-01
We have developed a time-of-flight, high-resolution, and commercially viable detector module for application in small PET scanners. A new approach to depth of interaction (DOI) encoding with low complexity for a pixelated crystal array using a single-side readout and 4-to-1 coupling between scintillators and photodetectors was investigated. In this method the DOI information is estimated using the light sharing technique. The detector module is an 8×8 matrix of 1.53×1.53×15 mm³ LYSO scintillators with lateral surfaces optically depolished, separated by reflective foils. The crystal array is optically coupled to a 4×4 silicon photomultiplier (SiPM) array and read out by a high-performance front-end ASIC with TDC capability (50 ps time binning). The results show an excellent crystal identification for all the scintillators in the matrix, a timing resolution of 530 ps, an average DOI resolution of 5.17 mm FWHM and an average energy resolution of 18.29% FWHM. - Highlights: • A new method for DOI encoding for PET detectors based on light sharing is proposed. • A prototype module with LYSO scintillator matrix coupled to SiPMs array is produced. • The module has one side readout and 4-to-1 coupling between scintillators and SiPMs. • A compact TOF front-end ASIC is used. • Excellent performances are shown by the prototype module.
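The light-sharing DOI estimate can be illustrated schematically: deeper interactions spread scintillation light over more SiPMs, so the fraction collected by the hottest pixel encodes depth. The estimator and calibration constants below are hypothetical, a sketch of the general technique rather than the module's actual algorithm:

```python
def doi_from_light_sharing(sipm_charges, slope, intercept):
    """Depth-of-interaction estimate from the light-sharing ratio
    w = (charge on the hottest SiPM) / (total charge on the array).
    slope and intercept are hypothetical per-crystal calibration
    constants mapping the dimensionless ratio w to a depth in mm."""
    total = sum(sipm_charges)
    w = max(sipm_charges) / total
    return slope * w + intercept
```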
2. Development of high-resolution detector module with depth of interaction identification for positron emission tomography
Energy Technology Data Exchange (ETDEWEB)
Niknejad, Tahereh, E-mail: tniknejad@lip.pt [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); Pizzichemi, Marco [University of Milano-Bicocca (Italy); Stringhini, Gianluca [University of Milano-Bicocca (Italy); CERN, Geneve (Switzerland); Auffray, Etiennette [CERN, Geneve (Switzerland); Bugalho, Ricardo; Da Silva, Jose Carlos; Di Francesco, Agostino [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); Ferramacho, Luis [PETsys Electronics, Oeiras (Portugal); Lecoq, Paul [CERN, Geneve (Switzerland); Leong, Carlos [PETsys Electronics, Oeiras (Portugal); Paganoni, Marco [University of Milano-Bicocca (Italy); Rolo, Manuel [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); INFN, Turin (Italy); Silva, Rui [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); Silveira, Miguel [PETsys Electronics, Oeiras (Portugal); Tavernier, Stefaan [PETsys Electronics, Oeiras (Portugal); Vrije Universiteit Brussel (Belgium); Varela, Joao [Laboratory of Instrumentation and Experimental Particles Physics, Lisbon (Portugal); CERN, Geneve (Switzerland); Zorraquino, Carlos [Biomedical Image Technologies Lab, Universidad Politécnica de Madrid (Spain); CIBER-BBN, Universidad Politécnica de Madrid (Spain)
2017-02-11
We have developed a time-of-flight, high-resolution, and commercially viable detector module for application in small PET scanners. A new approach to depth of interaction (DOI) encoding with low complexity for a pixelated crystal array using a single-side readout and 4-to-1 coupling between scintillators and photodetectors was investigated. In this method the DOI information is estimated using the light sharing technique. The detector module is an 8×8 matrix of 1.53×1.53×15 mm³ LYSO scintillators with lateral surfaces optically depolished, separated by reflective foils. The crystal array is optically coupled to a 4×4 silicon photomultiplier (SiPM) array and read out by a high-performance front-end ASIC with TDC capability (50 ps time binning). The results show an excellent crystal identification for all the scintillators in the matrix, a timing resolution of 530 ps, an average DOI resolution of 5.17 mm FWHM and an average energy resolution of 18.29% FWHM. - Highlights: • A new method for DOI encoding for PET detectors based on light sharing is proposed. • A prototype module with LYSO scintillator matrix coupled to SiPMs array is produced. • The module has one side readout and 4-to-1 coupling between scintillators and SiPMs. • A compact TOF front-end ASIC is used. • Excellent performances are shown by the prototype module.
3. Berkeley High-Resolution Ball
International Nuclear Information System (INIS)
Diamond, R.M.
1984-10-01
Criteria for a high-resolution γ-ray system are discussed. Desirable properties are high resolution, good response function, and moderate solid angle so as to achieve not only double- but triple-coincidences with good statistics. The Berkeley High-Resolution Ball involved the first use of bismuth germanate (BGO) for anti-Compton shields for Ge detectors. The resulting compact shield permitted rather close packing of 21 detectors around a target. In addition, a small central BGO ball gives the total γ-ray energy and multiplicity, as well as the angular pattern of the γ rays. The 21-detector array is nearly complete, and the central ball has been designed, but not yet constructed. First results taken with 9 detector modules are shown for the nucleus ¹⁵⁶Er. The complex decay scheme indicates a transition from collective rotation (prolate shape) to single-particle states (possibly oblate) near spin 30ℏ, and has other interesting features
4. Plastic scintillators with high loading of one or more metal carboxylates
Science.gov (United States)
Cherepy, Nerine; Sanner, Robert Dean
2016-01-12
In one embodiment, a material includes at least one metal compound incorporated into a polymeric matrix, where the metal compound includes a metal and one or more carboxylate ligands, where at least one of the one or more carboxylate ligands includes a tertiary butyl group, and where the material is optically transparent. In another embodiment, a method includes: processing pulse traces corresponding to light pulses from a scintillator material; and outputting a result of the processing, where the scintillator material comprises at least one metal compound incorporated into a polymeric matrix, the at least one metal compound including a metal and one or more carboxylate ligands, where at least one of the one or more carboxylate ligands has a tertiary butyl group, and where the scintillator material is optically transparent and has an energy resolution at 662 keV of less than about 20%.
5. R&D proposal to DRDC fast EM calorimeter with excellent photon angular resolution and energy resolution using scintillating noble liquids
CERN Document Server
Chen, M; Sumorok, K; Zhang, X; Gaudreau, M P J; Akimov, D Y; Bolozdynya, A I; Churakov, D; Chernyshov, V; Koutchenkov, A; Kovalenko, A; Kuzichev, V F; Lamkov, V A; Lebedenko, V; Gusev, L; Safronov, G A; Sheinkman, V A; Smirnov, G; Krasnokutsky, R N; Shuvalov, R S; Fedyakin, N N; Sushkov, V V; Akopyan, M V; Gougas, Andreas; Pevsner, A; CERN. Geneva. Detector Research and Development Committee
1993-01-01
Recent test beam data have shown fast and large signals for LKr mixed with >1% LXe. Excellent uniformity in LKr and LXe was achieved over a 37 cm long cell. A CsI cathode works well inside LKr/LXe with O(1%) resolution at 5 MeV. Precision calibration in situ has been demonstrated. Scintillating LKr/LXe detectors are sufficiently radiation hard for the LHC environment. These new developments simplify the construction of a prototype LKr calorimeter, to demonstrate the superior e/γ energy resolution and the determination of photon direction using longitudinal and transverse segmentations, which are vital for the detection of multi-photon states. The constant term in the energy resolution is small, and the electronics noise is negligible due to the large signal size. The overall pion/electron suppression is expected to be better than 10⁻⁴.
6. High-resolution EM-CCD scintillation gamma cameras
NARCIS (Netherlands)
Korevaar, M.A.N.
2013-01-01
The development of medical imaging techniques has dramatically changed clinical practice and biomedical science in the 20th century. Nuclear Medicine imaging techniques reveal the function of organs and tissues in vivo with the aid of radioactively labeled tracer molecules. These techniques, such as
7. Test beam studies of the light yield, time and coordinate resolutions of scintillator strips with WLS fibers and SiPM readout
Energy Technology Data Exchange (ETDEWEB)
Denisov, Dmitri [Fermilab, Batavia IL (United States); Evdokimov, Valery [Institute for High Energy Physics, Protvino (Russian Federation); Lukić, Strahinja; Ujić, Predrag [Vinča Institute, University of Belgrade (Serbia)
2017-03-11
Prototype scintillator+WLS strips with SiPM readout for large muon detection systems were tested in the muon beam of the Fermilab Test Beam Facility. A light yield of up to 137 photoelectrons per muon per strip has been observed, as well as a time resolution of 330 ps and a position resolution along the strip of 5.4 cm.
8. Laser micromachining of cadmium tungstate scintillator for high energy X-ray imaging
Science.gov (United States)
Richards, Sion Andreas
Pulsed laser ablation has been investigated as a method for the creation of thick segmented scintillator arrays for high-energy X-ray radiography. Thick scintillators are needed to improve the X-ray absorption at high energies, while segmentation is required for spatial resolution. Monte-Carlo simulations predicted that reflections at the inter-segment walls were the greatest source of loss of scintillation photons. As a result of this, fine-pitched arrays would be inefficient as the number of reflections would be significantly higher than in large-pitch arrays. Nanosecond and femtosecond pulsed laser ablation were investigated as methods to segment cadmium tungstate (CdWO₄). The effect of laser parameters on the ablation mechanisms, laser-induced material changes and debris produced was investigated using optical and electron microscopy, energy dispersive X-ray spectroscopy and X-ray photoelectron spectroscopy for both types of lasers. It was determined that nanosecond ablation was unsuitable due to the large amount of cracking and a heat-affected zone created during the ablation process. Femtosecond pulsed laser ablation was found to induce less damage. The optimised laser parameters for a 1028 nm laser were found to be a pulse energy of 54 μJ corresponding to a fluence of 5.3 J cm⁻², a pulse duration of 190 fs, a repetition rate of 78.3 kHz and a laser scan speed of 707 mm s⁻¹, achieving a normalised pulse overlap of 0.8. A serpentine scan pattern was found to minimise damage caused by anisotropic thermal expansion. Femtosecond pulsed ablation was also found to create a layer of tungsten and cadmium sub-oxides on the surface of the crystals. The CdWO₄ could be cleaned by immersing it in ammonium hydroxide at 45 °C for 15 minutes. However, XPS indicated that the ammonium hydroxide formed a thin layer of CdCO₃ and Cd(OH)₂ on the surface. Prototype arrays were shown to be able to resolve features as small as 0.5 mm using keV energy X-rays. The most
9. Liquid scintillation solution
International Nuclear Information System (INIS)
Long, E.C.
1976-01-01
The invention deals with a liquid scintillation solution which contains 1) a scintillation solvent (toluene), 2) a primary scintillation solute (PPO), 3) a secondary scintillation solute (dimethyl-POPOP), 4) several surfactants (iso-octyl-phenol polyethoxy-ethanol and sodium di-hexyl sulfosuccinate) essentially different from one another and 5) a filter-dissolving and/or clarifying agent (a cyclic ether, especially tetrahydrofuran). (HP) [de]
10. High spatial resolution gamma imaging detector based on a 5 inch diameter R3292 Hamamatsu PSPMT
International Nuclear Information System (INIS)
Wojcik, R.; Majewski, S.; Kross, B.; Weisenberger, A.G.; Steinbach, D.
1998-01-01
High resolution imaging gamma-ray detectors were developed using Hamamatsu's 5 inch diameter R3292 position sensitive PMT (PSPMT) and a variety of crystal scintillator arrays. Special readout techniques were used to maximize the active imaging area while reducing the number of readout channels. Spatial resolutions approaching 1 mm were obtained in a broad energy range from 20 to 511 keV. Results are also presented of coupling the scintillator arrays to the PMT via imaging light guides consisting of acrylic optical fibers
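Position reconstruction with a PSPMT of this kind is typically based on Anger (centroid) logic; the one-axis sketch below illustrates the general principle, not the special readout scheme of the paper:

```python
def anger_centroid(positions, charges):
    """Charge-weighted centroid ('Anger logic') estimate of the
    interaction position along one axis of a position-sensitive
    PMT readout. positions: anode coordinates; charges: the
    corresponding collected charges."""
    total = sum(charges)
    return sum(x * q for x, q in zip(positions, charges)) / total
```

In practice the crystal-array geometry is recovered from a flood image of such centroids, with a lookup table mapping centroid clusters back to individual scintillator pixels.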
11. Requirements on high resolution detectors
Energy Technology Data Exchange (ETDEWEB)
Koch, A. [European Synchrotron Radiation Facility, Grenoble (France)
1997-02-01
For a number of microtomography applications X-ray detectors with a spatial resolution of 1 µm are required. This high spatial resolution will influence and degrade other parameters of secondary importance like detective quantum efficiency (DQE), dynamic range, linearity and frame rate. This note summarizes the most important arguments for and against the detector systems that could be considered. This article discusses the mutual dependencies between the various figures which characterize a detector, and tries to give some ideas on how to proceed in order to improve present technology.
12. Scintillation counter and wire chamber front end modules for high energy physics experiments
International Nuclear Information System (INIS)
Baldin, Boris; DalMonte, Lou
2011-01-01
This document describes two front-end modules developed for the proposed MIPP upgrade (P-960) experiment at Fermilab. The scintillation counter module was developed for the Plastic Ball detector time and charge measurements. The module has eight LEMO 00 input connectors terminated with 50 ohms and accepts negative photomultiplier signals in the range 0.25...1000 pC with a maximum input voltage of 4.0 V. Each input has a passive splitter with integration and differentiation times of ∼20 ns. The integrated portion of the signal is digitized at 26.55 MHz by an Analog Devices AD9229 12-bit pipelined 4-channel ADC. The differentiated signal is discriminated for time measurement and sent to one of the four TMC304 inputs. The 4-channel TMC304 chip allows high precision time measurement of rising and falling edges with ∼100 ps resolution and has an internal digital pipeline. The ADC data is also pipelined, which allows deadtime-less operation with trigger decision times of ∼4 µs. The wire chamber module was developed for MIPP EMCal detector charge measurements. The 32-channel digitizer accepts differential analog signals from four 8-channel integrating wire amplifiers. The connection between wire amplifier and digitizer is provided via a 26-wire twist-n-flat cable. The wire amplifier integrates input wire current and has a sensitivity of 275 mV/pC and a noise level of ∼0.013 pC. The digitizer uses the same 12-bit AD9229 ADC chip as the scintillation counter module. The wire amplifier has a built-in test pulser with a mask register to provide testing of the individual channels. Both modules are implemented as 6U×220 mm VME-size boards with a 48-pin power connector. A custom Europack (VME) 21-slot crate was developed for housing these front-end modules.
13. High resolution optical DNA mapping
Science.gov (United States)
Many types of diseases including cancer and autism are associated with copy-number variations in the genome. Most of these variations could not be identified with existing sequencing and optical DNA mapping methods. We have developed a multi-color super-resolution technique, with potential for high throughput and low cost, which can allow us to recognize more of these variations. Our technique has made a 10-fold improvement in the resolution of optical DNA mapping. Using a 180 kb BAC clone as a model system, we resolved dense patterns from 108 fluorescent labels of two different colors representing two different sequence-motifs. Overall, a detailed DNA map with 100 bp resolution was achieved, which has the potential to reveal detailed information about genetic variance and to facilitate medical diagnosis of genetic disease.
14. High angular resolution at LBT
Science.gov (United States)
Conrad, A.; Arcidiacono, C.; Bertero, M.; Boccacci, P.; Davies, A. G.; Defrere, D.; de Kleer, K.; De Pater, I.; Hinz, P.; Hofmann, K. H.; La Camera, A.; Leisenring, J.; Kürster, M.; Rathbun, J. A.; Schertl, D.; Skemer, A.; Skrutskie, M.; Spencer, J. R.; Veillet, C.; Weigelt, G.; Woodward, C. E.
2015-12-01
High angular resolution from ground-based observatories stands as a key technology for advancing planetary science. In the window between the angular resolution achievable with 8-10 meter class telescopes, and the 23-to-40 meter giants of the future, LBT provides a glimpse of what the next generation of instruments providing higher angular resolution will provide. We present first ever resolved images of an Io eruption site taken from the ground, images of Io's Loki Patera taken with Fizeau imaging at the 22.8 meter LBT [Conrad, et al., AJ, 2015]. We will also present preliminary analysis of two data sets acquired during the 2015 opposition: L-band fringes at Kurdalagon and an occultation of Loki and Pele by Europa (see figure). The light curves from this occultation will yield an order of magnitude improvement in spatial resolution along the path of ingress and egress. We will conclude by providing an overview of the overall benefit of recent and future advances in angular resolution for planetary science.
15. Intercalibration of the ZEUS high resolution and backing calorimeters
International Nuclear Information System (INIS)
Abramowicz, H.; Czyrkowski, H.; Derlicki, A.; Krzyzanowski, M.; Kudla, I.; Kusmierz, W.; Nowak, R.J.; Pawlak, J.M.; Rajca, A.; Stopczynski, A.; Walczak, R.; Zarnecki, A.F.; Kowalski, T.Z.
1991-07-01
We have studied the combined performance of two calorimeters, the high resolution uranium-scintillator prototype of the ZEUS forward calorimeter (FCAL), followed by a prototype of the coarser ZEUS backing calorimeter (BAC), made out of thick iron plates interleaved with planes of aluminium proportional chambers. The test results, obtained in an exposure of the calorimeter system to a hadron test beam at the CERN-SPS, show that the backing calorimeter does fulfil its role of recognizing the energy leaking out of the FCAL calorimeter. The measurement of this energy is feasible, if an appropriate calibration of the BAC calorimeter is performed. (orig.)
16. Intercalibration of the ZEUS high resolution and backing calorimeters
International Nuclear Information System (INIS)
Abramowicz, H.; Czyrkowski, H.; Derlicki, A.; Krzyzanowski, M.; Kudla, I.; Kusmierz, W.; Nowak, R.J.; Pawlak, J.M.; Rajca, A.; Stopczynski, A.; Walczak, R.; Zarnecki, A.F.; Kowalski, T.Z.
1992-01-01
We have studied the combined performance of two calorimeters, the high resolution uranium-scintillator prototype of the ZEUS forward calorimeter (FCAL), followed by a prototype of the coarser ZEUS backing calorimeter (BAC), made out of thick iron plates interleaved with planes of aluminium proportional chambers. The test results, obtained in an exposure of the calorimeter system to a hadron test beam at the CERN SPS, show that the backing calorimeter does fulfil its role of recognizing the energy leaking out of the FCAL calorimeter. The measurement of this energy is feasible, if an appropriate calibration of the BAC calorimeter is performed. (orig.)
17. Scintillating plate calorimeter optical design
International Nuclear Information System (INIS)
McNeil, R.; Fazely, A.; Gunasingha, R.; Imlay, R.; Lim, J.
1990-01-01
A major technical challenge facing the builder of a general purpose detector for the SSC is to achieve an optimum design for the calorimeter. Because of its fast response and good energy resolution, scintillating plate sampling calorimeters should be considered as a possible technology option. The work of the Scintillating Plate Calorimeter Collaboration is focused on compensating plate calorimeters. Based on experimental and simulation studies, it is expected that a sampling calorimeter with alternating layers of high-Z absorber (Pb, W, DU, etc.) and plastic scintillator can be made compensating (e/h = 1.00) by suitable choice of the ratio of absorber/scintillator thickness. Two conceptual designs have been pursued by this subsystem collaboration. One is based on lead as the absorber, with readout of the scintillator plates via wavelength shifter fibers. The other design is based on depleted uranium as the absorber with wavelength shifter (WLS) plate readout. Progress on designs for the optical readout of a compensating scintillator plate calorimeter is presented. These designs include readout of the scintillator plates via wavelength shifter plates or fiber readout. Results from radiation damage studies of the optical components are presented
18. High resolution tomographic instrument development
International Nuclear Information System (INIS)
1992-01-01
Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational
19. High resolution tomographic instrument development
Energy Technology Data Exchange (ETDEWEB)
1992-08-01
Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational.
20. High resolution tomographic instrument development
Energy Technology Data Exchange (ETDEWEB)
1992-01-01
Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational.
1. Ionization and scintillation response of high-pressure xenon gas to alpha particles
International Nuclear Information System (INIS)
Álvarez, V; Cárcel, S; Cervera, A; Díaz, J; Ferrario, P; Gil, A; Gómez-Cadenas, J J; Borges, F I G; Conde, C A N; Fernandes, L M P; Freitas, E D C; Cebrián, S; Dafni, T; Gómez, H; Egorov, M; Gehman, V M; Goldschmidt, A; Esteve, R; Evtoukhovitch, P; Ferreira, A L
2013-01-01
High-pressure xenon gas is an attractive detection medium for a variety of applications in fundamental and applied physics. In this paper we study the ionization and scintillation detection properties of xenon gas at 10 bar pressure. For this purpose, we use a source of alpha particles in the NEXT-DEMO time projection chamber, the large scale prototype of the NEXT-100 neutrinoless double beta decay experiment, in three different drift electric field configurations. We measure the ionization electron drift velocity and longitudinal diffusion, and compare our results to expectations based on available electron scattering cross sections on pure xenon. In addition, two types of measurements addressing the connection between the ionization and scintillation yields are performed. On the one hand we observe, for the first time in xenon gas, large event-by-event correlated fluctuations between the ionization and scintillation signals, similar to those already observed in liquid xenon. On the other hand, we study the field dependence of the average scintillation and ionization yields. Both types of measurements may shed light on the mechanism of electron-ion recombination in xenon gas for highly-ionizing particles. Finally, by comparing the response of alpha particles and electrons in NEXT-DEMO, we find no evidence for quenching of the primary scintillation light produced by alpha particles in the xenon gas.
2. Calculations and measurements of the scintillator-to-water stopping power ratio of liquid scintillators for use in proton radiotherapy
International Nuclear Information System (INIS)
Scott Ingram, W.; Robertson, Daniel; Beddar, Sam
2015-01-01
Liquid scintillators are a promising detector for high-resolution three-dimensional proton therapy dosimetry. Because the scintillator comprises both the active volume of the detector and the phantom material, an ideal scintillator will exhibit water equivalence in its radiological properties. One of the most fundamental of these is the scintillator’s stopping power. The objective of this study was to compare calculations and measurements of scintillator-to-water stopping power ratios to evaluate the suitability of the liquid scintillators BC-531 and OptiPhase HiSafe 3 for proton dosimetry. We also measured the relative scintillation output of the two scintillators. Both calculations and measurements show that the linear stopping power of OptiPhase is significantly closer to water than that of BC-531. BC-531 has a somewhat higher scintillation output. OptiPhase can be mixed with water at high concentrations, which further improves its scintillator-to-water stopping power ratio. However, this causes the solution to become cloudy, which has a negative impact on the scintillation output and spatial resolution of the detector. OptiPhase is preferred over BC-531 for proton dosimetry because its density and scintillator-to-water stopping power ratio are more water equivalent.
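The stopping-power-ratio calculation underlying such a comparison can be sketched with the Bethe formula for protons. The Z/A and mean-excitation-energy values below are generic assumptions for "water" and "a typical organic scintillator"; they are not measured properties of BC-531 or OptiPhase:

```python
import math

ME_C2 = 0.511e6    # electron rest energy, eV
MP_C2 = 938.272e6  # proton rest energy, eV

def beta2(kinetic_eV):
    """Squared velocity (beta^2) of a proton with the given kinetic energy."""
    gamma = 1.0 + kinetic_eV / MP_C2
    return 1.0 - 1.0 / gamma ** 2

def mass_stopping(kinetic_eV, z_over_a, i_eV):
    """Bethe mass stopping power up to a common constant (which cancels
    when taking a scintillator-to-water ratio)."""
    b2 = beta2(kinetic_eV)
    g2 = 1.0 / (1.0 - b2)
    return z_over_a / b2 * (math.log(2.0 * ME_C2 * b2 * g2 / i_eV) - b2)

# Assumed media: water Z/A ~ 0.555, I ~ 75 eV; generic organic
# scintillator Z/A ~ 0.542, I ~ 64.7 eV. 100 MeV proton.
E = 100e6
ratio = mass_stopping(E, 0.542, 64.7) / mass_stopping(E, 0.555, 75.0)
```

The mass-stopping-power ratio comes out close to unity for organic media; converting to a linear stopping-power ratio additionally multiplies by the density ratio, which is where water equivalence is usually won or lost.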
3. A new digital method for high precision neutron-gamma discrimination with liquid scintillation detectors
International Nuclear Information System (INIS)
Nakhostin, M
2013-01-01
A new pulse-shape discrimination algorithm for neutron and gamma (n/γ) discrimination with liquid scintillation detectors has been developed, leading to a considerable improvement of n/γ separation quality. The method is based on triangular pulse shaping, which offers high sensitivity to the shape of input pulses as well as excellent noise-filtering characteristics. A clear separation of neutrons and γ-rays down to a scintillation light yield of about 65 keVee (electron-equivalent energy) with a dynamic range of 45:1 was achieved. The method can potentially operate at high counting rates and is well suited for real-time measurements.
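The general idea of digital pulse-shape discrimination with a triangular shaper can be sketched as a cascaded moving-average filter followed by a charge-comparison figure of merit. This is a generic sketch with assumed pulse models and window lengths, not the paper's specific algorithm:

```python
import math

def triangular_filter(samples, k):
    """Triangular shaping as two cascaded moving averages of length k
    (the FIR equivalent of a triangular shaper)."""
    def moving_avg(x):
        out, s = [], 0.0
        for i, v in enumerate(x):
            s += v
            if i >= k:
                s -= x[i - k]
            out.append(s / k)
        return out
    return moving_avg(moving_avg(samples))

def psd_ratio(samples, tail_start):
    """Charge comparison: tail integral over total integral. Neutron-like
    pulses (larger slow scintillation component) give the larger ratio."""
    total = sum(samples)
    return sum(samples[tail_start:]) / total if total else 0.0

# Synthetic pulses (assumed decay constants, in sample units):
gamma = [math.exp(-t / 5.0) for t in range(200)]
neutron = [0.7 * math.exp(-t / 5.0) + 0.3 * math.exp(-t / 50.0)
           for t in range(200)]
r_gamma = psd_ratio(triangular_filter(gamma, 4), 20)
r_neutron = psd_ratio(triangular_filter(neutron, 4), 20)
```

On these toy pulses the neutron-like waveform yields a clearly larger tail-to-total ratio, which is the separation axis on which an n/γ discrimination cut is placed.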
4. Section on High Resolution Optical Imaging (HROI)
Data.gov (United States)
Federal Laboratory Consortium — The Section on High Resolution Optical Imaging (HROI) develops novel technologies for studying biological processes at unprecedented speed and resolution. Research...
5. High Resolution Thermometry for EXACT
Science.gov (United States)
Panek, J. S.; Nash, A. E.; Larson, M.; Mulders, N.
2000-01-01
High Resolution Thermometers (HRTs) based on SQUID detection of the magnetization of a paramagnetic salt or a metal alloy have been commonly used for sub-nanokelvin temperature resolution in low temperature physics experiments. The main applications to date have been for temperature ranges near the lambda point of He-4 (2.177 K). These thermometers made use of materials such as Cu(NH4)2Br4·2H2O, GdCl3, or PdFe. None of these materials are suitable for EXACT, which will explore the region of the He-3/He-4 tricritical point at 0.87 K. The experiment requirements and properties of several candidate paramagnetic materials will be presented, as well as preliminary test results.
6. High speed, High resolution terahertz spectrometers
International Nuclear Information System (INIS)
Kim, Youngchan; Yee, Dae Su; Yi, Miwoo; Ahn, Jaewook
2008-01-01
A variety of sources and methods have been developed for terahertz spectroscopy during almost two decades. Terahertz time domain spectroscopy (THz TDS) has attracted particular attention as a basic measurement method in the fields of THz science and technology. Recently, asynchronous optical sampling (AOS) THz TDS has been demonstrated, featuring rapid data acquisition and a high spectral resolution. Also, terahertz frequency comb spectroscopy (TFCS) possesses attractive features for high precision terahertz spectroscopy. In this presentation, we report on these two types of terahertz spectrometer. Our high speed, high resolution terahertz spectrometer is demonstrated using two mode-locked femtosecond lasers with slightly different repetition frequencies without a mechanical delay stage. The repetition frequencies of the two femtosecond lasers are stabilized by use of two phase-locked loops sharing the same reference oscillator. The time resolution of our terahertz spectrometer is measured using the cross correlation method to be 270 fs. AOS THz TDS is presented in Fig. 1, which shows a time domain waveform rapidly acquired on a 10 ns time window. The inset shows a zoom into the signal with a 100 ps time window. The spectrum obtained by the fast Fourier transformation (FFT) of the time domain waveform has a frequency resolution of 100 MHz. The dependence of the signal to noise ratio (SNR) on the measurement time is also investigated.
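The equivalent-time sampling arithmetic behind AOS is simple enough to sketch. With the assumed illustrative values below (100 MHz repetition rate, 1 kHz offset), one scan covers a 10 ns delay window in roughly 100 fs steps; the values are chosen only to show the scaling, not taken from the reported spectrometer:

```python
def aos_time_step(f_rep_hz, delta_f_hz):
    """Equivalent-time sampling step of asynchronous optical sampling:
    each successive pulse pair slips in delay by df / (f * (f + df))."""
    return delta_f_hz / (f_rep_hz * (f_rep_hz + delta_f_hz))

def aos_scan_window(f_rep_hz):
    """Full delay window covered per scan = one repetition period."""
    return 1.0 / f_rep_hz

dt = aos_time_step(100e6, 1e3)    # ~1e-13 s, i.e. ~100 fs per step
window = aos_scan_window(100e6)   # 1e-8 s, i.e. a 10 ns delay window
```

The trade-off is visible directly in these two expressions: a smaller offset frequency gives a finer time step but a slower scan rate, since one full scan takes 1/delta_f of real time.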
7. Highly efficient solid-state neutron scintillators based on hybrid sol-gel nanocomposite materials
International Nuclear Information System (INIS)
Kesanli, Banu; Hong, Kunlun; Meyer, Kent; Im, Hee-Jung; Dai, Sheng
2006-01-01
This research highlights opportunities in the formulation of neutron scintillators that not only have high scintillation efficiencies but also can be readily cast into two-dimensional detectors. A series of transparent, crack-free monoliths was prepared from hybrid polystyrene-silica nanocomposites in the presence of an arene-containing alkoxide precursor through room-temperature sol-gel processing. The monoliths also contain lithium-6 salicylate as a target material for neutron-capture reactions and an amphiphilic scintillator solution as a fluorescent sensitizer. Polystyrene was functionalized with a trimethoxysilyl group in order to enable the covalent incorporation of aromatic functional groups into the inorganic sol-gel matrices, minimizing macroscopic phase segregation and facilitating lithium-6 doping in the sol-gel samples. Neutron and alpha responses of these hybrid polystyrene-silica monoliths were explored.
8. Development of scintillation materials for PET scanners
CERN Document Server
Korzhik, Mikhail; Annenkov, Alexander N; Borissevitch, Andrei; Dossovitski, Alexei; Missevitch, Oleg; Lecoq, Paul
2007-01-01
The growing demand on PET methodology for a variety of applications, ranging from clinical use to fundamental studies, triggers research and development of PET scanners providing better spatial resolution and sensitivity. These efforts are primarily focused on the development of advanced PET detector solutions and of new scintillation materials. However, Lu-containing scintillation materials introduced in the last century, such as LSO, LYSO, LuAP, and LuYAP crystals, still remain the best PET materials in spite of the recent development of bright, fast, but relatively low-density lanthanum bromide scintillators. At the same time, Lu-based materials have several drawbacks: a high crystallization temperature and a relatively high cost compared to alkali-halide scintillation materials. Here we describe recent results in the development of new scintillation materials for PET applications.
9. [Development and evaluation of an improved high-resolution TOFPET camera: TOFPET II]: Progress report, 1984-1985
International Nuclear Information System (INIS)
Mullani, N.A.
1985-01-01
We have been working to improve the quality of barium fluoride scintillators for the fast component and subsequently improve the coincidence timing. We are now able to obtain approximately 400 psec timing and less than 20% energy resolution for barium fluoride using quartz-faced photomultiplier tubes. One major problem with the use of barium fluoride and quartz windows on the PMTs is the coupling of the scintillator to the photomultiplier tube. The best available coupling compound is Viscasil from GE, a silicone grease. It is highly efficient for transmitting the 220 nm UV light from the scintillator.
10. Comparative measurements between a Li-6 glass and a He-3 high-pressure gas scintillator
International Nuclear Information System (INIS)
Priesmeyer, H.G.; Fischer, P.; Harz, U.; Soldner, B.
1983-01-01
The He-3 high-pressure gas scintillation neutron detector, commercially available as LND 800, has been compared with a Li-6 glass scintillator of type NE 912. (n,γ) pulse-height discrimination capabilities and neutron detection efficiencies have been determined. The objective of these measurements was to try to improve the Kiel Fast-Chopper TOF detector system by using a gas scintillator, which could cover the neutron beam geometry and by which gamma-ray background contributions could be reduced. The time response always meets the requirements of a chopper experiment, but the neutron detection efficiency of the Li-6 glasses now used had to be maintained. (orig./HP)
11. High-resolution intravital microscopy.
Directory of Open Access Journals (Sweden)
Volker Andresen
Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured illumination, spatial modulation of the excitation pattern, to laser scanning microscopy. We use this approach in two-photon microscopy, the most adequate optical deep-tissue imaging technique. As compared to standard two-photon microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon microscopy and only slightly less than in standard single-beam two-photon microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and
12. High-Resolution Intravital Microscopy
Science.gov (United States)
Andresen, Volker; Pollok, Karolin; Rinnenthal, Jan-Leo; Oehme, Laura; Günther, Robert; Spiecker, Heinrich; Radbruch, Helena; Gerhard, Jenny; Sporbert, Anje; Cseresnyes, Zoltan; Hauser, Anja E.; Niesner, Raluca
2012-01-01
Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured illumination, spatial modulation of the excitation pattern, to laser scanning microscopy. We use this approach in two-photon microscopy, the most adequate optical deep-tissue imaging technique. As compared to standard two-photon microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon microscopy and only slightly less than in standard single-beam two-photon microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and developmental biology.
13. High luminosity operation of large solid angle scintillator arrays in Jefferson Lab Hall A
International Nuclear Information System (INIS)
Ran Shneor
2003-01-01
This thesis describes selected aspects of high-luminosity operation of large solid angle scintillator arrays in Hall A of CEBAF (Continuous Electron Beam Accelerator Facility) at TJNAF (Thomas Jefferson National Accelerator Facility). CEBAF is a high-current, high-duty-factor electron accelerator with a maximum beam energy of about 6 GeV and a maximum current of 200 μA. Operating large solid angle scintillator arrays in a high-luminosity environment presents several problems, such as high singles rates, low signal-to-noise ratios and shielding requirements. To demonstrate the need for large solid angle and momentum acceptance detectors as a third arm in Hall A, we will give a brief overview of the physics motivating five approved experiments which utilize scintillator arrays. We will then focus on the design and assembly of these scintillator arrays, with special focus on the two new detector packages built for the Short Range Correlation experiment E01-015. This thesis also contains the description and results of different tests and calibrations which were conducted for these arrays. We also present the description of a number of tests which were done in order to estimate the singles rates, data reconstruction, filtering techniques and shielding required for these counters.
14. Simulation Study of Using High-Z EMA to Suppress Recoil Protons Crosstalk in Scintillating Fiber Array for 14.1 MeV Neutron Imaging
Science.gov (United States)
Jia, Qinggang; Hu, Huasi; Zhang, Fengna; Zhang, Tiankui; Lv, Wei; Zhan, Yuanpin; Liu, Zhihua
2013-12-01
This paper studies the effect of a high-Z extra mural absorber (EMA) on improving the spatial resolution of a plastic (polystyrene) scintillating fiber array for 14.1 MeV fusion neutron imaging. Crosstalk induced by recoil protons was studied, and platinum (Pt) was selected as the EMA material because of its excellent ability to stop recoil protons from penetrating neighboring fibers. Three common fiber arrays (cylindrical scintillating fibers in square and hexagonal packing arrangements, and square scintillating fibers) were simulated using the Monte Carlo method to evaluate the effect of Pt-EMA in improving spatial resolution. It is found that the resolution of the 100 μm square fiber array can be improved from 1.7 to 3.4 lp/mm by using 10-μm-thick Pt-EMA; comparatively, using an array with thinner square fibers (50 μm) achieves only a resolution of 2.1 lp/mm. The packing fraction decreases with increasing EMA thickness. Our results recommend the use of 10 μm Pt-EMA for the square and the cylindrical (hexagonal packing) scintillating fiber arrays with fibers of 50-200 μm in the cross-sectional dimension. In addition, the dead-zone material should be replaced by a high-Z material for the hexagonal-packing cylindrical fiber array with fibers of 50-200 μm in diameter. Tungsten (W) and gold (Au) were also used as EMA in the three fiber arrays for comparison. The simulation results show that W can be used at a lower cost, and that Au offers no advantage in cost or resolution improvement.
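A back-of-the-envelope upper bound on fiber-array resolution, assuming ideal sampling at the fiber pitch, is the Nyquist limit; crosstalk from recoil protons pushes the measured resolution below this bound, which is the gap the EMA closes. The sketch below is a generic sampling-limit estimate, not the paper's Monte Carlo:

```python
def nyquist_lp_per_mm(pitch_mm):
    """Ideal sampling limit of a pixelated array: f_N = 1 / (2 * pitch)."""
    return 1.0 / (2.0 * pitch_mm)

f_n = nyquist_lp_per_mm(0.1)   # 100 um fiber pitch (assumed)
```

For a 100 μm pitch this gives a 5 lp/mm ceiling; the reported 3.4 lp/mm with EMA sits below that, consistent with residual crosstalk and the finite EMA thickness added to the pitch.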
15. High-Resolution Mass Spectrometers
Science.gov (United States)
Marshall, Alan G.; Hendrickson, Christopher L.
2008-07-01
Over the past decade, mass spectrometry has been revolutionized by access to instruments of increasingly high mass-resolving power. For small molecules up to ~400 Da (e.g., drugs, metabolites, and various natural organic mixtures ranging from foods to petroleum), it is possible to determine elemental compositions (CcHhNnOoSsPp…) of thousands of chemical components simultaneously from accurate mass measurements (the same can be done up to 1000 Da if additional information is included). At higher mass, it becomes possible to identify proteins (including posttranslational modifications) from proteolytic peptides, as well as lipids, glycoconjugates, and other biological components. At even higher mass (~100,000 Da or higher), it is possible to characterize posttranslational modifications of intact proteins and to map the binding surfaces of large biomolecule complexes. Here we review the principles and techniques of the highest-resolution analytical mass spectrometers (time-of-flight and Fourier transform ion cyclotron resonance and orbitrap mass analyzers) and describe some representative high-resolution applications.
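The composition assignment described here reduces, in its simplest CHNO-only form, to a constrained search over integer formulas whose exact mass matches the measurement within a ppm tolerance. A brute-force sketch (real instruments add chemistry constraints such as ring-plus-double-bond counts and isotope patterns; the bounds and tolerance below are assumptions):

```python
# Monoisotopic masses (u) of the most abundant isotopes, from standard tables.
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def formula_candidates(measured_mass, tol_ppm=1.0, max_counts=(30, 60, 10, 10)):
    """All CcHhNnOo compositions whose exact mass matches measured_mass
    within tol_ppm. H is solved by rounding rather than looped, since its
    mass is the smallest step."""
    tol = measured_mass * tol_ppm * 1e-6
    cmax, hmax, nmax, omax = max_counts
    hits = []
    for c in range(cmax + 1):
        for n in range(nmax + 1):
            for o in range(omax + 1):
                base = c * MASS["C"] + n * MASS["N"] + o * MASS["O"]
                if base - tol > measured_mass:
                    continue
                h = round((measured_mass - base) / MASS["H"])
                if 0 <= h <= hmax and abs(base + h * MASS["H"] - measured_mass) <= tol:
                    hits.append((c, h, n, o))
    return hits

hits = formula_candidates(194.080376)   # caffeine's monoisotopic mass
```

For caffeine the correct composition C8H10N4O2 appears among the candidates at a 1 ppm tolerance; as the abstract notes, uniqueness above a few hundred Da requires additional information.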
16. Highly lead-loaded red plastic scintillators as an X-ray imaging system for the laser Mega Joule
International Nuclear Information System (INIS)
Hamel, Matthieu; Normand, Stephane; Turk, Gregory; Darbon, Stephane
2012-01-01
The scope of this project is to record spatially resolved images of the core shape and size of a deuterium-tritium micro-balloon during inertial confinement fusion (ICF) experiments at the Laser Mega Joule facility (LMJ). We need to develop an x-ray imaging system which can operate in the hard radiative background generated by an ignition shot of ICF. The scintillator is a part of the imaging system and has to strike a compromise among scintillation properties (scintillation efficiency, decay time, emission wavelength) so as to both operate in the hard radiative environment and allow the acquisition of spatially resolved images. Inorganic scintillators cannot be used because no compromise can be found regarding the expected scintillation properties: most of them are not fast enough and emit blue light. Organic scintillators are generally fast, but present low x-ray photoelectric absorption in the 10 to 40 keV range, which does not enable the acquisition of spatially resolved images. To this aim, we have developed highly lead-loaded, red-fluorescent fast plastic scintillators. Such a combination is not currently available from scintillator suppliers, since they offer only blue-fluorescent plastic scintillators doped with up to 12 wt% Pb. Thus, an incorporation ratio of up to 27 wt% Pb has been reached in our laboratory, which affords a plastic scintillator with an outstanding Z(eff) close to 50. X-rays in the 10 to 40 keV range can thus interact with a higher probability of photoelectric effect than for classic organic scintillators, such as NE-102. The strong orange-red fluorescence can be filtered, so that we can eliminate residual Cerenkov light generated by gamma-ray absorption in glass parts of the imaging system. Characteristic decay times of our scintillators evaluated under UV excitation were estimated to be in the range of 10 to 13 ns. (authors)
17. High Resolution PET with 250 micrometer LSO Detectors and Adaptive Zoom
International Nuclear Information System (INIS)
Cherry, Simon R.; Qi, Jinyi
2012-01-01
There have been impressive improvements in the performance of small-animal positron emission tomography (PET) systems since their first development in the mid-1990s, both in terms of spatial resolution and sensitivity, which have directly contributed to the increasing adoption of this technology for a wide range of biomedical applications. Nonetheless, the resolution of current systems is still largely dominated by the size of the scintillator elements used in the detector. Our research predicts that developing scintillator arrays with an element size of 250 μm or smaller will lead to an image resolution of 500 μm when using 18F- or 64Cu-labeled radiotracers, giving a factor of 4-8 improvement in volumetric resolution over the highest resolution research systems currently in existence. This proposal had two main objectives: (i) to develop and evaluate much higher resolution and efficiency scintillator arrays that can be used in the future as the basis for detectors in a small-animal PET scanner where the spatial resolution is dominated by decay and interaction physics rather than detector size; (ii) to optimize one such high resolution, high sensitivity detector and adaptively integrate it into the existing microPET II small-animal PET scanner as a 'zoom-in' detector that provides higher spatial resolution and sensitivity in a limited region close to the detector face. The knowledge gained from this project will provide valuable information for building future PET systems with a complete ring of very high-resolution detector arrays and also lay the foundations for utilizing high-resolution detectors in combination with existing PET systems for localized high-resolution imaging.
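The interplay between detector size and the physics-limited terms mentioned above (positron range, annihilation-photon non-collinearity) is often summarized by a rule of thumb after Derenzo and Moses. The sketch below uses assumed values for the ring diameter and the positron-range term, and is not the proposal's own model:

```python
import math

def pet_fwhm_mm(d_mm, ring_diam_mm, positron_range_mm, block_decode_mm=0.0):
    """Rule-of-thumb reconstructed PET resolution:
    FWHM ~ 1.25 * sqrt((d/2)^2 + (0.0022*D)^2 + r^2 + b^2),
    where d is the crystal width, D the ring diameter (non-collinearity),
    r a positron-range term and b a block-decoding term."""
    return 1.25 * math.sqrt((d_mm / 2.0) ** 2
                            + (0.0022 * ring_diam_mm) ** 2
                            + positron_range_mm ** 2
                            + block_decode_mm ** 2)

# 250 um crystals on an assumed 160 mm small-animal ring,
# with an assumed 0.35 mm effective 18F positron-range term.
fwhm = pet_fwhm_mm(0.25, 160.0, 0.35)
```

With crystals this small the (d/2) term no longer dominates, which is exactly the regime the proposal targets: resolution is then set by decay and interaction physics rather than detector size.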
18. Ultra-high resolution AMOLED
Science.gov (United States)
Wacyk, Ihor; Prache, Olivier; Ghosh, Amal
2011-06-01
AMOLED microdisplays continue to show improvement in resolution and optical performance, enhancing their appeal for a broad range of near-eye applications such as night vision, simulation and training, situational awareness, augmented reality, medical imaging, and mobile video entertainment and gaming. eMagin's latest development of an HDTV+ resolution technology integrates an OLED pixel of 3.2 × 9.6 microns in size on a 0.18 micron CMOS backplane to deliver significant new functionality as well as the capability to implement a 1920×1200 microdisplay in a 0.86" diagonal area. In addition to the conventional matrix addressing circuitry, the HDTV+ display includes a very low-power, low-voltage differential signaling (LVDS) serialized interface to minimize cable and connector size as well as electromagnetic emissions (EMI), an on-chip set of look-up tables for digital gamma correction, and a novel pulse-width modulation (PWM) scheme that, together with the standard analog control, provides a total dimming range of 0.05 cd/m2 to 2000 cd/m2 in the monochrome version. The PWM function also enables an impulse drive mode of operation that significantly reduces motion artifacts in high-speed scene changes. An internal 10-bit DAC ensures that a full 256 gamma-corrected gray levels are available across the entire dimming range, resulting in a measured dynamic range exceeding 20 bits. This device has been successfully tested for operation at frame rates ranging from 30 Hz up to 85 Hz. This paper describes the operational features and detailed optical and electrical test results for the new AMOLED WUXGA resolution microdisplay.
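The quoted ">20 bit" dynamic range can be sanity-checked with a toy model that multiplies the dimming ratio by the per-frame gray levels (an assumption about how the combined figure is counted, not eMagin's measurement method):

```python
import math

def dynamic_range_bits(lum_max, lum_min, gray_levels):
    """Combined dimming-times-grayscale dynamic range in bits, under the
    toy assumption that the full gray scale is available at every
    dimming setting."""
    return math.log2((lum_max / lum_min) * gray_levels)

bits = dynamic_range_bits(2000.0, 0.05, 256)
```

The 0.05 to 2000 cd/m2 dimming range alone is about 15 bits; multiplying in 256 gray levels brings the toy estimate above 23 bits, consistent with the ">20 bits" quoted in the abstract.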
19. Light Collection in the High Energy X-ray Detector with the Pixelated CdWO4 Scintillator using Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Lim, Chang Hwy; Moon, Myung-Kook; Lee, Suhyun; Kim, Jongyul; Kim, Jeongho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Park, Jong Won [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-05-15
The performance of indirect detectors, which use scintillators such as CdWO{sub 4}, BGO, CsI, NaI, etc., is affected by the optical properties and the geometrical configuration of the scintillator. Some of the light generated by the interaction between x-ray photons and the scintillator is collected at the photosensor, while the rest is absorbed in the scintillator or escapes from the detector. To build a high-performance image detector, the detector should collect as much of the generated light as possible. To minimize light loss, the scintillator thickness must be chosen appropriately. The quality of an image detector using a pixelated scintillator is therefore determined by the scintillator size, the reflectance of the scintillator surface, electronic noise, etc. In this study, we investigated the correlation between the amount of collected light and the scintillator thickness using the Monte Carlo method. The results show that the optimal scintillator thickness should be selected depending on the incident x-ray energy. Without a reflector, the useful scintillator thickness range for x-ray detection is thinner than with a reflector; with a reflector, both the amount of collected light and the optimal scintillator thickness are greater than without one.
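A toy one-dimensional version of such a light-collection Monte Carlo, with bulk attenuation and a top-surface reflector, illustrates the reflector effect described above. All parameters (thickness, attenuation length, reflectance, emission depth) are assumed for illustration and are unrelated to the paper's CdWO4 geometry:

```python
import math
import random

def collection_fraction(thickness_mm, atten_mm, reflectance, depth_mm,
                        n=20000, seed=1):
    """Fraction of scintillation photons born at depth_mm (measured from
    the photosensor face) that reach the sensor, in a 1-D random walk with
    exponential bulk attenuation and a partially reflective top surface."""
    rng = random.Random(seed)
    collected = 0
    for _ in range(n):
        pos = depth_mm
        going_down = rng.random() < 0.5
        path = -atten_mm * math.log(1.0 - rng.random())  # absorption length
        while True:
            travel = pos if going_down else thickness_mm - pos
            if path < travel:
                break                       # absorbed in the bulk
            path -= travel
            if going_down:
                collected += 1              # reached the photosensor
                break
            if rng.random() < reflectance:  # hit top surface: reflect or lose
                pos, going_down = thickness_mm, True
            else:
                break
    return collected / n

f_ref = collection_fraction(10.0, 100.0, 0.95, 5.0)   # with reflector
f_bare = collection_fraction(10.0, 100.0, 0.0, 5.0)   # without reflector
```

Even this crude model reproduces the qualitative finding: the reflective top surface roughly doubles the collected fraction by recovering upward-going photons, at the cost of a longer average path (and hence more bulk absorption as thickness grows).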
20. High resolution, high speed ultrahigh vacuum microscopy
International Nuclear Information System (INIS)
Poppa, Helmut
2004-01-01
The history and future of transmission electron microscopy (TEM) is discussed as it refers to the eventual development of instruments and techniques applicable to the real time in situ investigation of surface processes with high resolution. To reach this objective, it was necessary to transform conventional high resolution instruments so that an ultrahigh vacuum (UHV) environment at the sample site was created, that access to the sample by various in situ sample modification procedures was provided, and that in situ sample exchanges with other integrated surface analytical systems became possible. Furthermore, high resolution image acquisition systems had to be developed to take advantage of the high speed imaging capabilities of projection imaging microscopes. These changes to conventional electron microscopy and its uses were slowly realized in a few international laboratories over a period of almost 40 years by a relatively small number of researchers crucially interested in advancing the state of the art of electron microscopy and its applications to diverse areas of interest; often concentrating on the nucleation, growth, and properties of thin films on well defined material surfaces. A part of this review is dedicated to the recognition of the major contributions to surface and thin film science by these pioneers. Finally, some of the important current developments in aberration corrected electron optics and eventual adaptations to in situ UHV microscopy are discussed. As a result of all the path breaking developments that have led to today's highly sophisticated UHV-TEM systems, integrated fundamental studies are now possible that combine many traditional surface science approaches. Combined investigations to date have involved in situ and ex situ surface microscopies such as scanning tunneling microscopy/atomic force microscopy, scanning Auger microscopy, and photoemission electron microscopy, and area-integrating techniques such as x-ray photoelectron
1. Magnetic fields and scintillator performance
International Nuclear Information System (INIS)
Green, D.; Ronzhin, A.; Hagopian, V.
1995-06-01
Experimental data have shown that the light output of a scintillator depends on the magnitude of the externally applied magnetic fields, and that this variation can affect the calorimeter calibration and possibly resolution. The goal of the measurements presented here is to study the light yield of scintillators in high magnetic fields in conditions that are similar to those anticipated for the LHC CMS detector. Two independent measurements were performed, the first at Fermilab and the second at the National High Magnetic Field Laboratory at Florida State University
2. Development of {sup 100}Mo-containing scintillating bolometers for a high-sensitivity neutrinoless double-beta decay search
Energy Technology Data Exchange (ETDEWEB)
Armengaud, E.; Gros, M.; Herve, S.; Magnier, P.; Navick, X.F.; Nones, C.; Paul, B.; Penichot, Y.; Zolotarova, A.S. [Universite Paris-Saclay, IRFU, CEA, Gif-sur-Yvette (France); Augier, C.; Billard, J.; Cazes, A.; Charlieux, F.; Jesus, M. de; Gascon, J.; Juillard, A.; Queguiner, E.; Sanglard, V.; Vagneron, L. [Univ Lyon, Universite Lyon 1, CNRS/IN2P3, IPN-Lyon, Villeurbanne (France); Barabash, A.S.; Konovalov, S.I.; Umatov, V.I. [National Research Centre Kurchatov Institute, Institute of Theoretical and Experimental Physics, Moscow (Russian Federation); Beeman, J.W. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Bekker, T.B. [V.S. Sobolev Institute of Geology and Mineralogy of the Siberian Branch of the RAS, Novosibirsk (Russian Federation); Bellini, F.; Ferroni, F. [Sapienza Universita di Roma, Dipartimento di Fisica, Rome (Italy); INFN, Sezione di Roma, Rome (Italy); Benoit, A.; Camus, P. [CNRS-Neel, Grenoble (France); Berge, L.; Chapellier, M.; Dumoulin, L.; Humbert, V.; Le Sueur, H.; Marcillac, P. de; Marnieros, S.; Marrache-Kikuchi, C.; Novati, V.; Olivieri, E.; Plantevin, O. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Bergmann, T.; Kleifges, M.; Tcherniakhovski, D.; Weber, M. [Karlsruhe Institute of Technology, Institut fuer Prozessdatenverarbeitung und Elektronik, Karlsruhe (Germany); Boiko, R.S.; Danevich, F.A.; Kobychev, V.V.; Nikolaichuk, M.O.; Tretyak, V.I. [Institute for Nuclear Research, Kyiv (Ukraine); Broniatowski, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Karlsruhe Institute of Technology, Institut fuer Experimentelle Teilchenphysik, Karlsruhe (Germany); Brudanin, V.; Rozov, S.; Yakushev, E. [JINR, Laboratory of Nuclear Problems, Dubna, Moscow Region (Russian Federation); Capelli, S.; Gironi, L.; Pavan, M.; Pessina, G. 
[Universita di Milano Bicocca, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano Bicocca, Milan (Italy); Cardani, L.; Casali, N.; Dafinei, I.; Tomei, C.; Vignati, M. [INFN, Sezione di Roma, Rome (Italy); Chernyak, D.M. [Institute for Nuclear Research, Kyiv (Ukraine); The University of Tokyo, Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, Kashiwa, Chiba (Japan); Combarieu, M. de; Pari, P. [Universite Paris-Saclay, IRAMIS, CEA, Gif-sur-Yvette (France); Coron, N.; Redon, T. [Universite Paris-Sud, IAS, CNRS, Orsay (France); Devoyon, L.; Koskas, F.; Strazzer, O. [Universite Paris-Saclay, Orphee, CEA, Gif-sur-Yvette (France); Di Domizio, S. [Universita di Genova, Dipartimento di Fisica, Genoa (Italy); INFN Sezione di Genova, Genoa (Italy); Eitel, K.; Siebenborn, B. [Karlsruhe Institute of Technology, Institut fuer Kernphysik, Karlsruhe (Germany); Enss, C.; Fleischmann, A.; Gastaldo, L. [Heidelberg University, Kirchhoff Institute for Physics, Heidelberg (Germany); Foerster, N.; Kozlov, V. [Karlsruhe Institute of Technology, Institut fuer Experimentelle Teilchenphysik, Karlsruhe (Germany); Giuliani, A. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Universita dell' Insubria, DISAT, Como (Italy); Grigorieva, V.D.; Ivannikova, N.V.; Ivanov, I.M.; Makarov, E.P.; Shlegel, V.N.; Vasiliev, Ya.V. [Nikolaev Institute of Inorganic Chemistry, Novosibirsk (Russian Federation); Hehn, L. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Karlsruhe Institute of Technology, Institut fuer Kernphysik, Karlsruhe (Germany); Jin, Y. [Laboratoire de Photonique et de Nanostructures, CNRS, Marcoussis (France); Kraus, H. [University of Oxford, Department of Physics, Oxford (United Kingdom); Kudryavtsev, V.A. [University of Sheffield, Department of Physics and Astronomy, Sheffield (United Kingdom); Laubenstein, M.; Nagorny, S.; Pattavina, L.; Pirro, S. 
[INFN, Laboratori Nazionali del Gran Sasso, Assergi, AQ (Italy); Loidl, M.; Rodrigues, M. [CEA-Saclay, CEA, LIST, Laboratoire National Henri Becquerel (LNE-LNHB), Gif-sur-Yvette Cedex (France); Mancuso, M. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Universita dell' Insubria, DISAT, Como (Italy); Max-Planck-Institut fuer Physik, Munich (Germany); Pagnanini, L.; Schaeffner, K. [INFN, Laboratori Nazionali del Gran Sasso, Assergi, AQ (Italy); INFN, Gran Sasso Science Institute, L' Aquila (Italy); Piperno, G. [INFN, Laboratori Nazionali di Frascati, Rome (Italy); Poda, D.V. [CSNSM, Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Orsay (France); Institute for Nuclear Research, Kyiv (Ukraine); Rusconi, C. [INFN, Laboratori Nazionali del Gran Sasso, Assergi, AQ (Italy); University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); Scorza, S. [Karlsruhe Institute of Technology, Institut fuer Experimentelle Teilchenphysik, Karlsruhe (Germany); SNOLAB, Lively, ON (Canada); Velazquez, M. [Universite de Bordeaux, ICMCB, CNRS, Pessac (France)
2017-11-15
This paper reports on the development of a technology involving {sup 100}Mo-enriched scintillating bolometers, compatible with the goals of CUPID, a proposed next-generation bolometric experiment to search for neutrinoless double-beta decay. Large mass (∼1 kg), high optical quality, radiopure {sup 100}Mo-containing zinc and lithium molybdate crystals have been produced and used to develop high performance single detector modules based on 0.2-0.4 kg scintillating bolometers. In particular, the energy resolution of the lithium molybdate detectors near the Q-value of the double-beta transition of {sup 100}Mo (3034 keV) is 4-6 keV FWHM. The rejection of the α-induced dominant background above 2.6 MeV is better than 8σ. Less than 10 μBq/kg activity of {sup 232}Th({sup 228}Th) and {sup 226}Ra in the crystals is ensured by boule recrystallization. The potential of {sup 100}Mo-enriched scintillating bolometers to perform high sensitivity double-beta decay searches has been demonstrated with only 10 kg x d exposure: the two neutrino double-beta decay half-life of {sup 100}Mo has been measured with the up-to-date highest accuracy as T{sub 1/2} = [6.90 ± 0.15(stat.) ± 0.37(syst.)] x 10{sup 18} years. Both crystallization and detector technologies favor lithium molybdate, which has been selected for the ongoing construction of the CUPID-0/Mo demonstrator, containing several kg of {sup 100}Mo. (orig.)
3. Scintillation properties of polycrystalline LaxY1-xO3 ceramic
Science.gov (United States)
Sahi, Sunil; Chen, Wei; Kenarangui, Rasool
2015-03-01
Scintillators are materials that absorb high-energy photons and emit visible photons. They are commonly used in radiation detectors for security, medical imaging, industrial applications and high-energy physics research. The two main types of scintillators are inorganic single crystals and organic (plastic or liquid) scintillators. Inorganic single crystals are expensive and difficult to grow in the desired shape and size, and some efficient inorganic scintillators, such as NaI and CsI, are not environmentally friendly. Organic scintillators, on the other hand, have low density and hence poor energy resolution, which limits their use in gamma spectroscopy. Polycrystalline ceramics can be a cost-effective alternative to expensive inorganic single-crystal scintillators. Here we have fabricated La0.2Y1.8O3 ceramic scintillators and studied their luminescence and scintillation properties. The ceramic scintillators were fabricated by vacuum sintering of La0.2Y1.8O3 nanoparticles at temperatures below the melting point. The La0.2Y1.8O3 ceramics were characterized structurally using XRD and TEM. Photoluminescence and radioluminescence studies were performed using UV light and X-rays as excitation sources. We used gamma isotopes of different energies to study the scintillation properties of the La0.2Y1.8O3 scintillator. Preliminary studies of the La0.2Y1.8O3 scintillator show promising results, with energy resolution comparable to that of NaI and CsI.
4. Uranium-scintillator device
International Nuclear Information System (INIS)
Smith, S.D.
1979-01-01
The calorimeter subgroup of the 1977 ISABELLE Summer Workshop strongly recommended investigation of the uranium-scintillator device because of its several attractive features: (1) increased resolution for hadronic energy, (2) fast time response, (3) high density (i.e., 16 cm of calorimeter per interaction length), and, in comparison with uranium--liquid argon detectors, (4) ease of construction, (5) simple electronics, and (6) lower cost. The AFM group at the CERN ISR became interested in such a calorimeter for substantially the same reasons, and in the fall of 1977 carried out tests on a uranium-scintillator (U-Sc) calorimeter with the same uranium plates used in their 1974 studies of the uranium--liquid argon (U-LA) calorimeter. The chief disadvantage of the scintillator test was that the uranium plates were too small to fully contain the hadronic showers. However, since the scintillator and liquid argon tests were made with the plates, direct comparison of the two types of devices could be made
5. Scintillation measurements at Bahir Dar during the high solar activity phase of solar cycle 24
Energy Technology Data Exchange (ETDEWEB)
Kriegel, Martin; Jakowski, Norbert; Berdermann, Jens; Sato, Hiroatsu [German Aerospace Center (DLR), Neustrelitz (Germany). Inst. of Communications and Navigation; Mersha, Mogese Wassaie [Bahir Dar Univ. (Ethiopia). Washera Geospace and Radar Science Lab.
2017-04-01
Small-scale ionospheric disturbances may cause severe radio scintillations of signals transmitted from global navigation satellite systems (GNSSs). Consequently, small-scale plasma irregularities may heavily degrade the performance of current GNSSs such as GPS, GLONASS or Galileo. This paper presents analysis results obtained primarily from two high-rate GNSS receiver stations designed and operated by the German Aerospace Center (DLR) in cooperation with Bahir Dar University (BDU) at 11.6 N, 37.4 E. Both receivers collect raw data sampled at up to 50 Hz, from which characteristic scintillation parameters such as the S4 index are deduced. This paper gives a first overview of the measurement setup and the observed scintillation events over Bahir Dar in 2015. Both stations are located close to one another and aligned in an east-west direction, which allows us to estimate the zonal drift velocity and spatial dimension of equatorial ionospheric plasma irregularities. Therefore, the lag times of moving electron density irregularities and scintillation patterns are derived by applying cross-correlation analysis to high-rate measurements of the slant total electron content (sTEC) along radio links between a GPS satellite and both receivers and to the associated signal power, respectively. Finally, the drift velocity is derived from the estimated lag time, taking into account the geometric constellation of both receiving antennas and the observed GPS satellites.
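The lag-time step described in this abstract (cross-correlating the time series from the two receivers and converting the correlation peak into a zonal drift velocity) can be sketched numerically. This is an illustrative reconstruction, not the DLR processing chain: the Gaussian fade, the 60 m baseline and the 0.4 s lag below are made-up numbers.

```python
import numpy as np

def estimate_drift_velocity(delayed, reference, fs_hz, baseline_m):
    """Zonal drift speed from the cross-correlation lag between two
    closely spaced receivers separated by baseline_m along east-west."""
    a = delayed - delayed.mean()
    b = reference - reference.mean()
    corr = np.correlate(a, b, mode="full")
    # Number of samples by which `delayed` trails `reference`.
    lag_samples = np.argmax(corr) - (len(b) - 1)
    lag_s = lag_samples / fs_hz
    if lag_s == 0:
        raise ValueError("no measurable lag between the two signals")
    return baseline_m / lag_s

# Synthetic scintillation fade drifting across a 60 m east-west baseline
# (all numbers hypothetical).
fs = 50.0                                    # 50 Hz raw sampling, as in the paper
t = np.arange(0.0, 60.0, 1.0 / fs)
true_lag = 0.4                               # seconds
east = np.exp(-((t - 30.0) / 2.0) ** 2)      # fade seen first at the east station
west = np.exp(-((t - 30.0 - true_lag) / 2.0) ** 2)
v = estimate_drift_velocity(west, east, fs, baseline_m=60.0)
print(round(v, 1))  # 150.0 m/s for these synthetic numbers
```

The real analysis applies the same peak-lag idea to measured sTEC and signal-power series and then corrects for the satellite geometry, which this sketch omits.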
6. High-resolution ultrasonic spectroscopy
Directory of Open Access Journals (Sweden)
V. Buckin
2018-03-01
Full Text Available High-resolution ultrasonic spectroscopy (HR-US) is an analytical technique for direct and non-destructive monitoring of molecular and micro-structural transformations in liquids and semi-solid materials. It is based on precision measurements of ultrasonic velocity and attenuation in the analysed samples. The application areas of HR-US in research, product development, and quality and process control include analysis of conformational transitions of polymers, ligand binding, molecular self-assembly and aggregation, crystallisation, gelation, characterisation of phase transitions and phase diagrams, and monitoring of chemical and biochemical reactions. The technique does not require optical markers or optical transparency. HR-US measurements can be performed in small sample volumes (down to droplet size), over a broad temperature range, at ambient and elevated pressures, and in various measuring regimes such as automatic temperature ramps, titrations and measurements in flow.
7. High resolution eddy current microscopy
Science.gov (United States)
Lantz, M. A.; Jarvis, S. P.; Tokumoto, H.
2001-01-01
We describe a sensitive scanning force microscope based technique for measuring local variations in resistivity by monitoring changes in the eddy current induced damping of a cantilever with a magnetic tip oscillating above a conducting sample. To achieve a high sensitivity, we used a cantilever with an FeNdBLa particle mounted on the tip. Resistivity measurements are demonstrated on a silicon test structure with a staircase doping profile. Regions with resistivities of 0.0013, 0.0041, and 0.022 Ω cm are clearly resolved with a lateral resolution of approximately 180 nm. For this range of resistivities, the eddy current induced damping is found to depend linearly on the sample resistivity.
8. Maximum likelihood positioning algorithm for high-resolution PET scanners
International Nuclear Information System (INIS)
Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar
2016-01-01
Purpose: In high-resolution positron emission tomography (PET), light-sharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
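The contrast between the two positioning approaches can be shown with a toy model. Everything below is invented for illustration: the 4-channel geometry, the Gaussian per-channel light model, and all numbers. The paper's ML algorithm builds its PDFs from measured data under a single-gamma-interaction assumption rather than from an analytical model like this one.

```python
import numpy as np

def cog_position(signals, channel_pos):
    """Center-of-gravity estimate: signal-weighted mean channel position."""
    w = np.asarray(signals, dtype=float)
    return float(np.dot(w, channel_pos) / w.sum())

def ml_crystal(signals, pdf_means, pdf_sigmas):
    """Pick the crystal whose expected light distribution (modelled here as
    independent Gaussians per channel) best explains the measurement."""
    s = np.asarray(signals, dtype=float)
    # Log-likelihood of the measured distribution under each crystal hypothesis.
    ll = -0.5 * np.sum(((s - pdf_means) / pdf_sigmas) ** 2
                       + 2.0 * np.log(pdf_sigmas), axis=1)
    return int(np.argmax(ll))

# Toy setup: 4 readout channels at x = 0..3 and 4 crystal hypotheses whose
# expected light distributions peak over the corresponding channel.
channel_pos = np.array([0.0, 1.0, 2.0, 3.0])
pdf_means = np.array([[8.0, 3.0, 1.0, 0.5],
                      [3.0, 8.0, 3.0, 1.0],
                      [1.0, 3.0, 8.0, 3.0],
                      [0.5, 1.0, 3.0, 8.0]])
pdf_sigmas = np.sqrt(pdf_means)            # Poisson-like spread (assumption)
measured = np.array([2.5, 7.0, 3.5, 1.0])  # noisy event over crystal 1
best = ml_crystal(measured, pdf_means, pdf_sigmas)
x = cog_position(measured, channel_pos)
print(best, round(x, 2))  # 1 1.21
```

The ML decision returns a discrete crystal index, while the COG returns a continuous position that noise and Compton scatter can pull between crystals, which is the limitation the paper addresses.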
9. Scintillating fiber tracking at high luminosities using Visible Light Photon counter readout
International Nuclear Information System (INIS)
Atac, M.
1995-11-01
This paper reviews the research work on the Visible Light Photon Counters (VLPCs) that have been developed for scintillating fiber tracking at high-luminosity colliders and high-rate fixed target experiments. The devices originated from joint work between UCLA and Rockwell International Science Center. The VLPCs are capable of counting photons very efficiently, down to the single-photon level, with high avalanche gain, producing pulses at very high rates with very short rise times. Due to their small gain dispersion, they can count photons with high quantum efficiency, making them excellent devices for charged-particle tracking using small-diameter scintillating plastic fibers. In this paper, fiber tracking for the CDF and D0 upgrades and a possible use of the VLPC readout for experiment E803 at Fermilab will be discussed
10. Liquid scintillator for 2D dosimetry for high-energy photon beams
International Nuclear Information System (INIS)
Poenisch, Falk; Archambault, Louis; Briere, Tina Marie; Sahoo, Narayan; Mohan, Radhe; Beddar, Sam; Gillin, Michael T.
2009-01-01
Complex radiation therapy techniques require dosimetric verification of treatment planning and delivery. The authors investigated a liquid scintillator (LS) system for application to real-time high-energy photon beam dosimetry. The system comprised a transparent acrylic tank filled with liquid scintillating material, an opaque outer tank, and a CCD camera. A series of images was acquired when the tank with liquid scintillator was irradiated with a 6 MV photon beam, and the light data measured with the CCD camera were filtered to correct for scattering of the optical light inside the liquid scintillator. Depth-dose and lateral profiles as well as two-dimensional (2D) dose distributions were found to agree with results from the treatment planning system. Further, the corrected light output was found to be linear with dose, dose rate independent, and robust for single or multiple acquisitions. The short time needed for image acquisition and processing could make this system ideal for fast verification of the beam characteristics of the treatment machine. This new detector system shows the potential usefulness of the LS for 2D QA.
11. Liquid scintillator for 2D dosimetry for high-energy photon beams
Energy Technology Data Exchange (ETDEWEB)
Poenisch, Falk; Archambault, Louis; Briere, Tina Marie; Sahoo, Narayan; Mohan, Radhe; Beddar, Sam; Gillin, Michael T. [Department of Radiation Physics, University of Texas M. D. Anderson Cancer Center, 1515 Holcombe Boulevard., Unit 94, Houston, Texas 77030 (United States)
2009-05-15
Complex radiation therapy techniques require dosimetric verification of treatment planning and delivery. The authors investigated a liquid scintillator (LS) system for application to real-time high-energy photon beam dosimetry. The system comprised a transparent acrylic tank filled with liquid scintillating material, an opaque outer tank, and a CCD camera. A series of images was acquired when the tank with liquid scintillator was irradiated with a 6 MV photon beam, and the light data measured with the CCD camera were filtered to correct for scattering of the optical light inside the liquid scintillator. Depth-dose and lateral profiles as well as two-dimensional (2D) dose distributions were found to agree with results from the treatment planning system. Further, the corrected light output was found to be linear with dose, dose rate independent, and robust for single or multiple acquisitions. The short time needed for image acquisition and processing could make this system ideal for fast verification of the beam characteristics of the treatment machine. This new detector system shows the potential usefulness of the LS for 2D QA.
12. Scintillators for positron emission tomography
International Nuclear Information System (INIS)
Moses, W.W.; Derenzo, S.E.
1995-09-01
Like most applications that utilize scintillators for gamma detection, Positron Emission Tomography (PET) desires materials with high light output, short decay time, and excellent stopping power that are also inexpensive, mechanically rugged, and chemically inert. Realizing that this ''ultimate'' scintillator may not exist, this paper evaluates the relative importance of these qualities and describes their impact on the imaging performance of PET. The most important PET scintillator quality is the ability to absorb 511 keV photons in a small volume, which affects the spatial resolution of the camera. The dominant factor is a short attenuation length (≤ 1.5 cm is required), although a high photoelectric fraction is also important (> 30% is desired). The next most important quality is a short decay time, which affects both the dead time and the coincidence timing resolution. Detection rates for single 511 keV photons can be extremely high, so decay times ≤ 500 ns are essential to avoid dead time losses. In addition, positron annihilations are identified by time coincidence so ≤5 ns fwhm coincidence pair timing resolution is required to identify events with narrow coincidence windows, reducing contamination due to accidental coincidences. Current trends in PET cameras are toward septaless, ''fully-3D'' cameras, which have significantly higher count rates than conventional 2-D cameras and so place higher demands on scintillator decay time. Light output affects energy resolution, and thus the ability of the camera to identify and reject events where the initial 511 keV photon has undergone Compton scatter in the patient. The scatter to true event fraction is much higher in fully-3D cameras than in 2-D cameras, so future PET cameras would benefit from scintillators with a 511 keV energy resolution < 10--12% fwhm
13. High-spatial resolution and high-spectral resolution detector for use in the measurement of solar flare hard x rays
International Nuclear Information System (INIS)
Desai, U.D.; Orwig, L.E.
1988-01-01
In the area of high spatial resolution, the evaluation of a hard X-ray detector with 65 micron spatial resolution for operation in the energy range from 30 to 400 keV is proposed. The basic detector is a thick large-area scintillator faceplate, composed of a matrix of high-density scintillating glass fibers, attached to a proximity-type image intensifier tube with a resistive-anode digital readout system. Such a detector, combined with a coded-aperture mask, would be ideal for use as a modest-sized hard X-ray imaging instrument up to X-ray energies as high as several hundred keV. As an integral part of this study it was also proposed that several techniques be critically evaluated for X-ray image coding which could be used with this detector. In the area of high spectral resolution, it is proposed to evaluate two different types of detectors for use as X-ray spectrometers for solar flares: planar silicon detectors and high-purity germanium (HPGe) detectors. Instruments utilizing these high-spatial-resolution detectors for hard X-ray imaging measurements from 30 to 400 keV and high-spectral-resolution detectors for measurements over a similar energy range would be ideally suited for making crucial solar flare observations during the upcoming maximum in the solar cycle
14. Photon statistics in scintillation crystals
Science.gov (United States)
Bora, Vaibhav Joga Singh
Scintillation-based gamma-ray detectors are widely used in medical imaging, high-energy physics, astronomy and national security. Scintillation gamma-ray detectors are field-tested, relatively inexpensive, and have good detection efficiency. Semiconductor detectors are gaining popularity because of their superior capability to resolve gamma-ray energies. However, they are relatively hard to manufacture and are therefore, at this time, not available in formats as large as scintillation gamma-ray detectors, and they are much more expensive. Scintillation gamma-ray detectors consist of a scintillator, a material that emits optical (scintillation) photons when it interacts with ionizing radiation, and an optical detector that detects the emitted scintillation photons and converts them into an electrical signal. Compared to semiconductor gamma-ray detectors, scintillation gamma-ray detectors have relatively poor capability to resolve gamma-ray energies. This is in large part attributed to the "statistical limit" on the number of scintillation photons. The origin of this statistical limit is the assumption that scintillation photons are either Poisson distributed or super-Poisson distributed. This statistical limit is often defined by the Fano factor. The Fano factor of an integer-valued random process is defined as the ratio of its variance to its mean. Therefore, a Poisson process has a Fano factor of one. The classical theory of light limits the Fano factor of the number of photons to a value greater than or equal to one (Poisson case). However, the quantum theory of light allows for Fano factors less than one. We used two methods to look at the correlations between two detectors viewing the same scintillation pulse to estimate the Fano factor of the scintillation photons. The relationship between the Fano factor and the correlation between the integrals of the two detected signals was derived analytically, and the Fano factor was estimated using the measurements for SrI2:Eu, YAP
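The Fano factor definition used in this abstract (variance over mean of an integer-valued process) is easy to demonstrate numerically. The sketch below contrasts a Poisson process (F = 1) with a sub-Poisson binomial process, for which F = 1 - p < 1; it is a generic illustration of the statistic itself, not the two-detector correlation method the dissertation actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def fano_factor(counts):
    """Fano factor of an integer-valued process: variance / mean."""
    counts = np.asarray(counts, dtype=float)
    return float(counts.var() / counts.mean())

# Poisson-distributed photon counts: Fano factor ~1 (the classical limit).
poisson_counts = rng.poisson(lam=1000, size=200_000)
f_poisson = fano_factor(poisson_counts)
print(round(f_poisson, 2))  # ~1.0

# A binomial process (fixed number of excitations, each independently
# producing a photon with probability p) is sub-Poisson: F = 1 - p.
binomial_counts = rng.binomial(n=2000, p=0.5, size=200_000)
f_binom = fano_factor(binomial_counts)
print(round(f_binom, 2))  # ~0.5
```

A measured Fano factor below one would therefore indicate photon statistics narrower than Poisson, which is the possibility the quantum theory of light leaves open.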
15. WORKSHOP: Scintillating fibre detectors
International Nuclear Information System (INIS)
Anon.
1989-01-01
Scintillating fibre detector development and technology for the proposed US Superconducting Supercollider, SSC, was the subject of a recent workshop at Fermilab, with participation from the high energy physics community and from industry. Sessions covered the current status of fibre technology and fibre detectors, new detector applications, fluorescent materials and scintillation compositions, radiation damage effects, amplification and imaging structures, and scintillation fibre fabrication techniques
16. Improvements to well scintillation counters
International Nuclear Information System (INIS)
Farukhi, M.R.; Mataraza, G.A.; Wimer, O.D.
1977-01-01
This invention relates to the field of ionising radiation detection. It concerns in particular scintillation detectors of the type commonly used in conjunction with a photomultiplier tube for monitoring radiation, for instance in clinical isotope measurements. The invention enables well scintillation counters to be made that are characterised by high efficiency in measuring the disintegration rate of radio-pharmaceutical solutions and by the ability to resolve the distribution of energy emanating from the radioactive source. In particular, it improves the uniformity of the light collection efficiency, the quality of the resolution, and the counting efficiency, while improving the reception of light [fr]
17. Optimization of the scintillation parameters of the lead tungstate crystals for their application in high precision electromagnetic calorimetry
International Nuclear Information System (INIS)
Drobychev, G.
2000-01-01
In the frame of this dissertation work, the scintillation properties of lead tungstate (PWO) crystals and the possibilities for their use were studied, foreseeing their application to electromagnetic calorimetry in the extreme radiation environment of new colliders. The results of this work can be summarized as follows. 1. A model of the origin of scintillation in lead tungstate crystals was developed, which includes the processes influencing the crystals' radiation hardness and the presence of slow components in the scintillation. 2. An analysis was performed of how changes in the PWO scintillation properties influence the parameters of the electromagnetic calorimeter. 3. Methods were studied for light collection from large scintillation elements of complex shape made of a birefringent scintillation crystal with a high refractive index and low light yield, in the case where the signal is registered by a photodetector whose sensitive surface is small compared with the output face of the scintillator. 4. Physical principles were developed for the methodology of certifying scintillation crystals during their mass production, foreseeing their installation into an electromagnetic calorimeter. Correlations were found between the results of measurements of the PWO crystal parameters by different methods. (author)
18. CeBr3 as a room-temperature, high-resolution gamma-ray detector
International Nuclear Information System (INIS)
Guss, Paul; Reed, Michael; Yuan Ding; Reed, Alexis; Mukhopadhyay, Sanjoy
2009-01-01
Cerium bromide (CeBr3) has become a material of interest in the race for high-resolution gamma-ray spectroscopy at room temperature. This investigation quantified the potential of CeBr3 as a room-temperature, high-resolution gamma-ray detector. The performance of CeBr3 crystals was compared to that of other scintillation crystals of similar dimensions in similar detection environments. A comparison of the self-activity of CeBr3 to that of cerium-doped lanthanum tribromide (LaBr3:Ce) was performed. Energy resolution and relative intrinsic efficiency were measured and are presented.
19. Development of AMS high resolution injector system
International Nuclear Information System (INIS)
Bao Yiwen; Guan Xialing; Hu Yueming
2008-01-01
The Beijing HI-13 tandem accelerator AMS high-resolution injector system was developed. The high-resolution energy achromatic system consists of an electrostatic analyzer and a magnetic analyzer; its mass resolution can reach 600 and its transmission is better than 80%. (authors)
20. A fast, high light output scintillator for gamma ray and neutron detection. Fifth Semi-Annual Report
International Nuclear Information System (INIS)
Entine, Gerald; Kanai, S.; Shah, M.S.; Leonard Cirignano, M.S.; Jarek Glodo; Van Loef, Edgar V.
2003-01-01
In view of the attractive properties of RbGd2Br7:Ce for gamma-ray and thermal neutron detection, and the lack of larger volume crystals, the goal of the Phase I project was to perform a rigorous investigation of the crystal growth of this exciting material and explore its capabilities for gamma-ray and thermal neutron detection. The Phase I research was very successful. All technical objectives were met and in many cases exceeded expectations. We were able to produce large (>1 cm3) RbGd2Br7:Ce crystals with excellent scintillation properties and demonstrated the possibility to detect thermal neutrons. As far as we are aware, our Phase I experiment was the first to demonstrate thermal neutron detection with RbGd2Br7:Ce. Clearly, the feasibility of the proposed research was adequately proven. The Phase II research builds on the successful results obtained during Phase I. Phase II will initially focus on optimizing the RbGd2Br7:Ce growth process to produce high quality, larger volume RbGd2Br7:Ce crystals. We will continue to use the versatile Bridgman technique. During this process, crystal growth parameters will be adjusted for optimal growth conditions. Our goal is to produce high quality RbGd2Br7:Ce crystals of size 1 inch x 1 inch x 1 inch (∼16 cm3). We will work on packaging aspects that allow efficient light collection and prevent crystal degradation. We will study and measure emission spectra, light yield, scintillation decay, energy and time resolution. The effects of variation in Ce concentration on the scintillation properties of RbGd2Br7:Ce will be examined in detail. Comprehensive gamma-ray spectroscopic and imaging studies will be conducted. Also, optimization of RbGd2Br7:Ce for thermal neutron detection will be addressed. Our initial studies will determine the optimal geometry of the RbGd2Br7:Ce crystals for neutron detection. For thermal neutron detection experiments, we will produce large area, thin samples in order to minimize gamma-ray sensitivity
1. Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter
CERN Document Server
Adloff, C.; Blaising, J.J.; Drancourt, C.; Espargiliere, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Schlereth, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S.T.; Sosebee, M.; White, A.P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N.K.; Mavromanolakis, G.; Thomson, M.A.; Ward, D.R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Apostolakis, J.; Dotti, A.; Folger, G.; Ivantchenko, V.; Uzhinskiy, V.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G.C.; Dyshkant, A.; Lima, J.G.R.; Zutshi, V.; Hostachy, J.Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Gottlicher, P.; Gunter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.Ch.; Shen, W.; Stamen, R.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G.W.; Kawagoe, K.; Dauncey, P.D.; Magnan, A.M.; Bartsch, V.; Wing, M.; Salvatore, F.; Alamillo, E.Calvo; Fouz, M.C.; Puerta-Pelayo, J.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Popov, V.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Tikhomirov, V.; Kiesling, C.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Amjad, M.S.; Bonis, J.; Callier, S.; Conforti di Lorenzo, S.; Cornebise, P.; Doublet, Ph.; Dulucq, F.; Fleury, J.; Frisson, T.; van der Kolk, N.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch.; Poschl, R.; Raux, L.; Rouene, J.; Seguin-Moreau, N.; Anduze, M.; Boudry, V.; Brient, J-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; 
Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Gotze, M.; Hartbrich, O.; Sauer, J.; Weber, S.; Zeitnitz, C.
2013-01-01
Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.
2. Radiation Hard and High Light Yield Scintillator Search for CMS Phase II Upgrade
CERN Document Server
Tiras, Emrah
2015-01-01
The CMS detector at the LHC requires a major upgrade to cope with the higher instantaneous luminosity and the elevated radiation levels. The active media of the forward backing hadron calorimeters are projected to be radiation-hard, high light yield scintillation materials or similar alternatives. In this context, we have studied various radiation-hard scintillating materials such as Polyethylene Terephthalate (PET), Polyethylene Naphthalate (PEN), High Efficiency Mirror (HEM) and quartz plates with various coatings. The quartz plates are pure Cerenkov radiators and their radiation hardness has been confirmed. In order to increase the light output, we considered organic and inorganic coating materials such as p-Terphenyl (pTp), Anthracene and Gallium-doped Zinc Oxide (ZnO:Ga) that are applied as thin layers on the surface of the quartz plates. Here, we present the results of the related test beam activities, laboratory measurements and recent developments.
3. FLUKA studies of hadron-irradiated scintillating crystals for calorimetry at the High-Luminosity LHC
CERN Document Server
Quittnat, Milena Eleonore
2015-01-01
Calorimetry at the High-Luminosity LHC (HL-LHC) will be performed in a harsh radiation environment with high hadron fluences. The upgraded CMS electromagnetic calorimeter design and suitable scintillating materials are a focus of current research. In this paper, first results using the Monte Carlo simulation program FLUKA are compared to measurements performed with proton-irradiated LYSO, YSO and cerium fluoride crystals. Based on these results, an extrapolation to the behavior of an electromagnetic sampling calorimeter, using one of the inorganic scintillators above as an active medium, is performed for the upgraded CMS experiment at the HL-LHC. Characteristic parameters such as the induced ambient dose, fluence spectra for different particle types and the residual nuclei are studied, and the suitability of these materials for a future calorimeter is surveyed. Particular attention is given to the creation of isotopes in an LYSO-tungsten calorimeter that might contribute a prohibitive background to the measu...
4. Scintillation properties of Ce:(La,Gd){sub 2}Si{sub 2}O{sub 7} at high temperatures
Energy Technology Data Exchange (ETDEWEB)
Kurosawa, Shunsuke, E-mail: kurosawa@imr.tohoku.ac.jp [Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Miyagi (Japan); New Industry Creation Hatchery Center (NICHe), Tohoku University, 6-6-10 Aoba, Aramaki, Aoba-ku, Sendai 980-8579, Miyagi (Japan); Shishido, Toetsu; Sugawara, Takamasa; Nomura, Akiko; Yubuta, Kunio; Suzuki, Akira; Murakami, Rikito [Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Miyagi (Japan); Pejchal, Jan [New Industry Creation Hatchery Center (NICHe), Tohoku University, 6-6-10 Aoba, Aramaki, Aoba-ku, Sendai 980-8579, Miyagi (Japan); Institute of Physics, AS CR, Cukrovarnická 10, 162 53 Prague (Czech Republic); Yokota, Yuui; Kamada, Kei [New Industry Creation Hatchery Center (NICHe), Tohoku University, 6-6-10 Aoba, Aramaki, Aoba-ku, Sendai 980-8579, Miyagi (Japan); Yoshikawa, Akira [Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Miyagi (Japan); New Industry Creation Hatchery Center (NICHe), Tohoku University, 6-6-10 Aoba, Aramaki, Aoba-ku, Sendai 980-8579, Miyagi (Japan); C and A Corporation, 6-6-40 Aoba, Aramaki, Aoba-ku, Sendai, Miyagi 980-8577 (Japan)
2015-02-01
The temperature dependence of the scintillation properties was investigated for (Ce{sub 0.01}, Gd{sub 0.90}, La{sub 0.09}){sub 2}Si{sub 2}O{sub 7} grown by the floating zone method. The light output of over 35,000 photons/MeV was found to be constant in the temperature range from 0 °C to 150 °C. In addition, the FWHM energy resolution of Ce:La-GPS at 662 keV (roughly 7–8%) remained constant up to 100 °C. Thus, this crystal can be applied to oil-well logging and other radiation-detection applications under high-temperature conditions.
5. A high granularity plastic scintillator tile hadronic calorimeter with APD readout for a linear collider detector
Czech Academy of Sciences Publication Activity Database
Andreev, V.; Cvach, Jaroslav; Danilov, M.; Devitsin, E.; Dodonov, V.; Eigen, G.; Garutti, E.; Gilitzky, Yu.; Groll, M.; Heuer, R.D.; Janata, Milan; Kacl, Ivan; Korbel, V.; Kozlov, V. Yu; Meyer, H.; Morgunov, V.; Němeček, Stanislav; Pöschl, R.; Polák, Ivo; Raspereza, A.; Reiche, S.; Rusinov, V.; Sefkow, F.; Smirnov, P.; Terkulov, A.; Valkár, Š.; Weichert, Jan; Zálešák, Jaroslav
2006-01-01
Roč. 564, - (2006), s. 144-154 ISSN 0168-9002 R&D Projects: GA MŠk(CZ) LC527; GA MŠk(CZ) 1P05LA259; GA ČR(CZ) GA202/05/0653 Institutional research plan: CEZ:AV0Z10100502 Keywords : hadronic calorimeter * plastic scintillator tile * APD readout * linear collider detector Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.185, year: 2006
6. Application of the Oslo method to high resolution gamma spectra
Science.gov (United States)
Simon, A.; Guttormsen, M.; Larsen, A. C.; Beausang, C. W.; Humby, P.
2015-10-01
The Hauser-Feshbach statistical model is a widely used tool for calculating reaction cross sections, in particular for astrophysical processes. The HF model requires as input an optical potential, a gamma-strength function (GSF) and a level density (LD) to properly model the statistical properties of the nucleus. The Oslo method is a well-established technique for extracting GSFs and LDs from experimental data, typically gamma spectra obtained with scintillation detectors. Here, the first application of the Oslo method to high-resolution data obtained using the Ge detectors of the STARLITER setup at TAMU is discussed. The GSFs and LDs extracted from (p,d) and (p,t) reactions on 152,154Sm targets will be presented.
7. Investigation of the imaging properties of inorganic scintillation screens using high energetic ion beams
Energy Technology Data Exchange (ETDEWEB)
Lieberwirth, Alice [TU Darmstadt (Germany); JWG Universitaet Frankfurt/Main (Germany); Forck, Peter; Sieber, Thomas [GSI Darmstadt (Germany); Ensinger, Wolfgang; Lederer, Stephan [TU Darmstadt (Germany); Kester, Oliver [JWG Universitaet Frankfurt/Main (Germany)
2016-07-01
Inorganic scintillation screens are a common diagnostic tool in heavy-ion accelerators. In order to investigate the imaging properties of various screen materials, four different material compositions were irradiated at GSI, using projectiles ranging from protons up to uranium ions. Beams were extracted from SIS18 at high energy (300 MeV/u) in slow and fast extraction mode. During irradiation, the scintillation response of the screens was simultaneously recorded by two different optical setups to investigate light output, profile characteristics and emission spectra. It was observed that fast-extracted beams in general induce lower light output than slow-extracted beams, while the light output per deposited energy decreases with the atomic number of the projectile. The analysis of the spectral emission, as well as investigations with classical optical methods, showed no significant defect formation in any of the materials, even under irradiation with increasing beam intensity or over long time periods. The investigated scintillation screens can be considered stable under irradiation with high-energy heavy-ion pulses and are appropriate for beam-diagnostics applications in future accelerator facilities such as FAIR. Characteristic properties and application areas of the screens are presented in the poster.
8. Search for and selection of novel heavy scintillator crystals for calorimeter design for future high-energy colliders
International Nuclear Information System (INIS)
Ferrere, D.
1993-01-01
The discovery of some particles (Higgs, top, ...) foreseen by theoretical models should be achieved at future colliders that reach an energy scale of about 1 TeV. Efficient detectors must be designed to handle the very high luminosity of the LHC collider at CERN. In the intermediate mass region, M_Z-2M_Z, the diphoton decay mode of a Higgs boson produced inclusively or in association with a W boson or a toponium gives a good chance of observation. A very-high-resolution calorimeter with photon-angle reconstruction and pion-identification capability should detect a Higgs signal with high probability, so a homogeneous crystal calorimeter seems suitable. Because of the high luminosity and the high radiation level, a search for a new heavy scintillator has been undertaken. It must have good radiation hardness (>0.5 MRad per year) and a fast luminescence decay time (<30 ns). Among 50 crystals and glasses of specific chemical composition tested for transmission, luminescence, decay time, γ/neutron radiation hardness and light yield, cerium fluoride seems best suited for the LHC. The necessity of good photon resolution in the intermediate Higgs mass region led us to optimise, by Monte Carlo simulations, the geometry of the calorimeter, the uniformisation of the light collection and the crystal intercalibration parameters. (orig.)
9. Evaluation and optimization of the High Resolution Research Tomograph (HRRT)
International Nuclear Information System (INIS)
Knoess, C.
2004-01-01
Positron Emission Tomography (PET) is an imaging technique used in medicine to determine qualitative and quantitative metabolic parameters in vivo. The High Resolution Research Tomograph (HRRT) is a new high resolution tomograph that was designed for brain studies (312 mm transaxial field-of-view (FOV), 252 mm axial FOV). The detector blocks are arranged in a quadrant sharing design and consist of two crystal layers with dimensions of 2.1 mm x 2.1 mm x 7.5 mm. The main detector material is the newly developed scintillator lutetium oxyorthosilicate (LSO). Events from the different crystal layers are distinguished by Pulse Shape Discrimination (PSD) to gain Depth of Interaction (DOI) information. This will improve the spatial resolution, especially at the edges of the FOV. A prototype of the tomograph was installed at the Max-Planck Institute for Neurological Research in Cologne, Germany in 1999 and was evaluated with respect to spatial resolution, sensitivity, scatter fraction, and count rate behavior. These performance measurements showed that this prototype provided a spatial resolution of around 2.5 mm in a volume big enough to contain the human brain. A comparison with a single layer HRRT prototype showed a 10% worsening of the resolution, despite the fact that DOI was used. Without DOI, the resolution decreased considerably. The sensitivity, as measured with a 22Na point source, was 46.5 cps/kBq for an energy window of 350-650 keV and 37.9 cps/kBq for an energy window of 400-650 keV, while the scatter fractions were 56% for 350-650 keV and 51% for 400-650 keV, respectively. A daily quality check was developed and implemented that uses the uniform, natural radioactive background of the scintillator material LSO. In 2001, the manufacturer decided to build a series of additional HRRT scanners to try to improve the design (detector electronics, transmission source design, and shielding against out-of-FOV activity) and to eliminate problems (difficult detector
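The sensitivity figures quoted above in cps/kBq translate directly into an absolute coincidence efficiency; a small worked conversion (the branching-ratio caveat is our addition, not from the abstract):

```python
def absolute_sensitivity(cps_per_kbq):
    """Convert a PET point-source sensitivity quoted in cps/kBq into the
    fraction of decays that yield a recorded coincidence.

    1 kBq = 1000 decays/s. Note: this simple conversion ignores the
    positron branching ratio of the source (about 90% for 22Na), so it
    slightly underestimates the true per-positron efficiency.
    """
    return cps_per_kbq / 1000.0

# The HRRT value of 46.5 cps/kBq (350-650 keV window) corresponds to
# about 4.65% of decays producing a recorded coincidence.
wide_window = absolute_sensitivity(46.5)
narrow_window = absolute_sensitivity(37.9)
```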
10. Current trends in scintillator detectors and materials
International Nuclear Information System (INIS)
Moses, W.W.
2002-01-01
The last decade has seen a renaissance in inorganic scintillator development for gamma ray detection. Lead tungstate (PbWO4) has been developed for high-energy physics experiments, and possesses exceptionally high density and radiation hardness, albeit with low luminous efficiency. Lutetium orthosilicate or LSO (Lu2SiO5:Ce) possesses a unique combination of high luminous efficiency, high density, and reasonably short decay time, and is now incorporated in commercial positron emission tomography cameras. There have been advances in understanding the fundamental mechanisms that limit energy resolution, and several recently discovered materials (such as LaBr3:Ce) possess energy resolution that approaches that of direct solid state detectors. Finally, there are indications that a neglected class of scintillator materials that exhibit near band-edge fluorescence could provide scintillators with sub-nanosecond decay times and high luminescent efficiency
11. High-resolution electron microscopy
CERN Document Server
Spence, John C H
2013-01-01
This new fourth edition of the standard text on atomic-resolution transmission electron microscopy (TEM) retains previous material on the fundamentals of electron optics and aberration correction, linear imaging theory (including wave aberrations to fifth order) with partial coherence, and multiple-scattering theory. Also preserved are updated earlier sections on practical methods, with detailed step-by-step accounts of the procedures needed to obtain the highest quality images of atoms and molecules using a modern TEM or STEM electron microscope. Applications sections have been updated - these include the semiconductor industry, superconductor research, solid state chemistry and nanoscience, and metallurgy, mineralogy, condensed matter physics, materials science and material on cryo-electron microscopy for structural biology. New or expanded sections have been added on electron holography, aberration correction, field-emission guns, imaging filters, super-resolution methods, Ptychography, Ronchigrams, tomogr...
12. Performance of high-resolution position-sensitive detectors developed for storage-ring decay experiments
International Nuclear Information System (INIS)
Yamaguchi, T.; Suzaki, F.; Izumikawa, T.; Miyazawa, S.; Morimoto, K.; Suzuki, T.; Tokanai, F.; Furuki, H.; Ichihashi, N.; Ichikawa, C.; Kitagawa, A.; Kuboki, T.; Momota, S.; Nagae, D.; Nagashima, M.; Nakamura, Y.; Nishikiori, R.; Niwa, T.; Ohtsubo, T.; Ozawa, A.
2013-01-01
Highlights: • Position-sensitive detectors were developed for storage-ring decay spectroscopy. • Fiber scintillation and silicon strip detectors were tested with heavy-ion beams. • A new fiber scintillation detector showed an excellent position resolution. • Position and energy detection by silicon strip detectors enables full identification. -- Abstract: As next-generation spectroscopic tools, heavy-ion cooler storage rings will provide a unique environment for experiments with highly charged RI beams. Decay spectroscopy of highly charged rare isotopes provides important information on stellar conditions, such as those of the s- and r-process nucleosynthesis. In-ring decay products of highly charged RI will be momentum-analyzed and reach a position-sensitive detector setup located outside the storage orbit. To realize such in-ring decay experiments, we have developed and tested two types of high-resolution position-sensitive detectors: silicon strips and scintillating fibers. The beam test experiments resulted in excellent position resolutions for both detectors, which will be available for future storage-ring experiments
13. Research in high energy physics: Scintillating fiber detector development for the SSC: Annual progress report
International Nuclear Information System (INIS)
Ruchti, R.C.
1988-01-01
The scintillating fiber detector development program at the University of Notre Dame is divided into several components. These include: research on scintillating glass fiber materials; research on scintillating plastic fiber materials; research on scintillating liquids in fiber capillaries; studies of improvements in image intensification and light amplification; and the development of appropriate test and development facilities at Notre Dame. The overall goal of the program is to develop efficient scintillating fiber detectors with long optical attenuation lengths and excellent radiation resistance, for tracking and microvertex detectors and as active sampling materials for scintillation calorimetry. We now discuss each of these programs in turn. 2 figs., 3 tabs
14. Collimated trans-axial tomographic scintillation camera
International Nuclear Information System (INIS)
1980-01-01
The objects of this invention are: first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; secondly, to provide a scintillation camera system that increases the rate of acceptance of radioactive events contributing positional information from a known radiation source, without sacrificing spatial resolution; and thirdly, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving it in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, and those parallel to the axis with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)
15. A micro-machined retro-reflector for improving light yield in ultra-high-resolution gamma cameras
NARCIS (Netherlands)
Heemskerk, J.W.T.; Korevaar, M.A.N.; Kreuger, R.; Ligtvoet, C.M.; Schotanus, P.; Beekman, F.J.
2009-01-01
High-resolution imaging of x-ray and gamma-ray distributions can be achieved with cameras that use charge coupled devices (CCDs) for detecting scintillation light flashes. The energy and interaction position of individual gamma photons can be determined by rapid processing of CCD images of
16. Scintillating fibre detectors using position-sensitive photomultipliers
International Nuclear Information System (INIS)
Agoritsas, V.; Bergdolt, A.M.; Bing, O.; Bravar, A.; Ditta, J.; Drevenak, R.
1995-01-01
Scintillating fibre technology has made substantial progress and has demonstrated great potential for fast tracking and triggering in high-luminosity particle physics experiments. Some recent results of the RD-17 project at CERN are presented, concerning fast and precise readout of scintillating fibre arrays as well as upgrades of position-sensitive photomultipliers. Excellent matching of the scintillating fibre and the position-sensitive photomultiplier, in particular in their time characteristics, made it possible to achieve excellent detector performance: typically a spatial resolution of ∼125 μm with a time resolution better than 1 ns and a detection efficiency greater than 95%. (author) 10 refs.; 25 figs.; 1 tab
17. Neutron detection in a high gamma-ray background with EJ-301 and EJ-309 liquid scintillators
International Nuclear Information System (INIS)
Stevanato, L.; Cester, D.; Nebbia, G.; Viesti, G.
2012-01-01
Using a fast digitizer, the neutron–gamma discrimination capability of the new liquid scintillator EJ-309 is compared with that obtained using the standard EJ-301. Moreover, the capability of both scintillation detectors to identify a weak neutron source in a high gamma-ray background is demonstrated. The probability of neutron detection is PD=95% at the 95% confidence level for a gamma-ray background corresponding to a dose rate of 100 μSv/h.
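The charge-comparison pulse-shape discrimination usually applied to such digitized liquid-scintillator pulses can be sketched as follows; the pulse model, decay constants and integration gates below are illustrative assumptions, not parameters from the paper:

```python
import math

def psd_ratio(pulse, tail_start=10, gate_end=100):
    """Charge-comparison PSD figure: tail integral over total integral.

    Neutron (proton-recoil) events in organic liquid scintillators such
    as EJ-301/EJ-309 carry a larger slow scintillation component, so
    they produce a higher tail fraction than gamma (electron-recoil)
    events.
    """
    total = sum(pulse[:gate_end])
    tail = sum(pulse[tail_start:gate_end])
    return tail / total

def toy_pulse(slow_fraction, n=100, tau_fast=3.0, tau_slow=30.0):
    """Two-exponential toy pulse; amplitudes and decay constants (in
    sample units) are illustrative, not measured EJ-309 values."""
    return [(1.0 - slow_fraction) * math.exp(-t / tau_fast)
            + slow_fraction * math.exp(-t / tau_slow) for t in range(n)]

gamma_like = toy_pulse(slow_fraction=0.05)
neutron_like = toy_pulse(slow_fraction=0.25)
# The neutron-like pulse yields the larger tail-to-total ratio.
```

Plotting the tail-to-total ratio against total charge separates the neutron and gamma bands; the discrimination quality is then typically quantified by the separation of the two bands divided by the sum of their widths.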
18. A novel high resolution, high sensitivity SPECT detector for molecular imaging of cardiovascular diseases
Science.gov (United States)
Cusanno, F.; Argentieri, A.; Baiocchi, M.; Colilli, S.; Cisbani, E.; De Vincentis, G.; Fratoni, R.; Garibaldi, F.; Giuliani, F.; Gricia, M.; Lucentini, M.; Magliozzi, M. L.; Majewski, S.; Marano, G.; Musico, P.; Musumeci, M.; Santavenere, F.; Torrioli, S.; Tsui, B. M. W.; Vitelli, L.; Wang, Y.
2010-05-01
Cardiovascular diseases are the most common cause of death in western countries. Understanding the rupture of vulnerable atherosclerotic plaques and monitoring the effect of innovative heart-failure therapies are of fundamental importance. A flexible, high-resolution, high-sensitivity detector system for molecular imaging with radionuclides in small-animal models has been designed for this aim. A prototype has been built using a tungsten pinhole and a LaBr3(Ce) scintillator coupled to Hamamatsu Flat Panel PMTs. A compact individual-channel readout has been designed, built and tested. Measurements with phantoms as well as pilot studies on mice have been performed; the results show that the myocardial perfusion in mice can be determined with sufficient precision. The detector will be improved by replacing the Hamamatsu Flat Panel with silicon photomultipliers (SiPMs) to allow integration of the system with MRI scanners. The LaBr3(Ce) scintillator coupled to a photosensor with high photon detection efficiency and excellent energy resolution will allow dual-label imaging to monitor simultaneously the cardiac perfusion and the molecular targets under investigation during the heart therapy.
19. Memory effect, resolution, and efficiency measurements of an Al{sub 2}O{sub 3} coated plastic scintillator used for radioxenon detection
Energy Technology Data Exchange (ETDEWEB)
Bläckberg, L., E-mail: lisa.blackberg@physics.uu.se [Department of Physics and Astronomy, Uppsala University, Box 516, SE-75120 Uppsala (Sweden); Fritioff, T.; Mårtensson, L.; Nielsen, F.; Ringbom, A. [Division of Defence and Security Systems, Swedish Defence Research Agency (FOI), SE-17290 Stockholm (Sweden); Sjöstrand, H.; Klintenberg, M. [Department of Physics and Astronomy, Uppsala University, Box 516, SE-75120 Uppsala (Sweden)
2013-06-21
A cylindrical plastic scintillator cell, used for radioxenon monitoring within the verification regime of the Comprehensive Nuclear-Test-Ban Treaty, has been coated with 425 nm of Al{sub 2}O{sub 3} using low-temperature Atomic Layer Deposition, and its performance has been evaluated. The motivation is to reduce the memory effect caused by radioxenon diffusing into the plastic scintillator material during measurements, which results in an elevated detection limit. Measurements with the coated detector show both an energy resolution and an efficiency comparable to uncoated detectors, and a memory-effect reduction by a factor of 1000. Provided that the quality of the detector is maintained over a longer period of time, Al{sub 2}O{sub 3} coatings are believed to be a viable solution to the memory-effect problem in question.
20. Track segments in hadronic showers in a highly granular scintillator-steel hadron calorimeter
CERN Document Server
Adloff, C.; Chefdeville, M.; Drancourt, C.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Koletsou, I.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Schlereth, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S.T.; Sosebee, M.; White, A.P.; Yu, J.; Eigen, G.; Mikami, Y.; Watson, N.K.; Mavromanolakis, G.; Thomson, M.A.; Ward, D.R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Apostolakis, J.; Dannheim, D.; Dotti, A.; Folger, G.; Ivantchenko, V.; Klempt, W.; Kraaij, E.van der; Lucaci-Timoce, A.-I; Ribon, A.; Schlatter, D.; Uzhinskiy, V.; Cârloganu, C.; Gay, P.; Manen, S.; Royer, L.; Tytgat, M.; Zaganidis, N.; Blazey, G.C.; Dyshkant, A.; Lima, J.G.R.; Zutshi, V.; Hostachy, J.-Y; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Göttlicher, P.; Günter, C.; Hartbrich, O.; Hermberg, B.; Karstensen, S.; Krivan, F.; Krüger, K.; Lu, S.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Feege, N.; Garutti, E.; Laurien, S.; Marchesini, I.; Matysek, M.; Ramilli, M.; Briggl, K.; Eckert, P.; Harion, T.; Schultz-Coulon, H.-Ch; Shen, W.; Stamen, R.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G.W.; Kawagoe, K.; Sudo, Y.; Yoshioka, T.; Dauncey, P.D.; Magnan, A.-M; Bartsch, V.; Wing, M.; Salvatore, F.; Gil, E.Cortina; Mannai, S.; Baulieu, G.; Calabria, P.; Caponetto, L.; Combaret, C.; Negra, R.Della; Grenier, G.; Han, R.; Ianigro, J-C; Kieffer, R.; Laktineh, I.; Lumb, N.; Mathez, H.; Mirabito, L.; Petrukhin, A.; Steen, A.; Tromeur, W.; Donckt, M.Vander; Zoccarato, Y.; Alamillo, E.Calvo; Fouz, M.-C; Puerta-Pelayo, J.; Corriveau, F.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Popov, V.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Tikhomirov, V.; Kiesling, C.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Amjad, M.S.; Bonis, J.; Callier, S.; Lorenzo, 
S.Conforti di; Cornebise, P.; Doublet, Ph; Dulucq, F.; Fleury, J.; Frisson, T.; der Kolk, N.van; Li, H.; Martin-Chassard, G.; Richard, F.; Taille, Ch de la; Pöschl, R.; Raux, L.; Rouëné, J.; Seguin-Moreau, N.; Anduze, M.; Balagura, V.; Boudry, V.; Brient, J-C; Cornat, R.; Frotin, M.; Gastaldi, F.; Guliyev, E.; Haddad, Y.; Magniette, F.; Musat, G.; Ruan, M.; Tran, T.H.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Kotera, K.; Takeshita, T.; Uozumi, S.; Jeans, D.; Götze, M.; Sauer, J.; Weber, S.; Zeitnitz, C.
2013-01-01
We investigate the three dimensional substructure of hadronic showers in the CALICE scintillator-steel hadronic calorimeter. The high granularity of the detector is used to find track segments of minimum ionising particles within hadronic showers, providing sensitivity to the spatial structure and the details of secondary particle production in hadronic cascades. The multiplicity, length and angular distribution of identified track segments are compared to GEANT4 simulations with several different shower models. Track segments also provide the possibility for in-situ calibration of highly granular calorimeters.
1. Development of high performance and very low radioactivity scintillation counters for the SuperNEMO calorimeter
International Nuclear Information System (INIS)
Chauveau, E.
2010-11-01
SuperNEMO is a next-generation double beta decay experiment which will extend the successful 'tracko-calo' technique employed in NEMO 3. The main characteristic of this type of detector is its ability not only to identify double beta decays, but also to measure its own background components. The project aims to reach a sensitivity of up to 10^26 years on the half-life of 82Se. One of the main challenges of the research and development is to achieve an unprecedented energy resolution for the electron calorimeter, better than 8% FWHM at 1 MeV. This thesis contributes to improving the scintillator and photomultiplier performance and to reducing their radioactivity, including in particular the development of a new photomultiplier in collaboration with Photonis. (author)
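The quoted target of 8% FWHM at 1 MeV can be put in context with the standard photostatistics scaling, FWHM(E) ∝ 1/√E; this scaling assumption is ours, not a claim from the thesis abstract:

```python
import math

def fwhm_percent(e_mev, fwhm_at_1mev=8.0):
    """Relative FWHM at energy E, assuming the resolution is dominated
    by photoelectron statistics and therefore scales as 1/sqrt(E)."""
    return fwhm_at_1mev / math.sqrt(e_mev)

# At the ~3 MeV summed-electron energy of the 82Se double-beta
# transition, an 8%-at-1-MeV counter would resolve roughly
# 8 / sqrt(3) ~ 4.6% FWHM.
at_q_value = fwhm_percent(3.0)
```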
2. A tilted fiber-optic plate coupled CCD detector for high resolution neutron imaging
Energy Technology Data Exchange (ETDEWEB)
Kim, Jongyul; Cho, Gyuseong [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Jongyul; Hwy, Limchang; Kim, Taejoo; Lee, Kyehong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Seungwook [Pusan National Univ., Pusan (Korea, Republic of)
2013-05-15
Neutron imaging has been used for fuel cell studies, lithium-ion battery studies, and many other scientific applications. High-quality neutron imaging is demanded for more detailed studies, and spatial resolution must be considered to obtain it; there have accordingly been many efforts to improve spatial resolution. One of these efforts used a tilted scintillator geometry with a lens-coupled CCD detector to improve the spatial resolution of a neutron imaging system in one dimension, and the increased resolution was applied to fuel cell studies. However, a lens-coupled CCD detector has lower sensitivity than a fiber-optic plate coupled CCD detector due to light loss. In this research, a tilted detector using a fiber-optic plate coupled CCD detector was developed to improve both resolution and sensitivity. In addition, a tilted detector can prevent the image sensor from suffering direct radiation damage.
3. Scintillating fibres
International Nuclear Information System (INIS)
Nahnhauer, R.
1990-01-01
In the search for new detector techniques, scintillating fibre technology has already gained a firm foothold, and is a strong contender for the extreme experimental conditions of tomorrow's machines. Organized by a group from the Institute of High Energy Physics, Berlin-Zeuthen, a workshop held from 3-5 September in the nearby village of Blossin brought together experts from East and West, and from science and industry
4. Scintillating fibres
Energy Technology Data Exchange (ETDEWEB)
Nahnhauer, R. [IHEP Zeuthen (Germany)
1990-11-15
In the search for new detector techniques, scintillating fibre technology has already gained a firm foothold, and is a strong contender for the extreme experimental conditions of tomorrow's machines. Organized by a group from the Institute of High Energy Physics, Berlin-Zeuthen, a workshop held from 3-5 September in the nearby village of Blossin brought together experts from East and West, and from science and industry.
5. Optimization of the scintillation parameters of the lead tungstate crystals for their application in high precision electromagnetic calorimetry; Optimisation des parametres de scintillation des cristaux de tungstate de plomb pour leur application dans la calorimetrie electromagnetique de haute precision
Energy Technology Data Exchange (ETDEWEB)
Drobychev, G
2000-04-12
In the frame of this dissertation, the scintillation properties of lead tungstate (PWO) crystals and the possibilities of their use were studied in view of their application to electromagnetic calorimetry in the extreme radiation environment of new colliders. The results of this work can be summarized as follows. 1. A model of the scintillation origin in lead tungstate crystals was developed, including the processes that influence the crystals' radiation hardness and the presence of slow components in the scintillation. 2. An analysis of the influence of changes in the PWO scintillation properties on the parameters of an electromagnetic calorimeter was performed. 3. Methods were studied for light collection from large scintillation elements of complex shape, made of a birefringent scintillation crystal with a high refractive index and low light yield, in the case where the signal is registered by a photodetector whose sensitive surface is small compared with the output face of the scintillator. 4. Physical principles were developed for the certification of scintillation crystals during mass production, in view of their installation in an electromagnetic calorimeter. Correlations between the results of measurements of PWO crystal parameters by different methods were found. (author)
6. Design and image-quality performance of high resolution CMOS-based X-ray imaging detectors for digital mammography
Science.gov (United States)
Cha, B. K.; Kim, J. Y.; Kim, Y. J.; Yun, S.; Cho, G.; Kim, H. K.; Seo, C.-W.; Jeon, S.; Huh, Y.
2012-04-01
In digital X-ray imaging systems, X-ray imaging detectors based on scintillating screens coupled to electronic devices such as charge-coupled devices (CCDs), thin-film transistor (TFT) arrays and complementary metal-oxide-semiconductor (CMOS) flat-panel imagers have been introduced for general radiography, dental, mammography and non-destructive testing (NDT) applications. Recently, large-area CMOS active-pixel sensors (APS) in combination with scintillation films have been widely used in a variety of digital X-ray imaging applications. We employed a scintillator-based CMOS APS image sensor for high-resolution mammography. In this work, both powder-type Gd2O2S:Tb and columnar-structured CsI:Tl scintillation screens of various thicknesses were fabricated and used as materials to convert X-rays into visible light. These scintillating screens were directly coupled to a CMOS flat-panel imager with a 25 × 50 mm2 active area and a 48 μm pixel pitch for high-spatial-resolution acquisition. We used a W/Al mammographic X-ray source at 30 kVp. The imaging characteristics of the X-ray detector were measured and analyzed in terms of linearity with incident X-ray dose, modulation transfer function (MTF), noise-power spectrum (NPS) and detective quantum efficiency (DQE).
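The measured quantities named at the end of the abstract combine into the DQE through a standard relation; a minimal sketch (the array-based inputs and the normalization convention are our assumptions, not details from the paper):

```python
def dqe(mtf, nnps, fluence):
    """Frequency-dependent detective quantum efficiency:

        DQE(f) = MTF(f)^2 / (q * NNPS(f))

    where q is the incident photon fluence (photons per unit area) and
    NNPS is the noise-power spectrum normalized by the squared mean
    signal. `mtf` and `nnps` are sampled at the same frequencies.
    """
    return [m ** 2 / (fluence * n) for m, n in zip(mtf, nnps)]

# An ideal photon-counting detector (MTF = 1, NNPS = 1/q at all
# frequencies) gives DQE = 1; blur and excess noise both push it down.
ideal = dqe([1.0, 1.0], [1e-5, 1e-5], 1e5)
blurred = dqe([0.5], [2e-5], 1e5)
```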
7. Tests of a Fast Plastic Scintillator for High-Precision Half-Life Measurements
Science.gov (United States)
Laffoley, A. T.; Dunlop, R.; Finlay, P.; Leach, K. G.; Michetti-Wilson, J.; Rand, E. T.; Svensson, C. E.; Grinyer, G. F.; Thomas, J. C.; Ball, G.; Garnsworthy, A. B.; Hackman, G.; Orce, J. N.; Triambak, S.; Williams, S. J.; Andreoiu, C.; Cross, D.
2013-03-01
A fast plastic scintillator detector is evaluated for possible use in an ongoing program of high-precision half-life measurements of short-lived β emitters. Using data taken at TRIUMF's Isotope Separator and Accelerator Facility with a radioactive 26Na beam, a detailed investigation of potential systematic effects with this new detector setup is being performed. The technique will then be applied to other β-decay half-life measurements, including the superallowed Fermi β emitters 10C and 14O, and the T = 1/2 decay of 15O.
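In the idealized noise-free case, the half-life extraction behind such measurements reduces to fitting an exponential decay curve; a minimal sketch (the fitting choice and the toy data are ours, and a real high-precision analysis would use maximum likelihood with dead-time corrections):

```python
import math

def fit_half_life(times, counts):
    """Least-squares straight-line fit to log(counts) versus time.

    For a single exponential, log N(t) = log N0 - (ln 2 / T_half) * t,
    so the fitted slope gives the half-life directly.
    """
    logs = [math.log(c) for c in counts]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -math.log(2.0) / slope

# Noise-free toy decay curve with a 26Na-like half-life of about 1.07 s.
t_half = 1.07
times = [0.1 * i for i in range(50)]
counts = [1e6 * 0.5 ** (t / t_half) for t in times]
```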
8. Planning for shallow high resolution seismic surveys
CSIR Research Space (South Africa)
Fourie, CJS
2008-11-01
Full Text Available of the input wave. This information can be used in conjunction with this spreadsheet to aid the geophysicist in designing shallow high resolution seismic surveys to achieve maximum resolution and penetration. This Excel spreadsheet is available free from...
9. High resolution phoswich gamma-ray imager utilizing monolithic MPPC arrays with submillimeter pixelized crystals
International Nuclear Information System (INIS)
Kato, T; Kataoka, J; Nakamori, T; Kishimoto, A; Yamamoto, S; Sato, K; Ishikawa, Y; Yamamura, K; Kawabata, N; Ikeda, H; Kamada, K
2013-01-01
We report the development of a high-spatial-resolution tweezers-type coincidence gamma-ray camera for medical imaging. The camera consists of large-area monolithic Multi-Pixel Photon Counters (MPPCs) and submillimeter pixelized scintillator matrices. The MPPC array has 4 × 4 channels in a three-side-buttable, very compact package. For a typical operational gain of 7.5 × 10^5 at +20 °C, the gain fluctuation over the entire MPPC device is only ±5.6%, and dark count rates (as measured at the 1 p.e. level) amount to ≤400 kcps per channel. We selected Ce-doped (Lu,Y)2(SiO4)O (Ce:LYSO) and a brand-new scintillator, Ce-doped Gd3Al2Ga3O12 (Ce:GAGG), due to their high light yield and density. To improve the spatial resolution, these scintillators were fabricated into 15 × 15 matrices of 0.5 × 0.5 mm^2 pixels. The Ce:LYSO and Ce:GAGG scintillator matrices were assembled into phosphor-sandwich (phoswich) detectors, and then coupled to the MPPC array along with a 1 mm thick acrylic light guide and summing operational amplifiers that compile the signals into four position-encoded analog outputs used for signal readout. A spatial resolution of 1.1 mm was achieved with the coincidence imaging system using a 22Na point source. These results suggest that such gamma-ray imagers offer excellent potential for high-spatial-resolution medical imaging.
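Position decoding from four position-encoded analog outputs is conventionally done with an Anger-type centroid; the corner-to-axis mapping below is one common convention and not the specific wiring of the paper's resistive readout network:

```python
def anger_position(a, b, c, d):
    """Centroid (Anger-logic) decoding from four position-encoded
    charge outputs of a resistive charge-division network.

    Normalized differences of the corner sums give the interaction
    position in detector units between -1 and +1 on each axis.
    """
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total
    y = ((a + b) - (c + d)) / total
    return x, y

# A flash centered on the array gives (0, 0); more charge on the
# "right" pair of outputs (b, d) shifts x positive.
center = anger_position(1.0, 1.0, 1.0, 1.0)
```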
10. Development of high-resolution x-ray CT system using parallel beam geometry
Energy Technology Data Exchange (ETDEWEB)
Yoneyama, Akio, E-mail: akio.yoneyama.bu@hitachi.com; Baba, Rika [Central Research Laboratory, Hitachi Ltd., Hatoyama, Saitama (Japan); Hyodo, Kazuyuki [Institute of Materials Science, High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Takeda, Tohoru [School of Allied Health Sciences, Kitasato University, Sagamihara, Kanagawa (Japan); Nakano, Haruhisa; Maki, Koutaro [Department of Orthodontics, School of Dentistry Showa University, Ota-ku, Tokyo (Japan); Sumitani, Kazushi; Hirai, Yasuharu [Kyushu Synchrotron Light Research Center, Tosu, Saga (Japan)
2016-01-28
For fine three-dimensional observations of large biomedical and organic material samples, we developed a high-resolution X-ray CT system. The system consists of a sample positioner, a 5-μm scintillator, microscopy lenses, and a water-cooled sCMOS detector. Parallel beam geometry was adopted to attain a field of view of a few mm square. A fine three-dimensional image of birch branch was obtained using a 9-keV X-ray at BL16XU of SPring-8 in Japan. The spatial resolution estimated from the line profile of a sectional image was about 3 μm.
11. Plastic scintillators with β-diketone Eu complexes for high ionizing radiation detection
Energy Technology Data Exchange (ETDEWEB)
Adadurov, A.F., E-mail: adadurov@isma.kharkov.ua [Institute for Scintillating Materials, NAN of Ukraine, Lenin Avenue 60, 61001 Kharkov (Ukraine); Zhmurin, P.N.; Lebedev, V.N.; Kovalenko, V.N. [Institute for Scintillating Materials, NAN of Ukraine, Lenin Avenue 60, 61001 Kharkov (Ukraine)
2011-10-15
Luminescent and scintillation properties of polystyrene-based plastic scintillators with β-diketone Eu complexes are investigated. A scintillator with a dibenzoylmethane Eu complex containing two phenyl groups demonstrates the maximum scintillating efficiency. It is shown that plastic scintillator efficiency is dramatically decreased if the β-diketone derivatives contain no phenyl groups as substituents. This fact can be explained by an exciplex mechanism of energy transfer from the matrix to the Eu complex. - Highlights: > Fluorescent properties of polystyrene scintillators with β-diketone complexes of Eu were studied. > Scintillating efficiency increases with the number of phenyl groups in the Eu complex. > This is related to an exciplex mechanism of energy transfer from the polymer matrix to the Eu complex.
12. Plastic scintillators with β-diketone Eu complexes for high ionizing radiation detection
International Nuclear Information System (INIS)
2011-01-01
Luminescent and scintillation properties of polystyrene-based plastic scintillators with β-diketone Eu complexes are investigated. A scintillator with a dibenzoylmethane Eu complex containing two phenyl groups demonstrates the maximum scintillating efficiency. It is shown that plastic scintillator efficiency is dramatically decreased if the β-diketone derivatives contain no phenyl groups as substituents. This fact can be explained by an exciplex mechanism of energy transfer from the matrix to the Eu complex. - Highlights: → Fluorescent properties of polystyrene scintillators with β-diketone complexes of Eu were studied. → Scintillating efficiency increases with the number of phenyl groups in the Eu complex. → This is related to an exciplex mechanism of energy transfer from the polymer matrix to the Eu complex.
13. Survey meter using novel inorganic scintillators
International Nuclear Information System (INIS)
Yoshikawa, Akira; Fukuda, Kentaro; Kawaguchi, Noriaki; Kamada, Kei; Fujimoto, Yutaka; Yokota, Yuui; Kurosawa, Shunsuke; Yanagida, Takayuki
2012-01-01
Single crystal scintillator materials are widely used for detection of high-energy photons and particles. There is continuous demand for new scintillator materials with higher performance because of the increasing number of medical, industrial, security and other applications. This article presents the recent development of three novel inorganic scintillators: Pr-doped Lu3Al5O12 (Pr:LuAG), Ce-doped Gd3(Al,Ga)5O12 (Ce:GAGG) and Ce- or Eu-doped 6LiCaAlF6 (Ce:LiCAF, Eu:LiCAF). Pr:LuAG shows very interesting scintillation properties including a very fast decay time, high light yield and excellent energy resolution. Taking advantage of these properties, a positron emission mammography (PEM) system equipped with Pr:LuAG was developed. Ce:GAGG shows a very high light yield, much higher than that of Ce:LYSO, and a survey meter was developed using this scintillator. Ce:LiCAF and Eu:LiCAF were developed for neutron detection. Their advantages and disadvantages are discussed in comparison with halide scintillators. Eu-doped LiCAF showed a light yield five times higher than that of existing Li-glass. It is expected to be used as an alternative to 3He. (author)
14. Performance comparison of scintillators for alpha particle detectors
Energy Technology Data Exchange (ETDEWEB)
Morishita, Yuki [Graduate School of Medicine, Nagoya University, 1-1-20 Daiko-Minami, Higashi-ku, Nagoya, Aichi 461-8673 (Japan); Japan Atomic Energy Agency, Muramatsu 4-33, Tokai-mura, Ibaraki 319-1194 (Japan); Yamamoto, Seiichi [Graduate School of Medicine, Nagoya University, 1-1-20 Daiko-Minami, Higashi-ku, Nagoya, Aichi 461-8673 (Japan); Izaki, Kenji [Japan Atomic Energy Agency, Muramatsu 4-33, Tokai-mura, Ibaraki 319-1194 (Japan); Kaneko, Junichi H.; Toui, Kohei; Tsubota, Youichi; Higuchi, Mikio [Graduate School of Engineering, Hokkaido University, Kita 13, Nishi 8, Kita-ku, Sapporo, Hokkaido 060-8628 (Japan)
2014-11-11
Scintillation detectors for alpha particles are often used in nuclear fuel facilities. Alpha particle detectors have also become important in the field of radionuclide therapy using alpha emitters. ZnS(Ag) is the scintillator most often used for alpha particle detectors because its light output is high. However, the energy resolution of ZnS(Ag)-based scintillation detectors is poor because they are not transparent. A new ceramic sample, the cerium-doped Gd2Si2O7 (GPS) scintillator, has been tested as an alpha particle detector and its performance has been compared with that of three different scintillating materials: ZnS(Ag), GAGG and a standard plastic scintillator. The different scintillating materials were coupled to two different photodetectors, namely a photomultiplier tube (PMT) and a silicon photomultiplier (Si-PM), and the performance of each detection system was compared. Promising energy resolution results (10% with the PMT and 14% with the Si-PM) were obtained for the GPS and GAGG samples. Considering the quantum efficiencies of the photodetectors under test and their relation to the emission wavelengths of the different scintillators, the best results were achieved by coupling the GPS with the PMT and the GAGG with the Si-PM.
15. Advanced Multilayer Composite Heavy-Oxide Scintillator Detectors for High Efficiency Fast Neutron Detection
Science.gov (United States)
Ryzhikov, Vladimir D.; Naydenov, Sergei V.; Pochet, Thierry; Onyshchenko, Gennadiy M.; Piven, Leonid A.; Smith, Craig F.
2018-01-01
We have developed and evaluated a new approach to fast neutron and neutron-gamma detection based on large-area multilayer composite heterogeneous detection media consisting of dispersed granules of small-crystalline scintillators contained in a transparent organic (plastic) matrix. Layers of the composite material are alternated with layers of transparent plastic scintillator material serving as light guides. The resulting detection medium - designated as ZEBRA - serves as both an active neutron converter and a detection scintillator which is designed to detect both neutrons and gamma-quanta. The composite layers of the ZEBRA detector consist of small heavy-oxide scintillators in the form of granules of crystalline BGO, GSO, ZWO, PWO and other materials. We have produced and tested the ZEBRA detector of sizes 100x100x41 mm and greater, and determined that they have very high efficiency of fast neutron detection (up to 49% or greater), comparable to that which can be achieved by large sized heavy-oxide single crystals of about Ø40x80 cm3 volume. We have also studied the sensitivity variation to fast neutron detection by using different types of multilayer ZEBRA detectors of 100 cm2 surface area and 41 mm thickness (with a detector weight of about 1 kg) and found it to be comparable to the sensitivity of a 3He-detector representing a total cross-section of about 2000 cm2 (with a weight of detector, including its plastic moderator, of about 120 kg). The measured count rate in response to a fast neutron source of 252Cf at 2 m for the ZEBRA-GSO detector of size 100x100x41 mm3 was 2.84 cps/ng, and this count rate can be doubled by increasing the detector height (and area) up to 200x100 mm2. In summary, the ZEBRA detectors represent a new type of high efficiency and low cost solid-state neutron detector that can be used for stationary neutron/gamma portals. They may represent an interesting alternative to expensive, bulky gas counters based on 3He or 10B neutron
16. High resolution NMR in zeolites
Energy Technology Data Exchange (ETDEWEB)
Diaz, Anix [INTEVEP, Filial de Petroleos de Venezuela, SA, Caracas (Venezuela). Dept. de Analisis y Evalucion
1992-12-31
In this work 29Si and 27Al NMR spectroscopy was used to study various types of zeolites. The corresponding spectra were used to measure the Si/Al ratios, to follow chemical modifications induced by acid and hydrothermal treatments, to determine non-equivalent crystallographic sites in highly dealuminated mordenites, and to detect modifications of faujasites due to the insertion of titanium atoms in the lattice. (author) 7 refs., 7 figs., 2 tabs.
17. High resolution NMR in zeolites
International Nuclear Information System (INIS)
Diaz, Anix
1991-01-01
In this work 29Si and 27Al NMR spectroscopy was used to study various types of zeolites. The corresponding spectra were used to measure the Si/Al ratios, to follow chemical modifications induced by acid and hydrothermal treatments, to determine non-equivalent crystallographic sites in highly dealuminated mordenites, and to detect modifications of faujasites due to the insertion of titanium atoms in the lattice. (author)
18. High resolution sequence stratigraphy in China
International Nuclear Information System (INIS)
Zhang Shangfeng; Zhang Changmin; Yin Yanshi; Yin Taiju
2008-01-01
Since high resolution sequence stratigraphy was introduced into China by DENG Hong-wen in 1995, it has passed through two development stages in China: a beginning stage of theoretical research, followed by a stage of theoretical development and application; a stage of theoretical maturity and wide application is now beginning. Practice has proved that high resolution sequence stratigraphy plays an increasingly important role in the exploration and development of oil and gas in Chinese continental oil-bearing basins, and the research field has spread to the exploration of coal, uranium and other stratabound deposits. However, the theory of high resolution sequence stratigraphy still has some shortcomings and should be improved in many aspects. The authors point out that high resolution sequence stratigraphy should be quantitatively characterized and modeled using computer techniques. (authors)
19. High resolution CT of the chest
Energy Technology Data Exchange (ETDEWEB)
Barneveld Binkhuysen, F H [Eemland Hospital (Netherlands), Dept. of Radiology
1996-12-31
Compared to conventional CT, high resolution CT (HRCT) shows several extra anatomical structures which might affect both diagnosis and therapy. The extra anatomical structures are discussed briefly in this article. (18 refs.).
20. High-resolution spectrometer at PEP
International Nuclear Information System (INIS)
Weiss, J.M.; HRS Collaboration.
1982-01-01
A description is presented of the High Resolution Spectrometer experiment (PEP-12) now running at PEP. The advanced capabilities of the detector are demonstrated with first physics results expected in the coming months
1. Ultra-high resolution coded wavefront sensor
KAUST Repository
Wang, Congli
2017-06-08
Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.
2. Structure of high-resolution NMR spectra
CERN Document Server
Corio, PL
2012-01-01
Structure of High-Resolution NMR Spectra provides the principles, theories, and mathematical and physical concepts of high-resolution nuclear magnetic resonance spectra. The book presents the elementary theory of magnetic resonance; the quantum mechanical theory of angular momentum; the general theory of steady state spectra; and multiple quantum transitions, double resonance and spin echo experiments. Physicists, chemists, and researchers will find the book a valuable reference text.
3. Scintillator structures
International Nuclear Information System (INIS)
Cusano, D.A.; Prener, J.S.
1978-01-01
Distributed phosphor scintillator structures providing superior optical coupling to photoelectrically responsive devices together with methods for fabricating said scintillator structures are disclosed. In accordance with one embodiment of the invention relating to scintillator structures, the phosphor is distributed in a 'layered' fashion with certain layers being optically transparent so that the visible wavelength output of the scintillator is better directed to detecting devices. In accordance with another embodiment of the invention relating to scintillator structures, the phosphor is distributed throughout a transparent matrix in a continuous fashion whereby emitted light is more readily transmitted to a photodetector. Methods for fabricating said distributed phosphor scintillator structures are also disclosed. (Auth.)
4. Theoretical investigations on the high light yield of the LuI3:Ce scintillator
International Nuclear Information System (INIS)
Vasil'ev, A.N.; Iskandarova, I.M.; Scherbinin, A.V.; Markov, I.A.; Bagatur'yants, A.A.; Potapkin, B.V.; Srivastava, A.M.; Vartuli, J.S.; Duclos, S.J.
2009-01-01
The extremely high scintillation efficiency of lutetium iodide doped with cerium is explained as the result of at least three factors controlling the energy transfer from the host matrix to the activator. We propose and theoretically validate the possibility of a new channel of energy transfer to excitons and directly to cerium, namely an Auger process in which the Lu 4f hole relaxes to a valence band hole with the simultaneous creation of an additional exciton or excitation of cerium. This process should be efficient in LuI3, and inefficient in LuCl3. To justify this channel, we perform calculations of the density of states using a periodic plane-wave density functional approach. The second factor is the increase of the efficiency of valence hole capture by cerium in the row LuCl3-LuBr3-LuI3. The third is the increase of the efficiency of energy transfer from self-trapped excitons to cerium ions in the same row. The latter two factors are verified by cluster ab initio calculations. We estimate both the relaxation of these excitations and the barriers for the diffusion of self-trapped holes (STH) and self-trapped excitons (STE). The performed estimations theoretically justify the high yield of the LuI3:Ce3+ scintillator.
5. Real-time volumetric scintillation dosimetry
International Nuclear Information System (INIS)
Beddar, S
2015-01-01
The goal of this brief review is to survey the current status of real-time 3D scintillation dosimetry and what has been done so far in this area. The basic concept is to use a large volume of a scintillator material (liquid or solid) to measure or image the dose distributions from external radiation therapy (RT) beams in three dimensions. In this configuration, the scintillator material fulfills the dual role of being the detector and the phantom material in which the measurements are being performed. In this case, dose perturbations caused by the introduction of a detector within a phantom are not at issue. All the detector configurations that have been conceived to date use a charge-coupled device (CCD) camera to measure the light produced within the scintillator. In order to accurately measure the scintillation light, one must correct for various optical artefacts that arise as the light propagates from the scintillating centers through the optical chain to the CCD chip. Quenching, defined in its simplest form as a nonlinear response to high-linear energy transfer (LET) charged particles, is one of the disadvantages when such systems are used to measure the absorbed dose from high-LET particles such as protons. However, correction methods that restore the linear dose response through the whole proton range have been proven to be effective for both liquid and plastic scintillators. Volumetric scintillation dosimetry has the potential to provide fast, high-resolution and accurate 3D imaging of RT dose distributions. Further research is warranted to optimize the necessary image reconstruction methods and optical corrections needed to achieve its full potential
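The review defines quenching as a nonlinear response to high-LET particles. A common way to model it, named here as a plain substitution since the review does not prescribe a specific formula, is Birks' law, dL/dx = S·(dE/dx)/(1 + kB·dE/dx). The kB value and the stopping-power profiles below are illustrative assumptions, not measured data.

```python
def birks_light_yield(de_dx_profile, dx, S=1.0, kB=0.0126):
    """Integrate Birks' law along a particle track.
    de_dx_profile: stopping power dE/dx (MeV/cm) sampled every dx (cm).
    kB: Birks constant (cm/MeV); 0.0126 is a commonly quoted
    plastic-scintillator value, used here purely for illustration."""
    light = 0.0
    for de_dx in de_dx_profile:
        light += S * de_dx / (1.0 + kB * de_dx) * dx
    return light

# The same 2 MeV deposited at low LET (long, gentle track) produces more
# light than at high LET (short, dense track): that deficit is quenching.
low_let = birks_light_yield([2.0] * 100, 0.01)    # 2 MeV over 1 cm
high_let = birks_light_yield([200.0] * 1, 0.01)   # 2 MeV over 0.01 cm
print(low_let, high_let)
```

A dose-correction scheme of the kind the review mentions would invert this relation along the known proton depth-dose curve to restore a linear response.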
6. Scintillator performance at low dose rates and low temperatures for the CMS High Granularity Calorimeter for HL-LHC
CERN Document Server
Ricci-Tam, Francesca
2018-01-01
The High Luminosity LHC (HL-LHC) will integrate 10 times more luminosity than the LHC, posing significant challenges for radiation tolerance, especially for forward calorimetry, and highlighting the issue for future colliders. As part of its HL-LHC upgrade program, the CMS collaboration is designing a High Granularity Calorimeter to replace the existing endcap calorimeters. The upgrade includes both electromagnetic and hadronic components, with the latter using a mixture of silicon sensors (in the highest radiation regions at high pseudorapidity) and scintillator as its active components. The scintillator will nevertheless receive large doses accumulated at low dose rates, and will have to operate at low temperature, around -30 degrees Celsius. We discuss measurements of scintillator radiation tolerance, from in-situ measurements from the current CMS endcap calorimeters, and from measurements at low temperature and low dose rate at gamma sources in the laboratory.
7. Ultra-high resolution protein crystallography
International Nuclear Information System (INIS)
Takeda, Kazuki; Hirano, Yu; Miki, Kunio
2010-01-01
Many protein structures have been determined by X-ray crystallography and deposited with the Protein Data Bank. However, these structures at usual resolution (1.5 Å < d < 3.0 Å) are insufficient in their precision and quantity for elucidating the molecular mechanism of protein functions directly from structural information. Several studies at ultra-high resolution (d < 0.8 Å) have been performed with synchrotron radiation in the last decade. The highest resolution of the protein crystals was achieved at 0.54 Å resolution for a small protein, crambin. In such high resolution crystals, almost all of the hydrogen atoms of proteins and some hydrogen atoms of bound water molecules are experimentally observed. In addition, outer-shell electrons of proteins can be analyzed by the multipole refinement procedure. However, the influence of X-rays should be precisely estimated in order to derive meaningful information from the crystallographic results. In this review, we summarize refinement procedures, current status and perspectives for ultra-high resolution protein crystallography. (author)
8. Scintillation scanner
International Nuclear Information System (INIS)
Mehrbrodt, A.W.; Mog, W.F.; Brunnett, C.J.
1977-01-01
A scintillation scanner having a visual image producing means coupled through a lost motion connection to the boom which supports the scintillation detector is described. The lost motion connection is adjustable to compensate for such delays as may occur between sensing and recording scintillations. 13 claims, 5 figures
9. A high granularity scintillator hadronic — calorimeter with SiPM readout for a linear collider detector
Czech Academy of Sciences Publication Activity Database
Andreev, V.; Balagura, V.; Bobchenko, B.; Cvach, Jaroslav; Janata, Milan; Kacl, Ivan; Němeček, Stanislav; Polák, Ivo; Valkár, Š.; Weichert, Jan; Zálešák, Jaroslav
2005-01-01
Roč. 540, - (2005), s. 368-380 ISSN 0168-9002 R&D Projects: GA MŠk(CZ) LN00A006 Institutional research plan: CEZ:AV0Z10100502 Keywords : linear collider detector * analog calorimeter * semiconductor detectors * scintillator * high granularity Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.224, year: 2005
10. Scintillator material. Szintillatormaterial
Energy Technology Data Exchange (ETDEWEB)
Siegmund, M; Bendig, J; Regenstein, W
1987-11-25
A scintillator material for the detection and quantitative determination of ionizing radiation is described, consisting of an acridone dissolved in a fluid or solid medium. Solvent mixtures with at least one protogenic component, or polymers and copolymers, are used. The scintillator material is distinguished by excellent stability at high energy doses.
11. High-pressure plastic scintillation detector for measuring radiogenic gases in flow systems
CERN Document Server
Schell, W R; Yoon, S R; Tobin, M J
1999-01-01
Radioactive gases are emitted into the atmosphere from nuclear electric power and nuclear fuel reprocessing plants, from hospitals discarding xenon used in diagnostic medicine, as well as from nuclear weapons tests. A high-pressure plastic scintillation detector was constructed to measure atmospheric levels of such radioactive gases by detecting the beta and internal conversion (IC) electron decays. Operational tests and calibrations were made that permit integration of the flow detectors into a portable Gas Analysis, Separation and Purification system (GASP). The equipment developed can be used for measuring fission gases released from nuclear reactor sources and/or as part of monitoring equipment for enforcing the Comprehensive Test Ban Treaty. The detector is being used routinely for in-line gas separation efficiency measurements, at the elevated operational pressures used for the high-pressure swing analysis system (2070 kPa) and at flow rates of 5-15 l/min. This paper presents the design features, opera...
12. Development of a scintillating-fibre detector for fast topological triggers in high-luminosity particle physics experiments
CERN Document Server
Agoritsas, V; Bing, O; Bravar, A; Cardini, A; Dreossi, D; Drevenak, R; Finger, M H; Flaminio, Vincenzo; Di Girolamo, B; Gorin, A; Kulikov, A; Kuroda, K; Manuilov, I V; Okada, K; Önel, Y M; Penzo, Aldo L; Rapin, D; Rappazzo, G F; Riazantsev, A V; Rykalin, V I; Slunecka, M; Takeutchi, F; Trusov, S V; Yoshida, T
1998-01-01
In the framework of the RD-17 project at CERN, extensive work is in progress on the development of scintillating-fibre detectors using position-sensitive photomultipliers. With 0.5 mm diameter fibres, a spatial resolution of about 125 μm was obtained with a detection efficiency higher than 95%. The time resolution of the detector is about 600 ps, and the track position is properly digitized in real time in less than 10 ns by a peak-sensing circuit. A simulation, based on experimental data, was also performed to compare different types of front-end electronics.
13. A high resolution portable spectroscopy system
International Nuclear Information System (INIS)
Kulkarni, C.P.; Vaidya, P.P.; Paulson, M.; Bhatnagar, P.V.; Pande, S.S.; Padmini, S.
2003-01-01
Full text: This paper describes the system details of a High Resolution Portable Spectroscopy System (HRPSS) developed at the Electronics Division, BARC. The system can be used for laboratory-class, high-resolution nuclear spectroscopy applications. The HRPSS consists of a specially designed compact NIM bin with built-in power supplies, accommodating a low-power, high-resolution MCA and an on-board embedded computer for spectrum building and communication. A NIM-based spectroscopy amplifier and an HV module for detector bias are integrated (plug-in) in the bin. The system communicates with a host PC via a serial link. Along with a laptop PC and a portable HP-Ge detector, the HRPSS offers laboratory-class performance for portable applications
14. Determination of Np, Pu and Am in high level radioactive waste with extraction-liquid scintillation counting
International Nuclear Information System (INIS)
Yang Dazhu; Zhu Yongjun; Jiao Rongzhou
1994-01-01
A new method for the determination of the transuranium elements Np, Pu and Am with extraction-liquid scintillation counting has been studied systematically. Procedures for the separation of Pu and Am by HDEHP-TRPO extraction and for the separation of Np by TTA-TiOA extraction have been developed, by which the recovery of Np, Pu and Am is 97%, 99% and 99%, respectively, and the decontamination factors for the major fission products (90Sr, 137Cs, etc.) are 10^4-10^6. A pulse shape discrimination (PSD) technique has been introduced to liquid scintillation counting, by which the counting efficiency for α-activity is >99% and the rejection of β-counts is >99.95%. This new method, combining extraction and pulse shape discrimination with the liquid scintillation technique, has been successfully applied to the assay of Np, Pu and Am in high level radioactive waste. (author) 7 refs.; 7 figs.; 4 tabs
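Pulse shape discrimination of the kind used above separates α from β counts by exploiting the longer decay tail of α-induced scintillation pulses. The charge-comparison sketch below illustrates the idea; the tail window and the toy pulse shapes are assumptions, not the circuit or data from the paper.

```python
def psd_ratio(pulse, tail_start):
    """Charge-comparison PSD figure of merit: fraction of the pulse
    integral contained in the tail. High-LET (alpha) pulses carry more
    charge in the slow scintillation component, so they show a larger
    tail fraction than beta pulses of the same total charge."""
    total = sum(pulse)
    tail = sum(pulse[tail_start:])
    return tail / total

# Two toy pulses with the same total charge but different decay tails:
fast = [8, 4, 2, 1, 0.5, 0.25]       # beta-like, fast decay
slow = [5, 3, 2.5, 2, 1.8, 1.45]     # alpha-like, heavier tail
print(psd_ratio(fast, 2) < psd_ratio(slow, 2))  # -> True
```

In practice a threshold on this ratio is set from calibration sources, which is how an α counting efficiency >99% can coexist with β rejection >99.95%.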
15. Scintillation trigger system of the liquid argon neutrino detector
International Nuclear Information System (INIS)
Belikov, S.V.; Gurzhiev, S.N.; Gutnikov, Yu.E.; Denisov, A.G.; Kochetkov, V.I.; Matveev, M.Yu.; Mel'nikov, E.A.; Usachev, A.P.
1994-01-01
This paper presents the organization of the Scintillation Trigger System (STS) for the Liquid Argon Neutrino Detector of the Tagged Neutrino Facility. STS is aimed at the effective registration of the needed neutrino interaction type and production of a fast trigger signal with high time resolution. The fast analysis system of analog signal from the trigger scintillation planes for rejection of the trigger signals from background processes is described. Real scintillation trigger planes characteristics obtained on the basis of the presented data acquisition system are shown. 10 refs., 12 figs., 3 tabs
16. Spatial resolution limit study of a CCD camera and scintillator based neutron imaging system according to MTF determination and analysis
International Nuclear Information System (INIS)
Kharfi, F.; Denden, O.; Bourenane, A.; Bitam, T.; Ali, A.
2012-01-01
Spatial resolution limit is a very important parameter of an imaging system that should be taken into consideration before examination of any object. The objectives of this work are the determination of a neutron imaging system's response in terms of spatial resolution. The proposed procedure is based on establishment of the Modulation Transfer Function (MTF). The imaging system being studied is based on a high sensitivity CCD neutron camera (2×10^-5 lx at f1.4). The neutron beam used is from the horizontal beam port (H.6) of the Algerian Es-Salam research reactor. Our contribution is on the MTF determination by proposing an accurate edge identification method and a line spread function undersampling problem-resolving procedure. These methods and procedure are integrated into a MatLab code. The methods, procedures and approaches proposed in this work are available for any other neutron imaging system and allow for judging the ability of a neutron imaging system to produce spatial (internal details) properties of any object under examination. - Highlights: ► Determination of spatial response of a neutron imaging system. ► Ability of a neutron imaging system to reproduce spatial properties of any object. ► Spatial resolution limits measurement using MTF with the slanted edge method. ► Accurate edge identification and line spread function sampling improvement. ► Development of a MatLab code to compute automatically the MTF.
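The MTF pipeline the authors describe (edge identification, line spread function, Fourier transform) can be sketched numerically. The toy one-dimensional edge below stands in for a real slanted-edge image, and the windowing choice is an assumption; the paper's own MatLab implementation is not reproduced here.

```python
import numpy as np

def mtf_from_esf(esf):
    """Compute an MTF from a sampled edge spread function (ESF):
    differentiate to get the line spread function (LSF), apply a
    window to suppress truncation ripple, Fourier-transform, and
    normalise to 1 at zero spatial frequency."""
    lsf = np.diff(esf)
    lsf = lsf * np.hanning(lsf.size)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# A perfectly sharp edge gives a flat MTF; a blurred edge rolls off:
x = np.linspace(-5, 5, 201)
sharp = (x >= 0).astype(float)
blurred = 0.5 * (1 + np.tanh(x / 1.5))   # smoothed edge profile
print(mtf_from_esf(sharp)[10] > mtf_from_esf(blurred)[10])  # -> True
```

Real slanted-edge analysis adds one step this sketch omits: projecting pixels along the fitted edge angle to build an oversampled ESF, which is exactly the undersampling problem the abstract addresses.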
17. High-resolution multi-slice PET
International Nuclear Information System (INIS)
Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.
1992-01-01
This report evaluates the progress to test the feasibility and to initiate the design of a high resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed
18. High resolution neutron spectroscopy for helium isotopes
International Nuclear Information System (INIS)
Abdel-Wahab, M.S.; Klages, H.O.; Schmalz, G.; Haesner, B.H.; Kecskemeti, J.; Schwarz, P.; Wilczynski, J.
1992-01-01
A high resolution fast neutron time-of-flight spectrometer is described; neutron time-of-flight spectra are taken using a specially designed TDC connected to an on-line computer. The high time-of-flight resolution of 5 ps/m enabled the study of the total cross section of 4He for neutrons near the 3/2+ resonance in the 5He nucleus. The resonance parameters were determined by a single-level Breit-Wigner fit to the data. (orig.)
19. Development of radiophotometric dosemeters with high sensitivity using plastic scintillators as a light intensifier
International Nuclear Information System (INIS)
1987-01-01
Rectangular plates of plastic scintillator are developed and their effect as light converters evaluated when used as film-holders in conventional photographic dosemeters. In this dosemeter, radiation that does not interact in the photographic film can be detected through the generation of light photons in the plastic scintillators, which sensitize the film. (C.G.C.) [pt
20. Development of nuclear counting system for plateau high voltage scintillation detector test facilities
International Nuclear Information System (INIS)
Sarizah Mohamed Nor; Siti Hawa Md Zain; Muhd Izham Ahmad; Izuhan Ismail
2010-01-01
The nuclear counting system is a system for the monitoring and analysis of radioactivity used in scientific and technical research and development at the Malaysian Nuclear Agency. It consists of three basic parts, namely sensors, signal conditioning and monitoring. The nuclear counting system is set up for use in the testing of nuclear detectors using radioactive sources such as 60Co and 137Cs. It can verify that scintillation detectors and equivalent devices function properly, always operate within the plateau high-voltage range and meet their specifications. Hence, it should be implemented on all nuclear counting systems at Nuclear Malaysia and documented as a Standard Working Procedure (SWP) for reference by technicians, IPTA/IPTS trainees and related workers. (author)
1. Smartphone microendoscopy for high resolution fluorescence imaging
Directory of Open Access Journals (Sweden)
Xiangqian Hong
2016-09-01
Full Text Available High resolution optical endoscopes are increasingly used in the diagnosis of various medical conditions of internal organs, such as the cervix and gastrointestinal (GI) tracts, but they are too expensive for use in resource-poor settings. On the other hand, smartphones with high resolution cameras and Internet access have become more affordable, enabling them to diffuse into most rural areas and developing countries in the past decade. In this paper, we describe a smartphone microendoscope that can take fluorescence images with a spatial resolution of 3.1 μm. Images collected from ex vivo, in vitro and in vivo samples using the device are also presented. The compact and cost-effective smartphone microendoscope may be envisaged as a powerful tool for detecting pre-cancerous lesions of internal organs in low- and middle-income countries (LMICs).
2. Investigations on imaging properties of inorganic scintillation screens under irradiation with high energetic heavy ions
Energy Technology Data Exchange (ETDEWEB)
Lieberwirth, Alice
2016-09-15
scintillation record was used to examine the material stability under long-time application. Here, the light yield Y of the targets was nearly constant or decreased only in the range of 10-15 %, relative to the initial value. For the targets with single-crystal characteristics (P46, YAG:Ce), Y even increased slightly and then saturated, suggesting an enhanced mobility of charge carriers under irradiation. The emission spectra were reproduced continuously and the beam profiles agreed well with the reference methods. Within all performed beam times, the targets showed great stability. Non-linear characteristics, e.g. due to quenching during irradiation at high beam intensities, were not observed. The light yield Y showed a decreasing tendency as a function of the calculated electronic energy loss dE/dx. The characteristics of the calculated beam profiles, as well as the recorded emission spectra, did not change significantly, so a material degradation of the investigated materials was not verified. This observation is confirmed by the performed material characterization measurements. Target replacement, e.g. due to damage, never became necessary during the complete investigations. As a material for future beam diagnostics at FAIR, cerium-doped Y3Al5O12 single crystal with a thickness in the range of 300 μm is recommended at cross-points between different storage sections, due to its stable imaging properties for high energy ion beams, even under long-time irradiation. For beam alignment to experimental and research areas, common Al2O3:Cr is recommended due to its cost advantage.
3. Silicon photomultipliers for scintillating trackers
Energy Technology Data Exchange (ETDEWEB)
Rabaioli, S., E-mail: simone.rabaioli@gmail.com [Universita degli Studi dell' Insubria, Via Valleggio, 11 - 22100 Como (Italy); Berra, A.; Bolognini, D. [Universita degli Studi dell' Insubria, Via Valleggio, 11 - 22100 Como (Italy); INFN sezione di Milano Bicocca (Italy); Bonvicini, V. [INFN sezione di Trieste (Italy); Bosisio, L. [Universita degli Studi di Trieste and INFN sezione di Trieste (Italy); Ciano, S.; Iugovaz, D. [INFN sezione di Trieste (Italy); Lietti, D. [Universita degli Studi dell' Insubria, Via Valleggio, 11 - 22100 Como (Italy); INFN sezione di Milano Bicocca (Italy); Penzo, A. [INFN sezione di Trieste (Italy); Prest, M. [Universita degli Studi dell' Insubria, Via Valleggio, 11 - 22100 Como (Italy); INFN sezione di Milano Bicocca (Italy); Rashevskaya, I.; Reia, S. [INFN sezione di Trieste (Italy); Stoppani, L. [Universita degli Studi dell' Insubria, Via Valleggio, 11 - 22100 Como (Italy); Vallazza, E. [INFN sezione di Trieste (Italy)
2012-12-11
In recent years, silicon photomultipliers (SiPMs) have been proposed as a new kind of readout device for scintillating detectors in many experiments. A SiPM consists of a matrix of parallel-connected pixels, which are independent photon counters working in Geiger mode with very high gain (∼10⁶). This contribution presents the use of an array of eight SiPMs (manufactured by FBK-irst) for the readout of a scintillating bar tracker (a small size prototype of the Electron Muon Ranger detector for the MICE experiment). The performances of the SiPMs in terms of signal to noise ratio, efficiency and time resolution will be compared to the ones of a multi-anode photomultiplier tube (MAPMT) connected to the same bars. Both the SiPMs and the MAPMT are interfaced to a VME system through a 64 channel MAROC ASIC.
4. Silicon photomultipliers for scintillating trackers
Science.gov (United States)
Rabaioli, S.; Berra, A.; Bolognini, D.; Bonvicini, V.; Bosisio, L.; Ciano, S.; Iugovaz, D.; Lietti, D.; Penzo, A.; Prest, M.; Rashevskaya, I.; Reia, S.; Stoppani, L.; Vallazza, E.
2012-12-01
In recent years, silicon photomultipliers (SiPMs) have been proposed as a new kind of readout device for scintillating detectors in many experiments. A SiPM consists of a matrix of parallel-connected pixels, which are independent photon counters working in Geiger mode with very high gain (∼10⁶). This contribution presents the use of an array of eight SiPMs (manufactured by FBK-irst) for the readout of a scintillating bar tracker (a small size prototype of the Electron Muon Ranger detector for the MICE experiment). The performances of the SiPMs in terms of signal to noise ratio, efficiency and time resolution will be compared to the ones of a multi-anode photomultiplier tube (MAPMT) connected to the same bars. Both the SiPMs and the MAPMT are interfaced to a VME system through a 64 channel MAROC ASIC.
5. High resolution Neutron and Synchrotron Powder Diffraction
International Nuclear Information System (INIS)
Hewat, A.W.
1986-01-01
The use of high-resolution powder diffraction has grown rapidly in the past years, with the development of Rietveld (1967) methods of data analysis and new high-resolution diffractometers and multidetectors. The number of publications in this area has increased from a handful per year until 1973 to 150 per year in 1984, with a ten-year total of over 1000. These papers cover a wide area of solid-state chemistry, physics and materials science, and have been grouped under 20 subject headings, ranging from catalysts to zeolites, and from battery electrode materials to pre-stressed superconducting wires. In 1985 two new high-resolution diffractometers were being commissioned, one at the SNS laboratory near Oxford, and one at the ILL in Grenoble. In different ways these machines represent perhaps the ultimate that can be achieved with neutrons and will permit refinement of complex structures with about 250 parameters and unit cell volumes of about 2500 Å³. The new European Synchrotron Facility will complement the Grenoble neutron diffractometers, and extend the role of high-resolution powder diffraction to the direct solution of crystal structures, pioneered in Sweden.
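Rietveld refinement of a powder pattern rests on Bragg's law, which ties each reflection position to an interplanar spacing. A minimal sketch; the wavelength and angle below are illustrative, not values from the abstract:

```python
import math

def d_spacing(wavelength_angstrom: float, two_theta_deg: float, order: int = 1) -> float:
    """Interplanar spacing d from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

# A 1.5 A neutron beam diffracting at 2-theta = 30 degrees
d = d_spacing(1.5, 30.0)  # ~2.90 A
```

A full Rietveld fit then refines lattice, profile and structural parameters so the whole calculated pattern matches the measured one; the d-spacing relation above is the geometric backbone of that fit.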
6. High resolution CT in diffuse lung disease
International Nuclear Information System (INIS)
Webb, W.R.
1995-01-01
High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.)
7. Classification of high resolution satellite images
OpenAIRE
Karlsson, Anders
2003-01-01
In this thesis the Support Vector Machine (SVM) is applied to the classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st-order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented, using an enhanced watershed method, in order to improve the classification accuracy.
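Of the feature types mentioned, 1st-order statistics are the simplest to compute per image window; a hedged numpy sketch (the window values are made up, and the exact feature set used in the thesis is not specified here):

```python
import numpy as np

def first_order_stats(window: np.ndarray) -> dict:
    """Per-window 1st-order statistics usable as classification features."""
    w = window.astype(float).ravel()
    mean = w.mean()
    var = w.var()
    std = np.sqrt(var)
    # Skewness; guard against constant windows
    skew = 0.0 if std == 0 else float(np.mean(((w - mean) / std) ** 3))
    return {"mean": mean, "variance": var, "skewness": skew}

# Hypothetical 4x4 image patch
patch = np.array([[0, 1, 2, 3]] * 4)
feats = first_order_stats(patch)
```

Feature vectors of this kind (one per pixel window) would then be fed to the SVM alongside texture and contextual measures.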
8. High resolution CT in diffuse lung disease
Energy Technology Data Exchange (ETDEWEB)
Webb, W R [California Univ., San Francisco, CA (United States). Dept. of Radiology
1996-12-31
High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.).
9. High-resolution clean-sc
NARCIS (Netherlands)
Sijtsma, P.; Snellen, M.
2016-01-01
In this paper a high-resolution extension of CLEAN-SC is proposed: HR-CLEAN-SC. Where CLEAN-SC uses peak sources in “dirty maps” to define so-called source components, HR-CLEAN-SC takes advantage of the fact that source components can likewise be derived from points at some distance from the peak,
10. A High-Resolution Stopwatch for Cents
Science.gov (United States)
Gingl, Z.; Kopasz, K.
2011-01-01
A very low-cost, easy-to-make stopwatch is presented to support various experiments in mechanics. The high-resolution stopwatch is based on two photodetectors connected directly to the microphone input of a sound card. Dedicated free open-source software has been developed and made available to download. The efficiency is demonstrated by a free…
11. Evaluations of the new LiF-scintillator and optional brightness enhancement films for neutron imaging
Energy Technology Data Exchange (ETDEWEB)
Iikura, H., E-mail: Iikura.hiroshi@jaea.go.jp [Japan Atomic Energy Agency, 2-4 Shirakata-shirane, Tokai-mura, Naka-gun, Ibaraki (Japan); Tsutsui, N. [Chichibu Fuji Co., Ltd., Ogano, Chichibu, Saitama 368-0193 (Japan); Nakamura, T.; Katagiri, M.; Kureta, M. [Japan Atomic Energy Agency, 2-4 Shirakata-shirane, Tokai-mura, Naka-gun, Ibaraki (Japan); Kubo, J. [Nissan Motor Co., Ltd., Atsugi, Kanagawa 243-0126 (Japan); Matsubayashi, M. [Japan Atomic Energy Agency, 2-4 Shirakata-shirane, Tokai-mura, Naka-gun, Ibaraki (Japan)
2011-09-21
Japan Atomic Energy Agency has developed the neutron scintillator jointly with Chichibu Fuji Co., Ltd. In this study, we evaluated the new ZnS(Ag):Al/6Li scintillator developed for neutron imaging. It was confirmed that the brightness roughly doubled while maintaining a spatial resolution equal to that of a conventional scintillator. High frame-rate imaging using a high-speed video camera system and this new scintillator made it possible to image at beyond 10 000 frames per second while still having enough brightness. This technique allowed us to obtain a high-frame-rate visualization of oil flow in a running car engine. Furthermore, we devised a technique to increase the light intensity received by the camera by adding brightness enhancement films on the output surface of the scintillator. It was confirmed that the spatial resolution degraded by more than a factor of two, but the brightness increased by about three times.
12. Use of a YAP:Ce matrix coupled to a position-sensitive photomultiplier for high resolution positron emission tomography
International Nuclear Information System (INIS)
Del Guerra, A.; Zavattini, G.; Notaristefani, F. de; Giganti, M.; Piffanelli, A.; Pani, R.; Turra, A.
1996-01-01
A new scintillation detector system has been designed for application in high resolution Positron Emission Tomography (PET). The detector is a bundle of small YAlO3:Ce (YAP) crystals closely packed (0.2 × 0.2 × 3.0 cm³), coupled to a position sensitive photomultiplier tube (PSPMT). The preliminary results obtained for spatial resolution, time resolution, energy resolution and efficiency of two such detectors working in coincidence are presented. These are 1.2 mm for the FWHM spatial resolution, 2.0 ns for the FWHM time resolution and 20% for the FWHM energy resolution at 511 keV. The measured efficiency is (44 ± 3)% with a 150 keV threshold and (20 ± 2)% with a 300 keV threshold.
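The quoted 20% FWHM energy resolution at 511 keV translates into absolute units via the standard Gaussian FWHM-to-sigma relation; a quick check:

```python
import math

# FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian line shape
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # ~1/2.355

resolution = 0.20        # 20% FWHM energy resolution
energy_kev = 511.0       # annihilation photon energy

fwhm_kev = resolution * energy_kev       # ~102 keV full width at half maximum
sigma_kev = fwhm_kev * FWHM_TO_SIGMA     # ~43 keV Gaussian standard deviation
```

The ~102 keV window width is why a 300 keV threshold cleanly separates photopeak events from most Compton-scattered ones in such a detector.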
13. Detectors for high resolution dynamic pet
International Nuclear Information System (INIS)
Derenzo, S.E.; Budinger, T.F.; Huesman, R.H.
1983-05-01
This report reviews the motivation for high spatial resolution in dynamic positron emission tomography of the head and the technical problems in realizing this objective. We present recent progress in using small silicon photodiodes to measure the energy deposited by 511 keV photons in small BGO crystals with an energy resolution of 9.4% full-width at half-maximum. In conjunction with a suitable phototube coupled to a group of crystals, the photodiode signal to noise ratio is sufficient for the identification of individual crystals both for conventional and time-of-flight positron tomography
14. Constructing a WISE High Resolution Galaxy Atlas
Science.gov (United States)
Jarrett, T. H.; Masci, F.; Tsai, C. W.; Petty, S.; Cluver, M.; Assef, Roberto J.; Benford, D.; Blain, A.; Bridge, C.; Donoso, E.;
2012-01-01
After eight months of continuous observations, the Wide-field Infrared Survey Explorer (WISE) mapped the entire sky at 3.4 micron, 4.6 micron, 12 micron, and 22 micron. We have begun a dedicated WISE High Resolution Galaxy Atlas project to fully characterize large, nearby galaxies and produce a legacy image atlas and source catalog. Here we summarize the deconvolution techniques used to significantly improve the spatial resolution of WISE imaging, specifically designed to study the internal anatomy of nearby galaxies. As a case study, we present results for the galaxy NGC 1566, comparing the WISE enhanced-resolution image processing to that of Spitzer, Galaxy Evolution Explorer, and ground-based imaging. This is the first paper in a two-part series; results for a larger sample of nearby galaxies are presented in the second paper.
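The deconvolution pipeline used for WISE is specific to that survey, but the core idea of iterative PSF deconvolution can be illustrated with a minimal 1-D Richardson-Lucy loop (illustrative only; the PSF and signal below are synthetic, not WISE data):

```python
import numpy as np

def richardson_lucy_1d(data: np.ndarray, psf: np.ndarray, iterations: int = 50) -> np.ndarray:
    """Minimal 1-D Richardson-Lucy deconvolution."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full(data.shape, data.mean())  # flat, positive starting guess
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)  # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a single point source with a small symmetric PSF, then deconvolve
psf = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
truth = np.zeros(21)
truth[10] = 1.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf)
```

Each iteration multiplies the current estimate by a back-projected ratio of data to re-blurred estimate, so flux stays non-negative and the point source sharpens back toward a spike.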
15. International Nuclear Information System (INIS)
MacKinnon, I.D.R.; Aselage, T.; Van Deusen, S.B.
1986-01-01
Two samples of boron carbide have been examined using high resolution transmission electron microscopy (HRTEM). A hot-pressed B13C2 sample shows a high density of variable-width twins normal to (10*1). Subtle shifts or offsets of lattice fringes along the twin plane and normal to approx. (10*5) were also observed. A B4C powder showed little evidence of stacking disorder in crystalline regions.
16. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera
Science.gov (United States)
Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.
2011-06-01
Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.
17. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera.
Science.gov (United States)
Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G; Nagarkar, Vivek V
2011-06-01
Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity is required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in the scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of a pinhole gamma camera with a continuous scintillator, a conventional "straight-cut" (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.
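Comparing projections by their FWHM, as done above, means extracting the half-maximum width from a sampled line profile; a simple linear-interpolation sketch (the Gaussian test profile is synthetic, not data from the paper):

```python
import numpy as np

def fwhm(x: np.ndarray, y: np.ndarray) -> float:
    """FWHM of a single-peaked sampled profile via linear interpolation."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i_out, i_in):
        # Interpolate between a sample below half max and one above it
        return x[i_out] + (half - y[i_out]) * (x[i_in] - x[i_out]) / (y[i_in] - y[i_out])

    left = cross(i0 - 1, i0) if i0 > 0 else x[0]
    right = cross(i1 + 1, i1) if i1 < len(y) - 1 else x[-1]
    return right - left

# Sampled unit Gaussian, sigma = 1 -> expected FWHM = 2*sqrt(2 ln 2) ~ 2.355
x = np.linspace(-5.0, 5.0, 201)
y = np.exp(-x ** 2 / 2.0)
width = fwhm(x, y)
```

In practice one would fit a model line-spread function rather than interpolate raw samples, but the interpolated width is a reasonable first-pass metric for comparing scintillator configurations.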
18. Highly segmented, high resolution time-of-flight system
Energy Technology Data Exchange (ETDEWEB)
Nayak, T.K.; Nagamiya, S.; Vossnack, O.; Wu, Y.D.; Zajc, W.A. [Columbia Univ., New York, NY (United States); Miake, Y.; Ueno, S.; Kitayama, H.; Nagasaka, Y.; Tomizawa, K.; Arai, I.; Yagi, K [Univ. of Tsukuba, (Japan)
1991-12-31
The light attenuation and timing characteristics of time-of-flight counters constructed of 3 m long scintillating fiber bundles of different shapes and sizes are presented. Fiber bundles made of 5 mm diameter fibers showed good timing characteristics and lower light attenuation. The results for a 1.5 m long scintillator rod are also presented.
19. TH-CD-201-10: Highly Efficient Synchronized High-Speed Scintillation Camera System for Measuring Proton Range, SOBP and Dose Distributions in a 2D-Plane
International Nuclear Information System (INIS)
Goddu, S; Sun, B; Grantham, K; Zhao, T; Zhang, T; Bradley, J; Mutic, S
2016-01-01
Purpose: Proton therapy (PT) delivery is complex and extremely dynamic. Therefore, quality assurance testing is vital, but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton-beam-pulses (PBPs) at ∼504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure-time. Light emissions within a 30×30×5 cm³ plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing dose deposition in a 2D plane with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response as well as the signal to noise ratio (SNR) was characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ∼5000 images to extract different beam parameters. Quenching correction-factors were established by comparing scintillation Bragg-peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg-peaks were integrated to ascertain proton-beam range (PBR), width of the Spread-Out-Bragg-Peak (MOD) and distal
20. TH-CD-201-10: Highly Efficient Synchronized High-Speed Scintillation Camera System for Measuring Proton Range, SOBP and Dose Distributions in a 2D-Plane
Energy Technology Data Exchange (ETDEWEB)
Goddu, S; Sun, B; Grantham, K; Zhao, T; Zhang, T; Bradley, J; Mutic, S [Washington University School of Medicine, Saint Louis, MO (United States)
2016-06-15
Purpose: Proton therapy (PT) delivery is complex and extremely dynamic. Therefore, quality assurance testing is vital, but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton-beam-pulses (PBPs) at ∼504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure-time. Light emissions within a 30×30×5 cm³ plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing dose deposition in a 2D plane with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response as well as the signal to noise ratio (SNR) was characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ∼5000 images to extract different beam parameters. Quenching correction-factors were established by comparing scintillation Bragg-peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg-peaks were integrated to ascertain proton-beam range (PBR), width of the Spread-Out-Bragg-Peak (MOD) and distal.
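Quenching corrections of the kind described are commonly modeled with Birks' law, under which light output per unit deposited energy falls as the stopping power rises. A hedged sketch; the kB value below is a typical literature figure for polyvinyltoluene-based scintillators such as BC-408, not a value reported by these authors:

```python
def quenching_factor(dedx_mev_per_mm: float, kb_mm_per_mev: float = 0.126) -> float:
    """Birks' law: emitted light per unit dose is reduced by 1/(1 + kB * dE/dx)."""
    return 1.0 / (1.0 + kb_mm_per_mev * dedx_mev_per_mm)

def corrected_signal(measured_light: float, dedx_mev_per_mm: float,
                     kb_mm_per_mev: float = 0.126) -> float:
    """Undo the quenching so the signal is again proportional to deposited dose."""
    return measured_light / quenching_factor(dedx_mev_per_mm, kb_mm_per_mev)
```

Because dE/dx peaks sharply near the end of the proton range, the correction matters most at the Bragg peak, which is why the authors calibrate against ionization-chamber depth scans there.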
1. Particle detector spatial resolution
International Nuclear Information System (INIS)
Perez-Mendez, V.
1992-01-01
Method and apparatus for producing separated columns of scintillation layer material, for use in detection of X-rays and high energy charged particles with improved spatial resolution is disclosed. A pattern of ridges or projections is formed on one surface of a substrate layer or in a thin polyimide layer, and the scintillation layer is grown at controlled temperature and growth rate on the ridge-containing material. The scintillation material preferentially forms cylinders or columns, separated by gaps conforming to the pattern of ridges, and these columns direct most of the light produced in the scintillation layer along individual columns for subsequent detection in a photodiode layer. The gaps may be filled with a light-absorbing material to further enhance the spatial resolution of the particle detector. 12 figs
2. High resolution measurements and study of the neutron inelastic scattering reaction on 56Fe
International Nuclear Information System (INIS)
Dupont, E.
1998-01-01
High resolution measurements of the neutron inelastic scattering cross section have been performed on 56 Fe from 862 keV to 3 MeV. The time of flight method has been used on the GELINA source of the IRMM in Geel (Belgium). Four barium fluoride scintillators, placed around the samples, recorded the gamma-ray emissions coming from the iron and the boron. A study of the correlations between the partial elastic and inelastic widths has been performed, taking into account earlier transmission measurements carried out at Geel. (A.L.B.)
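In the time-of-flight method, the neutron energy follows from the flight time over a known path length; a non-relativistic sketch (the 100 m flight path is a hypothetical round number, not the specific GELINA geometry used in this work):

```python
import math

M_N_MEV = 939.565          # neutron rest energy [MeV]
C = 299_792_458.0          # speed of light [m/s]

def tof_us(energy_mev: float, flight_path_m: float) -> float:
    """Non-relativistic neutron time of flight, in microseconds."""
    beta = math.sqrt(2.0 * energy_mev / M_N_MEV)  # v/c from E = (1/2) m v^2
    return flight_path_m / (beta * C) * 1e6

t = tof_us(0.862, 100.0)   # a 862 keV neutron over 100 m -> ~7.8 us
```

Inverting the same relation gives the energy from a measured flight time, which is how the pulsed GELINA source resolves the cross section as a function of neutron energy; below a few MeV the relativistic correction is at the per-mille level.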
3. Geant4 simulation of a 3D high resolution gamma camera
International Nuclear Information System (INIS)
Akhdar, H.; Kezzar, K.; Aksouh, F.; Assemi, N.; AlGhamdi, S.; AlGarawi, M.; Gerl, J.
2015-01-01
The aim of this work is to develop a 3D gamma camera with high position resolution and sensitivity, relying on both distance/absorption and Compton scattering techniques and without using any passive collimation. The proposed gamma camera is simulated in order to predict its performance, taking full benefit of Geant4 features that allow constructing the needed detector geometry, fully controlling the incident gamma particles and studying the response of the detector in order to test the suggested geometries. Three different geometries are simulated and each configuration is tested with three different scintillation materials (LaBr3, LYSO and CeBr3).
4. High-Resolution Imaging System (HiRIS) based on H9500 PSPMT
International Nuclear Information System (INIS)
Trotta, C.; Massari, R.; Trinci, G.; Palermo, N.; Boccalini, S.; Scopinaro, F.; Soluri, A.
2008-01-01
The H8500 PhotoMultiplier Tube (PMT) from Hamamatsu has been used in the last years to assemble several scintigraphic devices in order to achieve high-resolution gamma cameras. If the detector is coupled to a discrete scintillator with millimetric pixel size, the resulting charge distribution that emerges is not properly sampled by its anodes (6×6 mm²). The new position sensitive PMT H9500, with its 3×3 mm² anodes, allows a better charge distribution sampling, improving both the spatial resolution and the linearity of the system. In this paper, we investigate the imaging performances of the H9500 PMT coupled with a CsI(Tl) array having 1 mm pixel size and compare the results with the same scintillator coupled with the H8500 PMT. A portable imaging system named HiRIS (High-Resolution Imaging System) was then realized using miniaturized readout electronics. Thanks to its lightness, it can be easily used in Medical Imaging. We used HiRIS, together with a rotating system, to carry out a tomographic reconstruction of the biodistribution of a radiopharmaceutical in rats.
5. High resolution SETI: Experiences and prospects
Science.gov (United States)
Horowitz, Paul; Clubok, Ken
Megachannel spectroscopy with sub-Hertz resolution constitutes an attractive strategy for a microwave search for extraterrestrial intelligence (SETI), assuming the transmission of a narrowband radiofrequency beacon. Such resolution matches the properties of the interstellar medium, and the necessary Doppler corrections provide a high degree of interference rejection. We have constructed a frequency-agile receiver with an FFT-based 8 megachannel digital spectrum analyzer, on-line signal recognition, and multithreshold archiving. We are using it to conduct a meridian transit search of the northern sky at the Harvard-Smithsonian 26-m antenna, with a second identical system scheduled to begin observations in Argentina this month. Successive 400 kHz spectra, at 0.05 Hz resolution, are searched for features characteristic of an intentional narrowband beacon transmission. These spectra are centered on guessable frequencies (such as λ21 cm), referenced successively to the local standard of rest, the galactic barycenter, and the cosmic blackbody rest frame. This search has rejected interference admirably, but is greatly limited both in total frequency coverage and sensitivity to signals other than carriers. We summarize five years of high resolution SETI at Harvard, in the context of answering the questions "How useful is narrowband SETI, how serious are its limitations, what can be done to circumvent them, and in what direction should SETI evolve?" Increasingly powerful signal processing hardware, combined with ever-higher memory densities, are particularly relevant, permitting the construction of compact and affordable gigachannel spectrum analyzers covering hundreds of megahertz of instantaneous bandwidth.
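The quoted numbers are mutually consistent: an FFT channel width is the inverse of the integration time, so 0.05 Hz channels across a 400 kHz instantaneous band imply 8 million channels and 20 s per spectrum. A quick arithmetic check:

```python
bandwidth_hz = 400e3       # instantaneous spectrum width quoted in the abstract
resolution_hz = 0.05       # per-channel resolution

channels = bandwidth_hz / resolution_hz   # -> 8 million channels
t_integration_s = 1.0 / resolution_hz     # FFT length sets resolution: T = 1 / df
```

This is also why sub-Hertz SETI is so sensitive to Doppler drift: over a 20 s integration, even a small relative acceleration smears a carrier across channels unless the drift is compensated.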
6. A high-speed scintillation-based electronic portal imaging device to quantitatively characterize IMRT delivery.
Science.gov (United States)
Ranade, Manisha K; Lynch, Bart D; Li, Jonathan G; Dempsey, James F
2006-01-01
We have developed an electronic portal imaging device (EPID) employing a fast scintillator and a high-speed camera. The device is designed to accurately and independently characterize the fluence delivered by a linear accelerator during intensity modulated radiation therapy (IMRT) with either step-and-shoot or dynamic multileaf collimator (MLC) delivery. Our aim is to accurately obtain the beam shape and fluence of all segments delivered during IMRT, in order to study the nature of discrepancies between the plan and the delivered doses. A commercial high-speed camera was combined with a terbium-doped gadolinium-oxy-sulfide (Gd2O2S:Tb) scintillator to form an EPID for the unaliased capture of two-dimensional fluence distributions of each beam in an IMRT delivery. The high speed EPID was synchronized to the accelerator pulse-forming network and gated to capture every possible pulse emitted from the accelerator, with an approximate frame rate of 360 frames-per-second (fps). A 62-segment beam from a head-and-neck IMRT treatment plan requiring 68 s to deliver was recorded with our high speed EPID producing approximately 6 Gbytes of imaging data. The EPID data were compared with the MLC instruction files and the MLC controller log files. The frames were binned to provide a frame rate of 72 fps with a signal-to-noise ratio that was sufficient to resolve leaf positions and segment fluence. The fractional fluence from the log files and EPID data agreed well. An ambiguity in the motion of the MLC during beam on was resolved. The log files reported leaf motions at the end of 33 of the 42 segments, while the EPID observed leaf motions in only 7 of the 42 segments. The static IMRT segment shapes observed by the high speed EPID were in good agreement with the shapes reported in the log files. The leaf motions observed during beam-on for step-and-shoot delivery were not temporally resolved by the log files.
7. A high-speed scintillation-based electronic portal imaging device to quantitatively characterize IMRT delivery
International Nuclear Information System (INIS)
Ranade, Manisha K.; Lynch, Bart D.; Li, Jonathan G.; Dempsey, James F.
2006-01-01
We have developed an electronic portal imaging device (EPID) employing a fast scintillator and a high-speed camera. The device is designed to accurately and independently characterize the fluence delivered by a linear accelerator during intensity modulated radiation therapy (IMRT) with either step-and-shoot or dynamic multileaf collimator (MLC) delivery. Our aim is to accurately obtain the beam shape and fluence of all segments delivered during IMRT, in order to study the nature of discrepancies between the plan and the delivered doses. A commercial high-speed camera was combined with a terbium-doped gadolinium-oxy-sulfide (Gd2O2S:Tb) scintillator to form an EPID for the unaliased capture of two-dimensional fluence distributions of each beam in an IMRT delivery. The high speed EPID was synchronized to the accelerator pulse-forming network and gated to capture every possible pulse emitted from the accelerator, with an approximate frame rate of 360 frames-per-second (fps). A 62-segment beam from a head-and-neck IMRT treatment plan requiring 68 s to deliver was recorded with our high speed EPID producing approximately 6 Gbytes of imaging data. The EPID data were compared with the MLC instruction files and the MLC controller log files. The frames were binned to provide a frame rate of 72 fps with a signal-to-noise ratio that was sufficient to resolve leaf positions and segment fluence. The fractional fluence from the log files and EPID data agreed well. An ambiguity in the motion of the MLC during beam on was resolved. The log files reported leaf motions at the end of 33 of the 42 segments, while the EPID observed leaf motions in only 7 of the 42 segments. The static IMRT segment shapes observed by the high speed EPID were in good agreement with the shapes reported in the log files. The leaf motions observed during beam-on for step-and-shoot delivery were not temporally resolved by the log files.
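The binning step described (360 fps down to 72 fps) trades temporal resolution for signal-to-noise: summing N frames grows the signal N-fold but uncorrelated noise only by sqrt(N). A synthetic numpy check with made-up frame statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
fps_raw, bin_factor = 360, 5                             # 360 fps binned 5x -> 72 fps
frames = rng.normal(100.0, 10.0, size=(fps_raw, 8, 8))   # synthetic noisy frames

# Sum consecutive groups of bin_factor frames
binned = frames.reshape(fps_raw // bin_factor, bin_factor, 8, 8).sum(axis=1)

snr_raw = frames.mean() / frames.std()       # ~10 for these synthetic frames
snr_binned = binned.mean() / binned.std()    # ~10 * sqrt(5)
```

The sqrt(5) ≈ 2.2× SNR gain is what makes individual MLC leaf positions resolvable in the binned frames while 72 fps still comfortably oversamples the leaf motion.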
8. Collimator trans-axial tomographic scintillation camera
International Nuclear Information System (INIS)
Jaszczak, R.J.
1977-01-01
A collimator is provided for a scintillation camera system in which a detector precesses in an orbit about a patient. The collimator is designed to have high resolution and lower sensitivity with respect to radiation traveling in paths lying wholly within planes perpendicular to the cranial-caudal axis of the patient. The collimator has high sensitivity and lower resolution for radiation traveling in other planes. Variations in resolution and sensitivity are achieved by altering the length, spacing or thickness of the septa of the collimator.
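The resolution/sensitivity trade-off described is captured by the standard geometric-resolution formula for a parallel-hole collimator, R = d(L + z)/L: longer or narrower holes sharpen resolution (while reducing sensitivity), and resolution degrades with source distance. A sketch with illustrative dimensions, not values from the patent:

```python
def geometric_resolution_mm(hole_d_mm: float, hole_len_mm: float,
                            source_dist_mm: float) -> float:
    """Parallel-hole collimator geometric resolution: R = d * (L + z) / L."""
    return hole_d_mm * (hole_len_mm + source_dist_mm) / hole_len_mm

# 2 mm holes, 40 mm septa length, source 100 mm away -> 7 mm resolution
r = geometric_resolution_mm(2.0, 40.0, 100.0)
```

Varying septa length or hole spacing per direction, as the patent does, therefore tunes resolution and sensitivity independently for in-plane versus out-of-plane radiation.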
9. Development of radiophotometric dosemeters of high sensitivity using plastic scintillators as light intensifiers
International Nuclear Information System (INIS)
1987-01-01
The use of rectangular plates of plastic scintillators as film holders in conventional photographic dosemeters is reported. The efficiency of their use as light converters to increase the sensitivity of these dosemeters is studied. (M.A.C.)
10. High resolution NMR theory and chemical applications
CERN Document Server
Becker, Edwin D
2012-01-01
High Resolution NMR: Theory and Chemical Applications discusses the principles and theory of nuclear magnetic resonance and how this concept is used in the chemical sciences. This book is written at an intermediate level, with mathematics used to augment verbal descriptions of the phenomena. This text pays attention to developing and interrelating four approaches - the steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The style of this book is based on the assumption that the reader has an acquaintance with the general principles of quantum mechanics, but no extensive background in quantum theory or proficiency in mathematics is required. This book begins with a description of the basic physics, together with a brief account of the historical development of the field. It looks at the study of NMR in liquids, including high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. This book is intended to assis...
11. High resolution NMR theory and chemical applications
CERN Document Server
Becker, Edwin D
1999-01-01
High Resolution NMR provides a broad treatment of the principles and theory of nuclear magnetic resonance (NMR) as it is used in the chemical sciences. It is written at an "intermediate" level, with mathematics used to augment, rather than replace, clear verbal descriptions of the phenomena. The book is intended to allow a graduate student, advanced undergraduate, or researcher to understand NMR at a fundamental level, and to see illustrations of the applications of NMR to the determination of the structure of small organic molecules and macromolecules, including proteins. Emphasis is on the study of NMR in liquids, but the treatment also includes high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. Careful attention is given to developing and interrelating four approaches - steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The presentation is based on the assumption that the reader has an acquaintan...
12. High-resolution fluorescence spectroscopy in immunoanalysis
Energy Technology Data Exchange (ETDEWEB)
Grubor, Nenad M. [Iowa State Univ., Ames, IA (United States)
2005-01-01
The work presented in this dissertation combines highly sensitive and selective fluorescence line-narrowing spectroscopy (FLNS) detection with various modes of immunoanalytical techniques. It is shown that FLNS is capable of directly probing molecules immunocomplexed with antibodies, eliminating analytical ambiguities that may arise from interferences that accompany traditional immunochemical techniques. Moreover, the utilization of highly cross-reactive antibodies for highly specific analyte determination is demonstrated. Finally, the first example of the spectral resolution of diastereomeric analytes based on their interaction with a cross-reactive antibody is presented.
13. Scintillation screen materials for beam profile measurements of high energy ion beams
Energy Technology Data Exchange (ETDEWEB)
Krishnakumar, Renuka
2016-06-22
For the application as a transverse ion beam diagnostics device, various scintillation screen materials were analysed. The properties of the materials such as light output, image reproduction and radiation stability were investigated with the ion beams extracted from the heavy ion synchrotron SIS-18. The ion species (C, Ne, Ar, Ta and U) were chosen to cover the large range of elements in the periodic table. The ions were accelerated to kinetic energies of 200 MeV/u and 300 MeV/u, extracted with 300 ms pulse duration and applied to the screens. The particle intensity of the ion beam was varied from 10^4 to 10^9 particles per pulse. The screens were irradiated with typically 40 beam pulses and the scintillation light was captured using a CCD camera, followed by characterization of the beam spot. The radiation hardness of the screens was estimated with high intensity Uranium ion irradiation. In the study, a linear light output over 5 orders of magnitude of particle intensity was observed from sensitive scintillators and ceramic screens such as Al2O3:Cr and Al2O3. The highest light output was recorded by CsI:Tl and the lowest one by Herasil. At higher beam intensity, saturation of light output was noticed from Y- and Mg-doped ZrO2 screens. The light output from the screen depends not only on the particle intensity but also on the ion species used for irradiation. The light yield (i.e. the light intensity normalised to the energy deposition in the material by the ion) is calculated from the experimental data for each ion beam setting. It is shown that the light yield for light ions is about a factor 2 larger than that of heavy ions. The image widths recorded exhibit a dependence on the screen material, and differences up to 50% were registered. On radiation stability analysis with high particle intensity of Uranium ions of about 6 × 10^8 ppp, a stable performance in light output and image reproduction was documented from Al
14. High-Resolution MRI in Rectal Cancer
International Nuclear Information System (INIS)
2010-01-01
High-resolution MRI is the best method of assessing the relation of a rectal tumor to the potential circumferential resection margin (CRM); it is therefore currently considered the method of choice for local staging of rectal cancer. The primary surgery for rectal cancer is total mesorectal excision (TME), whose plane of dissection is formed by the mesorectal fascia surrounding the mesorectal fat and rectum. This fascia determines the circumferential resection margin. At the same time, high-resolution MRI allows adequate pre-operative identification of important prognostic risk factors, improving the selection and indication of therapy for each patient. This information includes, besides the circumferential resection margin, tumor and lymph node staging, extramural vascular invasion and the description of lower rectal tumors. All of these should be described in detail in the report, as part of the discussion in the multidisciplinary team, where the decisions involving the patient with rectal cancer will take place. The aim of this study is to provide the information necessary to understand the use of high-resolution MRI in the identification of prognostic risk factors in rectal cancer. The technical requirements and the standardized report for this study are described, as well as the anatomical landmarks of importance for total mesorectal excision (TME), which, as noted, is the surgery of choice for rectal cancer. (authors)
15. USGS High Resolution Orthoimagery Collection - Historical - National Geospatial Data Asset (NGDA) High Resolution Orthoimagery
Data.gov (United States)
U.S. Geological Survey, Department of the Interior — USGS high resolution orthorectified images from The National Map combine the image characteristics of an aerial photograph with the geometric qualities of a map. An...
16. Basic performance of Mg co-doped new scintillator used for TOF-DOI-PET systems
International Nuclear Information System (INIS)
Kobayashi, Takahiro; Yamamoto, Seiichi; Okumura, Satoshi; Yeom, Jung Yeol; Kamada, Kei; Yoshikawa, Akira
2017-01-01
Phoswich depth-of-interaction (DOI) detectors utilizing multiple scintillators with different decay times are useful devices for developing high spatial resolution, high sensitivity PET scanners. However, in order to apply pulse shape discrimination (PSD), there are not many combinations of scintillators for which the phoswich technique can be implemented. Ce-doped Gd3Ga3Al2O12 (GFAG) is a recently developed scintillator with a fast decay time. This scintillator is similar to Ce-doped Gd3Al2Ga3O12 (GAGG), which is a promising scintillator for PET detectors with high light yield. By stacking these scintillators, it may be possible to realize a high spatial resolution and high timing resolution phoswich DOI detector. Such a phoswich DOI detector may be applied to time-of-flight (TOF) systems with high timing performance. Therefore, in this study, we tested the basic performance of the new scintillator, GFAG, for use in a TOF phoswich detector. The measured decay time of a GFAG element of 2.9 mm × 2.9 mm × 10 mm in dimension, optically coupled to a photomultiplier tube (PMT), was faster (66 ns) than that of a same-sized GAGG element (103 ns). The energy resolution of the GFAG element was 5.7% FWHM, slightly worse than the 4.9% FWHM of GAGG for 662 keV gamma photons without saturation correction. We then assembled the GFAG and GAGG crystals in the depth direction to form a 20 mm long phoswich element (GFAG/GAGG). By pulse shape analysis, the two types of scintillators were clearly resolved. The measured timing resolution of a pair of opposing GFAG/GAGG phoswich scintillators coupled to silicon photomultipliers (Si-PM) was good, with a coincidence resolving time of 466 ps FWHM. These results indicate that GFAG combined with GAGG can be a candidate for TOF-DOI-PET systems.
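The abstract does not spell out the pulse shape analysis used, but a common way to separate two decay constants in a phoswich stack is the tail-to-total charge ratio. The sketch below is a hedged illustration under that assumption, with idealized single-exponential pulses at the quoted 66 ns (GFAG) and 103 ns (GAGG) decay times; the gate length and sampling are invented for the example:

```python
import numpy as np

def tail_to_total(pulse, gate_start):
    """Fraction of the integrated charge arriving after `gate_start` samples."""
    return pulse[gate_start:].sum() / pulse.sum()

def exp_pulse(decay_ns, n_samples=400, dt_ns=1.0):
    """Idealized single-exponential scintillation pulse (illustrative model)."""
    t = np.arange(n_samples) * dt_ns
    return np.exp(-t / decay_ns)

gfag = exp_pulse(66.0)   # faster decay -> smaller tail fraction
gagg = exp_pulse(103.0)  # slower decay -> larger tail fraction

r_gfag = tail_to_total(gfag, gate_start=80)
r_gagg = tail_to_total(gagg, gate_start=80)

# A simple threshold on this ratio identifies which layer fired.
assert r_gfag < r_gagg
```

In a real detector the two populations form separate bands in a tail-to-total histogram, and the threshold is placed in the valley between them.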
17. Inorganic scintillators for detector systems physical principles and crystal engineering
CERN Document Server
Lecoq, Paul; Korzhik, Mikhail
2017-01-01
This second edition features new chapters highlighting advances in our understanding of the behavior and properties of scintillators, and the discovery of new families of materials with light yield and excellent energy resolution very close to the theoretical limit. The book focuses on the discovery of next-generation scintillation materials and on a deeper understanding of fundamental processes. Such novel materials with high light yield as well as significant advances in crystal engineering offer exciting new perspectives. Most promising is the application of scintillators for precise time tagging of events, at the level of 100 ps or higher, heralding a new era in medical applications and particle physics. Since the discovery of the Higgs Boson with a clear signature in the lead tungstate scintillating blocks of the CMS Electromagnetic Calorimeter detector, the current trend in particle physics is toward very high luminosity colliders, in which timing performance will ultimately be essential to mitigating...
18. Cerium doped lanthanum halides: fast scintillators for medical imaging
International Nuclear Information System (INIS)
Selles, O.
2006-12-01
This work is dedicated to two recently discovered scintillating crystals: cerium-doped lanthanum halides (LaCl3:Ce3+ and LaBr3:Ce3+). These scintillators exhibit interesting properties for gamma detection, particularly in the field of medical imaging: a short decay time, a high light yield and an excellent energy resolution. The strong hygroscopicity of these materials requires adapting the usual experimental methods for determining physico-chemical properties. Once determined, these can be used for the development of the industrial manufacturing process of the crystals. A proper comprehension of the scintillation mechanism and of the effect of defects within the material leads to new possible ways for optimizing the scintillator performance. Therefore, different techniques are used (EPR, radioluminescence, laser excitation, thermally stimulated luminescence). Alongside Ce3+ ions, self-trapped excitons are involved in the scintillation mechanism; their nature and their role are detailed. The knowledge of the different processes involved in the scintillation mechanism leads to the prediction of the effect of temperature and doping level on the performance of the scintillator. A mechanism is proposed to explain the thermally stimulated luminescence processes that cause slow components in the light emission and a loss of light yield. Eventually, the study of afterglow reveals a charge transfer to deep traps involved in the high temperature thermally stimulated luminescence. (author)
19. High-pressure plastic scintillation detector for measuring radiogenic gases in flow systems
International Nuclear Information System (INIS)
Schell, W.R.; Tobin, M.J.; Vives-Batlle, J.; Yoon, S.R.
1999-01-01
Radioactive gases are emitted into the atmosphere from nuclear electric power and nuclear fuel reprocessing plants, from hospitals discarding xenon used in diagnostic medicine, as well as from nuclear weapons tests. A high-pressure plastic scintillation detector was constructed to measure atmospheric levels of such radioactive gases by detecting the beta and internal conversion (IC) electron decays. Operational tests and calibrations were made that permit integration of the flow detectors into a portable Gas Analysis, Separation and Purification system (GASP). The equipment developed can be used for measuring fission gases released from nuclear reactor sources and/or as part of monitoring equipment for enforcing the Comprehensive Test Ban Treaty. The detector is being used routinely for in-line gas separation efficiency measurements, at the elevated operational pressures used for the high-pressure swing analysis system (2070 kPa) and at flow rates of 5-15 l/min. This paper presents the design features, operational methods, calibration, and detector applications. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
20. High-pressure plastic scintillation detector for measuring radiogenic gases in flow systems
Science.gov (United States)
Schell, W. R.; Vives-Batlle, J.; Yoon, S. R.; Tobin, M. J.
1999-02-01
Radioactive gases are emitted into the atmosphere from nuclear electric power and nuclear fuel reprocessing plants, from hospitals discarding xenon used in diagnostic medicine, as well as from nuclear weapons tests. A high-pressure plastic scintillation detector was constructed to measure atmospheric levels of such radioactive gases by detecting the beta and internal conversion (IC) electron decays. Operational tests and calibrations were made that permit integration of the flow detectors into a portable Gas Analysis, Separation and Purification system (GASP). The equipment developed can be used for measuring fission gases released from nuclear reactor sources and/or as part of monitoring equipment for enforcing the Comprehensive Test Ban Treaty. The detector is being used routinely for in-line gas separation efficiency measurements, at the elevated operational pressures used for the high-pressure swing analysis system (2070 kPa) and at flow rates of 5-15 l/min [1, 2]. This paper presents the design features, operational methods, calibration, and detector applications.
1. High-pressure plastic scintillation detector for measuring radiogenic gases in flow systems
International Nuclear Information System (INIS)
Schell, W.R.; Vives-Batlle, J.; Yoon, S.R.; Tobin, M.J.
1999-01-01
Radioactive gases are emitted into the atmosphere from nuclear electric power and nuclear fuel reprocessing plants, from hospitals discarding xenon used in diagnostic medicine, as well as from nuclear weapons tests. A high-pressure plastic scintillation detector was constructed to measure atmospheric levels of such radioactive gases by detecting the beta and internal conversion (IC) electron decays. Operational tests and calibrations were made that permit integration of the flow detectors into a portable Gas Analysis, Separation and Purification system (GASP). The equipment developed can be used for measuring fission gases released from nuclear reactor sources and/or as part of monitoring equipment for enforcing the Comprehensive Test Ban Treaty. The detector is being used routinely for in-line gas separation efficiency measurements, at the elevated operational pressures used for the high-pressure swing analysis system (2070 kPa) and at flow rates of 5-15 l/min. This paper presents the design features, operational methods, calibration, and detector applications.
2. Cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS flat panel detector: visibility of simulated microcalcifications.
Science.gov (United States)
Shen, Youtao; Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng; Shaw, Chris C
2013-10-01
To measure and investigate the improvement of microcalcification (MC) visibility in cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS/CsI flat panel detector (Dexela 2923, Perkin Elmer). Aluminum wires and calcium carbonate grains of various sizes were embedded in a paraffin cylinder to simulate imaging of calcifications in a breast. Phantoms were imaged with a benchtop experimental cone beam CT system at various exposure levels. In addition to the Dexela detector, a high pitch (50 μm), thin (150 μm) scintillator CMOS/CsI flat panel detector (C7921CA-09, Hamamatsu Corporation, Hamamatsu City, Japan) and a widely used low pitch (194 μm), thick (600 μm) scintillator aSi/CsI flat panel detector (PaxScan 4030CB, Varian Medical Systems) were also used in scanning for comparison. The images were independently reviewed by six readers (imaging physicists). The MC visibility was quantified as the fraction of visible MCs and measured as a function of the estimated mean glandular dose (MGD) level for various MC sizes and detectors. The modulation transfer functions (MTFs) and detective quantum efficiencies (DQEs) were also measured and compared for the three detectors used. The authors have demonstrated that the use of a high pitch (75 μm) CMOS detector coupled with a thick (500 μm) CsI scintillator helped make the smaller 150-160, 160-180, and 180-200 μm MC groups more visible at MGDs up to 10.8, 9, and 10.8 mGy, respectively. It also made the larger 200-212 and 212-224 μm MC groups more visible at MGDs up to 7.2 mGy. No performance improvement was observed for 224-250 μm or larger size groups. With the higher spatial resolution of the Dexela detector based system, the apparent dimensions and shapes of MCs were more accurately rendered. The results show that with the aforementioned detector, a 73% visibility could be achieved in imaging 160-180 μm MCs as compared to 28% visibility achieved by the low pitch (194 μm) aSi/CsI flat
3. Moderate resolution spectrophotometry of high redshift quasars
Science.gov (United States)
Schneider, Donald P.; Schmidt, Maarten; Gunn, James E.
1991-01-01
A uniform set of photometry and high signal-to-noise moderate resolution spectroscopy of 33 quasars with redshifts larger than 3.1 is presented. The sample consists of 17 newly discovered quasars (two with redshifts in excess of 4.4) and 16 sources drawn from the literature. The objects in this sample have r magnitudes between 17.4 and 21.4; their luminosities range from -28.8 to -24.9. Three of the 33 objects are broad absorption line quasars. A number of possible high redshift damped Ly-alpha systems were found.
4. SPIRAL2/DESIR high resolution mass separator
Energy Technology Data Exchange (ETDEWEB)
2013-12-15
DESIR is the low-energy part of the SPIRAL2 ISOL facility under construction at GANIL. DESIR includes a high-resolution mass separator (HRS) with a designed resolving power m/Δm of 31,000 for a 1 π-mm-mrad beam emittance, obtained using a high-intensity beam cooling device. The proposed design consists of two 90-degree magnetic dipoles, complemented by electrostatic quadrupoles, sextupoles, and a multipole, arranged in a symmetric configuration to minimize aberrations. A detailed description of the design and results of extensive simulations are given.
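As a quick check on what a resolving power of m/Δm = 31,000 means in practice: the smallest separable mass difference at mass m is simply m divided by the resolving power. The numbers below are illustrative arithmetic, not figures from the design report:

```python
def min_separable_dm(m, resolving_power=31000):
    """Smallest mass difference resolvable at mass m for a given m/dm."""
    return m / resolving_power

# e.g. at mass A = 100 u, species closer than ~3.23 milli-u are not resolved
print(round(min_separable_dm(100.0) * 1000, 2))  # 3.23 (in milli-u)
```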
5. Waveshifting fiber readout of lanthanum halide scintillators
International Nuclear Information System (INIS)
Case, G.L.; Cherry, M.L.; Stacy, J.G.
2006-01-01
Newly developed high-light-yield inorganic scintillators coupled to waveshifting optical fibers provide the capability of efficient X-ray detection and millimeter-scale position resolution suitable for high-energy cosmic ray instruments, hard X-ray/gamma-ray astronomy telescopes and applications to national security. The CASTER design for NASA's proposed Black Hole Finder Probe mission, in particular, calls for a 6-8 m^2 hard X-ray coded aperture imaging telescope operating in the 20-600 keV energy band, putting significant constraints on cost and readout complexity. The development of new inorganic scintillator materials (e.g., cerium-doped LaBr3 and LaCl3) provides improved energy resolution and timing performance that is well suited to the requirements for national security and astrophysics applications. LaBr3 or LaCl3 detector arrays coupled with waveshifting fiber optic readout represent a significant advance in the performance capabilities of scintillator-based gamma cameras and provide the potential for a feasible approach to affordable, large area, extremely sensitive detectors. We describe some of the applications and present laboratory test results demonstrating the expected scintillator performance
6. Super-resolution processing for pulsed neutron imaging system using a high-speed camera
International Nuclear Information System (INIS)
Ishizuka, Ken; Kai, Tetsuya; Shinohara, Takenao; Segawa, Mariko; Mochiki, Koichi
2015-01-01
Super-resolution and center-of-gravity processing improve the resolution of neutron-transmitted images. These processing methods calculate the pixel, or sub-pixel, position of the center of gravity of the light spot produced when a neutron is converted into light by a scintillator. Conventionally, a neutron-transmitted image is acquired with a high-speed camera by integrating many frames, since a single frame does not provide a usable transmitted image; by integrating frames of the same energy, both the transmitted image and a spectrum are obtained. However, because a high frame rate is required for neutron resonance absorption imaging, the number of pixels in the transmitted image decreases, and the resolution degrades to the limit of the camera performance. We therefore attempt to improve the resolution by integrating the frames after applying super-resolution or center-of-gravity processing. The processed results indicate that center-of-gravity processing can be effective in pulsed-neutron imaging with a high-speed camera; the results also show that super-resolution processing is indirectly effective. A project to develop a real-time image data processing system has begun, and this system will be used at J-PARC in JAEA. (author)
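The center-of-gravity step described above reduces the scintillation spot in each camera frame to an intensity-weighted centroid, giving a sub-pixel hit position. A minimal sketch (the spot model and frame size are illustrative assumptions):

```python
import numpy as np

def centroid(image):
    """Intensity-weighted center of gravity (row, col) of a 2-D frame."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

# A light spot straddling pixels (2, 2) and (2, 3), with a 3:1 intensity
# split, yields a centroid a quarter-pixel to the right of column 2.
frame = np.zeros((5, 5))
frame[2, 2] = 3.0
frame[2, 3] = 1.0
r, c = centroid(frame)
print(r, c)  # 2.0 2.25
```

Accumulating these sub-pixel hit positions into a finer grid, instead of summing the raw frames, is what recovers resolution beyond the native pixel pitch.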
7. Processing method for high resolution monochromator
International Nuclear Information System (INIS)
Kiriyama, Koji; Mitsui, Takaya
2006-12-01
A processing method for high-resolution monochromators (HRM) has been developed at the Japan Atomic Energy Agency / Quantum Beam Science Directorate / Synchrotron Radiation Research Unit at SPring-8. For manufacturing an HRM, a sophisticated slicing machine and an X-ray diffractometer have been installed, for shaping a crystal ingot and for precisely orienting the surface of a crystal ingot, respectively. The specification of the slicing machine is as follows: the maximum diamond blade size is 350 mm in diameter, 38.1 mm in spindle diameter, and 2 mm in thickness. A large crystal, such as an ingot 100 mm in diameter and 200 mm in length, can be cut; thin crystal samples such as wafers can also be cut using another sample holder. The working distance of the main shaft, in the direction perpendicular to the working table, is 350 mm at maximum. The smallest resolution of the main shaft in the front-and-back and top-and-bottom directions is 0.001 mm, read by a digital encoder, and a feed rate of 2 mm/min can be set for cutting samples in the forward direction. For orienting crystal faces relative to the blade direction, a one-circle goniometer and a two-circle segment are mounted on the working table of the machine; the rotation and tilt of the stage are adjusted manually. The digital encoder of the turn stage has an angular resolution of better than 0.01 degrees. In addition, a hand drill is available as a supporting device for detailed processing of crystals. An ideal crystal face can thus be cut from crystal samples within an accuracy of about 0.01 degrees. With these devices, a high-energy-resolution monochromator crystal for inelastic X-ray scattering and a beam collimator have been obtained and are expected to be used for nanotechnology studies. (author)
8. Simulation of scintillating fiber gamma ray detectors for medical imaging
International Nuclear Information System (INIS)
Chaney, R.C.; Fenyves, E.J.; Antich, P.P.
1990-01-01
This paper reports on plastic scintillating fibers, which have been shown to be effective for high spatial- and time-resolution detection of gamma rays. They may be expected to significantly improve the resolution of current medical imaging systems such as PET and SPECT. Monte Carlo simulation of imaging systems using these detectors provides a means to optimize their performance in this application, as well as to demonstrate their resolution and efficiency. Monte Carlo results are presented for PET and SPECT systems constructed using these detectors.
9. X-ray Tomography using Thin Scintillator Films
CERN Document Server
Kozyrev, E A; Lemzyakov, A G; Petrozhitskiy, A V; Popov, A S
2017-01-01
2-14 μm thin CsI:Tl scintillation screens with high spatial resolution were prepared by the thermal deposition method for low energy X-ray imaging applications. The spatial resolution was measured as a function of the film thickness. It was proposed that the spatial resolution of the prepared conversion screens can be significantly improved by an additional deposition of a carbon layer.
10. Feasibility of a novel design of high resolution parallax-free Compton enhanced PET scanner dedicated to brain research
CERN Document Server
Braem, André; Chesi, Enrico Guido; Correia, J G; Garibaldi, F; Joram, C; Mathot, S; Nappi, E; Ribeiro da Silva, M; Schoenahl, F; Séguinot, Jacques; Weilhammer, P; Zaidi, H
2004-01-01
A novel concept for a positron emission tomography (PET) camera module is proposed, which provides full 3D reconstruction with high resolution over the total detector volume, free of parallax errors. The key components are a matrix of long scintillator crystals and hybrid photon detectors (HPDs) with matched segmentation and integrated readout electronics. The HPDs read out the two ends of the scintillator package. Both excellent spatial (x, y, z) and energy resolution are obtained. The concept allows enhancing the detection efficiency by reconstructing a significant fraction of events which underwent Compton scattering in the crystals. The proof of concept will first be demonstrated with yttrium orthoaluminate perovskite (YAP:Ce) crystals, but the final design will rely on other scintillators more adequate for PET applications (e.g. LSO:Ce or LaBr3:Ce). A promising application of the proposed camera module, which is currently under development, is a high resolution 3D brain PET camera with an axial fi...
11. High resolution study of the inclusive production of massive muon pairs by intense pion beams
CERN Multimedia
2002-01-01
This experiment measures with high resolution and large acceptance the inclusive production of massive muon pairs with the intense pion beam (up to $10^{10} \pi$/pulse) in the experimental hall ECN3. The experiment explores extended $M^2/s$, x and transverse momentum ranges. The study of the departures of the lepton-pair production cross-section from scaling constitutes a good test of QCD ideas; in the framework of the 'Drell-Yan' process, the experiment allows a detailed study of the pion parton distribution functions. The detector consists of a beam dump, a pulsed toroidal magnet, MWPCs and scintillator hodoscopes. Its $\sim 2$% mass resolution at 10 GeV is adequate for the subtraction of resonances in the high-mass region.
12. The EUV dayglow at high spectral resolution
International Nuclear Information System (INIS)
Morrison, M.D.; Bowers, C.W.; Feldman, P.D.; Meier, R.R.
1990-01-01
Rocket observations of the dayglow spectrum of the terrestrial atmosphere between 840 angstrom and 1860 angstrom at 2 angstrom resolution were obtained with a sounding rocket payload flown on January 17, 1985. Additionally, spectra were obtained using a 0.125-m focal length scanning Ebert-Fastie monochromator covering the wavelength interval of 1150-1550 angstrom at 7 angstrom resolution on this flight and on a sounding rocket flight on August 29, 1983, under similar viewing geometries and solar zenith angles. Three bands of the N2 c'4 system are seen clearly resolved in the dayglow. Analysis of high-resolution N2 Lyman-Birge-Hopfield data shows no anomalous vibrational distribution such as has been reported from other observations. The altitude profiles of the observed O and N2 emissions demonstrate that the MSIS-83 model O and N2 densities are appropriate for the conditions of both the 1983 and 1985 rocket flights. A reduction by a factor of 2 in the model O2 density is required for both flights to reproduce the low-altitude atomic oxygen emission profiles. The volume excitation rates calculated using the Hinteregger et al. (1981) SC#21REFW solar reference spectrum and the photoelectron flux model of Strickland and Meier (1982) need to be scaled upward by a factor of 1.4 for both flights to match the observations.
13. Dynamic high resolution imaging of rats
International Nuclear Information System (INIS)
Miyaoka, R.S.; Lewellen, T.K.; Bice, A.N.
1990-01-01
A positron emission tomograph with the sensitivity and resolution to do dynamic imaging of rats would be an invaluable tool for biological researchers. In this paper, the authors determine the biological criteria for dynamic positron emission imaging of rats. To be useful, 3 mm isotropic resolution and 2-3 second time binning were necessary characteristics for such a dedicated tomograph. A single plane in which two objects of interest could be imaged simultaneously was considered acceptable. Multi-layered detector designs were evaluated as a possible solution to the dynamic imaging and high resolution imaging requirements. The University of Washington photon history generator was used to generate data to investigate a tomograph's sensitivity to true, scattered and random coincidences for varying detector ring diameters. Intrinsic spatial uniformity advantages of multi-layered detector designs over conventional detector designs were investigated using a Monte Carlo program. As a result, a modular three-layered detector prototype is being developed. A module will consist of a layer of five 3.5 mm wide crystals and two layers of six 2.5 mm wide crystals. The authors believe adequate sampling can be achieved with a stationary detector system using these modules. Economical crystal decoding strategies have been investigated and simulations have been run to investigate optimum light channeling methods for block decoding strategies. An analog block decoding method has been proposed and will be experimentally evaluated to determine whether it can provide the desired performance.
14. Integrated High Resolution Monitoring of Mediterranean vegetation
Science.gov (United States)
Cesaraccio, Carla; Piga, Alessandra; Ventura, Andrea; Arca, Angelo; Duce, Pierpaolo; Mereu, Simone
2017-04-01
The study of vegetation features in a complex and highly vulnerable ecosystem, such as the Mediterranean maquis, leads to the need for continuous monitoring systems at high spatial and temporal resolution, for a better interpretation of the mechanisms of phenological and eco-physiological processes. Near-surface remote sensing techniques are used to quantify, at high temporal resolution and with a certain degree of spatial integration, the seasonal variations of the surface optical and radiometric properties. In recent decades, the design and implementation of global monitoring networks have involved the use of non-destructive and/or cheaper approaches such as (i) continuous surface flux measurement stations, (ii) phenological observation networks, and (iii) measurement of temporal and spatial variations of the vegetation spectral properties. In this work, preliminary results from the ECO-SCALE (Integrated High Resolution Monitoring of Mediterranean vegetation) project are reported. The project was mainly aimed at developing an integrated system for environmental monitoring based on digital photography, hyperspectral radiometry, and micrometeorological techniques during three years of experimentation (2013-2016) at a Mediterranean site in Italy (Capo Caccia, Alghero). The main results concerned the analysis of chromatic coordinate indices from digital images, used to characterize the phenological patterns of typical shrubland species, determining the start and duration of the growing season and the physiological status under different environmental drought conditions; the seasonal patterns of canopy phenology were then compared to NEE (net ecosystem exchange) patterns, showing similarities. However, the maximum values of NEE and ER (ecosystem respiration), and their short-term variation, seemed mainly tuned by the interannual pattern of meteorological variables, in particular the temperature recorded in the months preceding vegetation green-up. Finally, green signals
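Chromatic-coordinate indices of the kind mentioned above are typically simple per-channel ratios; a widely used one in phenocam studies is the green chromatic coordinate, GCC = G / (R + G + B). A hedged sketch of that computation (the array layout and pixel values below are illustrative assumptions, not project data):

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC for an (rows, cols, 3) RGB array, averaged over the region."""
    r = rgb[..., 0].astype(float).mean()
    g = rgb[..., 1].astype(float).mean()
    b = rgb[..., 2].astype(float).mean()
    return g / (r + g + b)

# A uniformly green-dominated patch: R=50, G=100, B=50 -> GCC = 0.5
patch = np.empty((4, 4, 3))
patch[..., 0] = 50.0  # red channel
patch[..., 1] = 100.0  # green channel
patch[..., 2] = 50.0  # blue channel
print(green_chromatic_coordinate(patch))  # 0.5
```

Because the index normalizes by total brightness, its seasonal trajectory tracks canopy greenness while being comparatively insensitive to day-to-day illumination changes, which is what makes it usable for detecting the start and duration of the growing season.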
15. High-resolution X-ray television and high-resolution video recorders
International Nuclear Information System (INIS)
Haendle, J.; Horbaschek, H.; Alexandrescu, M.
1977-01-01
The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The resolution in the fluoroscopic image, which is visually determined, depends on the dose rate and the inertia of the television pick-up tube. This connection is discussed. In the last few years, video recorders have been increasingly used in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of functional prototypes of high-resolution magnetic video recorders shows that this quality drop may be largely overcome. The influence of electrical bandwidth and number of lines on the resolution in the stored X-ray television image is explained in more detail. (orig.) [de
16. High-resolution phylogenetic microbial community profiling
Energy Technology Data Exchange (ETDEWEB)
Singer, Esther; Coleman-Derr, Devin; Bowman, Brett; Schwientek, Patrick; Clum, Alicia; Copeland, Alex; Ciobanu, Doina; Cheng, Jan-Fang; Gies, Esther; Hallam, Steve; Tringe, Susannah; Woyke, Tanja
2014-03-17
The representation of bacterial and archaeal genome sequences is strongly biased towards cultivated organisms, which belong to merely four phylogenetic groups. Functional information and inter-phylum level relationships are still largely underexplored for candidate phyla, which are often referred to as microbial dark matter. Furthermore, a large portion of the 16S rRNA gene records in the GenBank database are labeled as environmental samples and unclassified, which is in part due to low read accuracy, potential chimeric sequences produced during PCR amplifications, and the low resolution of short amplicons. In order to improve the phylogenetic classification of novel species and advance our knowledge of the ecosystem function of uncultivated microorganisms, high-throughput full-length 16S rRNA gene sequencing methodologies with reduced biases are needed. We evaluated the performance of PacBio single-molecule real-time (SMRT) sequencing in high-resolution phylogenetic microbial community profiling. For this purpose, we compared PacBio and Illumina metagenomic shotgun and 16S rRNA gene sequencing of a mock community as well as of an environmental sample from Sakinaw Lake, British Columbia. Sakinaw Lake is known to contain a large number of microbial species from candidate phyla. Sequencing results show that community structure based on PacBio shotgun and 16S rRNA gene sequences is highly similar in both the mock and the environmental communities. Resolution power and community representation accuracy from SMRT sequencing data appeared to be independent of GC content of microbial genomes and was higher when compared to Illumina-based metagenome shotgun and 16S rRNA gene (iTag) sequences, e.g. full-length sequencing resolved all 23 OTUs in the mock community, while iTags did not resolve closely related species. SMRT sequencing hence offers various potential benefits when characterizing uncharted microbial communities.
17. Scintillation Particle Detectors Based on Plastic Optical Fibres and Microfluidics
CERN Document Server
Mapelli, Alessandro; Renaud, Philippe
2011-01-01
This thesis presents the design, development, and experimental validation of two types of scintillation particle detectors with high spatial resolution. The first one is based on the well established scintillating fibre technology. It will complement the ATLAS (A Toroidal LHC ApparatuS) detector at the CERN Large Hadron Collider (LHC). The second detector consists of a microfabricated device used to demonstrate the principle of operation of a novel type of scintillation detector based on microfluidics. The first part of the thesis presents the work performed on a scintillating fibre tracking system for the ATLAS experiment. It will measure the trajectory of protons elastically scattered at very small angles to determine the absolute luminosity of the CERN LHC collider at the ATLAS interaction point. The luminosity of an accelerator characterizes its performance. It is a process-independent parameter that is completely determined by the properties of the colliding beams and it relates the cross section of a ...
18. High resolution extremity CT for biomechanics modeling
International Nuclear Information System (INIS)
Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.
1995-01-01
With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling
19. High resolution extremity CT for biomechanics modeling
Energy Technology Data Exchange (ETDEWEB)
Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.
1995-09-23
With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.
20. High-resolution computer-aided moiré
Science.gov (United States)
Sciammarella, Cesar A.; Bhat, Gopalakrishna K.
1991-12-01
This paper presents a high resolution computer-assisted moiré technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moiré grid and the problem associated with the recovery of the displacement field from the sampled values of the grid intensity are discussed. A two-dimensional Fourier transform method for the extraction of displacements from the image of the moiré grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
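The Fourier method sketched in the abstract can be illustrated in one dimension: the moiré grid acts as a carrier whose local phase encodes the displacement, and band-passing the carrier peak in the Fourier domain recovers that phase. The grid pitch, field size, and displacement field below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

# 1-D sketch of Fourier fringe analysis: recover a displacement field
# from a moire-type grid by isolating the carrier peak in the FFT.
N = 512
pitch = 16                                  # grid pitch in pixels (assumed)
f0 = N // pitch                             # carrier frequency, cycles/field
x = np.arange(N)
u_true = 0.8 * np.sin(2 * np.pi * x / N)    # synthetic displacement (pixels)
phase = 2 * np.pi * (x + u_true) / pitch
intensity = 0.5 + 0.5 * np.cos(phase)       # recorded grid intensity

# Isolate the +f0 sideband, demodulate, and unwrap the phase.
F = np.fft.fft(intensity - intensity.mean())
mask = np.zeros(N)
mask[f0 - 8 : f0 + 8] = 1.0                 # band-pass around the carrier
analytic = np.fft.ifft(F * mask)
wrapped = np.angle(analytic) - 2 * np.pi * f0 * x / N
u_rec = np.unwrap(wrapped) * pitch / (2 * np.pi)
u_rec -= u_rec.mean()                       # remove arbitrary phase offset

err = float(np.max(np.abs(u_rec - u_true)))
```

Scaling the unwrapped phase by the pitch converts it directly into displacement units; strains then follow by differentiating the displacement field.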
1. High-resolution stratigraphy with strontium isotopes.
Science.gov (United States)
Depaolo, D J; Ingram, B L
1985-02-22
The isotopic ratio of strontium-87 to strontium-86 shows no detectable variation in present-day ocean water but changes slowly over millions of years. The strontium contained in carbonate shells of marine organisms records the ratio of strontium-87 to strontium-86 of the oceans at the time that the shells form. Sedimentary rocks composed of accumulated fossil carbonate shells can be dated and correlated with the use of high precision measurements of the ratio of strontium-87 to strontium-86 with a resolution that is similar to that of other techniques used in age correlation. This method may prove valuable for many geological, paleontological, paleooceanographic, and geochemical problems.
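Because the seawater ⁸⁷Sr/⁸⁶Sr curve is monotonic over suitable intervals, an age can be read off by interpolating a measured shell ratio against a calibration curve. The sketch below uses made-up calibration points purely to illustrate the interpolation; real work would use a published curve fitted to measured data.

```python
import numpy as np

# Hypothetical calibration: age (Ma) vs seawater 87Sr/86Sr, monotonic
# over this interval (values are illustrative, not a published curve).
ages = np.array([0.0, 5.0, 10.0, 15.0, 20.0])            # Ma
ratios = np.array([0.70917, 0.70900, 0.70880, 0.70860, 0.70840])

def sr_age(measured_ratio):
    """Interpolate an age from a measured 87Sr/86Sr ratio."""
    # np.interp needs increasing x, so flip the (decreasing) ratio axis.
    return float(np.interp(measured_ratio, ratios[::-1], ages[::-1]))

age = sr_age(0.70890)   # falls between the 5 Ma and 10 Ma tie points
```

The attainable age resolution is set by the measurement precision divided by the local slope of the calibration curve, which is why high-precision mass spectrometry is essential to the method.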
2. Laboratory of High resolution gamma spectrometry
International Nuclear Information System (INIS)
Mendez G, A.; Giber F, J.; Rivas C, I.; Reyes A, B.
1992-01-01
The Department of Nuclear Experimentation of the Nuclear Systems Management requested the collaboration of the Engineering unit for supervising the construction of the high-resolution gamma spectrometry and low-background laboratory, using the hall of the subcritical reactor of the Nuclear Center of Mexico. The purpose of this laboratory is to determine the activity of special materials irradiated in nuclear power plants. In this report the architectural development, concepts, materials and diagrams for carrying out this type of work are presented. (Author)
3. Novel high resolution tactile robotic fingertips
DEFF Research Database (Denmark)
Drimus, Alin; Jankovics, Vince; Gorsic, Matija
2014-01-01
This paper describes a novel robotic fingertip based on piezoresistive rubber that can sense pressure tactile stimuli with a high spatial resolution over curved surfaces. The working principle is based on a three-layer sandwich structure (conductive electrodes on top and bottom and a piezoresistive layer in between) ... with specialized data acquisition electronics that acquires 500 frames per second, providing rich information regarding contact force, shape and angle for bio-inspired robotic fingertips. Furthermore, a model for estimating the contact force based on the values of the cells is proposed.
4. Scintillation camera
International Nuclear Information System (INIS)
Zioni, J.; Klein, Y.; Inbar, D.
1975-01-01
The purpose of the scintillation camera is to produce images of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers, and computer circuits that derive from the photomultiplier outputs an analytical function dependent on the position of the scintillation in the crystal at that time. The scintillation crystal is flat and spatially corresponds to the production site of the radiation. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, and to each series group belongs a reference axis running perpendicular to it in the crystal plane. The computer circuits are each assigned to a reference axis. For each series of a series group assigned to one of the reference axes, the computer circuit has an adder to produce a scintillation-dependent series signal; from these, the projection of the scintillation onto the reference axis is calculated. For this, series signals are used that originate from series chosen from two neighbouring photomultiplier series of the group; the scintillation must have appeared between these chosen series, which are termed basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de
5. High resolution and high speed positron emission tomography data acquisition
International Nuclear Information System (INIS)
Burgiss, S.G.; Byars, L.G.; Jones, W.F.; Casey, M.E.
1986-01-01
High resolution positron emission tomography (PET) requires many detectors. Thus, data collection systems for PET must have high data rates, wide data paths, and large memories to histogram the events. This design uses the VMEbus to cost effectively provide these features. It provides for several modes of operation including real time sorting, list mode data storage, and replay of stored list mode data
6. Physics of scintillation detectors
International Nuclear Information System (INIS)
Novotny, R.
1991-01-01
The general concept of a radiation detector is based on three fundamental principles: (i) sensitivity of the device to the radiation of interest, which requires a large cross-section in the detector material; (ii) a detector response function related to the physical properties of the radiation; as an example, a scintillation detector for charged particles should allow identification of the charge of the particle, its kinetic energy and the time of impact, combined with optimum resolutions; and (iii) optimum conversion of the detector response (such as the luminescence of a scintillator) into electronic signals for further processing. The following article will concentrate on the various aspects of the first two listed principles as far as they appear to be relevant for photon and charged particle detection using organic and inorganic scintillation detectors. (orig.)
7. Scintillator plate calorimetry
International Nuclear Information System (INIS)
Price, L.E.
1990-01-01
Calorimetry using scintillator plates or tiles alternated with sheets of (usually heavy) passive absorber has been proven over multiple generations of collider detectors. Recent detectors including UA1, CDF, and ZEUS have shown good results from such calorimeters. The advantages offered by scintillator calorimetry for the SSC environment, in particular, are speed (<10 nsec), excellent energy resolution, low noise, and ease of achieving compensation and hence linearity. On the negative side of the ledger can be placed the historical sensitivity of plastic scintillators to radiation damage, the possibility of nonuniform response because of light attenuation, and the presence of cracks for light collection via wavelength shifting plastic (traditionally in sheet form). This approach to calorimetry is being investigated for SSC use by a collaboration of Ames Laboratory/Iowa State University, Argonne National Laboratory, Bicron Corporation, Florida State University, Louisiana State University, University of Mississippi, Oak Ridge National Laboratory, Virginia Polytechnic Institute and State University, Westinghouse Electric Corporation, and University of Wisconsin
8. High Resolution Powder Diffraction and Structure Determination
International Nuclear Information System (INIS)
Cox, D. E.
1999-01-01
It is clear that high-resolution synchrotron X-ray powder diffraction is a very powerful and convenient tool for material characterization and structure determination. Most investigations to date have been carried out under ambient conditions and have focused on structure solution and refinement. The application of high-resolution techniques to increasingly complex structures will certainly represent an important part of future studies, and it has been seen how ab initio solution of structures with perhaps 100 atoms in the asymmetric unit is within the realms of possibility. However, the ease with which temperature-dependence measurements can be made, combined with improvements in the technology of position-sensitive detectors, will undoubtedly stimulate precise in situ structural studies of phase transitions and related phenomena. One challenge in this area will be to develop high-resolution techniques for ultra-high pressure investigations in diamond anvil cells. This will require highly focused beams and very precise collimation in front of the cell, down to dimensions of 50 μm or less. Anomalous scattering offers many interesting possibilities as well. As a means of enhancing scattering contrast it has applications not only to the determination of cation distribution in mixed systems such as the superconducting oxides discussed in Section 9.5.3, but also to the location of specific cations in partially occupied sites, such as the extra-framework positions in zeolites, for example. Another possible application is to provide phasing information for ab initio structure solution. Finally, the precise determination of f′ as a function of energy through an absorption edge can provide useful information about cation oxidation states, particularly in conjunction with XANES data. In contrast to many experiments at a synchrotron facility, powder diffraction is a relatively simple and user-friendly technique, and most of the procedures and software for data analysis
9. Scintillation hodoscopes on the basis of hodoscopic photomultipliers using scintillation fibers
International Nuclear Information System (INIS)
Alimova, T.V.; Vasil'chenko, V.G.; Vechkanov, G.N.
1986-01-01
Scintillation hodoscope characteristics and their design features have been considered. The spatial resolution for hodoscopes consisting of 4 layers of scintillation fibres 200 mm long and 1 mm in diameter is 0.4-0.6 mm. With 2 layers of fibres 1 m long and 3.8 mm in diameter, a spatial resolution of 3 mm has been obtained. The possibility of constructing scintillation hodoscopes with 0.1 mm resolution is discussed
10. A method for unfolding high-energy scintillation gamma-ray spectra up to 8 MeV
International Nuclear Information System (INIS)
Dymke, N.; Hofmann, B.
1982-01-01
In unfolding a high-energy scintillation gamma-ray spectrum up to 8 MeV with the help of a response matrix, the means of linear algebra fail if the matrix is ill conditioned. In such cases, unfolding can be accomplished by means of a mathematical method based on a priori knowledge of the photon spectrum to be expected. The method, which belongs to the class of regularization techniques, was tested on in-situ gamma-ray spectra of ¹⁶N recorded in a nuclear power plant near the primary circuit, using a 1.5 x 1.5 in. NaI(Tl) scintillation detector. For one regularized unfolding the results were presented in the form of an energy and a dose-rate spectrum. (author)
11. A high-granularity scintillator hadronic-calorimeter with SiPM readout for a linear collider detector
International Nuclear Information System (INIS)
Andreev, V.; Balagura, V; Bobchenko, B.
2004-01-01
We report upon the design, construction and operation of a prototype for a high-granularity tile hadronic calorimeter for a future international linear collider (ILC) detector. Scintillating tiles are read out via wavelength-shifting fibers which guide the scintillation light to a novel photodetector, the Silicon Photomultiplier. The prototype has been tested at DESY using a positron test beam. The results are compared with a reference prototype equipped with multichannel vacuum photomultipliers. Detector calibration, noise, linearity and stability are discussed, and the energy response in a 1-6 GeV positron beam is compared with simulation. The work presented serves to establish the application of SiPMs for calorimetry, and leads to the choice of this device for the construction of a 1 m³ calorimeter prototype for tests in hadron beams. (orig.)
12. Multiple scattering effects in fast neutron polarization experiments using high-pressure helium-xenon gas scintillators as analyzers
International Nuclear Information System (INIS)
Tornow, W.; Mertens, G.
1977-01-01
In order to study multiple scattering effects both in the gas and particularly in the solid materials of high-pressure gas scintillators, two asymmetry experiments have been performed by scattering of 15.6 MeV polarized neutrons from helium contained in stainless steel vessels of different wall thicknesses. A Monte Carlo computer code taking into account the polarization dependence of the differential scattering cross sections has been written to simulate the experiments and to calculate corrections for multiple scattering on helium, xenon and the gas containment materials. Besides the asymmetries for the various scattering processes involved, the code yields time-of-flight spectra of the scattered neutrons and pulse height spectra of the helium recoil nuclei in the gas scintillator. The agreement between experimental results and Monte Carlo calculations is satisfactory. (Auth.)
13. Limiting liability via high resolution image processing
Energy Technology Data Exchange (ETDEWEB)
1996-12-31
The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as evidence-ready, even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and processed into usable evidence. Visualization scientists have brought digital photographic image processing to crime scene photos, moving the process into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and in the positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of photographic capability helps solve one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, thus allowing guilty parties to be set free due to lack of evidence.
14. High-Resolution Scintimammography: A Pilot Study
Energy Technology Data Exchange (ETDEWEB)
Rachel F. Brem; Joelle M. Schoonjans; Douglas A. Kieper; Stan Majewski; Steven Goodman; Cahid Civelek
2002-07-01
This study evaluated a novel high-resolution breast-specific gamma camera (HRBGC) for the detection of suggestive breast lesions. Methods: Fifty patients (with 58 breast lesions) for whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a novel HRBGC prototype. The results of conventional and high-resolution nuclear studies were prospectively classified as negative (normal or benign) or positive (suggestive or malignant) by 2 radiologists who were unaware of the mammographic and histologic results. All of the included lesions were confirmed by pathology. Results: There were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. The specificity with both systems was 93.3% (28/30). For the 18 nonpalpable lesions, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. For lesions ≤1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four lesions (median size, 8.5 mm) were detected only with the HRBGC and were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with an HRBGC results in improved sensitivity for the detection of cancer, with greater improvement shown for nonpalpable and ≤1-cm lesions.
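The percentages quoted above follow directly from the reported counts; a few lines reproduce the arithmetic:

```python
# Reproduce the sensitivity/specificity figures quoted in the abstract
# from the raw counts (true positives / condition-positive, etc.).
def pct(hits, total):
    return round(100.0 * hits / total, 1)

sens_conventional = pct(18, 28)   # general-purpose camera: 64.3%
sens_hrbgc = pct(22, 28)          # breast-specific camera: 78.6%
spec_both = pct(28, 30)           # same specificity for both: 93.3%
```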
15. High resolution studies of barium Rydberg states
International Nuclear Information System (INIS)
Eliel, E.R.
1982-01-01
The subtle structure of Rydberg states of barium with orbital angular momentum 0, 1, 2 and 3 is investigated. Some aspects of atomic theory for a configuration with two valence electrons are reviewed. The Multi Channel Quantum Defect Theory (MQDT) is concisely introduced as a convenient way to describe interactions between Rydberg series. Three high-resolution UV studies are presented. The first two, presenting results on a transition in indium and in europium, serve as an illustration of the frequency-doubling technique. The third study is of hyperfine structure and isotope shifts in low-lying p states in Sr and Ba. An extensive study of the 6snp and 6snf Rydberg states of barium is presented, with particular emphasis on the 6snf states. It is shown that the level structure cannot be fully explained with the model introduced earlier. Rather, an effective two-body spin-orbit interaction has to be introduced to account for the observed splittings, illustrating that high resolution studies on Rydberg states offer a unique opportunity to determine the importance of such effects. Finally, the 6sns and 6snd series are considered. The hyperfine-induced isotope shift in the simple excitation spectra to 6sns ¹S₀ is discussed and attention is paid to series perturbers. It is shown that level mixing parameters can easily be extracted from the experimental data. (Auth.)
16. Principles of high resolution NMR in solids
CERN Document Server
Mehring, Michael
1983-01-01
The field of Nuclear Magnetic Resonance (NMR) has developed at a fascinating pace during the last decade. It always has been an extremely valuable tool to the organic chemist by supplying molecular "finger print" spectra at the atomic level. Unfortunately the high resolution achievable in liquid solutions could not be obtained in solids, and physicists and physical chemists had to live with unresolved lines open to a wealth of curve fitting procedures and a vast amount of speculation. High resolution NMR in solids seemed to be a paradox. Broad, structureless lines are usually encountered when dealing with NMR in solids. Only with the recent advent of multiple-pulse, magic angle, cross-polarization, two-dimensional and multiple-quantum spectroscopy and other techniques during the last decade did it become possible to resolve finer details of nuclear spin interactions in solids. I have felt that graduate students, researchers and others beginning to get involved with these techniques needed a book which trea...
17. Fine-pitch glass GEM for high-resolution X-ray imaging
International Nuclear Information System (INIS)
Fujiwara, T.; Toyokawa, H.; Mitsuya, Y.
2016-01-01
We have developed a fine-pitch glass gas electron multiplier (G-GEM) for high-resolution X-ray imaging. The fine-pitch G-GEM is made of a 400 μm thick photo-etchable glass substrate with 150 μm pitch holes. It is fabricated using the same wet etching technique as that for the standard G-GEM. In this work, we present the experimental results obtained with a single fine-pitch G-GEM with a 50 × 50 mm² effective area. We recorded an energy resolution of 16.2% and a gas gain of up to 5,500 when the detector was irradiated with 5.9 keV X-rays. We present a 50 × 50 mm² X-ray radiograph image acquired with a scintillation gas and optical readout system.
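For context on the 16.2% figure, a back-of-the-envelope sketch of the statistical limit on gas-detector energy resolution at 5.9 keV is shown below. The W-value, Fano factor, and avalanche-gain variance are representative values for an argon-based mixture, assumed for illustration and not taken from the paper; the measured resolution includes further instrumental broadening on top of this limit.

```python
import math

# Rough statistical lower bound on the FWHM energy resolution of a gas
# detector at 5.9 keV. All three material parameters are assumptions:
E = 5900.0      # deposited energy, eV (5.9 keV X-rays)
W = 26.0        # mean energy per ion pair, eV (assumed, Ar-based gas)
F = 0.20        # Fano factor (assumed)
b = 0.5         # relative variance of single-electron avalanche gain (assumed)

N = E / W                                     # primary ion pairs, ~227
fwhm_pct = 100.0 * 2.355 * math.sqrt((F + b) / N)   # ~13%
```

With these assumptions the statistical limit comes out near 13% FWHM, so a measured 16.2% is plausibly close to what primary-ionization and avalanche statistics alone allow.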
18. Design of a fusion reaction-history measurement system with high temporal resolution
International Nuclear Information System (INIS)
Peng Xiaoshi; Wang Feng; Liu Shenye; Jiang Xiaohua; Tang Qi
2010-01-01
In order to accurately measure the history of the fusion reaction for experimental study of inertial confinement fusion, we present the design of a fusion reaction-history measurement system with high temporal resolution. The diagnostic system is composed of a plastic scintillator and nose cone, an optical imaging system, and an optical streak camera. Analysis of the capability of the system indicated that the instrument measures fusion reaction history at temporal resolutions as low as 55 ps and 40 ps, corresponding to 2.45 MeV DD neutrons and 14.03 MeV DT neutrons, respectively. The instrument is able to measure the fusion reaction history at yields of 1.5 × 10⁹ DD neutrons; about 4 × 10⁸ DT neutrons are required for a similar quality signal. (authors)
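The dependence of temporal resolution on neutron energy can be made concrete with a short flight-time calculation. The constants are standard values; the per-centimetre flight times merely illustrate why scintillator thickness and standoff distance feed directly into the achievable resolution, and are not figures from the paper.

```python
import math

# Relativistic speeds of DD (2.45 MeV) and DT (14.03 MeV) neutrons and
# the resulting flight time per centimetre of path length.
MN = 939.565    # neutron rest energy, MeV
C = 29.9792458  # speed of light, cm/ns

def beta(T_mev):
    """v/c for a neutron of kinetic energy T_mev (relativistic)."""
    gamma = 1.0 + T_mev / MN
    return math.sqrt(1.0 - 1.0 / gamma**2)

beta_dd, beta_dt = beta(2.45), beta(14.03)
t_per_cm_dd = 1.0 / (beta_dd * C)   # ~0.46 ns of flight time per cm
t_per_cm_dt = 1.0 / (beta_dt * C)   # ~0.20 ns per cm
```

A DD neutron thus spends roughly half a nanosecond per centimetre of scintillator, so even a 1 cm thick detector smears the arrival time by far more than the quoted 55 ps unless the transit spread is folded into the design.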
19. High Resolution Displays Using NCAP Liquid Crystals
Science.gov (United States)
Macknick, A. Brian; Jones, Phil; White, Larry
1989-07-01
Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.
20. High resolution crystal calorimetry at LHC
International Nuclear Information System (INIS)
Schneegans, M.; Ferrere, D.; Lebeau, M.; Vivargent, M.
1991-01-01
The search for Higgs bosons beyond the LEP200 reach could be one of the main tasks of the future pp and ee colliders. In the intermediate mass region, and in particular in the range 80-140 GeV/c², only the 2-photon decay mode of a Higgs produced inclusively or in association with a W gives a good chance of observation. A 'dedicated' very high resolution calorimeter with photon angle reconstruction and pion identification capability should detect a Higgs signal with high probability. A crystal calorimeter can be considered a conservative approach to such a detector, since large design and operation experience already exists. The extensive R and D needed for finding a dense, fast and radiation-hard crystal is under way. Guidelines for designing an optimum calorimeter for LHC are discussed and preliminary configurations are given. (author) 7 refs., 3 figs., 2 tabs
1. Calculation of efficiency of high-energy neutron detection by plastic scintillators
International Nuclear Information System (INIS)
Telegin, Yu.N.
1977-01-01
A computer was used to calculate neutron (5-300 MeV) registration efficiencies with plastic scintillators 2, 5, 10, 20, 30, 40 and 50 cm thick. The results are shown in the form of tables. The contributions of various processes to the efficiency have been analysed. The calculation results may be used in planning experiments with neutron counters
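A minimal sketch of the kind of calculation involved: the first-interaction probability of a fast neutron in a CH scintillator, computed from macroscopic cross sections. The cross-section values are representative numbers near 10 MeV, assumed for illustration; a full efficiency calculation like the one in the abstract must also fold in the light-output threshold and multiple scattering.

```python
import math

# First-interaction probability of fast neutrons in plastic scintillator.
# CH composition and density 1.032 g/cm^3 assumed; cross sections are
# representative ~10 MeV values (assumptions, not from the paper).
RHO, A_CH, N_A = 1.032, 13.018, 6.022e23
n = RHO * N_A / A_CH                      # CH units per cm^3 -> n_H = n_C = n
sigma_H, sigma_C = 0.94e-24, 1.40e-24     # n-p and n-C total cross sections, cm^2

def interaction_prob(d_cm):
    """Probability of at least one interaction in thickness d_cm."""
    mu = n * (sigma_H + sigma_C)          # macroscopic cross section, 1/cm
    return 1.0 - math.exp(-mu * d_cm)

p10 = interaction_prob(10.0)              # ~0.67 for a 10 cm slab
```

The interaction probability saturates with thickness, which is consistent with the diminishing returns of the thicker (30-50 cm) scintillators tabulated in the study.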
2. Luminescence and scintillation of Eu²⁺-doped high silica glass
Czech Academy of Sciences Publication Activity Database
Chewpraditkul, W.; Chen, D.; Yu, B.; Zhang, Q.; Shen, Y.; Nikl, Martin; Kučerková, Romana; Beitlerová, Alena; Wanarak, C.; Phunpueok, A.
2011-01-01
Vol. 5, No. 1 (2011), pp. 40-42 ISSN 1862-6254 R&D Projects: GA MŠk(CZ) ME10084 Institutional research plan: CEZ:AV0Z10100521 Keywords: glasses * Eu²⁺ * luminescence * scintillation * time-resolved luminescence * porous materials Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 2.218, year: 2011
3. Central Tracking Detector Based on Scintillating Fibres
CERN Multimedia
2002-01-01
Scintillating fibres form a reasonable compromise for central tracking detectors in terms of price, resolution, response time, occupancy and heat production. New fluorescents with large Stokes shifts have been produced, capable of working without wavelength shifters. Coherent multibundles have been developed to achieve high packing fractions. Small segments of tracker shell have been assembled and beam tests have confirmed expectations on spatial resolution. An opto-electronic delay line has been designed to delay the track patterns and enable coincidences with a first level trigger. Replacement of the conventional phosphor screen anode with a Si pixel chip has been achieved. This tube is called the ISPA-tube and has already been operated in beam tests with a scintillating-fibre tracker. The aim of the proposal is to improve hit densities for small diameter fibres by increasing the fraction of trapped light, by reducing absorption and reflection losses, by reflecting light at the free fibre end, and by inc...
4. Concept for a new high resolution high intensity diffractometer
Energy Technology Data Exchange (ETDEWEB)
Stuhr, U [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1997-09-01
A concept for a new time-of-flight powder diffractometer for a thermal neutron beam tube at SINQ is presented. The design of the instrument balances the contradictory requirements of high intensity and high resolution. The high intensity is achieved by using many neutron pulses simultaneously. By analysing the time-angle pattern of the detected neutrons, each neutron can be assigned to a single pulse. (author) 3 figs., tab., refs.
5. High resolution computed tomography of positron emitters
International Nuclear Information System (INIS)
Derenzo, S.E.; Budinger, T.F.; Cahoon, J.L.; Huesman, R.H.; Jackson, H.G.
1976-10-01
High resolution computed transaxial radionuclide tomography has been performed on phantoms containing positron-emitting isotopes. The imaging system consisted of two opposing groups of eight NaI(Tl) crystals 8 mm x 30 mm x 50 mm deep and the phantoms were rotated to measure coincident events along 8960 projection integrals as they would be measured by a 280-crystal ring system now under construction. The spatial resolution in the reconstructed images is 7.5 mm FWHM at the center of the ring and approximately 11 mm FWHM at a radius of 10 cm. We present measurements of imaging and background rates under various operating conditions. Based on these measurements, the full 280-crystal system will image 10,000 events per sec with 400 μCi in a section 1 cm thick and 20 cm in diameter. We show that 1.5 million events are sufficient to reliably image 3.5-mm hot spots with 14-mm center-to-center spacing and isolated 9-mm diameter cold spots in phantoms 15 to 20 cm in diameter
6. High resolution CT of temporal bone trauma
International Nuclear Information System (INIS)
Youn, Eun Kyung
1986-01-01
Radiographic studies of the temporal bone following head trauma are indicated when there is cerebrospinal fluid otorrhea or rhinorrhea, hearing loss, or facial nerve paralysis. Plain radiography displays only 17-30% of temporal bone fractures, and pluridirectional tomography is both difficult to perform, particularly in the acutely ill patient, and less satisfactory for the demonstration of fine fractures. Consequently, high resolution CT is the imaging method of choice for the investigation of suspected temporal bone trauma and allows spatial resolution of fine bony detail comparable to that attainable by conventional tomography. Eight cases of temporal bone trauma were examined at Korea General Hospital from April 1985 through May 1986. The results were as follows: Seven patients (87%) suffered longitudinal fractures. In 6 patients who had purely conductive hearing loss, CT revealed various ossicular chain abnormalities. In one patient who had sensorineural hearing loss, CT demonstrated an intact ossicular chain with a fracture near the lateral wall of the lateral semicircular canal. In one patient who had mixed hearing loss, CT showed a complex fracture.
7. Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations
Science.gov (United States)
Rino, C. L.; Carrano, C. S.; Yokoyama, T.
2017-12-01
In a recent paper, under review, equatorial-plasma-bubble (EPB) simulations were used to conduct a comparative analysis of the EPB spectral characteristics with high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to magnetic field lines provided well-defined EPB structure at altitudes penetrating both high and low-density regions. The average C/NOFS structure in highly disturbed regions showed nearly identical two-component inverse-power-law spectral characteristics as the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path-integrated one-dimensional spectra increase the one-dimensional spectral index by one. The relation requires decorrelation along the propagation path. Correlated structure would be interpreted as stochastic total-electron-content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently
8. High resolution CT of the lung
Energy Technology Data Exchange (ETDEWEB)
Itoh, Harumi (Kyoto Univ. (Japan). Faculty of Medicine)
1991-02-01
The emergence of computed tomography (CT) in the early 1970s has greatly contributed to diagnostic radiology. The brain was the first organ examined with CT, followed by the abdomen. For the chest, CT also came into use shortly after its introduction, in examinations of the thoracic cavity and mediastinum. CT techniques were, however, of limited significance in the evaluation of pulmonary diseases, especially diffuse pulmonary diseases. High-resolution CT (HRCT) has been introduced in clinical investigations of the lung field. This article is designed to present chest radiographic and conventional tomographic interpretations and to introduce findings of HRCT corresponding to the same shadows, with a summation of the significance of HRCT and issues of diagnostic imaging. Materials outlined are tuberculosis, pneumoconiosis, bronchopneumonia, mycoplasma pneumonia, lymphangitic carcinomatosis, sarcoidosis, diffuse panbronchiolitis, interstitial pneumonia, and pulmonary emphysema. Finally, an overview of basic investigations evolved from HRCT is given. (N.K.) 140 refs.
9. Classification of High Spatial Resolution, Hyperspectral ...
Science.gov (United States)
EPA announced the availability of the final report, Classification of High Spatial Resolution, Hyperspectral Remote Sensing Imagery of the Little Miami River Watershed in Southwest Ohio, USA. This report and associated land use/land cover (LULC) coverage is the result of a collaborative effort among an interdisciplinary team of scientists with the U.S. Environmental Protection Agency's (U.S. EPA's) Office of Research and Development in Cincinnati, Ohio. A primary goal of this project is to enhance the use of geography and spatial analytic tools in risk assessment, and to improve the scientific basis for risk management decisions affecting drinking water and water quality. The land use/land cover classification is derived from 82 flight lines of Compact Airborne Spectrographic Imager (CASI) hyperspectral imagery acquired from July 24 through August 9, 2002 via fixed-wing aircraft.
10. A high resolution jet analysis for LEP
International Nuclear Information System (INIS)
Hariri, S.
1992-11-01
A high resolution multijet analysis of hadronic events produced in e+e- annihilation at a C.M.S. energy of 91.2 GeV is described. Hadronic events produced in e+e- annihilations are generated using the Monte Carlo program JETSET 7.3 with its two options: Matrix Element (M.E.) and Parton Showers (P.S.). The shower option is used with its default parameter values, while the M.E. option is used with an invariant mass cut Y_CUT = 0.01 instead of 0.02. This choice ensures a better continuity in the evolution of the event shape variables. (K.A.) 3 refs.; 26 figs.; 1 tab
11. High resolution VUV facility at INDUS-1
International Nuclear Information System (INIS)
Krishnamurty, G.; Saraswathy, P.; Rao, P.M.R.; Mishra, A.P.; Kartha, V.B.
1993-01-01
Synchrotron radiation (SR) generated in electron storage rings is a unique source for the study of atomic and molecular spectroscopy, especially in the vacuum ultraviolet region. Realizing the potential of this light source, efforts are in progress to develop a beamline facility at INDUS-1 to carry out high resolution atomic and molecular spectroscopy. This beam line consists of a fore-optic which is a combination of three cylindrical mirrors. The mirrors are so chosen that an SR beam having a 60 mrad (horizontal) x 6 mrad (vertical) divergence is focussed onto the slit of a 6.65 metre off-plane spectrometer in Eagle mount, equipped with a horizontal slit and vertical dispersion. The design of the various components of the beam line is completed. It has been decided to build the spectrometer as per the requirements of the user community. Details of the various aspects of the beam line will be presented. (author). 3 figs
12. High-resolution CT of airway reactivity
International Nuclear Information System (INIS)
Herold, C.J.; Brown, R.H.; Hirshman, C.A.; Mitzner, W.; Zerhouni, E.A.
1990-01-01
Assessment of airway reactivity has generally been limited to experimental nonimaging models. The authors of this paper used high-resolution CT (HRCT) to evaluate airway reactivity and to calculate airway resistance (Raw) compared with lung resistance (RL). Ten anesthetized and ventilated dogs were investigated with HRCT (10 contiguous 2-mm sections through the lower lung lobes) during the control state, following aerosol histamine challenge, and following posthistamine hyperinflation. The HRCT scans were digitized, and the areas of 10 airways per dog (diameter, 1-10 mm) were measured with a computer edging process. Changes in airway area and Raw (calculated as 1/(area)^2) were measured. RL was assessed separately, following the same protocol. Data were analyzed by use of a paired t-test with significance at p < .05
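The Raw ~ 1/(area)^2 relation used in this abstract is easy to illustrate; a minimal sketch, where the function names and the sample lumen areas are illustrative assumptions, not data from the study:

```python
# Sketch: relative airway resistance from a measured airway lumen area,
# following the abstract's Raw ~ 1/(area)^2 scaling (up to a constant).

def relative_resistance(area_mm2: float) -> float:
    """Airway resistance up to a constant factor: Raw ~ 1 / area^2."""
    if area_mm2 <= 0:
        raise ValueError("airway area must be positive")
    return 1.0 / area_mm2 ** 2

def resistance_change(area_before: float, area_after: float) -> float:
    """Fold-change in Raw when the lumen area changes (e.g. after histamine)."""
    return relative_resistance(area_after) / relative_resistance(area_before)

# Halving the lumen area quadruples the resistance:
print(resistance_change(4.0, 2.0))  # → 4.0
```

The inverse-square dependence is why even modest bronchoconstriction measured on HRCT implies a large resistance change.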
13. High-resolution CT of otosclerosis
International Nuclear Information System (INIS)
Dewen, Yang; Kodama, Takao; Tono, Tetsuya; Ochiai, Reiji; Kiyomizu, Kensuke; Suzuki, Yukiko; Yano, Takanori; Watanabe, Katsushi
1997-01-01
High-resolution CT (HRCT) scans of thirty-two patients (60 ears) with the clinical diagnosis of fenestral otosclerosis were evaluated retrospectively. HRCT was performed with 1-mm-thick targeted sections and 1-mm (36 ears) or 0.5-mm (10 ears) intervals in the semiaxial projection. Seven patients (14 ears) underwent helical scanning with a 1-mm slice thickness and 1-mm/sec table speed. Forty-five ears (75%) were found to have one or more otospongiotic or otosclerotic foci on HRCT. In most instances (30 ears), the otospongiotic foci were found in the region of the fissula ante fenestram. No significant correlations between CT findings and air conduction threshold were observed. We found a significant relationship between lesions of the labyrinthine capsule and sensorineural hearing loss. We conclude that HRCT is a valuable modality for diagnosing otosclerosis, especially when otospongiotic focus is detected. (author)
14. High resolution CT in pulmonary sarcoidosis
International Nuclear Information System (INIS)
Spina, Juan C.; Curros, Marisela L.; Gomez, M.; Gonzalez, A.; Chacon, Carolina; Guerendiain, G.
2000-01-01
Objectives: To establish the particular advantages of High Resolution CT (HRCT) for the diagnosis of pulmonary sarcoidosis. Material and Methods: A series of fourteen patients (4 men and 10 women; mean age 44.5 years) with thoracic sarcoidosis. All patients were studied using HRCT and the diagnosis was confirmed for each case. Confidence intervals were obtained for different disease manifestations. Results: The most common findings were: lymph node enlargement (n=14 patients), pulmonary nodules (n=13), thickening of septa (n=6), peribronchovascular thickening (n=5), pulmonary pseudomass (n=5) and signs of fibrosis (n=4). The stage most commonly observed was stage II. It is worth noting that no cases of pleural effusion or cavitation of pulmonary lesions were observed. Conclusions: In this series, confidence interval overlapping for lymph node enlargement, single pulmonary nodules and septum thickening allows one to infer that their presence in a young adult with few clinical symptoms makes it necessary to first rule out sarcoidosis. (author)
15. Improved methods for high resolution electron microscopy
Energy Technology Data Exchange (ETDEWEB)
Taylor, J.R.
1987-04-01
Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal-reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam-induced movement of monolamellar crystals of C44H90 paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well ordered monolamellar paraffin crystals on carbon films, are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site-specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain-cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.
16. High resolution infrared spectroscopy of symbiotic stars
International Nuclear Information System (INIS)
Bensammar, S.
1989-01-01
We report here very early results of high resolution (5x10^3 - 4x10^4) infrared spectroscopy (1 - 2.5 μm) of different symbiotic stars (T CrB, RW Hya, CI Cyg, PU Vul) observed with the Fourier Transform Spectrometer of the 3.60 m Canada France Hawaii Telescope. These stars are usually considered as interacting binaries and only few details are known about the nature of their cool component. CO absorption lines are detected for the four stars. Very different profiles of the hydrogen Brackett γ and helium 10830 Å lines are shown for CI Cyg observed at different phases, while PU Vul shows very intense emission lines
17. GRANULOMETRIC MAPS FROM HIGH RESOLUTION SATELLITE IMAGES
Directory of Open Access Journals (Sweden)
Catherine Mering
2011-05-01
A new method of land cover mapping from satellite images using granulometric analysis is presented here. Discontinuous landscapes such as steppian bushes of semi-arid regions and recently growing urban settlements are especially concerned by this study. Spatial organisations of the land cover are quantified by means of the size distribution analysis of the land cover units extracted from high resolution remotely sensed images. A granulometric map is built by automatic classification of every pixel of the image according to the granulometric density inside a sliding neighbourhood. Granulometric mapping brings some advantages over traditional thematic mapping by remote sensing by focusing on fine spatial events and small changes in one peculiar category of the landscape.
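The size-distribution analysis underlying a granulometric map is classically computed with morphological openings of increasing size: land-cover units smaller than the structuring element vanish, so the surviving pixel count per size gives the distribution. A minimal sketch under that standard definition (the synthetic mask and disc radii are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy import ndimage

# Sketch: granulometry of a binary land-cover mask via successive binary
# openings with discs of growing radius; units smaller than the disc vanish.

def granulometry(mask, max_radius):
    """Pixels surviving a binary opening with discs of radius 1..max_radius."""
    counts = []
    for r in range(1, max_radius + 1):
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disc = x * x + y * y <= r * r
        opened = ndimage.binary_opening(mask, structure=disc)
        counts.append(int(opened.sum()))
    return counts

# Two square "land-cover units": the small one disappears at small radii.
mask = np.zeros((20, 20), dtype=bool)
mask[2:5, 2:5] = True      # small 3x3 unit
mask[10:17, 10:17] = True  # large 7x7 unit
print(granulometry(mask, 3))
```

A full granulometric map would repeat this inside a sliding neighbourhood around every pixel and classify pixels by the resulting size densities.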
18. High resolution imaging detectors and applications
CERN Document Server
Saha, Swapan K
2015-01-01
Interferometric observations need snapshots of very high time resolution of the order of (i) frame integration of about 100 Hz or (ii) photon-recording rates of several megahertz (MHz). Detectors play a key role in astronomical observations, and since the explanation of the photoelectric effect by Albert Einstein, the technology has evolved rather fast. The present-day technology has made it possible to develop large-format complementary metal oxide–semiconductor (CMOS) and charge-coupled device (CCD) array mosaics, orthogonal transfer CCDs, electron-multiplication CCDs, electron-avalanche photodiode arrays, and quantum-well infrared (IR) photon detectors. The requirements to develop artifact-free photon shot noise-limited images are higher sensitivity and quantum efficiency, reduced noise that includes dark current, read-out and amplifier noise, smaller point-spread functions, and higher spectral bandwidth. This book aims to address such systems, technologies and design, evaluation and calibration, control...
19. High resolution mid-infrared spectroscopy based on frequency upconversion
DEFF Research Database (Denmark)
Dam, Jeppe Seidelin; Hu, Qi; Tidemand-Lichtenberg, Peter
2013-01-01
signals can be analyzed. The obtainable frequency resolution is usually in the nm range, whereas sub-nm resolution is preferred in many applications, like gas spectroscopy. In this work we demonstrate how to obtain sub-nm resolution when using upconversion. In the presented realization one object point... high resolution spectral performance by observing emission from hot water vapor in a butane gas burner.
20. A high resolution solar atlas for fluorescence calculations
Science.gov (United States)
Hearn, M. F.; Ohlmacher, J. T.; Schleicher, D. G.
1983-01-01
The characteristics required of a solar atlas to be used for studying the fluorescence process in comets are examined. Several sources of low resolution data were combined to provide an absolutely calibrated spectrum from 2250 Å to 7000 Å. Three different sources of high resolution data were also used to cover this same spectral range. The low resolution data were then used to put each high resolution spectrum on an absolute scale. The three high resolution spectra were then combined in their overlap regions to produce a single, absolutely calibrated high resolution spectrum over the entire spectral range.
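The scaling step described above (putting a relative high-resolution spectrum on an absolute scale using calibrated low-resolution data) amounts to matching the band-averaged flux of the two spectra over their common range. A minimal sketch; the grids and flux values are synthetic assumptions, not the atlas data:

```python
import numpy as np

# Sketch: rescale a relative high-res spectrum so its band-averaged flux
# matches an absolutely calibrated low-res spectrum over the same band.

def scale_to_absolute(wl_hi, flux_hi_rel, wl_lo, flux_lo_abs):
    """Return the high-res flux scaled to the low-res absolute level."""
    lo_on_hi = np.interp(wl_hi, wl_lo, flux_lo_abs)  # low-res on high-res grid
    scale = lo_on_hi.mean() / flux_hi_rel.mean()
    return flux_hi_rel * scale

wl_hi = np.linspace(2250.0, 7000.0, 2000)        # wavelength grid (Angstrom)
flux_hi_rel = 1.0 + 0.1 * np.sin(wl_hi / 50.0)   # relative, uncalibrated
wl_lo = np.linspace(2250.0, 7000.0, 40)
flux_lo_abs = np.full_like(wl_lo, 3.0)           # absolute flux level
flux_hi_abs = scale_to_absolute(wl_hi, flux_hi_rel, wl_lo, flux_lo_abs)
print(round(flux_hi_abs.mean(), 3))  # → 3.0
```

The rescaled spectrum keeps all the high-resolution line structure while inheriting the absolute calibration of the low-resolution source.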
1. High-Resolution Integrated Optical System
Science.gov (United States)
Prakapenka, V. B.; Goncharov, A. F.; Holtgrewe, N.; Greenberg, E.
2017-12-01
Raman and optical spectroscopy in-situ at extreme high pressure and temperature conditions relevant to the planets' deep interior is a versatile tool for characterization of wide range of properties of minerals essential for understanding the structure, composition, and evolution of terrestrial and giant planets. Optical methods, greatly complementing X-ray diffraction and spectroscopy techniques, become crucial when dealing with light elements. Study of vibrational and optical properties of minerals and volatiles, was a topic of many research efforts in past decades. A great deal of information on the materials properties under extreme pressure and temperature has been acquired including that related to structural phase changes, electronic transitions, and chemical transformations. These provide an important insight into physical and chemical states of planetary interiors (e.g. nature of deep reservoirs) and their dynamics including heat and mass transport (e.g. deep carbon cycle). Optical and vibrational spectroscopy can be also very instrumental for elucidating the nature of the materials molten states such as those related to the Earth's volatiles (CO2, CH4, H2O), aqueous fluids and silicate melts, planetary ices (H2O, CH4, NH3), noble gases, and H2. The optical spectroscopy study performed concomitantly with X-ray diffraction and spectroscopy measurements at the GSECARS beamlines on the same sample and at the same P-T conditions would greatly enhance the quality of this research and, moreover, will provide unique new information on chemical state of matter. The advanced high-resolution user-friendly integrated optical system is currently under construction and expected to be completed by 2018. In our conceptual design we have implemented Raman spectroscopy with five excitation wavelengths (266, 473, 532, 660, 946 nm), confocal imaging, double sided IR laser heating combined with high temperature Raman (including coherent anti-Stokes Raman scattering) and
2. Thallium bromide photodetectors for scintillation detection
CERN Document Server
Hitomi, K; Shoji, T; Hiratate, Y; Ishibashi, H; Ishii, M
2000-01-01
A wide bandgap compound semiconductor, TlBr, has been investigated as a blue-sensitive photodetector material for scintillation detection. The TlBr photodetectors have been fabricated from TlBr crystals grown by the TMZ method using materials purified by multi-pass zone refining. The performance of the photodetectors has been evaluated by measuring their leakage current, quantum efficiency, spatial uniformity, direct X-ray detection and scintillation detection characteristics. The photodetectors have shown high quantum efficiency for the blue wavelength region and high spatial uniformity of their optical response. In addition, good direct X-ray detection characteristics with an energy resolution of 4.5 keV FWHM for 22 keV X-rays from a ¹⁰⁹Cd radioactive source have been obtained. Detection of blue scintillation from GSO and LSO scintillators irradiated with a ²²Na radioactive source has been done successfully by using the photodetectors at room temperature. A clear full-energy pea...
3. Characterization of a high resolution and high sensitivity pre-clinical PET scanner with 3D event reconstruction
CERN Document Server
Rissi, M; Bolle, E; Dorholt, O; Hines, K E; Rohne, O; Skretting, A; Stapnes, S; Volgyes, D
2012-01-01
COMPET is a preclinical PET scanner aiming towards high sensitivity, high resolution and MRI compatibility by implementing a novel detector geometry. In this approach, long scintillating LYSO crystals are used to absorb the gamma-rays. To determine the point of interaction (POI) between gamma-ray and crystal, the light exiting the crystals on one of the long sides is collected with wavelength shifters (WLS) arranged perpendicularly to the crystals. This concept has two main advantages: (1) The parallax error is reduced to a minimum and is equal for the whole field of view (FOV). (2) The POI and its energy deposit are known in all three dimensions with a high resolution, allowing for the reconstruction of Compton-scattered gamma-rays. Point (1) leads to a uniform point source resolution (PSR) distribution over the whole FOV, and also allows placing the detector close to the object being imaged. Both points (1) and (2) lead to an increased sensitivity and allow for both high resolution and sensitivity at the...
4. High-resolution downscaling for hydrological management
Science.gov (United States)
Ulbrich, Uwe; Rust, Henning; Meredith, Edmund; Kpogo-Nuwoklo, Komlan; Vagenas, Christos
2017-04-01
Hydrological modellers and water managers require high-resolution climate data to model regional hydrologies and how these may respond to future changes in the large-scale climate. The ability to successfully model such changes and, by extension, critical infrastructure planning is often impeded by a lack of suitable climate data. This typically takes the form of too-coarse data from climate models, which are not sufficiently detailed in either space or time to be able to support water management decisions and hydrological research. BINGO (Bringing INnovation in onGOing water management; ) aims to bridge the gap between the needs of hydrological modellers and planners, and the currently available range of climate data, with the overarching aim of providing adaptation strategies for climate change-related challenges. Producing the kilometre- and sub-daily-scale climate data needed by hydrologists through continuous simulations is generally computationally infeasible. To circumvent this hurdle, we adopt a two-pronged approach involving (1) selective dynamical downscaling and (2) conditional stochastic weather generators, with the former presented here. We take an event-based approach to downscaling in order to achieve the kilometre-scale input needed by hydrological modellers. Computational expenses are minimized by identifying extremal weather patterns for each BINGO research site in lower-resolution simulations and then only downscaling to the kilometre-scale (convection permitting) those events during which such patterns occur. Here we (1) outline the methodology behind the selection of the events, and (2) compare the modelled precipitation distribution and variability (preconditioned on the extremal weather patterns) with that found in observations.
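The event-selection step described above (identify extremal large-scale patterns in coarse simulations, then downscale only those events) can be sketched with a simple percentile criterion; the threshold, the random daily series, and the function names are illustrative assumptions, not BINGO's actual selection method:

```python
import numpy as np

# Sketch: pick only the "extremal" days from a coarse-resolution daily
# precipitation series as candidates for expensive km-scale downscaling.

def select_events(daily_precip, percentile=95.0):
    """Indices of days above the given percentile, plus the threshold used."""
    threshold = np.percentile(daily_precip, percentile)
    return np.flatnonzero(daily_precip > threshold), threshold

rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.8, scale=4.0, size=3650)  # ~10 years of daily sums
events, thr = select_events(precip, 95.0)
print(len(events))  # roughly 5% of the days
```

Only the selected indices would then be re-simulated at convection-permitting resolution, keeping the computational cost a small fraction of a continuous high-resolution run.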
5. Toward high-resolution optoelectronic retinal prosthesis
Science.gov (United States)
Palanker, Daniel; Huie, Philip; Vankov, Alexander; Asher, Alon; Baccus, Steven
2005-04-01
It has been already demonstrated that electrical stimulation of retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. Current retinal implants provide very low resolution (just a few electrodes), while several thousand pixels are required for functional restoration of sight. We present a design of the optoelectronic retinal prosthetic system that can activate a retinal stimulating array with pixel density up to 2,500 pix/mm2 (geometrically corresponding to a visual acuity of 20/80), and allows for natural eye scanning rather than scanning with a head-mounted camera. The system operates similarly to "virtual reality" imaging devices used in military and medical applications. An image from a video camera is projected by a goggle-mounted infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Such a system provides a broad field of vision by allowing for natural eye scanning. The goggles are transparent to visible light, thus allowing for simultaneous utilization of remaining natural vision along with prosthetic stimulation. Optical control of the implant allows for simple adjustment of image processing algorithms and for learning. A major prerequisite for high resolution stimulation is the proximity of neural cells to the stimulation sites. This can be achieved with sub-retinal implants constructed in a manner that directs migration of retinal cells to target areas. Two basic implant geometries are described: perforated membranes and protruding electrode arrays. Possibility of the tactile neural stimulation is also examined.
6. HIGH RESOLUTION AIRBORNE SHALLOW WATER MAPPING
Directory of Open Access Journals (Sweden)
F. Steinbacher
2012-07-01
In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40 km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution areal images were taken to provide an optical reference, offering a wide range of possibilities for further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye; the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed-wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter. By carefully selecting days with optimum water quality, satisfactory penetration down to the river
7. High Resolution Airborne Shallow Water Mapping
Science.gov (United States)
Steinbacher, F.; Pfennigbauer, M.; Aufleger, M.; Ullrich, A.
2012-07-01
In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution areal images were taken to provide an optical reference, offering a wide range of possibilities on further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye, the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter. By carefully selecting days with optimum water quality, satisfactory penetration down to the river bed was achieved
8. High-resolution CCD imaging alternatives
Science.gov (United States)
Brown, D. L.; Acker, D. E.
1992-08-01
High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still image systems electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. 2. EXTENDING CCD TECHNOLOGY BEYOND BROADCAST Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interlined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.
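The pixel-count arithmetic behind the sensor-shifting scheme is simple: a 750 x 580 array gives about 435,000 photosites, and taking several exposures shifted by sub-pixel steps in both axes multiplies the sample grid. A minimal sketch; the 2 x 2 shift pattern is an illustrative assumption, not the specific scheme described in the abstract:

```python
# Sketch: effective sample count for a shifted-sensor CCD acquisition.
# Each exposure contributes cols*rows samples; shifting the sensor by
# sub-pixel steps in x and y fills the gaps between adjacent photosites.

def effective_pixels(cols, rows, shifts_x=1, shifts_y=1):
    """Total distinct sample positions across all shifted exposures."""
    return cols * rows * shifts_x * shifts_y

base = effective_pixels(750, 580)            # single exposure
shifted = effective_pixels(750, 580, 2, 2)   # four exposures, half-pixel steps
print(base, shifted)  # → 435000 1740000
```

The trade-off is temporal: the shifted exposures must be taken sequentially, so the scheme suits still or slowly changing scenes rather than real-time video.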
9. Luminescence and scintillation of Ce3+-doped high silica glass
Czech Academy of Sciences Publication Activity Database
Chewpraditkul, W.; Shen, Y.; Chen, D.; Yu, B.; Průša, Petr; Nikl, Martin; Beitlerová, Alena; Wanarak, C.
2012-01-01
Roč. 34, č. 11 (2012), s. 1762-1766 ISSN 0925-3467 R&D Projects: GA MŠk LH12185 Institutional research plan: CEZ:AV0Z10100521 Keywords : Ce 3+ * luminescence * porous materials * scintillation * photoluminescence decay Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.918, year: 2012 http://dx.doi.org/10.1016/j.optmat.2012.04.012
10. Prototype tests for a highly granular scintillator-based hadronic calorimeter
OpenAIRE
Liu, Yong; Collaboration, for the CALICE
2017-01-01
Within the CALICE collaboration, several concepts for the hadronic calorimeter of a future lepton collider detector are studied. After having demonstrated the capabilities of the measurement methods in "physics prototypes", the focus now lies on improving their implementation in "technological prototypes", that are scalable to the full linear collider detector. The Analogue Hadronic Calorimeter (AHCAL) concept is a sampling calorimeter of tungsten or steel absorber plates and plastic scintill...
11. A High Resolution Phoswich Detector: LaBr3(Ce) Coupled With LaCl3(Ce)
Science.gov (United States)
Carmona-Gallardo, M.; Borge, M. J. G.; Briz, J. A.; Gugliermina, V.; Perea, A.; Tengblad, O.; Turrión, M.
2010-04-01
An innovative solution for the forward end-cap CALIFA calorimeter of R3B is under investigation, consisting of two scintillation crystals, LaBr3 and LaCl3, stacked together in a phoswich configuration with a single readout. This device should be capable of a good determination of the energy of protons and gamma radiation. This composite detector allows one to deduce the initial energy of charged particles by ΔE1+ΔE2 identification. For gammas, the simulations show that there is a high probability that the first interaction occurs inside the first scintillator within a few centimeters; in the second layer the rest of the energy is absorbed, or the second layer can be used as a veto in case of no deposition in the first layer. One such detector has been tested at the Centro de MicroAnálisis de Materiales (CMAM) in Madrid. Good resolution and time signal separation have been achieved.
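The ΔE1+ΔE2 identification above can be sketched as summing the energy deposits of the two phoswich layers and tagging the event by where the deposits occurred. The numbers and the simple decision rule are illustrative assumptions, not the CALIFA analysis:

```python
# Sketch: phoswich event reconstruction. The incident particle energy is
# recovered as the sum of the front-layer (LaBr3-like) and back-layer
# (LaCl3-like) deposits; an event with no front-layer deposit can be
# flagged separately (veto-style), as described in the abstract.

def reconstruct(delta_e1_mev, delta_e2_mev):
    """Return (total energy in MeV, event tag) for one phoswich event."""
    total = delta_e1_mev + delta_e2_mev
    if delta_e1_mev == 0.0 and delta_e2_mev > 0.0:
        return total, "back-layer only (veto candidate)"
    return total, "front+back (charged particle candidate)"

print(reconstruct(12.5, 37.5))  # → (50.0, 'front+back (charged particle candidate)')
```

In a real analysis the two deposits also feed a ΔE1-vs-ΔE2 scatter plot, where different particle species separate into distinct bands.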
12. Liquid scintillation systems and apparatus for measuring high-energy radiation emitted by samples in standard laboratory test tubes
International Nuclear Information System (INIS)
Benvenutti, R.A.
1976-01-01
Liquid scintillation detection system employs improved sample holders in which the cap of a glass vial is provided with a well for receiving a standard laboratory test tube containing a radioactive sample. The well is immersed in a liquid scintillator in the vial, the scintillator containing lead acetate solution to enhance its efficiency. A commercially available beta-counting liquid scintillation apparatus is modified to provide gamma-counting with the improved sample holders
13. Resolution enhancement of low quality videos using a high-resolution frame
NARCIS (Netherlands)
Pham, T.Q.; Van Vliet, L.J.; Schutte, K.
2006-01-01
This paper proposes an example-based Super-Resolution (SR) algorithm of compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of
14. A method for generating high resolution satellite image time series
Science.gov (United States)
Guo, Tao
2014-10-01
There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications. But it is still a challenge to improve spatial resolution and temporal frequency simultaneously, due to the technical limits of current satellite observation systems. To this end, many R&D efforts have been ongoing for years and have led to some successes, roughly in two directions. One includes super resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and generate good visual effects, but hardly preserve spectral signatures and thus yield limited analytical value; on the other hand, time interpolation is a straightforward way to increase temporal frequency, but in fact it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and a spatial registration is performed by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; this presents a highly effective way to build an economically operational monitoring solution for agriculture, forest and land use investigation.
15. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET
Energy Technology Data Exchange (ETDEWEB)
Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es
2010-04-07
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
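The core of such a reconstruction — a precalculated system matrix driving an iterative EM update — can be sketched in a few lines. This is a generic MLEM iteration on a small random matrix (the ordered-subsets variant and the Monte Carlo matrix itself are omitted); shapes and data are purely illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_vox = 300, 60

# Stand-in for the precalculated system matrix A[i, j] ~ P(count in bin i | emission in voxel j);
# in the paper this comes from Monte Carlo plus symmetry unfolding, here it is random and sparse
A = rng.random((n_bins, n_vox))
A[A < 0.9] = 0.0

x_true = rng.random(n_vox) + 0.1
y = A @ x_true                          # noiseless projection data for the sketch

def mlem(A, y, n_iter=100):
    x = np.ones(A.shape[1])             # uniform initial image
    sens = A.sum(axis=0)                # sensitivity image (column sums)
    sens[sens == 0] = 1.0
    for _ in range(n_iter):
        proj = A @ x                    # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x = x * (A.T @ ratio) / sens    # multiplicative EM update, stays non-negative
    return x

x_rec = mlem(A, y)
```

In practice the matrix is stored in a sparse format and only the symmetry-reduced subset of elements is modelled in detail, as the abstract describes.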
16. High resolution gas volume change sensor
International Nuclear Information System (INIS)
Dirckx, Joris J. J.; Aernouts, Jef E. F.; Aerts, Johan R. M.
2007-01-01
Changes of gas quantity in a system can be measured either by measuring pressure changes or by measuring volume changes. As sensitive pressure sensors are readily available, pressure change is the commonly used technique. In many physiologic systems, however, buildup of pressure influences the gas exchange mechanisms, thus changing the gas quantity change rate. If one wants to study the gas flow in or out of a biological gas pocket, measurements need to be done at constant pressure. In this article we present a highly sensitive sensor for quantitative measurements of gas volume change at constant pressure. The sensor is based on optical detection of the movement of a droplet of fluid enclosed in a capillary. The device is easy to use and delivers gas volume data at a rate of more than 15 measurements/s and a resolution better than 0.06 μl. At the onset of a gas quantity change the sensor shows a small pressure artifact of less than 15 Pa, and at constant change rates the pressure artifact is smaller than 10 Pa or 0.01% of ambient pressure
17. Photoionization of Ar2 at high resolution
International Nuclear Information System (INIS)
Dehmer, P.M.
1982-01-01
The relative photoionization cross section of Ar 2 was determined at a resolution of 0.07 A in the wavelength region from 800 to 850 A using a new photoionization mass spectrometer that combines a high intensity helium continuum lamp with a free supersonic molecular beam source. In the region studied, the photoionization cross section is dominated by autoionization of molecular Rydberg states, and the structure is diffuse owing to the combined effects of autoionization and predissociation. The molecular photoionization spectrum is extremely complex and shows little resemblance either to the corresponding atomic spectrum (indicating that the spectrum of the dimer is not simply a perturbed atomic spectrum) or to the molecular absorption spectrum at longer wavelengths. The regular vibrational progressions seen at longer wavelengths are absent above the first ionization potential. Detailed spectroscopic analysis is possible for only a small fraction of the observed features; however, vibrational intervals of 50--100 cm -1 suggest that some of the Rydberg states have B 2 Pi/sub 3/2g/ ionic cores. A comparison of the absorption and photoionization spectra shows that, at wavelengths shorter than approx.835 A, many of the excited states decay via mechanisms other than autoionization
18. DUACS: Toward High Resolution Sea Level Products
Science.gov (United States)
Faugere, Y.; Gerald, D.; Ubelmann, C.; Claire, D.; Pujol, M. I.; Antoine, D.; Desjonqueres, J. D.; Picot, N.
2016-12-01
The DUACS system produces, as part of the CNES/SALP project and the Copernicus Marine Environment Monitoring Service, high quality multimission altimetry Sea Level products for oceanographic applications, climate forecasting centers, and the geophysics and biology communities. These products consist of directly usable and easy to manipulate Level 3 (along-track cross-calibrated SLA) and Level 4 products (multiple sensors merged as maps or time series) and are available in global and regional versions (Mediterranean Sea, Arctic, European Shelves …). The quality of the products is today limited by the altimeter "Low Resolution Mode" (LRM) technology and the lack of available observations. The launch of two new satellites in 2016, Jason-3 and Sentinel-3A, opens new perspectives. Using the global Synthetic Aperture Radar mode (SARM) coverage of S3A and optimizing the LRM altimeter processing (retracking, editing, ...) will allow us to fully exploit the fine-scale content of the altimetric missions. Thanks to this increase in real time altimetry observations we will also be able to improve Level 4 products by combining these new Level 3 products with new mapping methodologies, such as dynamic interpolation. Finally, these improvements will benefit downstream products: geostrophic currents, Lagrangian products, eddy atlases… Overcoming all these challenges will provide major upgrades of Sea Level products to better fulfill user needs.
19. High resolution simultaneous measurements of airborne radionuclides
International Nuclear Information System (INIS)
Abe, T.; Yamaguchi, Y.; Tanaka, K.; Komura, K.
2006-01-01
High resolution (2-3 hrs) simultaneous measurements of the airborne radionuclides 212 Pb, 210 Pb and 7 Be have been performed using extremely low background Ge detectors at the Ogoya Underground Laboratory. We have measured the above radionuclides at three monitoring points: 1) the Low Level Radioactivity Laboratory (LLRL), Kanazawa University; 2) the Shishiku Plateau (640 m MSL), located about 8 km from LLRL, to investigate vertical differences in activity levels; and 3) Hegura Island (10 m MSL), located about 50 km from the Noto Peninsula in the Sea of Japan, to evaluate the influence of the Asian continent and the mainland of Japan on the variation of the activity levels. Variations of the short-lived 212 Pb concentration showed noticeable time lags between LLRL and the Shishiku Plateau. These time lags might be caused by changes in the height of the planetary boundary layer. On the contrary, variations of long-lived 210 Pb and 7 Be showed simultaneity at the three locations because of the homogeneity of these concentrations all over the area. (author)
20. Development of Omnidirectional Gamma-imager with Stacked Scintillators
International Nuclear Information System (INIS)
Takahashi, Tone; Kawarabayashi, Jun; Tomita, Hideki; Iguchi, Tetsuo; Takada, Eiji
2013-06-01
In the event of a severe accident at a nuclear power plant, rapid measurement of radioactive fallout is required, so we have developed a Compton imager with high efficiency and omnidirectional sensitivity. Three-dimensional position resolutions were evaluated for several kinds of scintillators. All-directional imaging was demonstrated by simulating the detection of a 137 Cs point source. An imaging quality with an angular resolution of 28 deg. and a detection efficiency of 1.1% was estimated. (authors)
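A Compton imager of this kind reconstructs the source direction from two-step interaction kinematics: the two measured energy deposits fix the scatter angle, and each event constrains the source to a cone. A minimal sketch of the standard Compton-kinematics formula (not code from the paper; the example deposits are invented):

```python
import math

ME_C2 = 511.0  # electron rest energy (keV)

def compton_angle_deg(e1_kev, e2_kev):
    """Scatter angle from the two deposits in a Compton imager:
    e1 = energy lost in the first (scatter) interaction,
    e2 = energy absorbed in the second interaction.
    cos(theta) = 1 - me*c^2 * (1/E_scattered - 1/E_total)."""
    cos_t = 1.0 - ME_C2 * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
    return math.degrees(math.acos(cos_t))

# Example: a 662 keV (137Cs) photon depositing 200 keV in the scatterer
theta = compton_angle_deg(200.0, 462.0)
```

The event then lies on a cone of half-angle theta around the scatter axis; overlapping many cones localizes the source in all directions, which is what gives the stacked-scintillator design its omnidirectional sensitivity.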
1. DETECTORS AND EXPERIMENTAL METHODS: Studies of a scintillator-bar detector for a neutron wall at an external target facility
Science.gov (United States)
Yu, Yu-Hong; Xu, Hua-Gen; Xu, Hu-Shan; Zhan, Wen-Long; Sun, Zhi-Yu; Guo, Zhong-Yan; Hu, Zheng-Guo; Wang, Jian-Song; Chen, Jun-Ling; Zheng, Chuan
2009-07-01
To achieve a better time resolution of a scintillator-bar detector for a neutron wall at the external target facility of HIRFL-CSR, we have carried out a detailed study of the photomultiplier, the wrapping material and the coupling media. The timing properties of a scintillator-bar detector have been studied in detail with cosmic rays using a high and low level signal coincidence. A time resolution of 80 ps has been achieved in the center of the scintillator-bar detector.
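Reading out a long scintillator bar at both ends makes the timing independent of the hit position: the mean of the two PMT times is constant up to a fixed offset, while their difference encodes the position. A small Monte Carlo sketch; the bar length, effective light speed and single-end jitter are assumed illustrative values, not the HIRFL-CSR parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 2.0           # bar length (m), assumed
v = 0.15          # effective light propagation speed in the bar (m/ns), assumed
sigma_pmt = 0.08  # single-PMT timing jitter (ns), assumed

x = rng.uniform(0.0, L, 10000)     # hit positions along the bar
t0 = 5.0                           # true event time (ns)
tL = t0 + x / v + rng.normal(0.0, sigma_pmt, x.size)        # left-end PMT time
tR = t0 + (L - x) / v + rng.normal(0.0, sigma_pmt, x.size)  # right-end PMT time

t_mean = 0.5 * (tL + tR)           # position-independent up to the constant L/(2v)
x_rec = 0.5 * (tL - tR) * v + L / 2.0
```

Averaging the two ends also improves the resolution by a factor of sqrt(2) over a single PMT, which is one reason two-end readout with coincidence logic is the standard choice for such bars.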
2. The High-Resolution IRAS Galaxy Atlas
Science.gov (United States)
Cao, Yu; Terebey, Susan; Prince, Thomas A.; Beichman, Charles A.; Oliversen, R. (Technical Monitor)
1997-01-01
An atlas of the Galactic plane (-4.7 deg < b < 4.7 deg), along with the molecular clouds in Orion, rho Oph, and Taurus-Auriga, has been produced at 60 and 100 microns from IRAS data. The atlas consists of resolution-enhanced co-added images with 1-2 arcmin resolution and co-added images at the native IRAS resolution. The IRAS Galaxy Atlas, together with the Dominion Radio Astrophysical Observatory H I line/21 cm continuum and FCRAO CO (1-0) Galactic plane surveys, which both have similar (approx. 1 arcmin) resolution to the IRAS atlas, provides a powerful tool for studying the interstellar medium, star formation, and large-scale structure in our Galaxy. This paper documents the production and characteristics of the atlas.
3. A high resolution β-detector
International Nuclear Information System (INIS)
Charon, Y.; Cuzon, J.C.; Tricoire, H.; Valentin, L.
1987-01-01
We present a detector which couples a charge coupled device to a light amplifier. This image sensor must detect weak β-activity with a 10 μm resolution and should replace the autoradiographic films used for molecular hybridization. The best results are obtained with the 35 S emitter, for which the resolution and the efficiency are respectively 20 μm and 100% (relative to the measured standard source)
4. High resolution spectrometry for relativistic heavy ions
Energy Technology Data Exchange (ETDEWEB)
Gabor, G; Schimmerling, W; Greiner, D; Bieser, F; Lindstrom, P [California Univ., Berkeley (USA). Lawrence Berkeley Lab.
1975-12-01
Several techniques are discussed for velocity and energy spectrometry of relativistic heavy ions with good resolution. A foil telescope with chevron channel plate detectors is described. A test of this telescope was performed using 2.1 GeV/A C 6+ ions, and a time-of-flight resolution of 160 ps was measured. Qualitative information on the effect of foil thickness was also obtained.
5. High resolution fire risk mapping in Italy
Science.gov (United States)
Fiorucci, Paolo; Biondi, Guido; Campo, Lorenzo; D'Andrea, Mirko
2014-05-01
extinguishing actions, leaving more resources to improve safety in areas at risk. With the availability of fire perimeters mapped over a period spanning from 5 to 10 years, depending on the region, a procedure was defined to assess areas at risk with high spatial resolution (900 m2), based on objective criteria derived from observed past fire events. The availability of fire perimeters, combined with a detailed knowledge of topography and land cover, made it possible to understand which are the main features involved in forest fire occurrence and behaviour. The seasonality of the fire regime was also considered, partitioning the analysis into two macro seasons (November-April and May-October). In addition, the total precipitation obtained from the interpolation of 30-year time series from 460 raingauges and the average air temperature obtained by downscaling a 30-year ERA-INTERIM data series were considered. About 48000 fire perimeters, which burnt about 5500 km2, were considered in the analysis, carried out at 30 m spatial resolution. Some important considerations relating to the climate and territorial features that characterize the fire regime at the national level contribute to a better understanding of the forest fire phenomenon. These results allow the definition of new strategies for forest fire prevention and management, extensible to other geographical areas.
6. Extruding plastic scintillator at Fermilab
International Nuclear Information System (INIS)
Pla-Dalmau, Anna; Bross, Alain D.; Rykalin, Viktor V.
2003-01-01
An understanding of the costs involved in the production of plastic scintillators and the development of a less expensive material have become necessary with the prospects of building very large plastic scintillation detectors. Several factors contribute to the high cost of plastic scintillating sheets, but the principal reason is the labor-intensive nature of the manufacturing process. In order to significantly lower the costs, the current casting procedures had to be abandoned. Since polystyrene is widely used in the consumer industry, the logical path was to investigate the extrusion of commercial-grade polystyrene pellets with dopants to yield high quality plastic scintillator. This concept was tested and high quality extruded plastic scintillator was produced. The D0 and MINOS experiments are already using extruded scintillator strips in their detectors. An extrusion line has recently been installed at Fermilab in collaboration with NICADD (Northern Illinois Center for Accelerator and Detector Development). This new facility will serve to further develop and improve extruded plastic scintillator. This paper will discuss the characteristics of extruded plastic scintillator and its raw materials, the different manufacturing techniques and the current R&D program at Fermilab
7. Performance of the first prototype of the CALICE scintillator strip electromagnetic calorimeter
CERN Document Server
Francis, K.; Schlereth, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S.T.; Sosebee, M.; White, A.P.; Yu, J.; Eigen, G.; Mikami, Y.; Watson, N.K.; Thomson, M.A.; Ward, D.R.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Apostolakis, J.; Dotti, A.; Folger, G.; Ivantchenko, V.; Ribon, A.; Uzhinskiy, V.; Carloganu, C.; Gay, P.; Manen, S.; Royer, L.; Tytgat, M.; Zaganidis, N.; Blazey, G.C.; Dyshkant, A.; Lima, J.G.R.; Zutshi, V.; Hostachy, J. -Y.; Morin, L.; Cornett, U.; David, D.; Ebrahimi, A.; Falley, G.; Gadow, K.; Goettlicher, P.; Guenter, C.; Hartbrich, O.; Hermberg, B.; Karstensen, S.; Krivan, F.; Krueger, K.; Lutz, B.; Morozov, S.; Morgunov, V.; Neubueser, C.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Garutti, E.; Laurien, S.; Lu, S.; Marchesini, I.; Matysek, M.; Ramilli, M.; Briggl, K.; Eckert, P.; Harion, T.; Schultz-Coulon, H. -Ch.; Shen, W.; Stamen, R.; Bilki, B.; Norbeck, E.; Northacker, D.; Onel, Y.; Wilson, G.W.; Kawagoe, K.; Sudo, Y.; Yoshioka, T.; Dauncey, P.D.; Wing, M.; Salvatore, F.; Cortina Gil, E.; Mannai, S.; Baulieu, G.; Calabria, P.; Caponetto, L.; Combaret, C.; Della Negra, R.; Grenier, G.; Han, R.; Ianigro, J-C.; Kieffer, R.; Laktineh, I.; Lumb, N.; Mathez, H.; Mirabito, L.; Petrukhin, A.; Steen, A.; Tromeur, W.; Vander donckt, M.; Zoccarato, Y.; Calvo Alamillo, E.; Fouz, M.-C.; Puerta-Pelayo, J.; Corriveau, F.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Popov, V.; Rusinov, V.; Tarkovsky, E.; Besson, D.; Buzhan, P.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Tikhomirov, V.; Kiesling, C.; Seidel, K.; Simon, F.; Soldner, C.; Weuste, L.; Amjad, M.S.; Bonis, J.; Callier, S.; Conforti di Lorenzo, S.; Cornebise, P.; Doublet, Ph.; Dulucq, F.; Fleury, J.; Frisson, T.; van der Kolk, N.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch.; Poeschl, R.; Raux, L.; Rouene, J.; Seguin-Moreau, N.; Anduze, M.; Balagura, V.; Boudry, V.; Brient, J-C.; Cornat, R.; 
Frotin, M.; Gastaldi, F.; Guliyev, E.; Haddad, Y.; Magniette, F.; Musat, G.; Ruan, M.; Tran, T.H.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Kotera, K.; Ono, H.; Takeshita, T.; Uozumi, S.; Jeans, D.; Chang, S.; Khan, A.; Kim, D.H.; Kong, D.J.; Oh, Y.D.; Goetze, M.; Sauer, J.; Weber, S.; Zeitnitz, C.
2014-11-01
A first prototype of a scintillator strip-based electromagnetic calorimeter was built, consisting of 26 layers of tungsten absorber plates interleaved with planes of 45x10x3 mm3 plastic scintillator strips. Data were collected using a positron test beam at DESY with momenta between 1 and 6 GeV/c. The prototype's performance is presented in terms of the linearity and resolution of the energy measurement. These results represent an important milestone in the development of highly granular calorimeters using scintillator strip technology. This technology is being developed for a future linear collider experiment, aiming at the precise measurement of jet energies using particle flow techniques.
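Linearity and resolution results such as these are conventionally summarized by the parametrization σ_E/E = a/√E ⊕ b (stochastic term added in quadrature with a constant term), which becomes linear after squaring. A sketch of the fit with made-up data points — a = 13%/√E and b = 2% are assumptions for illustration, not the CALICE result:

```python
import numpy as np

a_true, b_true = 0.13, 0.02                    # assumed stochastic / constant terms
E = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # beam energies (GeV)
res = np.sqrt(a_true**2 / E + b_true**2)       # synthetic sigma_E/E "measurements"

# (sigma/E)^2 = a^2 * (1/E) + b^2 is linear in (1/E, 1) -> ordinary least squares
X = np.vstack([1.0 / E, np.ones_like(E)]).T
coef, *_ = np.linalg.lstsq(X, res**2, rcond=None)
a_fit, b_fit = np.sqrt(coef)
```

With real beam data one would fit weighted by the measurement errors and quote a and b separately, since the stochastic term dominates at the low momenta used in this test beam.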
8. Zeolites - a high resolution electron microscopy study
International Nuclear Information System (INIS)
Alfredsson, V.
1994-10-01
High resolution transmission electron microscopy (HRTEM) has been used to investigate a number of zeolites (EMT, FAU, LTL, MFI and MOR) and a member of the mesoporous M41S family. The electron optical artefact, manifested as a dark spot in the projected centre of the large zeolite channels and caused by insufficient transfer of certain reflections in the objective lens, has been explained. The artefact severely hinders observation of materials confined in the zeolite channels and cavities. It is shown how to circumvent the artefact problem and how to image confined materials in spite of the disturbance caused by the artefact. Image processing by means of a Wiener filter has been applied for removal of the artefact. The detailed surface structure of FAU has been investigated. Comparison of experimental micrographs with images simulated using different surface models indicates that the surface can be terminated in different ways depending on the synthesis method. The dealuminated form of FAU (USY) is covered by an amorphous region. Platinum incorporated in FAU tends to aggregate in the (111) twin planes, probably due to a local difference in cage structure with more spacious cages. It is shown that platinum is intra-zeolitic as opposed to being located on the external surface of the zeolite crystal. This could be deduced from, among other observations, tomography of ultra-thin sections. HRTEM studies of the mesoporous MCM-41 show that the pores have a hexagonal shape and also support the proposed mechanistic model, which involves a cooperative formation of a mesophase including the silicate species as well as the surfactant. 66 refs, 24 figs
9. Photonic crystal scintillators and methods of manufacture
Science.gov (United States)
Torres, Ricardo D.; Sexton, Lindsay T.; Fuentes, Roderick E.; Cortes-Concepcion, Jose
2015-08-11
Photonic crystal scintillators and their methods of manufacture are provided. Exemplary methods of manufacture include using a highly-ordered porous anodic alumina membrane as a pattern transfer mask for either the etching of underlying material or for the deposition of additional material onto the surface of a scintillator. Exemplary detectors utilizing such photonic crystal scintillators are also provided.
10. Studies on a modular high-energy photon spectrometer of pure CsI scintillators
International Nuclear Information System (INIS)
Kopyto, D.
1994-04-01
The aim of the present thesis is the optimization of components for the construction of a high-energy photon spectrometer of pure CsI for the detection of the neutral pseudoscalar mesons π 0 , η, and η' at COSY. These mesons are distinguished by their decay into two γ quanta and can therefore be detected by means of a photon spectrometer. A concept for a 2-arm shower counter of pure CsI is presented. Conclusions on the energy resolution of such a calorimeter shall be drawn from a test module, constructed of 5 x 5 CsI(pure) truncated pyramids, each of which has a length of 30 cm and an angular acceptance of 6° x 6°. The geometry of the module is chosen such that its extension to a 2-arm shower counter is possible at any time. So far, 14 crystals wrapped in Teflon foil have been tested for the test module. Their energy resolution at 0.66 MeV varies between 20 and 25% FWHM. Furthermore, a method was found which allows the position dependence to be trimmed to the required values; for one crystal a position dependence as low as 1.1% was reached, with an energy resolution of 22% FWHM. A measurement of the energy resolution with 20 MeV protons yielded a value of 7%. For the energy calibration of the single detector elements over a dynamic range between 1 MeV and 12 GeV with low-energy γ sources, the charge response function of the photomultiplier to be used in the test module was determined as a function of the light intensity. The measurement showed that the photomultiplier deviates from linear behaviour by 4% at 40 MeV (relative to a CsI(pure) reference crystal with about twice the detectable light yield of the long truncated pyramids) and by 38% at 300 MeV, while at 500 MeV it shows a deviation of 50%
11. A high resolution TOF-PET concept with axial geometry and digital SiPM readout
CERN Document Server
Casella, C; Joram, C; Schneider, T
2014-01-01
The axial arrangement of long scintillation crystals is a promising concept in PET instrumentation to address the need for optimized resolution and sensitivity. Individual crystal readout and arrays of wavelength shifter strips placed orthogonally to the crystals lead to a 3D detection of the annihilation photons. A fully operational demonstrator scanner, developed by the AX-PET collaboration, proved the potential of this concept in terms of energy and spatial resolution as well as sensitivity. This paper describes a feasibility study, performed on axial prototype detector modules with 100 mm long LYSO crystals, read out by the novel digital Silicon Photomultipliers (dSiPM) from Philips. With their highly integrated readout electronics and excellent intrinsic time resolution, dSiPMs allow for compact, axial detector modules which may extend the potential of the axial PET concept by time-of-flight capabilities (TOF-PET). A coincidence time resolution of 211 ps (FWHM) was achieved in the coincidence of two ax...
12. High spectral resolution studies of gamma ray bursts on new missions
International Nuclear Information System (INIS)
Desai, U. D.; Acuna, M. H.; Cline, T. L.; Dennis, B. R.; Orwig, L. E.; Trombka, J. I.; Starr, R. D.
1996-01-01
Two new missions will be launched in 1996 and 1997, each carrying X-ray and gamma ray detectors capable of high spectral resolution at room temperature. The Argentine Satelite de Aplicaciones Cientificas (SAC-B) and the Small Spacecraft Technology Initiative (SSTI) Clark missions will each carry several arrays of X-ray detectors primarily intended for the study of solar flares and gamma-ray bursts. Arrays of small (1 cm 2 ) cadmium zinc telluride (CZT) units will provide x-ray measurements in the 10 to 80 keV range with an energy resolution of ≅6 keV. Arrays of both silicon avalanche photodiodes (APD) and P-intrinsic-N (PIN) photodiodes (for the SAC-B mission only) will provide energy coverage from 2-25 keV with ≅1 keV resolution. For SAC-B, higher energy spectral data covering the 30-300 keV energy range will be provided by CsI(Tl) scintillators coupled to silicon APDs, resulting in similar resolution but greater simplicity relative to conventional CsI/PMT systems. Because of problems with the Pegasus launch vehicle, the launch of SAC-B has been delayed until 1997. The launch of the SSTI Clark mission is scheduled for June 1996
13. Pinhole SPECT: high resolution imaging of brain tumours in small laboratory animals
International Nuclear Information System (INIS)
Franceschim, M.; Bokulic, T.; Kusic, Z.; Strand, S.E.; Erlandsson, K.
1994-01-01
The performance properties of pinhole SPECT and the application of this technology to evaluate radionuclide uptake in the brain of small laboratory animals were investigated. System sensitivity and spatial resolution measurements of a rotating scintillation camera system were made for a low energy pinhole collimator equipped with a 2.0 mm aperture pinhole insert. Projection data were acquired at 4 degree increments over 360 degrees in step-and-shoot mode using a 4.5 cm radius of rotation. Pinhole planar and SPECT imaging were performed to evaluate regional uptake of Tl-201, Tc-99m-MIBI, Tc-99m-HMPAO and Tc-99m-DTPA in tumor and control regions of the brain in a primary brain tumor model in Fischer 344 rats. Pinhole SPECT images were reconstructed using a modified cone-beam algorithm developed from a two-dimensional fan-beam filtered backprojection algorithm. A reconstructed transaxial resolution of 2.8 mm FWHM and a system sensitivity of 0.086 c/s/kBq were measured with the 2.0 mm pinhole collimator aperture. Tumor to non-tumor uptake ratios at 19-28 days post tumor cell inoculation exceeded 20:1 on SPECT images. Pinhole SPECT provides an important new approach to high resolution imaging: its resolution properties are superior to those achieved with conventional SPECT or PET imaging technologies. (author)
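The resolution-versus-sensitivity trade-off behind pinhole SPECT follows from textbook on-axis estimates: magnification M = f/b, collimator resolution ≈ d(b+f)/f referred to the object plane, and geometric efficiency ≈ d²/16b². A sketch using the 2.0 mm aperture and 4.5 cm radius of rotation from the abstract; the aperture-to-detector distance of 200 mm is an assumed value, and these idealized formulas ignore penetration and off-axis effects:

```python
def pinhole_geometry(d_mm, b_mm, f_mm):
    """On-axis pinhole estimates for an ideal aperture:
    d = aperture diameter, b = source-to-aperture distance,
    f = aperture-to-detector distance (all mm)."""
    M = f_mm / b_mm                      # magnification
    R = d_mm * (b_mm + f_mm) / f_mm      # collimator resolution at the object (mm)
    g = d_mm**2 / (16.0 * b_mm**2)       # geometric efficiency (fraction of emitted photons)
    return M, R, g

# 2.0 mm aperture, 45 mm radius of rotation (abstract), 200 mm focal length (assumed)
M, R, g = pinhole_geometry(2.0, 45.0, 200.0)
```

Because the magnification exceeds one for a small animal close to the aperture, the intrinsic camera resolution is demagnified at the object, which is why pinhole systems can beat the parallel-hole collimators of conventional SPECT.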
14. High resolution time-of-flight (TOF) detector for particle identification
Energy Technology Data Exchange (ETDEWEB)
Boehm, Merlin; Lehmann, Albert; Pfaffinger, Markus; Uhlig, Fred [Physikalisches Institut, Universitaet Erlangen-Nuernberg (Germany); Collaboration: PANDA-Collaboration
2016-07-01
Several prototype tests were performed with the PANDA DIRC detectors at the CERN T9 beam line. A mixed hadron beam with pions, kaons and protons was used at momenta from 2 to 10 GeV/c. For these tests a good particle identification was mandatory. We report about a high resolution TOF detector built especially for this purpose. It consists of two stations each consisting of a Cherenkov radiator read out by a Microchannel-Plate Photomultiplier (MCP-PMT) and a Scintillating Tile (SciTil) counter read out by silicon photomultipliers (SiPMs). With a flight path of 29 m a pion/kaon separation up to 5 GeV/c and a pion/proton separation up to 10 GeV/c was obtained. From the TOF resolutions of different counter combinations the time resolution (sigma) of the individual MCP-PMTs and SciTils was determined. The best counter reached a time resolution of 50 ps.
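The quoted separations follow from relativistic time-of-flight differences over the 29 m path: Δt = (L/c)(1/β₁ − 1/β₂) with β = p/√(p² + m²). A quick numerical check (PDG masses; flight path and momenta from the abstract):

```python
import math

C = 0.299792458                                   # speed of light (m/ns)
M = {"pi": 0.13957, "K": 0.49368, "p": 0.93827}   # particle masses (GeV/c^2)

def tof_ns(p_gev, m_gev, flight_m):
    """Time of flight for a particle of momentum p and mass m over flight_m."""
    beta = p_gev / math.hypot(p_gev, m_gev)       # beta = p / E
    return flight_m / (beta * C)

L = 29.0  # flight path from the abstract (m)
dt_pi_K = tof_ns(5.0, M["K"], L) - tof_ns(5.0, M["pi"], L)    # pi/K at 5 GeV/c
dt_pi_p = tof_ns(10.0, M["p"], L) - tof_ns(10.0, M["pi"], L)  # pi/p at 10 GeV/c
```

Both differences come out around 0.4 ns, i.e. many times the ~50 ps single-counter resolution reported above, which is consistent with the quoted pion/kaon and pion/proton separation reach.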
15. Scintillating optical fibers for fine-grained hodoscopes
International Nuclear Information System (INIS)
Borenstein, S.R.; Strand, R.C.
1981-01-01
Fast detectors with fine spatial resolution will be needed to exploit high event rates at ISABELLE. Scintillating optical fibers for fine grained hodoscopes have been developed by the authors. A commercial manufacturer of optical fibers has drawn and clad PVT scintillator. Detection efficiencies greater than 99% have been achieved for a 1 mm fiber with a PMT over lengths up to 60 cm. Small diameter PMT's and avalanche photodiodes have been tested with the fibers. Further improvements are sought for the fiber and for the APD's sensitivity and coupling efficiency with the fiber
16. Comparison of polystyrene scintillator fiber array and monolithic polystyrene for neutron imaging and radiography
Energy Technology Data Exchange (ETDEWEB)
Simpson, R., E-mail: raspberry@lanl.gov; Cutler, T. E.; Danly, C. R.; Espy, M. A.; Goglio, J. H.; Hunter, J. F.; Madden, A. C.; Mayo, D. R.; Merrill, F. E.; Nelson, R. O.; Swift, A. L.; Wilde, C. H.; Zocco, T. G. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)
2016-11-15
The neutron imaging diagnostic at the National Ignition Facility has been operating since 2011, generating neutron images of deuterium-tritium (DT) implosions at peak compression. The current design features a scintillating fiber array, which allows for high imaging resolution to discern small-scale structure within the implosion. In recent years, it has become clear that additional neutron imaging systems need to be constructed in order to provide 3D reconstructions of the DT source, and these additional views need to be on a shorter line of sight. As a result, there has been increased effort to identify new image collection techniques that improve upon imaging resolution for these next generation neutron imaging systems, such as monolithic deuterated scintillators. This work details measurements performed at the Weapons Neutron Research Facility at Los Alamos National Laboratory that compare the radiographic abilities of the fiber scintillator with a monolithic scintillator, which may be featured in future short line-of-sight neutron imaging systems.
17. Advantages of GSO Scintillator in Imaging and Low-Level Gamma-ray Spectroscopy
CERN Document Server
Sharaf, J
2002-01-01
The single GSO crystal is an excellent scintillation material featuring a high light yield and short decay time for gamma-ray detection. Its performance characteristics were investigated and directly compared to those of BGO. For this purpose, the two scintillators were cut into small crystals of approximately 4×4×10 mm³ and mounted on a PMT. Energy resolution, detection efficiency and counting precision have been measured for various photon energies. In addition to this spectroscopic characterization, the imaging performance of GSO was studied using a scanning rig. The modulation transfer function was calculated and the spatial resolution evaluated by measurements of the detector's point spread function. It is shown that there exists some source intensity for which the two scintillators yield identical precision for identical count time. Below this intensity, the GSO is superior to the BGO detector. The presented properties of GSO suggest potential applications of this scintillator in gamma-ray spectroscopy...
18. Liquid scintillation solutions
International Nuclear Information System (INIS)
Long, E.C.
1976-01-01
The liquid scintillation solution described includes a mixture of: a liquid scintillation solvent, a primary scintillation solute, a secondary scintillation solute, a variety of appreciably different surfactants, and a dissolving and transparency agent. The dissolving and transparency agent is tetrahydrofuran, a cyclic ether. The scintillation solvent is toluene. The primary scintillation solute is PPO, and the secondary scintillation solute is dimethyl POPOP. The variety of appreciably different surfactants is composed of isooctylphenol-polyethoxyethanol and sodium dihexyl sulphosuccinate [fr
19. Preamplifier development for high count-rate, large dynamic range readout of inorganic scintillators
Energy Technology Data Exchange (ETDEWEB)
Keshelashvili, Irakli; Erni, Werner; Steinacher, Michael; Krusche, Bernd; Collaboration: PANDA-Collaboration
2013-07-01
Electromagnetic calorimeters are a central component of many experiments in nuclear and particle physics. Modern "trigger-less" detectors run at very high count rates and require good time and energy resolution as well as a large dynamic range. In addition, photosensors and preamplifiers must work in hostile environments (magnetic fields). Due to the latter constraint, mainly Avalanche Photo Diodes (APDs), Vacuum Photo Triodes (VPTs), and Vacuum Photo Tetrodes (VPTTs) are used. A disadvantage is their low gain, which together with the other requirements is a challenge for the preamplifier design. Our group has developed a special Low Noise / Low Power (LNP) preamplifier for this purpose. It will be used to equip the PANDA EMC forward end-cap (dynamic range 15,000, rate 1 MHz), where the PWO-II crystals and preamplifiers have to run in an environment cooled down to -25°C. A further application is the upgrade of the Crystal Barrel detector at the Bonn ELSA accelerator with APD readout, for which compensation of the temperature dependence of the APD gain and good time resolution are necessary. The development, and all test procedures after mass production, carried out by our group at Basel University over the past several years, will be reported.
20. High resolution time integration for S_N radiation transport
International Nuclear Information System (INIS)
Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.
2009-01-01
First-order, second-order, and high resolution time discretization schemes are implemented and studied for the discrete ordinates (S_N) equations. The high resolution method employs a rate of convergence better than first order, but also suppresses the artificial oscillations introduced by second-order schemes in hyperbolic partial differential equations. The high resolution method achieves these properties by nonlinearly adapting the time stencil to use a first-order method in regions where oscillations could be created. We employ a quasi-linear solution scheme to solve the nonlinear equations that arise from the high resolution method. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high resolution schemes converged to the same solution as the first-order scheme, with better convergence rates. The high resolution method is more accurate than first order and matches or exceeds the second-order method.
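The limiter idea in this abstract (nonlinearly fall back to a first-order stencil wherever a second-order update would oscillate) is generic to hyperbolic problems. As an illustrative stand-in for the S_N time integration, the sketch below applies a minmod-limited MUSCL update to 1D linear advection; the grid, CFL number, and square-pulse test are hypothetical choices for the demonstration, not taken from the paper.

```python
import numpy as np

def minmod(a, b):
    # Pick the smaller-magnitude slope when a and b agree in sign, else 0.
    # This is the nonlinear adaptation: the scheme degrades to first order
    # exactly where a second-order update would create oscillations.
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect(u0, c=0.5, steps=100, limited=True):
    """Advance u_t + u_x = 0 on a periodic grid with CFL number c.

    limited=False is first-order upwind; limited=True adds a
    minmod-limited second-order (MUSCL-type) correction.
    """
    u = u0.astype(float).copy()
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u) if limited else 0.0
        face = u + 0.5 * (1.0 - c) * slope      # value leaving cell i to the right
        u = u - c * (face - np.roll(face, 1))   # conservative update
    return u

u0 = np.zeros(200)
u0[:50] = 1.0                                   # square pulse
u_up = advect(u0, limited=False)                # diffusive but monotone
u_hr = advect(u0, limited=True)                 # sharper, still oscillation-free
```

For the square pulse, the limited scheme stays within [0, 1] (it is total-variation diminishing) while keeping the edges noticeably sharper than pure upwind, which is the trade-off the abstract describes for the time stencil.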
1. R&D of the CEPC scintillator-tungsten ECAL
Science.gov (United States)
Dong, M. Y.
2018-03-01
The Circular Electron Positron Collider (CEPC) has been proposed as a future Higgs factory. To meet the physics requirements, a particle-flow-algorithm-oriented calorimeter system with high energy resolution and precise reconstruction is considered. A sampling calorimeter with a scintillator-tungsten sandwich structure is selected as one of the electromagnetic calorimeter (ECAL) options due to its good performance and relatively low cost. We present the design, test and optimization of the scintillator module read out by a silicon photomultiplier (SiPM), including the design and development of the electronics. To estimate the performance of the scintillator and SiPM module for particles of different energies, a beam test of a mini detector prototype without tungsten shower material was performed at the E3 beam of the Institute of High Energy Physics (IHEP). The results are consistent with expectation. These studies provide a reference for and promote the development of a particle-flow electromagnetic calorimeter for the CEPC.
2. Transparent Ceramic Scintillator Fabrication, Properties and Applications
International Nuclear Information System (INIS)
Cherepy, N.J.; Kuntz, J.D.; Roberts, J.J.; Hurst, T.A.; Drury, O.B.; Sanner, R.D.; Tillotson, T.M.; Payne, S.A.
2008-01-01
Transparent ceramics offer an alternative to single crystals for scintillator applications such as gamma ray spectroscopy and radiography. We have developed a versatile, scaleable fabrication method, using Flame Spray Pyrolysis (FSP) to produce feedstock which is readily converted into phase-pure transparent ceramics. We measure integral light yields in excess of 80,000 Ph/MeV with Cerium-doped Garnets, and excellent optical quality. Avalanche photodiode readout of Garnets provides resolution near 6%. For radiography applications, Lutetium Oxide offers a high performance metric and is formable by ceramics processing. Scatter in transparent ceramics due to secondary phases is the principal limitation to optical quality, and afterglow issues that affect the scintillation performance are presently being addressed
3. New scintillating crystals for PET scanners
CERN Document Server
Lecoq, P
2002-01-01
Systematic R&D on basic mechanisms in inorganic scintillators, initiated by the Crystal Clear Collaboration at CERN 10 years ago, has contributed in no small measure to the development of new materials for a new generation of medical imaging devices with increased resolution and sensitivity. The first important requirement for a scintillator to be used in medical imaging devices is the stopping power for the given energy range of X and gamma rays to be considered, and more precisely the conversion efficiency. A high light yield is also mandatory to improve the energy resolution, which is essentially limited by photostatistics and electronic noise at these energies. A short scintillation decay time allows the dead time to be reduced and therefore the limiting counting rate to be increased. When all these requirements are fulfilled, the sensitivity and image contrast are increased for a given patient dose, or the dose can be reduced. Examples of new materials under development by the Crystal Clear Collaboration...
4. High-pressure ³He-Xe gas scintillators for simultaneous detection of neutrons and gamma rays over a large energy range
Energy Technology Data Exchange (ETDEWEB)
Tornow, W., E-mail: tornow@tunl.duke.edu [Department of Physics, Duke University, Durham, NC 27708 (United States); Triangle Universities Nuclear Laboratory, Durham, NC 27708 (United States); Esterline, J.H. [Department of Physics, Duke University, Durham, NC 27708 (United States); Triangle Universities Nuclear Laboratory, Durham, NC 27708 (United States); Leckey, C.A. [Department of Physics, The College of William and Mary, Williamsburg, VA 23187 (United States); Weisel, G.J. [Department of Physics, Penn State Altoona, Altoona, PA 16601 (United States)
2011-08-11
We report on features of high-pressure ³He-Xe gas scintillators which have not been sufficiently addressed in the past. Such gas scintillators can be used not only for the efficient detection of low-energy neutrons but at the same time for the detection and identification of γ-rays as well. Furthermore, ³He-Xe gas scintillators are also very convenient detectors for fast neutrons in the 1-10 MeV energy range and for high-energy γ-rays in the 7-15 MeV energy range. Due to their linear pulse-height response and self-calibration via the ³He(n,p)³H reaction, neutron and γ-ray energies can easily be determined in this high-energy regime.
5. High-pressure ³He-Xe gas scintillators for simultaneous detection of neutrons and gamma rays over a large energy range
International Nuclear Information System (INIS)
Tornow, W.; Esterline, J.H.; Leckey, C.A.; Weisel, G.J.
2011-01-01
We report on features of high-pressure ³He-Xe gas scintillators which have not been sufficiently addressed in the past. Such gas scintillators can be used not only for the efficient detection of low-energy neutrons but at the same time for the detection and identification of γ-rays as well. Furthermore, ³He-Xe gas scintillators are also very convenient detectors for fast neutrons in the 1-10 MeV energy range and for high-energy γ-rays in the 7-15 MeV energy range. Due to their linear pulse-height response and self-calibration via the ³He(n,p)³H reaction, neutron and γ-ray energies can easily be determined in this high-energy regime.
6. High-resolution gamma spectroscopy with whole-body and partial-body counters. Experience, recommendations. Report
International Nuclear Information System (INIS)
Sahre, P.
1997-12-01
The application of high-resolution gamma spectroscopy with whole-body and partial-body counters has shown a steady upward trend over the last few years. This induced the ''Arbeitskreis Inkorporationsueberwachung'' of the association ''Fachverband fuer Strahlenschutz e.V.'' to organise a meeting for the joint elaboration of a guide on recommended applications of this measuring technique, based on a review of existing experience and results. A key item on the agenda of the meeting was the comparative evaluation of the Ge semiconductor detector and the NaI solid scintillation detector. (orig./CB) [de
7. Novel scintillators and silicon photomultipliers for nuclear physics and applications
International Nuclear Information System (INIS)
Jenkins, David
2015-01-01
Until comparatively recently, scintillator detectors were seen as an old-fashioned tool of nuclear physics, with more attention being given to areas such as gamma-ray tracking using high-purity germanium detectors. Next-generation scintillator detectors, such as lanthanum bromide, which were developed for the demands of space science and gamma-ray telescopes, are found to have strong applicability to low-energy nuclear physics. Their excellent timing resolution makes them very suitable for fast timing measurements, and their much improved energy resolution compared to conventional scintillators promises to open up new avenues in nuclear physics research which are presently hard to access. Such 'medium-resolution' spectroscopy has broad interest across several areas of contemporary interest, such as the study of nuclear giant resonances. In addition to the connections to space science, it is striking that the demands of contemporary medical imaging have strong overlap with those of experimental nuclear physics. An example is the interest in combined PET-MRI imaging, which requires putting scintillator detectors in a high-magnetic-field environment. This has led to strong advances in the area of silicon photomultipliers, a solid-state replacement for photomultiplier tubes, which are insensitive to magnetic fields. Broad application of this technology to nuclear physics may be foreseen. (paper)
8. Some possible improvements in scintillation calorimeters
International Nuclear Information System (INIS)
Lorenz, E.
1985-03-01
Two ideas for improvements of scintillation calorimeters will be presented: a) improved readout of scintillating, totally active electromagnetic calorimeters with combinations of silicon photodiodes and fluorescent panel collectors, b) use of time structure analysis on calorimetry, both for higher rate applications and improved resolution for hadron calorimeters. (orig.)
9. High resolution time integration for S_N radiation transport
International Nuclear Information System (INIS)
Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.
2008-01-01
First-order, second-order and high resolution time discretization schemes are implemented and studied for the S_N equations. The high resolution method employs a rate of convergence better than first order, but also suppresses artificial oscillations introduced by second-order schemes in hyperbolic differential equations. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high resolution schemes converged to the same solution as the first-order scheme, with better convergence rates. The high resolution method is more accurate than first order and matches or exceeds the second-order method. (authors)
10. The measurement of the presampled MTF of a high spatial resolution neutron imaging system
International Nuclear Information System (INIS)
Cao, Raymond Lei; Biegalski, Steven R.
2007-01-01
A high spatial resolution neutron imaging device was developed at the Mark II TRIGA reactor at the University of Texas at Austin. While the modulation transfer function (MTF) is recognized as a well-established parameter for evaluating imaging system resolution, the aliasing associated with digital sampling adds complexity to its measurement. Aliasing is especially problematic when using a high spatial resolution micro-channel plate (MCP) neutron detector that has a pixel grid size similar to that of a CCD array. To compensate for the aliasing, an angulated edge method was used to evaluate the neutron imaging facility, overcoming aliasing by obtaining an oversampled edge spread function (ESF). Baseline correction was applied to the ESF to remove noticeable trends, and the line spread function (LSF) was multiplied by a Hann window to obtain a smoothed version of the presampled MTF. The computing procedure is confirmed by visual inspection of a test phantom; in addition, it is confirmed by comparison to the MTF measurement of a scintillation screen with a known MTF curve.
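The processing chain described here (oversampled ESF → derivative → LSF → Hann window → normalized Fourier magnitude) can be sketched directly. The code below is a generic illustration on a synthetic Gaussian-blurred edge, not the authors' pipeline; the sample spacing and blur width are made-up values.

```python
import numpy as np

def presampled_mtf(esf):
    """Presampled MTF from an oversampled edge spread function (ESF)."""
    lsf = np.gradient(esf)            # differentiate ESF -> line spread function
    lsf = lsf * np.hanning(len(lsf))  # Hann window tapers the noisy tails
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]               # normalize to 1 at zero spatial frequency

# Synthetic oversampled edge: an ideal step blurred by a Gaussian PSF.
x = np.linspace(-5.0, 5.0, 501)       # arbitrary units, heavily oversampled
lsf_true = np.exp(-x**2 / (2 * 0.5**2))
esf = np.cumsum(lsf_true)
esf /= esf[-1]

mtf = presampled_mtf(esf)
```

In a real measurement the oversampled ESF would come from re-binning pixel values by their perpendicular distance to the angulated edge, with a baseline correction applied before differentiation, as the abstract describes.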
11. Molecular imaging: High-resolution detectors for early diagnosis and therapy monitoring of breast cancer
Energy Technology Data Exchange (ETDEWEB)
Garibaldi, F. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy)]. E-mail: Franco.garibaldi@iss.infn.it; Cisbani, E. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Colilli, S. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Cusanno, F. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Fratoni, R. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Giuliani, F. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Gricia, M. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Lucentini, M. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Fratoni, R. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Lo Meo, S. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Magliozzi, M.L. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Santanvenere, F. [Istituto Superiore di Sanita and INFN-gr. Sanita-Rome (Italy); Cinti, M.N. [University La Sapienza, Rome (Italy); Pani, R. [University La Sapienza, Rome (Italy); Pellegrini, R. [University La Sapienza, Rome (Italy); Simonetti, G. [University Tor Vergata, Rome (Italy); Schillaci, O. [University Tor Vergata, Rome (Italy); Del Vecchio, S. [CNR Napoli, Naples (Italy); Salvatore, M. [CNR Napoli, Naples (Italy); Majewski, S. [Jefferson Lab, Newport News, VA (United States); Lanza, R.C. [Massachusetts Institute of Technology, Cambridge, MA (United States); De Vincentis, G. [University La Sapienza, Rome (Italy); Scopinaro, F. [University La Sapienza, Rome (Italy)
2006-12-20
Dedicated high-resolution detectors are required for the detection of small cancerous breast tumours by molecular imaging with radionuclides. Absorptive collimation is normally applied in imaging single-photon emitters, but it results in a strong reduction in detection efficiency. Systems based on electronic collimation are complex and expensive. For these reasons, simulations and measurements have been performed to design an optimised dedicated high-resolution mini gamma camera. Critical parameters are contrast and signal-to-noise ratio (SNR). Intrinsic performance (spatial resolution, pixel identification, and response linearity and uniformity) was first optimised. Pixellated scintillator arrays (NaI(Tl)) of different pixel sizes were coupled to arrays of PSPMTs with different anode pad dimensions (6×6 mm² and 3×3 mm²). Detectors having a field of view (FOV) of 100×100 mm² and 150×200 mm² were designed and built. The electronic system allows read-out of all the anode pad signals. The collimation technique was then considered and the limits of the coded aperture option were studied. Preliminary results are presented.
12. Mechanical design of a high-resolution x-ray powder diffractometer at the Advanced Photon Source.
Energy Technology Data Exchange (ETDEWEB)
Shu, D.; Lee, P.; Preissner, C.; Ramanathan, M.; Beno, M.; VonDreele, R.; Ranay, R.; Ribaud, L.; Kurtz, C.; Jiao, X.; Kline, D.; Jemian, P.; Toby, B.
2007-01-01
A novel high-resolution x-ray powder diffractometer has been designed and commissioned at the bending magnet beamline 11-BM at the Advanced Photon Source (APS), Argonne National Laboratory (ANL). This state-of-the-art instrument is designed to meet challenging mechanical and optical specifications for producing high-quality powder diffraction data with high throughput. The 2600 mm (H) × 2100 mm (L) × 1700 mm (W) diffractometer consists of five subassemblies: a customized two-circle goniometer with a 3-D adjustable supporting base; a twelve-channel high-resolution crystal analyzer system with an array of precision x-ray slits; a manipulator system for twelve scintillator x-ray detectors; a 4-D sample manipulator with cryo-cooling capability; and a robot-based sample exchange automation system. The mechanical design of the diffractometer as well as the test results of its positioning performance are presented in this paper.
13. Mechanical design of a high-resolution x-ray powder diffractometer at the Advanced Photon Source
International Nuclear Information System (INIS)
Shu, D.; Lee, P.; Preissner, C.; Ramanathan, M.; Beno, M.; VonDreele, R.; Ranay, R.; Ribaud, L.; Kurtz, C.; Jiao, X.; Kline, D.; Jemian, P.; Toby, B.
2007-01-01
A novel high-resolution x-ray powder diffractometer has been designed and commissioned at the bending magnet beamline 11-BM at the Advanced Photon Source (APS), Argonne National Laboratory (ANL). This state-of-the-art instrument is designed to meet challenging mechanical and optical specifications for producing high-quality powder diffraction data with high throughput. The 2600 mm (H) × 2100 mm (L) × 1700 mm (W) diffractometer consists of five subassemblies: a customized two-circle goniometer with a 3-D adjustable supporting base; a twelve-channel high-resolution crystal analyzer system with an array of precision x-ray slits; a manipulator system for twelve scintillator x-ray detectors; a 4-D sample manipulator with cryo-cooling capability; and a robot-based sample exchange automation system. The mechanical design of the diffractometer as well as the test results of its positioning performance are presented in this paper.
14. R&D on scintillation materials for novel ionizing radiation detectors for High Energy Physics, medical imaging and industrial applications
CERN Multimedia
Chipaux, R; Rinaldi, D; Boursier, Y M; Vasilyev, A; Tikhomirov, V; Morel, C; Choi, Y; Tamulaitis, G
2002-01-01
The Crystal Clear Collaboration (CCC) was approved by the Detector R&D Committee as RD18 in 1990 with the objective of developing new inorganic scintillators suitable for crystal electromagnetic calorimeters of LHC experiments. From 1990 to 1994, CCC carried out an intensive investigation in the quest for the most suitable scintillator for the LHC; three main candidates were identified and extensively studied: CeF$_{3}$, PbWO$_{4}$ and heavy scintillating glasses. Lead tungstate was chosen by CMS and ALICE as the most cost-effective crystal compliant with LHC conditions. Today 76648 PWO crystals are installed in CMS and 17920 in ALICE. After this success, Crystal Clear has continued its investigation of new scintillators and of the understanding of scintillation mechanisms and light transfer properties, in particular: the understanding of the cerium ion as activator; the development of LuAP and LuYAP crystals for medical imaging applications (CERN patent); investigation of Ytterbium-based scintillators for solar neutrino...
15. Scintillating camera
International Nuclear Information System (INIS)
Vlasbloem, H.
1976-01-01
The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen
16. Scintillator structure
International Nuclear Information System (INIS)
Cusano, D.A.; Prener, J.S.
1979-01-01
A scintillator structure comprises at least one layer of transparent fused quartz with a phosphor coating on one or both sides adjacent to at least one transparent layer of epoxy resin which directs light from the phosphor to a detector. The phosphor layer may be formed from a powder optionally with a binder, a single crystal or a melt, or by evaporation or sintering. A plurality of multiple layers may be used or the structure tilted for greater absorption. The structure may be surrounded by another such structure optionally operating in cascade with the first. Many phosphors are specified. A scintillator structure comprises phosphor particles dispersed in epoxy resin or copoly imide-silicone and cast in a multi-compartment box with long sides transparent to X-rays and dividers opaque to X-rays. (UK)
17. A high resolution, low power time-of-flight system for the space experiment AMS
International Nuclear Information System (INIS)
Alvisi, D.; Anselmo, F.; Baldini, L.; Bari, G.; Basile, M.; Bellagamba, L.; Bruni, A.; Bruni, G.; Boscherini, D.; Casadei, D.; Cara Romeo, G.; Castellini, G.; Cifarelli, L.; Cindolo, F.; Contin, A.; De Pasquale, S.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Margotti, A.; Massam, T.; Nania, R.; Palmonari, F.; Polini, A.; Recupero, S.; Sartorelli, G.; Williams, C.; Zichichi, A.
1999-01-01
The system of plastic scintillator counters for the AMS experiment is described. The main characteristics of the detector are: (a) large sensitive area (four 1.6 m 2 planes) with small dead space; (b) low-power consumption (150 W for the power and the read-out electronics of 336 PMs); (c) 120 ps time resolution
18. EMODnet High Resolution Seabed Mapping - further developing a high resolution digital bathymetry for European seas
Science.gov (United States)
Schaap, D.; Schmitt, T.
2017-12-01
19. Designing an optimally proportional inorganic scintillator
Energy Technology Data Exchange (ETDEWEB)
Singh, Jai, E-mail: jai.singh@cdu.edu.au [School of Engineering and IT, B-Purple-12, Faculty of EHSE, Charles Darwin University, NT 0909 (Australia); Koblov, Alexander [School of Engineering and IT, B-Purple-12, Faculty of EHSE, Charles Darwin University, NT 0909 (Australia)
2012-09-01
The nonproportionality observed in the light yield of inorganic scintillators is studied theoretically as a function of the rates of bimolecular and Auger quenching processes occurring within the electron track initiated by a gamma- or X-ray photon incident on a scintillator. Assuming a cylindrical track, the influence of the track radius and the concentration of excitations created within the track on the scintillator light yield is also studied. By analysing the calculated light yield, a guideline for designing an optimally proportional scintillator with optimal energy resolution is presented.
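A minimal version of this kind of quenching model can be written down directly: excitations of density n decay radiatively at rate A and are lost to bimolecular (B·n²) and Auger-like (C·n³) quenching, and the light yield is the fraction emitted radiatively. The sketch below is a generic zero-dimensional illustration, not the authors' track model; all rate constants are hypothetical and the track geometry is ignored.

```python
import numpy as np

def light_yield(n0, A=1.0, B=0.1, C=0.01):
    """Radiative fraction for dn/dt = -A*n - B*n**2 - C*n**3.

    Integrates n(t) with RK4 while accumulating the radiated part
    R(t) = integral of A*n dt; the yield is R(infinity)/n0.
    """
    def rhs(state):
        n, _ = state
        return np.array([-(A*n + B*n*n + C*n*n*n), A*n])

    state = np.array([float(n0), 0.0])
    while state[0] > 1e-8 * n0:
        dt = 0.01 / (A + B*state[0] + C*state[0]**2)  # resolve the fastest rate
        k1 = rhs(state)
        k2 = rhs(state + 0.5*dt*k1)
        k3 = rhs(state + 0.5*dt*k2)
        k4 = rhs(state + dt*k3)
        state = state + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
    return state[1] / n0
```

The yield falls as the excitation density rises, which is the nonproportionality; for C = 0 the model has the closed form (A/(B·n0))·ln(1 + B·n0/A), a useful cross-check on the integrator.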
20. Designing an optimally proportional inorganic scintillator
International Nuclear Information System (INIS)
Singh, Jai; Koblov, Alexander
2012-01-01
The nonproportionality observed in the light yield of inorganic scintillators is studied theoretically as a function of the rates of bimolecular and Auger quenching processes occurring within the electron track initiated by a gamma- or X-ray photon incident on a scintillator. Assuming a cylindrical track, the influence of the track radius and the concentration of excitations created within the track on the scintillator light yield is also studied. By analysing the calculated light yield, a guideline for designing an optimally proportional scintillator with optimal energy resolution is presented.
1. Inorganic-organic rubbery scintillators
CERN Document Server
Gektin, A V; Pogorelova, N; Neicheva, S; Sysoeva, E; Gavrilyuk, V
2002-01-01
Spectral-kinetic luminescence properties of films containing homogeneously dispersed scintillation particles of CsI, CsI:Tl, CsI:Na, and NaI:Tl in an optically transparent organosiloxane matrix are presented. The material is flexible and rubbery, and consequently detectors of convenient shapes can be produced. It is found that the luminescence spectra of the films are identical to those of the corresponding single crystals, whereas the decay times are much shorter. Layers with pure CsI demonstrate only the fast UV emission (307 nm, 10 ns), without the blue microsecond afterglow typical for crystals. The films containing NaI:Tl are non-hygroscopic and preserve their scintillation properties for a long time in a humid atmosphere, unlike single crystals. Organosiloxane layers with CsI:Tl particles provide high light output with good energy resolution for ⁵⁵Fe, ¹⁰⁹Cd and ²⁴¹Am sources, and are capable of detecting X-rays as well as alpha- and beta-particles.
2. Highly sensitive high resolution Raman spectroscopy using resonant ionization methods
International Nuclear Information System (INIS)
Owyoung, A.; Esherick, P.
1984-05-01
In recent years, the introduction of stimulated Raman methods has offered orders-of-magnitude improvement in spectral resolving power for gas-phase Raman studies. Nevertheless, the inherent weakness of the Raman process suggests the need for significantly more sensitive techniques in Raman spectroscopy. Here we describe a new approach to this problem. Our new technique, which we call ionization-detected stimulated Raman spectroscopy (IDSRS), combines high-resolution SRS with highly sensitive resonant laser ionization to achieve an increase in sensitivity of over three orders of magnitude. The excitation/detection process involves three sequential steps: (1) population of a vibrationally excited state via stimulated Raman pumping; (2) selective ionization of the vibrationally excited molecule with a tunable uv source; and (3) collection of the ionized species at biased electrodes, where they are detected as current in an external circuit.
3. Comparison of CsI(Tl) and CsI(Na) partially slotted crystals for high-resolution SPECT imaging
International Nuclear Information System (INIS)
Giokaris, N.; Loudos, G.; Maintas, D.; Karabarbounis, A.; Lembesi, M.; Spanoudaki, V.; Stiliaris, E.; Boukis, S.; Sakellios, N.; Karakatsanis, N.; Gektin, A.; Boyarintsev, A.; Pedash, V.; Gayshan, V.
2006-01-01
Dedicated systems based on Position Sensitive Photomultiplier Tubes (PSPMTs) coupled to scintillators have been used over the past years for the construction of compact systems suitable for applications such as small-animal imaging and small-organ imaging. Most of the proposed systems are based on fully pixelized scintillators. Previous studies have shown that partially slotted scintillators offer a good compromise between cost, energy resolution and spatial resolution. In this work, the performance of two sets of CsI(Tl) and CsI(Na) partially slotted crystals is compared. Initial results show that CsI(Tl) scintillators are more suitable for gamma-ray detection, since their performance in terms of sensitivity, spatial and energy resolution is superior to that of CsI(Na)
4. Development of a New Class of Scintillating Fibres with Very Short Decay Time and High Light Yield
International Nuclear Information System (INIS)
Borshchev, O.; Ponomarenko, S.; Surin, N.; Cavalcante, A.B.R.; Gavardi, L.; Gruber, L.; Joram, C.; Shinji, O.
2017-01-01
We present first studies of a new class of scintillating fibres which are characterised by very short decay times and high light yield. The fibres are based on a novel type of luminophores admixed to a polystyrene core matrix. These so-called Nanostructured Organosilicon Luminophores (NOL) have high photoluminescence quantum yield and decay times just above 1 ns. A blue and a green emitting prototype fibre with 250 μm diameter were produced and characterised in terms of attenuation length, ionisation light yield, decay time and tolerance to x-ray irradiation. The well-established Kuraray SCSF-78 and SCSF-3HF fibres were taken as references. Even though the two prototype fibres mark just an intermediate step in an ongoing development, their performance is already on a competitive level. In particular, their decay time constants are about a factor of two shorter than those of the fastest known fibres, which makes them promising candidates for time-critical applications.
5. Simulation of light collection in calcium tungstate scintillation detectors
Directory of Open Access Journals (Sweden)
F. A. Danevich
2015-12-01
Owing to their favourable operational properties, oxide scintillators are promising for cryogenic scintillation experiments aimed at the study of rare nuclear processes. In order to optimize the light yield and energy resolution, we calculated the light-collection efficiency for different geometries of a scintillation detector with a CaWO4 crystal by the Monte Carlo method, using the Litrani, Geant4 and Zemax packages. The calculations were compared with experimental data in the same configurations, depending on the crystal shape, surface treatment, material and shape of the reflector, and presence of optical contact. The best results were obtained with crystals shaped as a right prism with a triangular base, with completely diffused surfaces, using a mirror reflector shaped as a truncated cone. Simulations using Litrani showed the best agreement with the experimental results.
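As a toy version of such a light-collection calculation, the sketch below ray-traces photons in a unit-cube scintillator whose top face is a perfectly collecting photodetector window and whose other five faces are diffuse reflectors. The geometry, reflectivity values, and the uniform (rather than Lambertian) bounce distribution are all simplifying assumptions for illustration, not the configurations studied in the paper.

```python
import numpy as np

def _random_direction(rng):
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def collection_efficiency(reflectivity, n_photons=4000, max_bounces=200, seed=1):
    """Fraction of isotropically emitted photons reaching the top face (z = 1)
    of a unit cube; the other five faces reflect diffusely, absorbing each
    photon with probability 1 - reflectivity per bounce."""
    rng = np.random.default_rng(seed)
    collected = 0
    for _ in range(n_photons):
        pos = rng.random(3)                       # uniform emission point
        dirn = _random_direction(rng)
        for _ in range(max_bounces):
            # Distance along dirn to the bounding plane hit on each axis.
            with np.errstate(divide="ignore", invalid="ignore"):
                t = np.where(dirn > 0, (1.0 - pos) / dirn, -pos / dirn)
            t[dirn == 0] = np.inf
            t = np.where(t > 1e-12, t, np.inf)    # guard against rounding
            axis = int(np.argmin(t))
            pos = pos + t[axis] * dirn
            if axis == 2 and dirn[2] > 0:         # hit the detector window
                collected += 1
                break
            if rng.random() > reflectivity:       # absorbed at this wall
                break
            side = 1.0 if dirn[axis] > 0 else -1.0
            dirn = _random_direction(rng)         # diffuse re-emission
            dirn[axis] = -side * abs(dirn[axis])  # point back into the cube
    return collected / n_photons
```

Raising the wall reflectivity raises the collected fraction, and swapping in other surface models (specular mirrors, Lambertian lobes, wrapped reflectors) is the kind of comparison the packages named in the abstract perform with far more physical detail.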
6. Scintillation Reduction using Conjugate-Plane Imaging (Abstract)
Science.gov (United States)
Vander Haagen, G. A.
2017-12-01
(Abstract only) All observatories are plagued by atmospheric turbulence, exhibited as star scintillation or "twinkle", whether at a high-altitude adaptive-optics research facility or a 30-cm amateur telescope. It is well known that these disturbances are caused by wind- and temperature-driven refractive gradients in the atmosphere and limit the ultimate photometric resolution of land-based facilities. One approach identified by Fuchs (1998) for scintillation-noise reduction is to create a conjugate image space at the telescope and focus on the dominant conjugate turbulent layer within that space. When focused on the turbulent layer, little or no scintillation exists. This technique is described, whereby noise reductions of 6:1 to 11:1 have been achieved in mathematical and optical-bench simulations. A proof-of-principle conjugate optical train design for an 80-mm, f/7 telescope is discussed.
7. High resolution IVEM tomography of biological specimens
Energy Technology Data Exchange (ETDEWEB)
Sedat, J.W.; Agard, D.A. [Univ. of California, San Francisco, CA (United States)
1997-02-01
Electron tomography is a powerful tool for elucidating the three-dimensional architecture of large biological complexes and subcellular organelles. The introduction of intermediate voltage electron microscopes further extended the technique by providing the means to examine very large and non-symmetrical subcellular organelles, at resolutions beyond what would be possible using light microscopy. Recent studies using electron tomography on a variety of cellular organelles and assemblies such as centrosomes, kinetochores, and chromatin have clearly demonstrated the power of this technique for obtaining 3D structural information on non-symmetric cell components. When combined with biochemical and molecular observations, these 3D reconstructions have provided significant new insights into biological function.
8. High resolution synchrotron light analysis at ELSA
Energy Technology Data Exchange (ETDEWEB)
Switka, Michael; Zander, Sven; Hillert, Wolfgang [Bonn Univ. (Germany). Elektronen-Stretcher Anlage ELSA-Facility (ELSA)
2013-07-01
The pulse stretcher ring ELSA provides polarized electrons with energies up to 3.5 GeV for external hadron experiments. In order to meet the demand for stored beam intensities of up to 200 mA, advanced beam-instability studies need to be carried out. An external diagnostic beamline for synchrotron light analysis has been set up, providing space for multiple diagnostic tools including a streak camera with a time resolution of <1 ps. Beam profile measurements are expected to identify instabilities and reveal their thresholds. The effect of adequate countermeasures is subject to analysis. The current status of the beamline development is presented.
9. High resolution NMR spectroscopy of synthetic polymers in bulk
International Nuclear Information System (INIS)
Komorski, R.A.
1986-01-01
The contents of this book are: Overview of high-resolution NMR of solid polymers; High-resolution NMR of glassy amorphous polymers; Carbon-13 solid-state NMR of semicrystalline polymers; Conformational analysis of polymers of solid-state NMR; High-resolution NMR studies of oriented polymers; High-resolution solid-state NMR of protons in polymers; and Deuterium NMR of solid polymers. This work brings together the various approaches for high-resolution NMR studies of bulk polymers into one volume. Heavy emphasis is, of course, given to 13C NMR studies both above and below Tg. Standard high-power pulse and wide-line techniques are not covered
10. Extruded plastic scintillator for MINERvA
International Nuclear Information System (INIS)
Pla-Dalmau, Anna; Bross, Alan D. [Fermilab]; Rykalin, Victor V.; Wood, Brian M. [NICADD, DeKalb]
2005-01-01
An extrusion line has recently been installed at Fermilab in collaboration with NICADD (Northern Illinois Center for Accelerator and Detector Development). This new facility will serve to further develop and improve extruded plastic scintillator. Since polystyrene is widely used in the consumer industry, the logical path was to investigate the extrusion of commercial-grade polystyrene pellets with dopants to yield high-quality plastic scintillator. The D0 and MINOS experiments are already using extruded scintillator strips in their detectors, and the new MINERvA experiment at Fermilab is also pursuing the use of extruded plastic scintillator. A new plastic scintillator strip is being tested and its properties characterized. The initial results are presented here
11. High resolution transmission imaging without lenses
International Nuclear Information System (INIS)
Rodenburg, J M; Hurst, A C; Maiden, A
2010-01-01
The whole history of transmission imaging has been dominated by the lens, whether used in visible-light optics, electron optics or X-ray optics. Lenses can be thought of as a very efficient method of processing a wave front scattered from an object into an image of that object. An alternative approach is to undertake this image-formation process using a computational technique. The crudest scattering experiment is to simply record the intensity of a diffraction pattern. Recent progress in so-called diffractive imaging has shown that it is possible to recover the phase of a scattered wavefield from its diffraction pattern alone, as long as the object (or the illumination on the object) is of finite extent. In this paper we present results from a very efficient phase retrieval method which can image infinitely large fields of view. It may have important applications in improving resolution in electron microscopy, or at least allowing low specification microscopes to achieve resolution comparable to state-of-the-art machines.
12. Scalable Algorithms for Large High-Resolution Terrain Data
DEFF Research Database (Denmark)
Mølhave, Thomas; Agarwal, Pankaj K.; Arge, Lars Allan
2010-01-01
In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, so...
13. High-resolution X-ray diffraction studies of multilayers
DEFF Research Database (Denmark)
Christensen, Finn Erland; Hornstrup, Allan; Schnopper, H. W.
1988-01-01
High-resolution X-ray diffraction studies of the perfection of state-of-the-art multilayers are presented. Data were obtained using a triple-axis perfect-crystal X-ray diffractometer. Measurements reveal large-scale figure errors in the substrate. A high-resolution triple-axis set up is required...
14. Achieving sensitive, high-resolution laser spectroscopy at CRIS
Energy Technology Data Exchange (ETDEWEB)
Groote, R. P. de [Instituut voor Kern- en Stralingsfysica, KU Leuven (Belgium); Lynch, K. M., E-mail: kara.marie.lynch@cern.ch [EP Department, CERN, ISOLDE (Switzerland); Wilkins, S. G. [The University of Manchester, School of Physics and Astronomy (United Kingdom); Collaboration: the CRIS collaboration
2017-11-15
The Collinear Resonance Ionization Spectroscopy (CRIS) experiment, located at the ISOLDE facility, has recently performed high-resolution laser spectroscopy, with linewidths down to 20 MHz. In this article, we present the modifications to the beam line and the newly-installed laser systems that have made sensitive, high-resolution measurements possible. Highlights of recent experimental campaigns are presented.
15. High resolution UV spectroscopy and laser-focused nanofabrication
NARCIS (Netherlands)
Myszkiewicz, G.
2005-01-01
This thesis combines two at first glance different techniques: High Resolution Laser Induced Fluorescence Spectroscopy (LIF) of small aromatic molecules and Laser Focusing of atoms for Nanofabrication. The thesis starts with the introduction to the high resolution LIF technique of small aromatic
16. Ionospheric Scintillation Effects on GPS
Science.gov (United States)
Steenburgh, R. A.; Smithtro, C.; Groves, K.
2007-12-01
Ionospheric scintillation of Global Positioning System (GPS) signals threatens navigation and military operations by degrading performance or making GPS unavailable. Scintillation is particularly active within, although not limited to, a belt encircling the earth within 20 degrees of the geomagnetic equator. As GPS applications and users increase, so does the potential for detrimental impacts from scintillation. We examined amplitude scintillation data spanning seven years from Ascension Island, U.K.; Ancon, Peru; and Antofagasta, Chile in the Atlantic/Americas longitudinal sector, as well as data from Parepare, Indonesia; Marak Parak, Malaysia; Pontianak, Indonesia; Guam; and Diego Garcia, U.K. in the Pacific longitudinal sector. From these data, we calculate the percent probability of occurrence of scintillation at various intensities described by the S4 index. Additionally, we determine Dilution of Precision at one-minute resolution. We examine diurnal, seasonal and solar-cycle characteristics and make spatial comparisons. In general, activity was greatest during the equinoxes and at solar maximum, although scintillation at Antofagasta, Chile was higher during 1998 than at solar maximum.
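The S4 index mentioned above is conventionally defined as the standard deviation of the received signal intensity normalised by its mean. A minimal sketch of that definition (the function name is an illustrative assumption; in practice the input would be detrended intensity samples over a fixed averaging interval, commonly 60 s):

```python
import math

def s4_index(intensities):
    """Amplitude scintillation index S4: the normalised standard
    deviation of received signal intensity,
    S4 = sqrt((<I^2> - <I>^2) / <I>^2)."""
    n = len(intensities)
    mean_i = sum(intensities) / n
    mean_i2 = sum(i * i for i in intensities) / n
    return math.sqrt((mean_i2 - mean_i ** 2) / mean_i ** 2)
```

A constant signal gives S4 = 0, while strong equatorial scintillation events push S4 towards (and beyond) 1.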
17. High rate read-out of LaBr(Ce) scintillator with a fast digitizer
International Nuclear Information System (INIS)
Stevanato, L.; Cester, D.; Nebbia, G.; Viesti, G.; Neri, F.; Petrucci, S.; Selmi, S.; Tintori, C.
2012-01-01
The energy resolution of a LaBr(Ce) detector has been studied as a function of the count rate up to 340 kHz by using a 12-bit 250 MS/s V1720 digitizer. The time resolution achieved by processing the digitized signals off-line has also been determined. The energy resolution obtained with the digitizer is better than that achievable using standard NIM electronics. The time resolution yielded by the digitizer with a software CFTD is about δt=0.8 ns (FWHM), slightly worse than the δt=0.65 ns (FWHM) obtained from standard NIM. However, this time resolution lies well within the requirements for applications in Non-Destructive Analysis of large objects with tagged neutron beams.
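The software constant-fraction timing mentioned above can be sketched generically (this is not the authors' implementation): the digitized waveform is attenuated, a delayed copy is subtracted to form a bipolar signal, and the zero crossing is found by linear interpolation, which makes the extracted time largely independent of pulse amplitude.

```python
def cfd_time(samples, fraction=0.3, delay=4):
    """Software constant-fraction discriminator: forms the bipolar
    signal b[k] = fraction*s[k] - s[k-delay] and returns the linearly
    interpolated zero-crossing position in sample units (or None if
    no crossing is found)."""
    b = [fraction * samples[k] - samples[k - delay]
         for k in range(delay, len(samples))]
    for k in range(1, len(b)):
        if b[k - 1] > 0.0 >= b[k]:
            # Linear interpolation between the two samples around zero.
            t = (k - 1) + b[k - 1] / (b[k - 1] - b[k])
            return t + delay  # convert b-index back to sample index
    return None
```

Because the crossing position depends only on the pulse shape, scaling the amplitude leaves the extracted time unchanged, while shifting the pulse shifts the time by the same amount.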
18. High-Resolution Sonars: What Resolution Do We Need for Target Recognition?
Directory of Open Access Journals (Sweden)
Pailhas Yan
2010-01-01
Full Text Available Target recognition in sonar imagery has long been an active research area in the maritime domain, especially in the mine-countermeasure context. Recently it has received even more attention as new sensors with increased resolution have been developed; new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms have emerged. With the recent introduction of Synthetic Aperture Sonar systems and high-frequency sonars, sonar resolution has dramatically increased and noise levels have decreased. Sonar images are distance images, but at high resolution they tend to appear visually as optical images. Traditionally, algorithms have been developed specifically for imaging sonars because of their limited resolution and high noise levels. With high-resolution sonars, algorithms developed in the image processing field for natural images become applicable. However, the lack of large datasets has hampered the development of such algorithms. Here we present a fast and realistic sonar simulator enabling development and evaluation of such algorithms. We develop a classifier and then analyse its performance using our simulated synthetic sonar images. Finally, we discuss sensor resolution requirements for achieving effective classification of various targets and demonstrate that with high-resolution sonars, target highlight analysis is the key to target recognition.
19. Status of timing with plastic scintillation detectors
International Nuclear Information System (INIS)
Moszynski, M.; Bengtson, B.
1979-01-01
Timing properties of scintillators and photomultipliers, as well as theoretical and experimental studies of the time resolution of scintillation counters, are reviewed. Predictions of the theory of the scintillation pulse generation processes are compared with data on the light pulse shape from small samples, in which the pulse shape depends only on the composition of the scintillator. For larger samples, the influence of the light collection and self-absorption processes on the light pulse shape is discussed. Data on the rise times, FWHMs, decay times and light yields of several commercial scintillators used in timing are collected. The next part of the paper deals with the properties of photomultipliers. The sources of time uncertainty in photomultipliers, such as the spread of the initial velocities of photoelectrons, the emission of photoelectrons at different angles and from different points on the photocathode, and the time spread and gain dispersion introduced by the electron multiplier, are reviewed. Experimental data on the time jitter, single-electron response and photoelectron yield of some fast photomultipliers are collected. As the time resolution of timing systems with scintillation counters also depends on the time pick-off units, a short presentation of the timing methods is given. The discussion of timing theories is followed by a review of experimental studies of the time resolution of scintillation counters. The paper ends with an analysis of the prospects for further progress in subnanosecond timing with scintillation counters. (Auth.)
20. Ultra high resolution X-ray detectors
International Nuclear Information System (INIS)
Hess, U.; Buehler, M.; Hentig, R. von; Hertrich, T.; Phelan, K.; Wernicke, D.; Hoehne, J.
2001-01-01
CSP Cryogenic Spectrometers GmbH is developing cryogenic energy-dispersive X-ray spectrometers based on superconducting detector technology. Superconducting sensors exhibit at least a 10-fold improvement in energy resolution over conventional Si(Li) or Ge detectors due to their low energy gap. These capabilities are extremely valuable for the analysis of light elements and, in general, for the analysis of the low-energy range of the X-ray spectrum. The spectrometer is based on a mechanical cooler needing no liquid coolants and an adiabatic demagnetization refrigerator (ADR) stage which supplies the operating temperature of below 100 mK for the superconducting sensor. Applications include surface analysis in the semiconductor industry as well as analysis of material composition, e.g. in the ceramics or automobile industries
1. Measurement methods for several properties of scintillator
International Nuclear Information System (INIS)
Luo Fengqun; Ji Changsong
1998-01-01
The current paper describes experimental methods for measuring the relative light output, the relative energy conversion efficiency, the intrinsic amplitude resolution and the detection efficiency of scintillators, as well as their temperature effects
2. Textural Segmentation of High-Resolution Sidescan Sonar Images
National Research Council Canada - National Science Library
Kalcic, Maria; Bibee, Dale
1995-01-01
.... The high resolution of the 455 kHz sonar imagery also provides much information about the surficial bottom sediments, however their acoustic scattering properties are not well understood at high frequencies...
3. High resolution X-ray detector for synchrotron-based microtomography
CERN Document Server
Stampanoni, M; Wyss, P; Abela, R; Patterson, B; Hunt, S; Vermeulen, D; Rueegsegger, P
2002-01-01
Synchrotron-based microtomographic devices are powerful, non-destructive, high-resolution research tools. Highly brilliant and coherent X-rays extend the traditional absorption imaging techniques and enable edge-enhanced and phase-sensitive measurements. At the Materials Science Beamline MS of the Swiss Light Source (SLS), the X-ray microtomographic device is now operative. A high-performance detector based on a scintillating screen optically coupled to a CCD camera has been developed and tested. Different configurations are available, covering a field of view ranging from 715×715 μm² to 7.15×7.15 mm² with magnifications from 4x to 40x. With the highest magnification, 480 lp/mm has been achieved at 10% modulation transfer function, which corresponds to a spatial resolution of 1.04 μm. A low-noise fast-readout CCD camera transfers 2048×2048 pixels within 100-250 ms at a dynamic range of 12-14 bit to the file server. A user-friendly graphical interface gives access to the main parameters needed for ...
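The quoted figures are mutually consistent: a spatial frequency f in line pairs per millimetre corresponds to a smallest resolvable feature of 1/(2f) mm, since one line pair spans two resolution elements, so 480 lp/mm gives about 1.04 μm. A one-line helper (hypothetical name) makes the conversion explicit:

```python
def lp_per_mm_to_resolution_um(lp_per_mm):
    """One line pair spans two resolution elements, so the smallest
    resolvable feature is 1/(2f) mm; return it in micrometres."""
    return 1000.0 / (2.0 * lp_per_mm)
```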
4. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system
Science.gov (United States)
Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2010-05-01
Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified the texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video can be reproduced, in contrast with bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with images processed using bicubic interpolation, and the average PSNRs were higher than those obtained with the well-known Freeman patch-based super-resolution method. The computational time of our method was reduced to almost 1/10 of that of Freeman's patch-based super-resolution method.
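The PSNR figures quoted above follow the standard definition PSNR = 10·log10(MAX²/MSE); note that a 6 dB gain corresponds to halving the RMS error, since 20·log10(2) ≈ 6.02 dB. A minimal sketch of that metric (not the authors' evaluation code; images are taken as flat lists of pixel values):

```python
import math

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images, PSNR = 10*log10(max_value^2 / MSE)."""
    n = len(reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)
```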
5. Eu-activated fluorochlorozirconate glass-ceramic scintillators
International Nuclear Information System (INIS)
Johnson, J. A.; Schweizer, S.; Henke, B.; Chen, G.; Woodford, J.; Newman, P. J.; MacFarlane, D. R.
2006-01-01
Rare-earth-doped fluorochlorozirconate (FCZ) glass-ceramic materials have been developed as scintillators and their properties investigated as a function of dopant level. The paper presents the relative scintillation efficiency in comparison to single-crystal cadmium tungstate, the scintillation intensity as a function of x-ray intensity and x-ray energy, and the spatial resolution (modulation transfer function). Images obtained with the FCZ glass-ceramic scintillator and with cadmium tungstate are also presented. Comparison shows that the image quality obtained using the glass ceramic is close to that from cadmium tungstate. Therefore, the glass-ceramic scintillator could be used as an alternative material for image formation resulting from scintillation. Other inorganic scintillators such as single crystals or polycrystalline films have limitations in resolution or size, but the transparent glass-ceramic can be scaled to any shape or size with excellent resolution
6. High resolution integral holography using Fourier ptychographic approach.
Science.gov (United States)
Li, Zhaohui; Zhang, Jianqi; Wang, Xiaorui; Liu, Delian
2014-12-29
An innovative approach is proposed for calculating high-resolution computer-generated integral holograms by using the Fourier Ptychographic (FP) algorithm. The approach initializes a high-resolution complex hologram with a random guess, and then stitches together low-resolution multi-view images, synthesized from the elemental images captured by integral imaging (II), to recover the high-resolution hologram through an iterative retrieval with FP constraints. This paper begins with an analysis of the principle of hologram synthesis from multi-projections, followed by an accurate determination of the constraints required in Fourier ptychographic integral-holography (FPIH). Next, the procedure of the approach is described in detail. Finally, optical reconstructions are performed and the results are demonstrated. Theoretical analysis and experiments show that our proposed approach can reconstruct 3D scenes with high resolution.
7. A new plastic scintillator with large Stokes shift
International Nuclear Information System (INIS)
Destruel, P.; Taufer, M.
1989-01-01
We have developed a new plastic scintillator with the novel characteristic of highly localized light emission; scintillation and wavelength shifting take place within a few tens of micrometers of the primary ionization. The new scintillator consists of a scintillating polymer base [polyvinyl toluene (PVT) or polystyrene (PS)] doped with a single wavelength shifter, 1-phenyl-3-mesityl-2-pyrazoline (PMP), which has an exceptionally large Stokes shift and therefore a comparatively small self-absorption of its emitted light. In other characteristics (e.g. scintillation efficiency and decay time) the performance of the new scintillator is similar to a good quality commercial plastic scintillator such as NE110. (orig.)
8. Digital silicon photomultiplier readout of a new fast and bright scintillation crystal (Ce:GFAG)
Energy Technology Data Exchange (ETDEWEB)
Lee, Yong-Seok [Department of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Leem, Hyun-Tae [Molecular Imaging Research & Education (MiRe) Laboratory, Department of Electronic Engineering, Sogang University, Seoul (Korea, Republic of); Yamamoto, Seiichi [Department of Medical Technology, Nagoya University Graduate School of Medicine, Nagoya (Japan); Choi, Yong, E-mail: ychoi@sogang.ac.kr [Molecular Imaging Research & Education (MiRe) Laboratory, Department of Electronic Engineering, Sogang University, Seoul (Korea, Republic of); Kamada, Kei [New Industry Creation Hatchery Center (NICHe), Tohoku University, Sendai (Japan); C&A Corporation, Sendai (Japan); Yoshikawa, Akira [New Industry Creation Hatchery Center (NICHe), Tohoku University, Sendai (Japan); C&A Corporation, Sendai (Japan); Institute for Material Research, Tohoku University, Sendai (Japan); Park, Sang-Geon [Department of Electrical & Electronics, Silla University, Pusan (Korea, Republic of); Yeom, Jung-Yeol, E-mail: jungyeol@korea.ac.kr [Department of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); School of Biomedical Engineering, Korea University, Seoul (Korea, Republic of)
2016-10-01
A new Gadolinium Fine Aluminum Gallate (Ce:GFAG) scintillation crystal with both high energy resolution and fast timing properties has successfully been grown. Compared to Gd₃Al₂Ga₃O₁₂ (Ce:GAGG), this new inorganic scintillation crystal has a similarly high luminosity and a faster decay time. In this paper, we report on the timing and energy performance of the new GFAG scintillation crystal read out with digital silicon photomultipliers (dSiPM) for positron emission tomography (PET) applications. The best coincidence resolving time (FWHM) of polished 3×3×5 mm³ crystals was 223±6 ps for GFAG, compared to 396±28 ps for GAGG and 131±3 ps for LYSO crystals. An energy resolution (511 keV peak of Na-22) of 10.9±0.2% was attained with GFAG coupled to a dSiPM after correcting for the saturation effect, compared to 9.5±0.3% for Ce:GAGG and 11.9±0.4% for LYSO crystals. It is expected that this new scintillator may be competitive in terms of overall properties such as energy resolution, timing resolution and growth (raw material) cost, compared to existing scintillators for positron emission tomography (PET).
9. A high resolution global scale groundwater model
Science.gov (United States)
de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc
2014-05-01
As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) do not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model so that it can equally be expanded to data poor environments. Based on our sensitivity analysis, in which we run the model with various hydrogeological parameter settings, we observed that most variance in groundwater
10. Growth and Characterization of Nanostructured Glass Ceramic Scintillators for Miniature High-Energy Radiation Sensors
Science.gov (United States)
2013-10-01
Rise time was resolved using the Kerr gating technique with 8 ps resolution. Spectro-temporal dynamics were resolved using a streak camera and a tunable pump at the second/third harmonic (400/267 nm) and XUV. (Figure-caption fragments from the report: CeF3-doped glass under UV light irradiation; Fig. 5, radioluminescence (RL) spectra of all the CeF3-doped glasses.)
11. A new PET detector concept for compact preclinical high-resolution hybrid MR-PET
Science.gov (United States)
Berneking, Arne; Gola, Alberto; Ferri, Alessandro; Finster, Felix; Rucatti, Daniele; Paternoster, Giovanni; Jon Shah, N.; Piemonte, Claudio; Lerche, Christoph
2018-04-01
This work presents a new PET detector concept for compact preclinical hybrid MR-PET. The detector concept is based on Linearly-Graded SiPMs produced with current FBK RGB-HD technology. One 7.75 mm × 7.75 mm sensor chip is coupled with optical grease to a black-coated 8 mm × 8 mm, 3 mm thick monolithic LYSO crystal. The readout is obtained from four readout channels with linear encoding based on integrated resistors and the Center of Gravity approach. To characterize the new detector concept, the spatial and energy resolutions were measured. The measurement setup directed a collimated beam at 25 different points perpendicular to the monolithic scintillator crystal. Starting at the center point of the crystal at 0 mm / 0 mm and sampling a grid with a pitch of 1.75 mm, all significant points of the detector were covered by the collimated beam. The measured intrinsic spatial resolution (FWHM) was 0.74 ± 0.01 mm in the x- and 0.69 ± 0.01 mm in the y-direction at the center of the detector. At the same point, the measured energy resolution (FWHM) was 13.01 ± 0.05%. The mean intrinsic spatial resolution (FWHM) over the whole detector was 0.80 ± 0.28 mm in the x- and 0.72 ± 0.19 mm in the y-direction. The energy resolution (FWHM) of the detector was between 13 and 17.3%, with an average of 15.7 ± 1.0%. Due to the reduced thickness, the sensitivity of this gamma detector is low but still higher than that of pixelated designs of the same thickness, owing to the monolithic crystal. Combining compact design, high spatial resolution and high sensitivity, the detector concept is particularly suitable for applications where the scanner bore size is limited and high resolution is required, as is the case in small animal hybrid MR-PET.
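The Center of Gravity approach mentioned above can be sketched as classic Anger logic: the interaction position is estimated as the charge-weighted mean of the readout channel positions. The four-corner geometry and channel labels below are illustrative assumptions, not the paper's actual channel mapping:

```python
def cog_position(q_a, q_b, q_c, q_d, size_mm=8.0):
    """Centre-of-gravity (Anger logic) position estimate from four
    corner readout channels of a square sensor: A top-left,
    B top-right, C bottom-left, D bottom-right. Returns (x, y) in mm,
    with (0, 0) at the sensor centre."""
    total = q_a + q_b + q_c + q_d
    x = 0.5 * size_mm * ((q_b + q_d) - (q_a + q_c)) / total
    y = 0.5 * size_mm * ((q_a + q_b) - (q_c + q_d)) / total
    return x, y
```

Equal charge on all four channels maps to the centre; all charge on one channel maps to the corresponding corner. Real monolithic-crystal readouts add calibration to correct the compression of this linear estimate near the crystal edges.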
12. The quest for the ideal inorganic scintillator
International Nuclear Information System (INIS)
Derenzo, S.E.; Weber, M.J.; Bourret-Courchesne, E.; Klintenberg, M.K.
2002-01-01
The past half century has witnessed the discovery of many new inorganic scintillator materials and numerous advances in our understanding of the basic physical processes governing the transformation of ionizing radiation into scintillation light. Whereas scintillators are available with a good combination of physical properties, none provides the desired combination of stopping power, light output, and decay time. A review of the numerous scintillation mechanisms of known inorganic scintillators reveals why none of them is both bright and fast. The mechanisms of radiative recombination in wide-bandgap direct semiconductors, however, remain relatively unexploited for scintillators. We describe how suitably doped semiconductor scintillators could provide a combination of high light output, short decay time, and linearity of response that approach fundamental limits
13. Cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS flat panel detector: Visibility of simulated microcalcifications
OpenAIRE
Shen, Youtao; Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng; Shaw, Chris C.
2013-01-01
Purpose: To measure and investigate the improvement of microcalcification (MC) visibility in cone beam breast CT with a high pitch (75 μm), thick (500 μm) scintillator CMOS/CsI flat panel detector (Dexela 2923, Perkin Elmer).
14. High resolution microdiffraction studies using synchrotron radiation
Science.gov (United States)
Spolenak, R.; Tamura, N.; Valek, B. C.; MacDowell, A. A.; Celestre, R. S.; Padmore, H. A.; Brown, W. L.; Marieb, T.; Batterman, B. W.; Patel, J. R.
2002-04-01
The advent of third generation synchrotron light sources in combination with x-ray focusing devices such as Kirkpatrick-Baez mirrors makes Laue diffraction on a submicron length scale possible. Analysis of Laue images enables us to determine the deviatoric part of the 3D strain tensor to an accuracy of 2×10⁻⁴ in strain with a spatial resolution comparable to the grain size in our thin films. In this paper the application of x-ray microdiffraction to the temperature dependence of the mechanical behavior of a sputtered blanket Cu film and of electroplated damascene Cu lines will be presented. Microdiffraction reveals very large variations in the strain of a film or line from grain to grain. When the strain is averaged over a macroscopic region the results are in good agreement with direct macroscopic stress measurements. However, the strain variations are so large that in some cases in which the average stress is tensile there are some grains actually under compression. The full implications of these observations are still being considered, but it is clear that the mechanical properties of thin film materials are now accessible with new visibility.
15. Cheetah: A high frame rate, high resolution SWIR image camera
Science.gov (United States)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and camera are capable of recording and delivering more than 1700 full 640×512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR ranges. The Cheetah camera has a maximum of 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
16. Optimization of the light extraction from heavy inorganic scintillators
CERN Document Server
Kronberger, Matthias; Lecoq, P
2008-01-01
Inorganic scintillators are widely used in modern medical imaging modalities as converter for the X- and gamma-radiation that is used to obtain information about the interior of the body. Likewise, they are applied in high-energy physics to measure the energy of particles that are produced in particle physics experiments. Their use is motivated by the very good detection efficiency of these materials for hard radiation which allows the construction of relatively compact and finely pixelised systems with a high spatial resolution. One key problem in the development of the next generation of particle detectors and medical imaging systems is the optimisation of the energy resolution of the detectors. This parameter is influenced by the statistical fluctuations of the light output of the scintillators, i.e. by the number of photons that are detected when a particle deposits its energy in the scintillator. The light output of the scintillator depends not only on the absolute number of generated photons but also on...
17. Achieving High Resolution Timer Events in Virtualized Environment.
Science.gov (United States)
2015-01-01
Virtual Machine Monitors (VMM) have become popular in different application areas. Some applications may need to generate timer events with high resolution and precision. This, however, may be challenging due to the complexity of VMMs. In this paper we focus on the timer functionality provided by five different VMMs: Xen, KVM, Qemu, VirtualBox and VMWare. Firstly, we evaluate the resolutions and precisions of their timer events. Apparently, the provided resolutions and precisions are far too low for some applications (e.g. networking applications with quality of service). Then, using Xen virtualization we demonstrate an improved timer design that greatly enhances both the resolution and precision of the achieved timer events.
18. High resolution phoswich gamma-ray imager utilizing monolithic MPPC arrays with submillimeter pixelized crystals
Science.gov (United States)
Kato, T.; Kataoka, J.; Nakamori, T.; Kishimoto, A.; Yamamoto, S.; Sato, K.; Ishikawa, Y.; Yamamura, K.; Kawabata, N.; Ikeda, H.; Kamada, K.
2013-05-01
We report the development of a high spatial resolution tweezers-type coincidence gamma-ray camera for medical imaging. This application consists of large-area monolithic Multi-Pixel Photon Counters (MPPCs) and submillimeter pixelized scintillator matrices. The MPPC array has 4 × 4 channels with a three-side buttable, very compact package. For typical operational gain of 7.5 × 10⁵ at + 20 °C, gain fluctuation over the entire MPPC device is only ± 5.6%, and dark count rates (as measured at the 1 p.e. level) amount to acrylic light guide measuring 1 mm thick, and with summing operational amplifiers that compile the signals into four position-encoded analog outputs being used for signal readout. Spatial resolution of 1.1 mm was achieved with the coincidence imaging system using a ²²Na point source. These results suggest that the gamma-ray imagers offer excellent potential for applications in high spatial medical imaging.
19. Development of a Si-PM-based high-resolution PET system for small animals
International Nuclear Information System (INIS)
Yamamoto, Seiichi; Imaizumi, Masao; Watabe, Tadashi; Shimosegawa, Eku; Hatazawa, Jun; Watabe, Hiroshi; Kanai, Yasukazu
2010-01-01
A Geiger-mode avalanche photodiode (Si-PM) is a promising photodetector for PET, especially for use in a magnetic resonance imaging (MRI) system, because it has high gain and is less sensitive to a static magnetic field. We developed a Si-PM-based depth-of-interaction (DOI) PET system for small animals. Hamamatsu 4 x 4 Si-PM arrays (S11065-025P) were used for its detector blocks. Two types of LGSO scintillator of 0.75 mol% Ce (decay time: ∼45 ns; 1.1 mm x 1.2 mm x 5 mm) and 0.025 mol% Ce (decay time: ∼31 ns; 1.1 mm x 1.2 mm x 6 mm) were optically coupled in the DOI direction to form a DOI detector, arranged in a 11 x 9 matrix, and optically coupled to the Si-PM array. Pulse shape analysis was used for the DOI detection of these two types of LGSOs. Sixteen detector blocks were arranged in a 68 mm diameter ring to form the PET system. Spatial resolution was 1.6 mm FWHM and sensitivity was 0.6% at the center of the field of view. High-resolution mouse and rat images were successfully obtained using the PET system. We confirmed that the developed Si-PM-based PET system is promising for molecular imaging research.
20. Reproducible high-resolution multispectral image acquisition in dermatology
Science.gov (United States)
Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir
2015-07-01
Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.
1. Collimated trans-axial tomographic scintillation camera
International Nuclear Information System (INIS)
1980-01-01
The principal problem in trans-axial tomographic radioisotope scanning is the length of time required to obtain meaningful data. Patient movement and radioisotope migration during the scanning period can cause distortion of the image. The object of this invention is to reduce the scanning time without degrading the images obtained. A system is described in which a scintillation camera detector is moved to an orbit about the cranial-caudal axis relative to the patient. A collimator is used in which lead septa are arranged so as to admit gamma rays travelling perpendicular to this axis with high spatial resolution and those travelling in the direction of the axis with low spatial resolution, thus increasing the rate of acceptance of radioactive events to contribute to the positional information obtainable without sacrificing spatial resolution. (author)
2. High-resolution MRI in detecting subareolar breast abscess.
Science.gov (United States)
Fu, Peifen; Kurihara, Yasuyuki; Kanemaki, Yoshihide; Okamoto, Kyoko; Nakajima, Yasuo; Fukuda, Mamoru; Maeda, Ichiro
2007-06-01
Because subareolar breast abscess has a high recurrence rate, a more effective imaging technique is needed to comprehensively visualize the lesions and guide surgery. We performed a high-resolution MRI technique using a microscopy coil to reveal the characteristics and extent of subareolar breast abscess. High-resolution MRI has potential diagnostic value in subareolar breast abscess. This technique can be used to guide surgery with the aim of reducing the recurrence rate.
3. Detection of high energy gamma radiations with liquid rare gases as scintillators; Detection des rayonnements Gamma de grande energie avec les gaz rares liquides comme scintillateurs
Energy Technology Data Exchange (ETDEWEB)
Ho, Phan Xuan
1965-11-25
This research thesis reports the study of a sensor based on a liquid scintillator for the detection of high energy (10 to 30 MeV) gamma radiations. The scintillator is a liquefied argon or xenon rare gas. The author first studies the process of energy transfer from the particle to the sensing medium. He addresses the different involved elements and phenomena: electromagnetic radiations (Compton effect, photoelectric effect, pair production, and total gamma absorption), charged particles (braking radiation, collisions) and application to gamma spectrometry. He describes and discusses the scintillation mechanisms (scintillation of organic and inorganic materials), the general characteristics of scintillators (impurities, converters), and then reports the practical realisation of the sensor. Results are presented and discussed. [Translated from the French abstract] In this work, we propose to study a technique: a liquid-scintillator detector for the detection of energetic gamma rays (10 to 30 MeV). The scintillator used is a liquefied rare gas, argon or xenon. We first examine the processes of energy transfer from the particle to the detecting medium, then the scintillation mechanisms in general, in order to exploit the favourable phenomena as fully as possible. We then present the practical realisation of the detector; its qualities (and shortcomings) are discussed at the end of this report. Although large sodium iodide crystals can nowadays be grown by the Kyropoulos method, the use of liquefied 'rare gases' as scintillators is, thanks to the brevity of the scintillation, very useful when a high counting rate is sought (up to 10 pulses per second) or when certain coincidence problems must be resolved. Large NaI(Tl) crystals are easy to mount, but handling them requires many precautions since they withstand thermal shock very poorly.
4. Cherenkov and scintillation light separation in organic liquid scintillators
Energy Technology Data Exchange (ETDEWEB)
Caravaca, J.; Descamps, F.B.; Land, B.J.; Orebi Gann, G.D. [University of California, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Yeh, M. [Brookhaven National Laboratory, Upton, NY (United States)
2017-12-15
The CHErenkov/Scintillation Separation experiment (CHESS) has been used to demonstrate the separation of Cherenkov and scintillation light in both linear alkylbenzene (LAB) and LAB with 2 g/L of PPO as a fluor (LAB/PPO). This is the first successful demonstration of Cherenkov light detection from the more challenging LAB/PPO cocktail and improves on previous results for LAB. A time resolution of 338 ± 12 ps FWHM results in an efficiency for identifying Cherenkov photons in LAB/PPO of 70 ± 3% and 63 ± 8% for time- and charge-based separation, respectively, with scintillation contamination of 36 ± 5% and 38 ± 4%. LAB/PPO data is consistent with a rise time of τ_r = 0.72 ± 0.33 ns. (orig.)
5. Cherenkov and scintillation light separation in organic liquid scintillators
International Nuclear Information System (INIS)
Caravaca, J.; Descamps, F.B.; Land, B.J.; Orebi Gann, G.D.; Yeh, M.
2017-01-01
The CHErenkov/Scintillation Separation experiment (CHESS) has been used to demonstrate the separation of Cherenkov and scintillation light in both linear alkylbenzene (LAB) and LAB with 2 g/L of PPO as a fluor (LAB/PPO). This is the first successful demonstration of Cherenkov light detection from the more challenging LAB/PPO cocktail and improves on previous results for LAB. A time resolution of 338 ± 12 ps FWHM results in an efficiency for identifying Cherenkov photons in LAB/PPO of 70 ± 3% and 63 ± 8% for time- and charge-based separation, respectively, with scintillation contamination of 36 ± 5% and 38 ± 4%. LAB/PPO data is consistent with a rise time of τ_r = 0.72 ± 0.33 ns. (orig.)
6. Depth of interaction resolution measurements for a high resolution PET detector using position sensitive avalanche photodiodes
International Nuclear Information System (INIS)
Yang Yongfeng; Dokhale, Purushottam A; Silverman, Robert W; Shah, Kanai S; McClish, Mickel A; Farrell, Richard; Entine, Gerald; Cherry, Simon R
2006-01-01
We explore dual-ended read out of LSO arrays with two position sensitive avalanche photodiodes (PSAPDs) as a high resolution, high efficiency depth-encoding detector for PET applications. Flood histograms, energy resolution and depth of interaction (DOI) resolution were measured for unpolished LSO arrays with individual crystal sizes of 1.0, 1.3 and 1.5 mm, and for a polished LSO array with 1.3 mm pixels. The thickness of the crystal arrays was 20 mm. Good flood histograms were obtained for all four arrays, and crystals in all four arrays can be clearly resolved. Although the amplitude of each PSAPD signal decreases as the interaction depth moves further from the PSAPD, the sum of the two PSAPD signals is essentially constant with irradiation depth for all four arrays. The energy resolutions were similar for all four arrays, ranging from 14.7% to 15.4%. A DOI resolution of 3-4 mm (including the width of the irradiation band which is ∼2 mm) was obtained for all the unpolished arrays. The best DOI resolution was achieved with the unpolished 1 mm array (average 3.5 mm). The DOI resolution for the 1.3 mm and 1.5 mm unpolished arrays was 3.7 and 4.0 mm respectively. For the polished array, the DOI resolution was only 16.5 mm. Summing the DOI profiles across all crystals for the 1 mm array only degraded the DOI resolution from 3.5 mm to 3.9 mm, indicating that it may not be necessary to calibrate the DOI response separately for each crystal within an array. The DOI response of individual crystals in the array confirms this finding. These results provide a detailed characterization of the DOI response of these PSAPD-based PET detectors which will be important in the design and calibration of a PET scanner making use of this detector approach
7. Detection of cosmic ray tracks using scintillating fibers and position sensitive multi-anode photomultipliers
International Nuclear Information System (INIS)
Atac, M.; Streets, J.; Wilcer, N.
1998-02-01
This experiment demonstrates detection of cosmic ray tracks using scintillating fiber planes and multi-anode photomultipliers (MA-PMTs). In a laboratory like this, cosmic rays provide a natural source of high-energy charged particles which can be detected with high efficiency and with nanosecond time resolution
8. Improved Growth Methods for LaBr3 Scintillation Radiation Detectors
International Nuclear Information System (INIS)
McGregor, Douglas S.
2011-01-01
The objective is to develop advanced materials for deployment as high-resolution gamma ray detectors. Both LaBr3 and CeBr3 are advanced scintillation materials, and will be studied in this research. Prototype devices, in collaboration with Sandia National Laboratories, will be demonstrated along with recommendations for mass production and deployment. It is anticipated that improved methods of crystal growth will yield larger single crystals of LaBr3 for deployable room-temperature operated gamma radiation spectrometers. The growth methods will be characterized. The LaBr3 and CeBr3 scintillation crystals will be characterized for light yield, spectral resolution, and for hardness.
9. X-ray detection capabilities of plastic scintillators incorporated with hafnium oxide nanoparticles surface-modified with phenyl propionic acid
Science.gov (United States)
Hiyama, Fumiyuki; Noguchi, Takio; Koshimizu, Masanori; Kishimoto, Shunji; Haruki, Rie; Nishikido, Fumihiko; Yanagida, Takayuki; Fujimoto, Yutaka; Aida, Tsutomu; Takami, Seiichi; Adschiri, Tadafumi; Asai, Keisuke
2018-01-01
We synthesized plastic scintillators incorporated with HfO2 nanoparticles as detectors for X-ray synchrotron radiation. Nanoparticles with sizes of less than 10 nm were synthesized with the subcritical hydrothermal method. The detection efficiency of high-energy X-ray photons improved by up to 3.3 times because of the addition of the nanoparticles. Nanosecond time resolution was successfully achieved for all the scintillators. These results indicate that this method is applicable for the preparation of plastic scintillators to detect X-ray synchrotron radiation.
10. A new lutetia-based ceramic scintillator for X-ray imaging
CERN Document Server
Lempicki, A; Szupryczynski, P; Lingertat, H; Nagarkar, V V; Tipnis, S V; Miller, S R
2002-01-01
We report a new scintillator based on a transparent ceramic of Lu₂O₃:Eu. The material has an extremely high density of 9.4 g/cm³, a light output comparable to CsI:Tl, and a narrow band emission at 610 nm that falls close to the maximum of the response curve of CCDs. Pixelation of the scintillator to prevent lateral spread of light enhances the spatial and contrast resolution, providing imaging performance that equals or surpasses all other currently known scintillators. Upon further development of readout technologies to take full advantage of its transparency, the new scintillator should play a major role in digital radiographic systems.
11. Dynamics of High-Resolution Networks
DEFF Research Database (Denmark)
Sekara, Vedran
...are we all affected by an ever changing network structure? Answering these questions will enrich our understanding of ourselves, our organizations, and our societies. Yet, mapping the dynamics of social networks has traditionally been an arduous undertaking. Today, however, it is possible to use the unprecedented amounts of information collected by mobile phones to gain detailed insight into the dynamics of social systems. This dissertation presents an unparalleled data collection campaign, collecting highly detailed traces for approximately 1000 people over the course of multiple years. The availability of such dynamic maps allows us to probe the underlying social network and understand how individuals interact and form lasting friendships. More importantly, these highly detailed dynamic maps provide us new perspectives at traditional problems and allow us to quantify and predict human life.
12. A Forward-Looking High-Resolution GPR System
National Research Council Canada - National Science Library
Kositsky, Joel; Milanfar, Peyman
1999-01-01
A high-resolution ground penetrating radar (GPR) system was designed to help define the optimal radar parameters needed for the efficient standoff detection of buried and surface-laid antitank mines...
13. High-resolution seismic wave propagation using local time stepping
KAUST Repository
Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul
2017-01-01
High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step
14. Impact of high resolution land surface initialization in Indian summer ...
The direct impact of high resolution land surface initialization on the forecast bias in a regional climate model in recent years ... surface initialization using a regional climate model.
15. High spectral resolution X-ray observations of AGN
NARCIS (Netherlands)
Kaastra, J.S.
2008-01-01
A brief overview of some highlights of high spectral resolution X-ray observations of AGN is given, mainly obtained with the RGS of XMM-Newton. Future prospects for such observations with XMM-Newton are also given.
16. NOAA High-Resolution Sea Surface Temperature (SST) Analysis Products
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This archive covers two high resolution sea surface temperature (SST) analysis products developed using an optimum interpolation (OI) technique. The analyses have a...
17. High-resolution SPECT for small-animal imaging
International Nuclear Information System (INIS)
Qi Yujin
2006-01-01
This article presents a brief overview of the development of high-resolution SPECT for small-animal imaging. A pinhole collimator has been used for high-resolution animal SPECT to provide better spatial resolution and detection efficiency in comparison with a parallel-hole collimator. The theory of imaging characteristics of the pinhole collimator is presented and the designs of the pinhole aperture are discussed. The detector technologies used for the development of small-animal SPECT and the recent advances are presented. The evolving trend of small-animal SPECT is toward a multi-pinhole and a multi-detector system to obtain a high resolution and also a high detection efficiency. (authors)
18. Towards high-resolution positron emission tomography for small volumes
International Nuclear Information System (INIS)
McKee, B.T.A.
1982-01-01
Some arguments are made regarding the medical usefulness of high spatial resolution in positron imaging, even if limited to small imaged volumes. Then the intrinsic limitations to spatial resolution in positron imaging are discussed. The project to build a small-volume, high resolution animal research prototype (SHARP) positron imaging system is described. The components of the system, particularly the detectors, are presented and brief mention is made of data acquisition and image reconstruction methods. Finally, some preliminary imaging results are presented; a pair of isolated point sources and ¹⁸F in the bones of a rabbit. Although the detector system is not fully completed, these first results indicate that the goals of high sensitivity and high resolution (4 mm) have been realized. (Auth.)
19. A temperature-compensated high spatial resolution distributed strain sensor
International Nuclear Information System (INIS)
Belal, Mohammad; Cho, Yuh Tat; Ibsen, Morten; Newson, Trevor P
2010-01-01
We propose and demonstrate a scheme which utilizes the temperature dependence of spontaneous Raman scattering to provide temperature compensation for a high spatial resolution Brillouin frequency-based strain sensor | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7351993322372437, "perplexity": 6405.468877180577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590127.2/warc/CC-MAIN-20180718095959-20180718115959-00044.warc.gz"} |
https://www.albert.io/ie/ap-statistics/statistical-significance-at-various-dollaralphadollar-levels
Easy
# Statistical Significance at Various $\alpha$ Levels
APSTAT-OMKXLL
In a hypothesis test for a population mean, results were found to be statistically significant at the $\alpha = 0.02$ level.
Which of the following statements would also be true about the results of this test?
A. The results would be statistically significant at the $\alpha = 0.01$ level.
B. The p-value obtained for this test was $p = 0.02$.
C. The results would be statistically significant at the $\alpha = 0.05$ level.
D. Since the results were statistically significant at the $\alpha = 0.02$ level, we have sufficient evidence to prove that our null hypothesis was false.
E. A mistake was made in considering the results to be statistically significant. We can only consider our results to be statistically significant if $\alpha > 0.05$.
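The reasoning this question tests can be checked mechanically: significance at the $\alpha = 0.02$ level means the p-value satisfied $p \le 0.02$, and any such p also satisfies $p \le 0.05$, while nothing follows about the $\alpha = 0.01$ level. A minimal Python sketch (the concrete p-value used is hypothetical, not given data):

```python
# Significance at alpha = 0.02 means p <= 0.02; that same p then
# automatically satisfies p <= 0.05, while p <= 0.01 is not guaranteed.
# The concrete p-value below is a hypothetical example, not given data.

def is_significant(p_value, alpha):
    """A result is statistically significant when p <= alpha."""
    return p_value <= alpha

p = 0.015  # hypothetical p-value consistent with significance at alpha = 0.02

print(is_significant(p, 0.02))  # True: the premise of the question
print(is_significant(p, 0.05))  # True: follows for any p <= 0.02
print(is_significant(p, 0.01))  # False here, but True for e.g. p = 0.005,
                                # so the alpha = 0.01 case is undetermined
```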
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Sciences_Digital_Library/Active_Learning/Contextual_Modules/End_Creek%3A_Spotted_Frogs_and_Aquatic_Snails_in_Wetlands_%E2%80%93_A_Water_Quality_Investigation/01_Identifying_the_Problem | # Identifying the Problem
Columbia Spotted frogs (Rana luteiventris) are found from Alaska and most of British Columbia to Washington east of the Cascades, Idaho, and portions of Wyoming, Nevada, and Utah. The Great Basin population range includes eastern Oregon, southwestern Idaho, and the northern drainages of Nevada. They live in spring seeps, meadows, marshes, ponds and streams, and other areas where there is abundant vegetation. They often migrate along riparian corridors between habitats used for spring breeding, summer foraging and winter hibernation. Columbia spotted frogs typically have a light-colored stripe along the jaw and are light to dark brown or olive on their backs with varying numbers of irregular black spots. The coloration of their underside ranges from white to yellow, and mottling is present to varying degrees. The species is currently a ‘candidate species’ for listing under the Endangered Species Act. The largest known threat to spotted frogs is habitat alteration and loss, specifically loss of wetlands used for feeding, breeding, hibernating, and migrating. Reduction or loss of habitat can be attributed at least in part to recent drought conditions, wetland degradation, and poor water quality. Inorganic nitrogen, particularly in the form of nitrate (NO3-), poses a threat because of its known toxicity. Its main toxic action on aquatic animals is due to the conversion of oxygen-carrying pigments (e.g., hemoglobin, hemocyanin) to forms that are incapable of carrying oxygen (e.g., methemoglobin) (1). Other threats include predation by non-native species such as bull frogs and diseases such as Chytrid fungus (2).
## Columbia Spotted Frog Breeding Survey:
Adult Columbia spotted frogs are occasionally found at End Creek, and egg masses were discovered in one pond for the first time in April 2008. In 2009, 2010, and 2011, all of the ponds at End Creek were surveyed for breeding activity of Columbia spotted frogs. In 2009, 56 egg clusters were identified in the East Pond, with no breeding activity in any of the other ponds. In 2010, a total of 18 egg clusters were found: 13 clusters in the East Pond, one in the South Pond, and four in the North Pond. In 2011, 14 egg clusters were found in the East Pond and 2 in the South Pond. In 2012, only 3 egg clusters were located, suggesting that the spotted frog population is quickly declining (Fig. 1).
Figure 1. End Creek wetland area.
## Aquatic Snails at End Creek:
Three families of pulmonate snails occur in the ponds at End Creek, Physidae, Lymnaeidae, and Planorbidae (Figs. 2-4). Pulmonate snails contain a lung-like organ that allows them to store oxygenated air and remain under water for long periods of time. They also use this air reserve to adjust their buoyancy in the water. Because of their capacity to store air, they are better adapted to survive in water with low oxygen content than other types of snails.
Fig. 2. Physidae
Fig. 3. Lymnaeidae
Fig. 4. Planorbidae
These snails are annual species. Adults lay eggs in late winter; eggs hatch in early spring, and the snails grow throughout the summer months. They reach adult size by fall, lay eggs in late winter, and then die. Because of this seasonal life cycle, the time of sampling appears to influence how many snails are found in the samples. In spring, snails presumably are still so small that they are easily missed in the sampling process. By fall, they are larger and are more easily located.
Snails require calcium in order to build their shells; consequently, their growth rates may be influenced by calcium availability (3). Some snails may be more tolerant of low-calcium environments than others. Other aspects of water quality that may influence snail distribution include pH and water buffering capacity. Acidic water can cause loss of calcium from snail shells. Some species may be more tolerant of low pH than others as long as sufficient environmental calcium is available. Sources of acidification may include atmospheric deposition of sulfur and nitrogen oxides from automobiles and industrial emissions. Atmospheric CO2 also can diffuse into water, creating carbonic acid and lowering pH. At pH levels below 5.7, many organisms cannot survive (3). Nitrate toxicity needs to be considered as well. Much like noted for spotted frogs, nitrate can be toxic to aquatic invertebrates and its toxicity increases with concentration and exposure time (1).
Samples of aquatic invertebrates, including snails were collected from two ponds in May 2008 and October 2008 (Tables 1 and 2). In May, a large number of Planorbid snails were found in the North Pond, but in October, only Physid snails were identified in that pond. In May, few snails were found in the South Pond, and Lymnaeids were the most abundant. In October, all three snail families were present in higher numbers. In spring, on average, snails represented about 11% of the total number of invertebrates collected from both ponds. In fall, in both ponds, snails represented about 16% of the total aquatic invertebrates collected in our samples.
Table 1. Snails collected in two ponds in May 2008.

| Family | North Pond: Number | North Pond: % of total sample | South Pond: Number | South Pond: % of total sample |
|---|---|---|---|---|
| Physidae | 2 | 0.39 | 1 | 1.72 |
| Lymnaeidae | 1 | 0.20 | 4 | 6.90 |
| Planorbidae | 67 | 13.11 | 0 | 0.0 |
| Total | 70 | 13.7% | 5 | 8.62% |
Table 2. Snails collected in two ponds in October 2008.

| Family | North Pond: Number | North Pond: % of total sample | South Pond: Number | South Pond: % of total sample |
|---|---|---|---|---|
| Physidae | 46 | 16.4 | 16 | 4.73 |
| Lymnaeidae | 0 | 0.0 | 25 | 7.4 |
| Planorbidae | 0 | 0.0 | 15 | 4.44 |
| Total | 46 | 16.4% | 56 | 16.57% |
In summary, these data suggest that Lymnaeid snails are most abundant in the South Pond and Physid snails are most abundant in the North Pond. It is difficult to understand the dynamics of the Planorbid snails. They were abundant in the North Pond in spring, but did not appear in any samples from that pond in fall. In the South Pond, they did not appear in any samples in spring, but were well-represented in fall sampling. Obviously, further studies will be necessary to better assess snail population distribution in different ponds and over time.
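Since each table reports both the snail counts and their share of the whole invertebrate sample, the approximate total number of invertebrates collected at each site can be back-computed from the table values; a short Python sketch:

```python
# Back-compute total invertebrate sample sizes from Tables 1 and 2:
# total = snail count / (snail percent of total sample / 100).

samples = {
    "North Pond, May 2008": (70, 13.7),   # (snail count, % of total sample)
    "South Pond, May 2008": (5, 8.62),
    "North Pond, Oct 2008": (46, 16.4),
    "South Pond, Oct 2008": (56, 16.57),
}

for site, (count, pct) in samples.items():
    total = count / (pct / 100.0)
    print(f"{site}: ~{total:.0f} invertebrates in the sample")
```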
Q1. What are some possible water quality parameters that could affect invertebrate and amphibian populations in a fresh water environment? You may want to research if any information is available on recommended levels of specific ions that may positively or negatively impact these populations.
We could then ask the following questions:
1. Is water quality at the End Creek ponds potentially responsible for the decrease observed in spotted frog population?
2. Are there differences in water chemistry that influence snail family distribution in the ponds at End Creek?
Following are some water quality parameters that you may want to consider as you undertake this investigation.
Nutrients. Nutrients such as nitrogen and phosphorus are essential to plants and aquatic organisms. However, in larger quantities, they can vastly reduce water quality by triggering accelerated plant growth and algae blooms. These, in turn, may lead to large amounts of decaying plant material, causing low dissolved oxygen and, ultimately, death of fish and other aquatic species. Nitrogen species can be particularly toxic to freshwater invertebrates and amphibians (1). Due to the agricultural practices around End Creek, one can speculate that nitrogen and phosphorus sources may come from residual fertilizers. Possible forms of nitrogen include ammonia (NH3), nitrates (NO3-) and nitrites (NO2-). Phosphorus is typically found as phosphate (PO43-) and can be in the form of organic phosphorus (associated with carbon-based molecules) or inorganic phosphorus (4).
Water hardness. Calcium and magnesium compounds, along with other metals, contribute to water hardness. Typical guidelines classify water with 0-60 mg/L calcium carbonate as soft; 61-120 mg/L as moderately hard; 121-180 mg/L as hard; and more than 180 mg/L as very hard (5). Snails require calcium to grow their shells, and differences in calcium concentration may help explain why snails are found preferentially in certain ponds (3).
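The hardness bands above translate directly into a simple lookup. A minimal sketch (the function name and the reading of the band limits as inclusive upper bounds are my own):

```python
def classify_hardness(caco3_mg_per_l):
    """Classify water by its calcium carbonate concentration (mg/L),
    using the guideline bands quoted above (5)."""
    if caco3_mg_per_l <= 60:
        return "soft"
    elif caco3_mg_per_l <= 120:
        return "moderately hard"
    elif caco3_mg_per_l <= 180:
        return "hard"
    else:
        return "very hard"

print(classify_hardness(45))   # -> soft
print(classify_hardness(150))  # -> hard
```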
Dissolved oxygen. The amount of oxygen dissolved in water is measured as Dissolved Oxygen (DO). It is typically reported in mg/L or % saturation. The level of dissolved oxygen varies with temperature and altitude and fluctuates seasonally and over a 24-hour period. In ponds dissolved oxygen can fluctuate greatly due to photosynthetic oxygen production by algae during the day and the continuous consumption of oxygen due to respiration. As a result of these processes, dissolved oxygen typically reaches a maximum during the late afternoon and a minimum around sunrise (6). The solubility of oxygen also increases in colder water and decreases at higher altitudes. Decreased availability of dissolved oxygen negatively impacts aquatic organisms and in extreme conditions can cause death.
pH. pH affects many chemical and biological processes in the water. For example, different organisms flourish within different ranges of pH. The largest variety of aquatic animals prefers a range of 6.5-8.0. pH conditions outside this range reduce the diversity in the stream because it stresses the physiological systems of most organisms and can reduce reproduction. Low pH can also allow toxic elements and compounds to become mobile and “available” for uptake by aquatic plants and animals. This can produce conditions that are toxic to aquatic life, particularly to invertebrates such as snails. Changes in acidity can be caused by atmospheric deposition (acid rain), the surrounding rock, and certain wastewater discharges.
Total solids. Total solids include dissolved solids (typically cations and anions of salts that will pass through a filter with 2 micron pores) and suspended solids (silt and clay particles, algae, small debris and other particulate matter with size larger than 2 micron). Because the particles absorb heat, greater amounts of suspended solids contribute to increased water temperature thus decreasing availability of dissolved oxygen. Suspended particles can be carriers of toxins, particularly pesticides and herbicides, in water bodies in proximity to agricultural land. They can clog fish gills and affect egg development. Dissolved solids affect the water balance in cells. If the amount of dissolved solids is too low, aquatic organisms will tend to swell as water moves inside the cells. Levels of dissolved solids that are too high will cause organisms to shrink due to water moving out of cells (4).
## Identifying possible analysis methods
Water quality tests can be performed on site using inexpensive field kits that provide fast response with minimal sample preparation. Alternatively, water can be collected and brought back to a laboratory where analyses can be conducted using more sophisticated instrumentation.
The purpose of this module is to lead you to identify appropriate methods of analysis best suited to test possible differences in the water quality of the ponds located at End Creek. As you research these methods, assess them in terms of their sensitivity, ease of implementation, cost, speed, etc.
## Experimental Design for this Project
Once you decide on the most appropriate methods of analysis, the next step will be to design an experiment that will provide meaningful data so that sound conclusions can ultimately be drawn. It may be helpful as you embark on this investigation to refer to guidelines and protocols such as those provided by the Environmental Protection Agency (EPA) (4) or the American Public Health Association (APHA) (7). You may want to specifically refer to the EPA Volunteer Stream Monitoring: A Methods Manual (http://water.epa.gov/type/rsl/monitoring/stream_index.cfm). Often one may have to adapt a method based on available equipment or other limitations. Therefore, it is important to understand all aspects of a given method and be able to show that your method is valid.
## Analysis of Samples from End Creek ponds
In our case study we will develop a sampling plan for water samples to be collected at three ponds where egg clusters and snails have been identified. Some tests could be conducted on site and compared to more sophisticated analyses conducted in the laboratory. For this purpose water will be collected, brought back to the lab and stored for further analyses. Due to time and cost constraints, there are limitations on the number of samples that can be processed (limited to 30).
The subsequent parts of this module will examine the following two questions:
How do you decide where and when to collect samples to ensure they are representative?
How will you process the water for storage to ensure that analytes are not degraded or lost during storage and how long can you store your water sample?
A series of assignments are presented to help you understand how to design a water quality monitoring experiment and address questions such as those posed above. Through this process, you will also be introduced to important analytical chemistry concepts that underlie the analytical methods discussed.
## References
1. Camargo, J.A., Alonso, A., Salamanca, A. Nitrate toxicity to aquatic animals: a review with new data for freshwater invertebrates. Chemosphere 58 (2005) 1255–1267.
2. Columbia spotted frogs -http://ecos.fws.gov/speciesProfile/profile/speciesProfile.action?spcode=D027 accessed on 07-25-2013.
3. Ewald, M.L., Feminella, J.W., Lenertz, K.K., Henry, R.P. Acute physiological responses of the freshwater snail Elimia flava (Mollusca: Pleuroceridae) to environmental pH and calcium. Comparative Biochemistry and Physiology, Part C 150 (2009) 237- 245.
4. Volunteer Stream Monitoring: A Methods Manual – http://water.epa.gov/type/rsl/monitoring/stream_index.cfm accessed on 07-25-2013
5. Water hardness – http://water.usgs.gov/owq/hardness-alkalinity.html accessed on 07-25-2013
6. Caduto, M.J. 1990. Pond and Brook: a guide to nature in freshwater environments. Prentice-Hall, Inc. Englewood Cliffs, NJ.
7. APHA. 1992. Standard methods for the examination of water and wastewater. 18th ed. American Public Health Association, Washington, DC.
This page titled Identifying the Problem is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Contributor. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5528138279914856, "perplexity": 4491.086263693409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00643.warc.gz"} |
https://codereview.stackexchange.com/questions/141187/given-some-text-and-a-word-list-print-the-7-most-common-correctly-spelled-words | # Given some text and a word list, print the 7 most common correctly spelled words
I've got a basic python function that I've been tasked with finishing in a class.
It's already fulfilling all the requirements for an A as it's an introductory course to Python, but I'd like to get some advice on it still as it's taking quite some time to execute.
I've got a few years of experience with general programming, but I'm still quite new to Python. As such, any advice on what might be taking up time would be appreciated.
As it stands at the moment, it takes about .5 seconds to run through the code and print all the data.
A bit of detail to the program:
1. A part of Alice in Wonderland's first chapter (alice-ch1.txt)
2. A list of common words (common-words.txt)
3. And a list of correctly spelt words.
• Finally it prints the top 7 words that have passed through the filters.
    def analyse():
        #import listdir
        from os import listdir
        #import counter
        from collections import Counter

        files = "*************************************\n"
        for file in listdir():
            if file.endswith('.txt'):
                files = files + file + "\n"

        choice = input("These are the files: \n" + files + "*************************************\nWhat file would you like to analyse?\n")
        if choice.strip() == "":
            choice = "alice-ch1.txt"
        elif choice.endswith(".txt"):
            print(choice)
        else:
            choice = choice + ".txt"
            print(choice)

        with open(choice) as readFile, open('common-words.txt') as common, open('words.txt') as correct:
            common_words = list(map(lambda s: s.strip(), common))
            correct_words = list(map(lambda s: s.strip(), correct))

            words = [word for line in readFile for word in line.split()]
            for word in list(words):
                if word in common_words or not word in correct_words:
                    words.remove(word)

            print("There are " + str(len(words)))

            c = Counter(words)
            #for word, count in c.most_common():
            #    print (word, count)

            nNumbers = list(c.most_common(7))
            out = ""
            print("*************************************\nThese are the 7 most common:")
            for word, count in nNumbers:
                out = out + word + "," + str(count) + "\n"
            print(out + "\n*************************************")

        input("\nPress enter to continue...")
### List Comprehension
I'm not at all sure it'll run much faster (though it might--it's a little faster under Python 2.7, anyway), but I think a more Pythonesque approach would be to replace your loop:
    for word in list(words):
        if word in common_words or not word in correct_words:
            words.remove(word)
... with a list comprehension, something like:
words = [word for word in words if not word in common_words and word in correct_words]
### Algorithm
To gain substantial speed, you probably want to rearrange your operations. Right now you're looking at each word in the input separately (and looking at all of them). Then, after you've found all the words that aren't common and are spelled correctly, you choose the 7 most common.
I'd reverse that: start by creating a Counter of all the input words. Then print those filtering words that are common or aren't spelled correctly. When you've printed seven of them, stop:
    words = [ word for line in readFile for word in line.split() ]
    c = Counter(words)

    counter = 0
    for word, count in c.most_common():
        if not word in common_words and word in correct_words:
            print(word + ", " + str(count))
            counter = counter + 1
            if counter == 7:
                break
You could simplify that inner loop a little by by doing a little preprocessing. Instead of testing against both the common and correctly spelled lists, you could start by removing all the common words from the correctly spelled list, to get a single list of the words that are acceptable. Then when you're printing out your results, you'd check only against that one list. Given the sizes of the lists, this would be a win primarily if you did it once and saved the result so you can re-use it. If you re-did the preprocessing every time you ran the program, you'd probably use more time on the preprocessing than you'd save on the output loop.
There's probably more than can be done to make this neater as well, but nothing occurs to me immediately. At least for me in a quick test, this seems to run around ten to fifteen (or so) times as fast as the code in the question. The exact difference in speed will probably depend (heavily) on the size of input file though. In particular, I believe this is changing from $O(N^2)$ complexity to an expected complexity around $O(N)$1.
As an aside, I did consider (and test with) using a set instead of a list for common_words and correct_words, but at least in my testing, with the updated algorithm this didn't seem to make a difference that I could replicate dependably. With the original algorithm, however, changing these from list to set can improve performance considerably.
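The preprocessing idea can be sketched end to end with toy data standing in for the three files (the word lists and text below are made up purely for illustration):

```python
from collections import Counter

# Toy stand-ins for alice-ch1.txt, common-words.txt and words.txt:
text = "the cat saw the rabbit and the rabbit saw the queen kat"
common_words = {"the", "and"}
correct_words = {"cat", "rabbit", "queen", "saw", "the", "and"}

# One-time preprocessing: keep only correctly spelled, non-common words.
acceptable = correct_words - common_words

# A single O(1) set-membership test now replaces the two list scans per word.
counts = Counter(w for w in text.split() if w in acceptable)
print(counts.most_common(3))  # -> [('saw', 2), ('rabbit', 2), ('cat', 1)]
```

Note how the misspelled "kat" and the common "the"/"and" fall out without ever being counted.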
### Logic
As it stands right now, your if/then chain:
    if choice.strip() == "":
        choice = "alice-ch1.txt"
    elif choice.endswith(".txt"):
        print(choice)
    else:
        choice = choice + ".txt"
        print(choice)
... prints out choice twice if it starts out ending with .txt. I suspect you really want something closer to:
    if choice.strip() == "":
        choice = "alice-ch1.txt"
    elif not choice.endswith(".txt"):
        choice = choice + ".txt"
    print(choice)
### Magic number
It would probably be better to use something on the order of:
mostCommonLimit = 7
# ...
if counter == mostCommonLimit
break;
1. If you want to get technical, it probably is still $O(N^2)$. The Counter presumably uses a hash table, which is $O(1)$ expected complexity, but can be $O(N)$ in the worst case (where all keys produce equivalent hashes). This is, however, so rare that in practice it's often ignored. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2644144296646118, "perplexity": 2436.549170909538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649232.14/warc/CC-MAIN-20191014052140-20191014075140-00477.warc.gz"} |
http://musingsandcomplaints.blogspot.com/2011/01/bfs-in-prolog-ai.html | ## Wednesday, January 5, 2011
### BFS in Prolog (AI)
I've mentioned elsewhere that I recently had to write a BFS in LISP. Well, this is kinda a follow up to that. Kinda.
I thought it made for a nice toy problem, and I wanted to learn some real Prolog, so I put the two together. It worked, but it was a tad quirky, mostly in the way it stored the solution paths.
In an odd twist of fate, someone on Stack Overflow wanted to see an AI search like BFS implemented in Prolog. I didn't want to post the solution to what originated as an assignment, so I changed the problem. Like Knuth's conjecture, which is what the original solver solved, it has a given starting integer and a given goal integer. However, the operations are simpler: increment, decrement, and multiply by +2 (no negative 2). Additionally, I'm allowing for any starting integer, instead of positive four as in Knuth's conjecture. The quirkiness is gone as well (which is why my bfs definition is slightly different from the original).
Without further ado, I post the code. I'm sure that there are better ways to do certain things, especially if I had used some of SWI-Prolog's higher order predicates, but I think I learned more with the given definition.
% given a goal integer, it tries to determine the shortest
% series of actions needed to get to this integer given any other
% integer. The actions allowed are increment, decrement, and
% multiply by two
% states are represented as two element lists
% the first is a number, and the second is a path
% gets the successors of the given state
% note that it must be redone via backtracking in order to
% get all of the successors
successors( [N,Path], [NewN, [Function|Path]] ) :-
    ( Function = increment, NewN is N + 1 ;
      Function = decrement, NewN is N - 1 ;
      Function = multiply, NewN is N * 2 ).
% gets all successors as a list
successors_list( State, Result ) :-
    findall( X, successors( State, X ), Result ).
% records results that have already been seen
:- dynamic seen/1.
% given a list of states, it will add each state to the table
% of states that have already been seen
add_seen( [] ).
add_seen( [[N|_]|Rest] ) :-
    assertz( seen( N ) ),
    add_seen( Rest ).

% removes all states that have already been seen
% returns a new list
remove_seen( [], [] ).
remove_seen( [[N|_]|Rest], Result ) :-
    seen( N ), !,
    remove_seen( Rest, Result ).
remove_seen( [State|Rest], [State|Result] ) :-
    !, remove_seen( Rest, Result ).

% performs a BFS, with the given goal and queue
bfs( Goal, [[Goal|[Path]]|_], FinalPath ) :-
    % note that operations are added from the front, and it's
    % more natural to read them left to right
    !, reverse( Path, FinalPath ).
bfs( Goal, [State|Rest], Result ) :-
    successors_list( State, Successors ),
    remove_seen( Successors, NewStates ),
    add_seen( NewStates ),
    append( Rest, NewStates, Queue ),
    bfs( Goal, Queue, Result ).
% runs the BFS for the given start integer and goal integer
% returns the path to reach the goal in "Path"
go( Start, Goal, Path ) :-
    retractall( seen( _ ) ),
    bfs( Goal, [[Start,[Start]]], Path ).
?- go( 4, 7, X ).
X = [4, multiply, decrement].
4 * 2 = 8; 8 - 1 = 7. Cool. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8028732538223267, "perplexity": 4996.482692490743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500836106.97/warc/CC-MAIN-20140820021356-00008-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.circuitbread.com/textbooks/introduction-to-electricity-magnetism-and-circuits/introduction/electric-dipoles | # Electric Dipoles
#### LEARNING OBJECTIVES
By the end of this section, you will be able to:
• Describe a permanent dipole
• Describe an induced dipole
• Define and calculate an electric dipole moment
• Explain the physical meaning of the dipole moment
Earlier we discussed, and calculated, the electric field of a dipole: two equal and opposite charges that are “close” to each other. (In this context, “close” means that the distance $d$ between the two charges is much, much less than the distance of the field point $P$, the location where you are calculating the field.) Let’s now consider what happens to a dipole when it is placed in an external field $\vec{E}$. We assume that the dipole is a permanent dipole; it exists without the field, and does not break apart in the external field.
#### Rotation of a Dipole due to an Electric Field
For now, we deal with only the simplest case: The external field is uniform in space. Suppose we have the situation depicted in Figure 1.7.1, where we denote the distance between the charges as the vector $\vec{d}$, pointing from the negative charge to the positive charge. The forces on the two charges are equal and opposite, so there is no net force on the dipole. However, there is a torque:

$$\vec{\tau} = \left(\frac{\vec{d}}{2}\right) \times \left(q\vec{E}\right) + \left(-\frac{\vec{d}}{2}\right) \times \left(-q\vec{E}\right) = q\vec{d} \times \vec{E}$$

(Figure 1.7.1)

The quantity $q\vec{d}$ (the magnitude of each charge multiplied by the vector distance between them) is a property of the dipole; its value, as you can see, determines the torque that the dipole experiences in the external field. It is useful, therefore, to define this product as the so-called dipole moment of the dipole:

$$\vec{p} \equiv q\vec{d}$$

(1.7.1)

We can therefore write

$$\vec{\tau} = \vec{p} \times \vec{E}$$

(1.7.2)

Recall that a torque changes the angular velocity of an object, the dipole, in this case. In this situation, the effect is to rotate the dipole (that is, align the direction of $\vec{p}$) so that it is parallel to the direction of the external field.
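As a numeric sanity check of the torque relation, consider a single water molecule (permanent dipole moment roughly $6.2\times10^{-30}\ \mathrm{C\cdot m}$) held at right angles to a modest laboratory field; the field strength and orientation are chosen purely for illustration:

```python
# 3-D cross product, written out so the example is dependency-free.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

p = (6.2e-30, 0.0, 0.0)   # water's dipole moment along x, C*m (approximate)
E = (0.0, 1.0e5, 0.0)     # uniform external field along y, N/C

tau = cross(p, E)         # Equation 1.7.2: torque = p x E
print(tau)                # magnitude p*E ~ 6.2e-25 N*m, directed along +z
```

The torque is perpendicular to both vectors, and its sense is exactly the one that rotates $\vec{p}$ toward $\vec{E}$, as described above.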
#### Induced Dipoles
Neutral atoms are, by definition, electrically neutral; they have equal amounts of positive and negative charge. Furthermore, since they are spherically symmetrical, they do not have a “built-in” dipole moment the way most asymmetrical molecules do. They obtain one, however, when placed in an external electric field, because the external field causes oppositely directed forces on the positive nucleus of the atom versus the negative electrons that surround the nucleus. The result is a new charge distribution of the atom, and therefore, an induced dipole moment (Figure 1.7.2).
(Figure 1.7.2)
An important fact here is that, just as for a rotated polar molecule, the result is that the dipole moment ends up aligned parallel to the external electric field. Generally, the magnitude of an induced dipole is much smaller than that of an inherent dipole. For both kinds of dipoles, notice that once the alignment of the dipole (rotated or induced) is complete, the net effect is to decrease the total electric field $\vec{E}$ in the regions outside the dipole charges (Figure 1.7.3). By “outside” we mean further from the charges than they are from each other. This effect is crucial for capacitors, as you will see in Capacitance.
(Figure 1.7.3)
Recall that we found the electric field of a dipole in Equation 1.4.5. If we rewrite it in terms of the dipole moment we get:

$$\vec{E}(z) = \frac{1}{4\pi\varepsilon_0}\frac{\vec{p}}{z^3}$$

The form of this field is shown in Figure 1.7.3. Notice that along the plane perpendicular to the axis of the dipole and midway between the charges, the direction of the electric field is opposite that of the dipole and gets weaker the further from the axis one goes. Similarly, on the axis of the dipole (but outside it), the field points in the same direction as the dipole, again getting weaker the further one gets from the charges.
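The $1/z^3$ falloff is easy to verify numerically; the dipole moment value below is just a placeholder for scale:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, C^2 / (N * m^2)
p = 6.2e-30        # placeholder dipole moment, C*m

def dipole_field(z):
    """Far-field magnitude of a dipole's electric field at distance z (z >> d)."""
    return p / (4 * math.pi * EPS0 * z**3)

# Doubling the distance weakens the field by a factor of 2**3 = 8:
print(dipole_field(2e-9) / dipole_field(1e-9))  # -> 0.125
```

Compare this with the $1/r^2$ falloff of a single point charge: because the dipole's two charges nearly cancel at large distances, its field dies off one power of distance faster.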
http://www.newworldencyclopedia.org/p/index.php?title=Russell's_paradox&oldid=682848 | (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
Part of the foundation of mathematics, Russell's paradox (also known as Russell's antinomy), discovered by Bertrand Russell in 1901, showed that the naive set theory of Frege leads to a contradiction.
Consider the set R of all sets that do not contain themselves as members. In set-theoretic notation:
$R=\{A\mid A\not\in A\}.$
Assume, as in Frege's Grundgesetze der Arithmetik, that sets can be freely defined by any condition. Then R is a well-defined set. The problem arises when it is considered whether R is an element of itself. If R is an element of R, then according to the definition, R is not an element of R; if R is not an element of R, then R has to be an element of R, again by its very definition: Hence a contradiction.
Russell's paradox was a primary motivation for the development of set theories with a more elaborate axiomatic basis than simple extensionality and unlimited set abstraction. The paradox drove Russell to develop type theory and Ernst Zermelo to develop an axiomatic set theory, which evolved into the now-canonical Zermelo–Fraenkel set theory.
### Informal presentation
An informal explanation of Russell's paradox may be given in the following way. A set can be called "normal" if it does not contain itself as a member. For example, take the set of all squares. That set is not itself a square, and therefore is not a member of the set of all squares. So it is "normal." On the other hand, if one takes the complementary set of all non-squares, that set is itself not a square and so should be one of its own members. It is "abnormal."
Now consider the set of all normal sets—give it the name R—and ask the question: Is R a "normal" set? If it is "normal," then it is a member of R, since R contains all "normal" sets. But if that is the case, then R contains itself as a member, and therefore is "abnormal." On the other hand, if R is "abnormal," then it is not a member of R, since R contains only "normal" sets. But if that is the case, then R does not contain itself as a member, and therefore is "normal." Clearly, this is a paradox: If one supposes R is "normal," one can prove it is "abnormal," and if one supposes R is "abnormal," one can prove it is "normal." Hence, R is neither "normal" nor "abnormal," which is a contradiction.
### Formal presentation
More formally, the paradox is expressed as follows. The following derivation of the paradox [1] reveals that the paradox requires nothing more than first-order logic with the unrestricted use of set abstraction.
Definition: The set $\{x : \Phi(x)\}\,\!$, in which $\Phi(x)\,\!$ is any predicate of first-order logic in which $x\,\!$ is a free variable, denotes the set $A\,\!$ satisfying $\forall x\,[x \in A \leftrightarrow \Phi(x)]\,\!$.
Theorem: Defining a set $R\,\!$ by $R=\{x : x \notin x\}\,\!$ is contradictory.
Proof: Replace $\Phi(x)\,\!$ in the definition of collection with $x \notin x\,\!$ and obtain for $R\,\!$ as defined: $\forall x\,[x \in R \leftrightarrow x \notin x]\,\!$. Instantiating $x\,\!$ by $R\,\!$ now yields the contradiction $R \in R \leftrightarrow R \notin R. \ \square\,\!$
### Remark
#### Reciprocation
The force of this argument cannot be evaded by simply deeming $x \notin x\,\!$ an invalid substitution for $\Phi(x)\,\!$. In fact, there are denumerably many formulae $\Phi(x)\,\!$ giving rise to the paradox.[2]
For example, if one takes $\Phi(x) = \neg(\exists z: x\in z\wedge z\in x)$, one gets a similar paradox; there is no set P of all x with this property. For convenience, refer to a set S reciprocated if there is a set T with $S\in T\wedge T\in S$; then P, the set of all non-reciprocated sets, does not exist. If $P\in P$, one would immediately have a contradiction, since P is reciprocated (by itself) and so should not belong to P. But if $P\not\in P$, then P is reciprocated by some set Q, so that we have $P\in Q\wedge Q\in P$, and then Q is also a reciprocated set, and so $Q\not\in P$, another contradiction.
#### Independence from excluded middle
Often, as is done above, the set $R=\{A\mid A\not\in A\}$ is shown to lead to contradiction based upon the law of excluded middle, by showing that absurdity follows from assuming P true and from assuming it false. Thus, it may be tempting to think that the paradox is avoidable by avoiding the law of excluded middle, as with intuitionistic logic. However, the paradox still occurs using the law of non-contradiction:
From the definition of R, we have that RR ↔ ¬(RR). Then RR → ¬(RR) (biconditional elimination). But also RR → RR (the law of identity), so RR → (RR ∧ ¬(RR)). But, the law of non-contradiction tells us ¬(RR ∧ ¬(RR)). Therefore, by modus tollens, we conclude ¬(RR).
But since RR ↔ ¬(RR), one also has that ¬(RR) → RR, and so one also concludes RR by modus ponens. So using only intuitionistically valid methods we can still deduce both RR and its negation.
More simply, it is intuitionistically impossible for a proposition to be equivalent to its negation. Assume P ↔ ¬P. Then P → ¬P. Hence ¬P. Symmetrically, one can derive ¬¬P, using ¬P → P. So one has inferred both ¬P and its negation from our assumption, with no use of excluded middle.
## History
Exactly when Russell discovered the paradox is not known. It seems to have been May or June 1901, probably as a result of his work on Cantor's theorem that the number of entities in a certain domain is smaller than the number of subclasses of those entities. (In modern terminology, the cardinality of a set is strictly less than that of its power set.) He first mentioned the paradox in a 1901 paper in the International Monthly, entitled "Recent work in the philosophy of mathematics." He also mentioned Cantor's proof that there is no greatest cardinal, adding that "the master" had been guilty of a subtle fallacy that he would discuss later. Russell also mentioned the paradox in his Principles of Mathematics (not to be confused with the later Principia Mathematica), calling it "The Contradiction."[3] Again, he said that he was led to it by analyzing Cantor's "no greatest cardinal" proof.
Famously, Russell wrote to Frege about the paradox in June 1902, just as Frege was preparing the second volume of his Grundgesetze der Arithmetik.[4] Frege hurriedly wrote an appendix admitting to the paradox, and proposed a solution that was later proved unsatisfactory. In any event, after publishing the second volume of the Grundgesetze, Frege wrote little on mathematical logic and the philosophy of mathematics.
Zermelo, while working on the axiomatic set theory he published in 1908, also noticed the paradox but thought it beneath notice, and so never published anything about it. Zermelo's system avoids the paradox thanks to replacing arbitrary set comprehension with weaker existence axioms, such as his axiom of separation (Aussonderung).
Russell and Alfred North Whitehead wrote the three volumes of Principia Mathematica (PM) hoping to succeed where Frege had failed. They sought to banish the paradoxes of naive set theory by employing a theory of types they devised for this purpose. While they succeeded in grounding arithmetic in a fashion, it is not at all evident that they did so by logic alone. In any event, Kurt Gödel in 1930-31, proved that the logic of much of PM, now known as first order logic, is complete, but that Peano arithmetic is necessarily incomplete if it is consistent. There and then, the logicist program of Frege-PM died.
## Applied versions
There are some versions of this paradox that are closer to real-life situations and may be easier to understand for non-logicians. For example, the Barber paradox supposes a barber who shaves men if and only if they do not shave themselves. When one thinks about whether the barber should shave himself or not, the paradox begins to emerge.
As another example, consider five lists of encyclopedia entries within the same encyclopedia:
- List of articles about people: Ptolemy VII of Egypt; Hermann Hesse; Don Nix; Don Knotts; Biography of Nikola Tesla; Sherlock Holmes; Emperor Kōnin
- List of articles starting with the letter L: L; L!VE TV; L&H; ...; List of articles starting with the letter K; List of articles starting with the letter L; List of articles starting with the letter M; ...
- List of articles about places: Leivonmäki; Katase River; Enoshima
- List of articles about Japan: Emperor Kōnin; Katase River; Enoshima
- List of all lists that do not contain themselves: List of articles about Japan; List of articles about places; List of articles about people; ...; List of articles starting with the letter K; List of articles starting with the letter M; ...; List of all lists that do not contain themselves?
If the "List of all lists that do not contain themselves" contains itself, then it does not belong to itself and should be removed. However, if it does not list itself, then it should be added to itself.
While appealing, these layman's versions of the paradox share a drawback: an easy refutation of the Barber paradox seems to be that such a barber does not exist. The whole point of Russell's paradox is that the answer "such a set does not exist" means the definition of the notion of set within Frege's system is unsatisfactory. This motivated the investigation of axiomatic set theories that do not suffer from paradoxes of this kind.
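The inconsistency behind the barber version can even be checked mechanically: the barber's defining condition asserts that "the barber shaves himself" is true exactly when it is false, and no truth value satisfies that. A minimal Python sketch (the function name is ours, purely illustrative):

```python
# Barber paradox: the barber shaves a person iff that person does not
# shave himself. Applied to the barber himself, the rule demands that
# "the barber shaves himself" equal its own negation.
def barber_condition_holds(shaves_self: bool) -> bool:
    return shaves_self == (not shaves_self)

# Neither truth value is consistent, so no such barber can exist.
assert not any(barber_condition_holds(b) for b in (True, False))
print("No consistent assignment exists")
```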
## Set-theoretic responses
Russell, together with Alfred North Whitehead, sought to banish the paradox by developing type theory. The culmination of this research is the work, Principia Mathematica. While Principia Mathematica avoided the known paradoxes and allows the derivation of a great deal of mathematics, other challenges to predominant set theory arose.
In 1908, Ernst Zermelo proposed an axiomatization of set theory that avoided Russell's and other related paradoxes. Modifications to this axiomatic theory proposed in the 1920s by Abraham Fraenkel, Thoralf Skolem, and by Zermelo himself resulted in the axiomatic set theory called ZFC. This theory became widely accepted once Zermelo's axiom of choice ceased to be controversial, and ZFC has remained the canonical axiomatic set theory down to the present day. ZFC does not assume that, for every property, there is a set of all things satisfying that property. Rather, it asserts that given any set X, any subset of X definable using first-order logic exists. The object R discussed above cannot be constructed in this fashion, and is therefore not a ZFC set. In some extensions of ZFC, objects like R are called proper classes. ZFC is silent about types, although some contend that Zermelo's axioms tacitly presuppose a background type theory.
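The separation scheme can be illustrated concretely with hereditarily finite sets. The sketch below (Python, using `frozenset` as a stand-in for sets; all names are ours) forms the "Russell subset" R = {x in X : x not in x} of a given set X. Run inside a separation-style theory, Russell's argument no longer yields a contradiction; it merely shows that R can never be a member of X, i.e., no set contains everything:

```python
# With separation, R = {x in X : x not in x} exists for every set X,
# and the Russell argument shows R is never itself a member of X.
def russell_subset(X: frozenset) -> frozenset:
    return frozenset(x for x in X if x not in x)

a = frozenset()        # {}
b = frozenset([a])     # {{}}
X = frozenset([a, b])  # {{}, {{}}}

R = russell_subset(X)  # here R happens to equal X itself
assert R == X
assert R not in X      # holds for every X, not just this example
```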
Through the work of Zermelo and others, especially John von Neumann, the structure of what some see as the "natural" objects described by ZFC eventually became clear; they are the elements of the von Neumann universe, V, built up from the empty set by transfinitely iterating the power set operation. It is thus now possible again to reason about sets in a non-axiomatic fashion without running afoul of Russell's paradox, namely by reasoning about the elements of V. Whether it is appropriate to think of sets in this way is a point of contention among the rival points of view on the philosophy of mathematics.
Other resolutions to Russell's paradox, more in the spirit of type theory, include the axiomatic set theories New Foundations (by Quine) and Scott-Potter set theory.
## Notes
1. Potter, 2004: 24-25.
2. Quine, 1938.
3. Bertrand Russell, Principles of Mathematics (Cambridge: Cambridge University Press, 1903). ISBN 0-393-31404-9
4. Jean van Heijenoort, 1967.
## References
• Link, Godehard. One Hundred Years of Russell's Paradox: Mathematics, Logic, Philosophy. Berlin: Walter de Gruyter, 2004. ISBN 3110174383
• Potter, Michael. 2004. Set Theory and its Philosophy. Oxford Univ. Press.
• Quine, Willard. 1938. "On the theory of types," Journal of Symbolic Logic 3.
• Russell, Bertrand. 1903. Principles of Mathematics. Cambridge: Cambridge University Press. ISBN 0-393-31404-9
• Sorensen, Roy A. 2003. A Brief History of the Paradox: Philosophy and the Labyrinths of the Mind. Oxford: Oxford University Press. ISBN 0195159039
• Van Heijenoort, Jean. 1967. From Frege to Gödel; A Source Book in Mathematical Logic, 1879-1931. Cambridge: Harvard University Press. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8762943148612976, "perplexity": 724.6013513441368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697772439/warc/CC-MAIN-20130516094932-00007-ip-10-60-113-184.ec2.internal.warc.gz"} |
# Sigma confidence rating
https://www.physicsforums.com/threads/sigma-confidence-rating.618616/
1. Jul 5, 2012
### g.lemaitre
I'm trying to figure out what the odds of 5 sigma confidence rating has of being wrong. According to one website it is
0.000028% which is 1 in 35,000 but I've seen so many divergent answers as to what the odds of 5 sigma being wrong are that I want to be sure. I've seen people say it is as high as 1 in 3 million or as low as 1 in 700
2. Jul 5, 2012
### chiro
Hey g.lemaitre and welcome to the forums.
For this problem, I'm assuming you have a standard normal and wish to figure out the probability of being greater than 5 standard deviations outside of the mean.
If you are using different distributions, different assumptions, or you have a specific problem then please inform the rest of the readers here so that we can give you better advice.
Using R, I got the answer to be 2 * 2.866515718791939118515e-07 = 5.733031437583878237117e-07 = 0.000000573303.. which is really small. Taking the inverse of this gives us: 1744277.89 which equates roughly to a 1 in 1744278 chance or say a 1 in 1.7 million chance.
If you only considered one tail it would be just under a 1 in 3.5 million chance.
The thing is though that this is misleading if you don't provide more information, and this assumes that the distribution relating to what you are measuring has a Gaussian distribution. If it doesn't, or if you need to use another model, then this assumption will be wrong.
To get the calculation in R I used pnorm(-5.0,0,1) and multiplied that by 2 to get final probability (because of symmetry).
3. Jul 5, 2012
### Number Nine
You're not being very precise here (e.g. "odds of being wrong" is more complex than you realize, as is sigma); the best I can tell you is that, for normally distributed data, approximately 99.99995% of the data lie within 5 standard deviations of the mean.
4. Jul 5, 2012
### g.lemaitre
let me give an exact quote
5. Jul 5, 2012
### chiro
This quote looks like it assumes a normal distribution and refers to the quantities P(Z < z) where z = -3 and -5 respectively, but the figure for -5 is off by two decimal places according to R with the pnorm function, if they are assuming a Gaussian distribution.
What this means is that there is a cutoff value for the probability and they are saying that if goes below some cutoff or above some cutoff, then it is considered more than -3 or -5 standard deviations in the respective direction.
Can you point the readers to the article?
6. Jul 6, 2012
### cmos
It looks like you're trying to make sense of the numbers being thrown around regarding the Higgs boson, so I'll just throw out a summary.
For starters, the integral of a normal distribution from -n*sigma to n*sigma is equal to erf(n/sqrt(2)), where n is any real number and erf is the error function. We can therefore say that a 5-sigma result has a probability of erf(5/sqrt(2)) = 0.9999994 (i.e. 99.99994 %).
Now, a lot of the news reports are saying that this indicates that there is a "1 in 3.5 million" chance that there was no Higgs detection. This number is equal to
0.5 - erf(5/sqrt(2))/2.
Why do they divide by 2? Because they are looking for "bumps" above a "noise" level. In other words, they are only considering one side of the distribution. The 0.5 is just the integral over half of the normal distribution.
But you want to know about "odds." By definition, odds = P(failure)/P(success) where P means "probability of." Therefore, the odds of the Higgs result being a fluke is
[1 - erf(5/sqrt(2))] / erf(5/sqrt(2)) = "1 to 1.75 million."
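For readers who want to reproduce these figures, the same arithmetic can be done with Python's standard-library error function (`math.erf`); this is simply a restatement of the formulas above:

```python
import math

def two_tailed(n: float) -> float:
    """P(|Z| > n) for a standard normal, via the error function."""
    return 1.0 - math.erf(n / math.sqrt(2.0))

p_two = two_tailed(5.0)   # both tails beyond 5 sigma
p_one = p_two / 2.0       # one tail only (a "bump" above the noise)

print(f"two-tailed: 1 in {1.0 / p_two:,.0f}")   # roughly 1 in 1.7 million
print(f"one-tailed: 1 in {1.0 / p_one:,.0f}")   # roughly 1 in 3.5 million
```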
7. Jul 9, 2012
### haruspex
No, you've missed the '%'. It's 1 in 3.5 million.
Btw, this is not the chance of being wrong in rejecting the null hypothesis. It is the chance that the observed data was merely by chance, i.e. the chance of the data being thus given the null hypothesis. This is not the same thing as the chance that the null hypothesis is correct.
# Use continuity to show that $f(x)=x^3$ is uniformly continuous on $[0,1]$ but not on $[0,\infty)$
https://math.stackexchange.com/questions/2064144/use-continuity-to-show-that-fx-x3-is-uniformly-continuous-on-0-1-but-no
I'm trying to use continuity to show that $f(x)=x^3$ is uniformly continuous on $[0,1]$ but not $[0,\infty)$.
I've tried setting up an epsilon-delta proof, but I'm struggling a little:
By definition of uniform continuity, we know that $\forall \epsilon >0, \exists \delta >0$ such that
$|x-y|<\delta \Rightarrow |f(x) - f(y)| < \epsilon$.
And so, forcing
$\delta = \min\{1, \frac{\epsilon}{p_x}\}$ where $p_x = (x^2+xy+y^2)$
And so, we have that
$|(x)^3 - (y)^3|=|(x-y)(x^2+xy+y^2)| < |\delta (x^2 + xy+y^2)| < \epsilon$
I'm not sure if this is the correct way to go about proving it, or if I landed myself into a circular argument. Furthermore, intuitively I'm guessing we only have uniform continuity on $[0,1]$ but not [0,$\infty)$ because our $p_x$ would get too large?
– user399481
Dec 19 '16 at 1:36
• Note that in general, ANY continuous function on a compact set is uniformly continuous. Dec 19 '16 at 1:36
• @AlexR. Ah yes. And we have compactness because we have a closed set (by the Heine Borel theorem)? Dec 19 '16 at 1:45
• @Nikitau $[0,1]$ is closed and bounded and hence compact by Heine Borel. Dec 19 '16 at 2:42
Note that in the definition of uniform continuity, given $\varepsilon > 0$, you need to provide a $\delta > 0$ that works for all $x,y \in [0,1]$. In particular, you cannot have a $\delta$ that depends on $x$. This is in contrast to proving that $f$ is continuous at a specific $x$ where the $\delta$ you provide can depend on $x$.
$$|x^3 - y^3| = |(x - y)(x^2 + xy + y^2)| \leq |x - y||x^2 + xy + y^2|.$$
Now, if $x,y \in [0,1]$, we have $|x^2 + xy + y^2| \leq 3$ so we can deduce that
$$|x^3 - y^3| \leq 3|x - y|.$$
Hence, given $\varepsilon > 0$, we can take $\delta = \frac{\varepsilon}{3}$ and then if $|x - y| < \delta$ then $|x^3 - y^3| \leq 3|x - y| < 3\delta = \varepsilon$.
Note that the same argument would work if you wanted to prove uniform continuity on $[0,L]$ with the constant $3$ replaced by a different constant $C_L$ which bounds $|x^2 + xy + y^2|$ on $[0,L]$ (for example, $C_L$ can be $3L^2$).
Expressed in this way, we see that your basic intuition is correct. As $L$ gets larger, our constant $C_L$ also gets larger and hence our $\delta = \frac{\varepsilon}{C_L}$ gets smaller and smaller. In the limit, this should lead us to suspect (but this is not a formal proof!) that uniform continuity will fail.
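The bound used in this answer, $|x^3-y^3|\le C_L\,|x-y|$ with $C_L=3L^2$ on $[0,L]$, can be sanity-checked numerically. A small Python sketch (names are illustrative):

```python
import random

def worst_ratio(L: float, trials: int = 100_000) -> float:
    """Largest observed |x^3 - y^3| / |x - y| for random x, y in [0, L]."""
    random.seed(0)  # deterministic for repeatability
    worst = 0.0
    for _ in range(trials):
        x, y = random.uniform(0, L), random.uniform(0, L)
        if x != y:
            worst = max(worst, abs(x**3 - y**3) / abs(x - y))
    return worst

assert worst_ratio(1.0) <= 3.0        # C_1  = 3 * 1^2  on [0, 1]
assert worst_ratio(10.0) <= 300.0     # C_10 = 3 * 10^2 on [0, 10]
```

The ratio equals $x^2+xy+y^2$, which is bounded by $3L^2$ on the square $[0,L]^2$, so the asserts hold for any sample.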
• I'm a little confused on the part where you say that "this is in contrast to proving that f is continuous at a specific x". In an example provided in class, the professor wanted to show $f(x) = \frac{x^3}{1+x^2}$ was continuous on $\mathbb{R}$. She then fixed $x_0 \in \mathbb{R}$, and the delta included a function $P(x_0)$. This is might be a fairly silly question, but I'm confused why we were allowed do to that for the whole of $\mathbb{R}$ but not an interval [0,1]? Dec 19 '16 at 1:49
• @Nikitau: You are allowed to do it on an interval or on the whole of $\mathbb{R}$, but this only proves that your function is continuous, not uniformly continuous. Being uniformly continuous is a stronger property. If you look at the definition, this is expressed precisely by the fact that to show uniform continuity over some domain $D$, you need to provide a $\delta$ that works for all points in $D$ simultaneously. This is in contrast to proving that $f$ is continuous on $D$ where you can fix a point $x_0 \in D$ and then provide a $\delta = \delta(x_0)$ that depends on $x_0$. Dec 19 '16 at 1:52
• Oh thanks for the clarification! So if I was just proving continuity, the original post would have been OK? Or would having 'y' in my delta pose a problem? (Sorry for all the questions! I have an exam coming up and I just want to make sure I'm understanding all the nuances). Dec 19 '16 at 1:56
• @Nikitau: That is a problem. Even if you are only interested in regular continuity at a point $x_0$, you cannot have your $\delta$ depend on the argument of the function, only on $x_0$ (and $\varepsilon$). Dec 19 '16 at 1:58
• Ah, that makes sense. Thank you for pointing that out :) Dec 19 '16 at 2:04
To show that $$x^3$$ fails to be uniformly continuous on $$[0,\infty)$$, we take $$\epsilon=\frac{3}{2}$$. Then, for all $$\delta>0$$, and for $$x=\frac{1}{\sqrt\delta}$$ and $$y=\frac{1}{\sqrt \delta}+\frac{\delta}{2}$$ we have $$|x-y|<\delta$$ and
\begin{align}|x^3-y^3|&=\left|\left(\frac{1}{\sqrt\delta}+\frac{\delta}{2}\right)^3-\left(\frac{1}{\sqrt\delta}\right)^3\right|\\\\ &\ge \frac32\\\\ &=\epsilon \end{align}
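The same counterexample can be checked numerically: for each $\delta$, the chosen pair is closer together than $\delta$ yet mapped at least $3/2$ apart (Python sketch):

```python
# Pairs x = 1/sqrt(delta), y = x + delta/2 from the answer above:
# closer than delta, but with images at least 3/2 apart.
for delta in (1e-1, 1e-2, 1e-3):
    x = delta ** -0.5
    y = x + delta / 2.0
    assert abs(x - y) < delta
    assert abs(x**3 - y**3) >= 1.5

print("uniform continuity fails on [0, infinity)")
```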
• I can see why this works, but how does one know what values to set for $\epsilon$ and x,y? Dec 19 '16 at 2:38
• You need only find one such $\epsilon>0$ such that for any $\delta>0$ there are points $x$ and $y$ such that $|x-y|<\delta$ and $|x^3-y^3|\ge \epsilon$. Certainly, any $\epsilon$ smaller than $3/2$ would have worked also. Dec 19 '16 at 2:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9376607537269592, "perplexity": 108.40075885234836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585916.29/warc/CC-MAIN-20211024081003-20211024111003-00522.warc.gz"} |
https://www.researchpad.co/article/elastic_article_9761 | Biomechanics and Modeling in Mechanobiology
Springer Berlin Heidelberg
Efficient materially nonlinear μFE solver for simulations of trabecular bone failure
Volume: 19, Issue: 3
DOI 10.1007/s10237-019-01254-x
Abstract
An efficient solver for large-scale linear μFE simulations was extended for nonlinear material behavior. The material model included damage-based tissue degradation and fracture. The new framework was applied to 20 trabecular biopsies with a mesh resolution of 36 μm. Suitable material parameters were identified based on two biopsies by comparison with axial tension and compression experiments. The good parallel performance and low memory footprint of the solver were preserved. Excellent correlation of the maximum apparent stress was found between simulations and experiments ($R^2 > 0.97$). The development of local damage regions was observable due to the nonlinear nature of the simulations. A novel elasticity limit was proposed based on the local damage information. The elasticity limit was found to be lower than the 0.2% yield point. Systematic differences in the yield behavior of biopsies under apparent compression and tension loading were observed. This indicates that damage distributions could lead to more insight into the failure mechanisms of trabecular bone.
Keywords
Stipsitz, Zysset, and Pahr: Efficient materially nonlinear $\mu$FE solver for simulations of trabecular bone failure
Introduction
In-silico modeling of bone can help get a better insight into the biomechanical behavior of bone (Keaveny et al. 2001). Especially bone failure is not yet well understood, due to the highly complex hierarchical composition of bone. Understanding bone failure could aid in reducing bone fractures due to better diagnostics or in the development of improved treatments. Simulations based on computed tomography (CT) scans provide information on the internal failure progression of bone under loading in more detail than which is currently possible with experiments. The development of high-resolution $\mu$CT scanners made simulations on real bone structures possible. Scan resolutions are high enough to uncover local damage patterns on the micro-scale, i.e., on the level of single trabeculae. Thus, seen as a complementary approach to experiments, simulations can aid in unveiling the invisible failure processes within bones.
Two different modeling approaches for bone structures are commonly used: homogenized, continuum-level methods, and high-resolution microstructural models (Engelke et al. 2013). Homogenized models are based on coarse meshes which do not resolve the trabecular network. Instead, the internal substructure is usually taken into account via density-dependent material laws. Complex material models can be applied at the homogenized material point, due to the small model sizes. In contrast, $\mu \text{FE}$ analyses are performed at the microstructural level where the trabecular network is visible. Huge model sizes lead to high computational demands. Thus, only relatively simple material models are feasible. The high resolution of $\mu \text{FE}$ models leads to detailed results while keeping the modeling effort low (van Rietbergen and Ito 2015) when compared to hFE.
The challenge of using nonlinear $\mu \text{FE}$ analyses in basic research consists of two parts (Nawathe et al. 2014): (1) Whole bones at sufficiently high resolutions need to be simulated. If smaller regions of interest are chosen, results may depend strongly on the actual segment (Mueller et al. 2011) and the chosen boundary conditions (Panyasantisuk et al. 2016). For reliable results, voxel sizes need to be smaller than a third of the mean trabecular diameter, typically around $40\,\mu\text{m}$ (Bevill and Keaveny 2009). This leads to huge model sizes. (2) A material model that captures the main features of tissue-level failure is required (Nawathe et al. 2014). Thus, $\mu \text{FE}$ simulations are always a tradeoff between the computational demands and the complexity of the material model.
Different $\mu \text{FE}$ solvers were applied in the literature depending on the model size: smaller $\mu \text{FE}$ models (up to a few million degrees of freedom (mio DOF)) are usually solved with commercial or in-house software packages. These solvers are often general-purpose FE tools that are not very efficient (Wolfram et al. 2012; Hambli 2013; Harrison et al. 2013; Baumann et al. 2016; Verhulp et al. 2008). Larger models are commonly solved using specialized research software based on a linear-elastic constitutive law. A number of highly parallel HPC solvers were developed which were able to process models containing hundreds of millions of elements (Adams et al. 2004; Flaig and Arbenz 2012; Mueller et al. 2011). A few of these codes were extended for the use of nonlinear material models at high resolutions. Simulations with more than 200 mio elements were presented (Fields et al. 2012; Christen 2012; Nawathe et al. 2014; Zhou et al. 2016). However, these solvers have not been able to establish themselves in the community, probably due to the high computational demands. For instance, for a model consisting of 120 mio elements, over 4000 CPUs and 120 TB of memory were required (Nawathe et al. 2014). So although it has been proven that nonlinear simulations of whole bones are possible, there is still no nonlinear $\mu \text{FE}$ solver capable of analyzing large-scale models on standard HPC clusters.
Another challenge is that a nonlinear material model is required for the investigation of the failure mechanisms of bone. There is no agreement on what features have to be included to accurately model bone failure: in commercial FE packages, simplified micro-level material models are available, where the nonlinearity often consists of a bilinear form in maximum principal stress (Niebur et al. 2000; Verhulp et al. 2008) or a cast iron model (Wolfram et al. 2012). Special user-defined material laws were developed which use, e.g., a quadric yield surface (Schwiedrzik et al. 2013) or a modified von Mises criterion combined with ideal plasticity (Sanyal et al. 2012; Nawathe et al. 2013). In the large-scale simulations, typically no softening mechanisms are present. Thus, the failure behavior cannot be studied directly. Only one large-scale study including tissue failure exists. In this study, bone was modeled as a fully brittle tissue (Nawathe et al. 2015). However, an efficient large-scale $\mu \text{FE}$ solver incorporating effects beyond the yield limit is still missing.
The aim of this work is to develop such a nonlinear $\mu \text{FE}$ solver for large-scale applications. We follow two main objectives:
• An existing $\mu \text{FE}$ solver is extended to a damage-based material model including a fracture mechanism. We start from ParOSol (Flaig and Arbenz 2012), which was shown to efficiently perform linear analyses on whole bones. By carefully adapting ParOSol to a simple nonlinear material behavior, we expect that the excellent performance can be preserved.
• The potential of the new solver for biomechanical applications is demonstrated by studying the axial failure behavior of trabecular bone biopsies. The possible areas of application for the high level of detail obtained in the results are investigated.
Materials and methods
For objective (1), ParOSol (Flaig 2012) was extended to nonlinear material behavior. ParOSol was chosen because it is a highly parallel, efficient $\mu \text{FE}$ solver (Flaig and Arbenz 2012). It has a much lower memory footprint compared to standard $\mu \text{FE}$ solvers (Flaig and Arbenz 2011). The linear equations are solved by a preconditioned conjugate gradient algorithm based on a geometric multigrid preconditioner. The mesh is stored in an octree. However, only a linear-elastic constitutive law was included in the original ParOSol. A simple material model was required to extend the solver without loosing its good parallel performance.
Material model
The proposed material model (Fig. 1, top) consisted of (1) an isotropic, linear-elastic region (initial Young’s modulus ${E}_{0}$, Poisson’s ratio $\nu$), (2) a nonlinear region where the material degraded based on a scalar damage quantity D , and (3) a failure region. The transition from the linear to the nonlinear regime was determined by an isotropic, quadric damage onset surface (adapted from Schwiedrzik et al. (2013), Fig. 1, bottom). It is formulated in terms of the nominal stress tensor ${\sigma }_{\mathit{ij}}$ as
$$Y\left(\sigma_{ij}\right)=\frac{1}{\mathcal{H}}\left(\sqrt{\sigma_{ij}\,\mathcal{F}_{ijkl}\,\sigma_{kl}}+F_{ij}\,\sigma_{ij}\right)-1=0,\tag{1}$$
with the tensors
$$F_{ij}=\frac{1}{2}\left(\frac{1}{\sigma_{0}^{+}}-\frac{1}{\sigma_{0}^{-}}\right)\delta_{ij},\tag{2}$$

$$\mathcal{F}_{ijkl}=-\frac{\zeta_{0}}{4}\left(\frac{1}{\sigma_{0}^{+}}+\frac{1}{\sigma_{0}^{-}}\right)^{2}\delta_{ij}\delta_{kl}+\frac{1+\zeta_{0}}{4}\left(\frac{1}{\sigma_{0}^{+}}+\frac{1}{\sigma_{0}^{-}}\right)^{2}\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right).\tag{3}$$
Einstein sum convention is used. ${\delta }_{\mathit{ij}}$ is the Kronecker delta. The shape of the damage surface was defined via a parameter ${\zeta }_{0}$. It can be adapted to approximate commonly used yield criteria, like Drucker-Prager, von Mises, or Tsai-Wu criterion (Schwiedrzik et al. 2013). The damage onset surface took into account the tension–compression asymmetry of trabecular bone via different tensile and compressive yield stresses (${\sigma }_{0}^{+}$, ${\sigma }_{0}^{-}$). An equivalent formulation in the damage onset strains ${\epsilon }_{0}^{±}$ was applied. No manual distinction between tension and compression loading was required. Hardening was included via an isotropic hardening modulus ${E}_{\mathrm{hard}}$. The factor $\mathcal{H}$ determines the extent of hardening (compare Eq. 1) and depends on the current damage D:
$$\mathcal{H}=\frac{1-E_{\mathrm{hard}}/E_{0}}{1-E_{\mathrm{hard}}/\left(E_{0}\left(1-D\right)\right)}.\tag{4}$$
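As a sketch of how the onset criterion can be evaluated at a material point, the following Python snippet implements the quadric criterion $Y$ of Eq. 1 for a symmetric stress tensor with $\mathcal{H}=1$ (undamaged material). The numerical yield stresses are illustrative placeholders, not values from this study; the built-in checks confirm that the surface passes exactly through the uniaxial yield stresses, as it does by construction:

```python
import math

def damage_onset_Y(sig, s0_t, s0_c, zeta0=0.3, H=1.0):
    """Quadric damage onset criterion Y(sigma) for a symmetric 3x3 stress
    tensor; Y < 0 is elastic, Y = 0 lies on the onset surface."""
    tr = sig[0][0] + sig[1][1] + sig[2][2]
    frob2 = sum(sig[i][j] ** 2 for i in range(3) for j in range(3))
    s = 1.0 / s0_t + 1.0 / s0_c
    # sigma_ij F_ijkl sigma_kl with the volumetric and deviatoric parts of Eq. 3
    quad = (-zeta0 / 4.0) * s**2 * tr**2 + (1.0 + zeta0) / 4.0 * s**2 * frob2
    lin = 0.5 * (1.0 / s0_t - 1.0 / s0_c) * tr   # F_ij sigma_ij from Eq. 2
    return (math.sqrt(quad) + lin) / H - 1.0

def uniax(s):  # uniaxial stress state along the first axis
    return [[s, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]

# Placeholder yield stresses (e.g. in MPa): onset is reached exactly at the
# uniaxial tensile and compressive yield stresses ...
assert abs(damage_onset_Y(uniax(60.0), 60.0, 100.0)) < 1e-9
assert abs(damage_onset_Y(uniax(-100.0), 60.0, 100.0)) < 1e-9
# ... and the criterion is negative strictly inside the surface.
assert damage_onset_Y(uniax(30.0), 60.0, 100.0) < 0.0
```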
Implementation details
The material degraded locally if the local stress reached the current damage onset surface. In case of material degradation, the modulus was reduced to $E=\left(1-D\right){E}_{0}$. D was found numerically by back-projecting the current stress state onto the damage onset surface. Local tissue failure occurred when D exceeded a critical value ${D}_{\mathrm{c}}$. Failure was modeled by reducing the modulus to a small residual value ${E}_{\mathrm{f}}$. The material model did not include plasticity or rate dependency. The nonlinear material model was incorporated into ParOSol using a displacement-based, incremental-iterative solving procedure. The details are given in “Appendix C” and Stipsitz et al. (2018). A geometrically linear FE formulation was employed. The FE and material formulation allowed retrospective scaling of the results with the initial modulus ${E}_{0}$.
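In one dimension, the model reduces to the three-regime curve of Fig. 1. The following sketch of the monotonic tension branch (Python; parameter values are placeholders, not calibrated values) shows how the damage variable follows from assuming secant, damage-based unloading to the origin, i.e. $(1-D)E_0=\sigma/\epsilon$:

```python
def stress_1d(eps, E0=12_000.0, eps0=0.0068, E_hard=600.0,
              D_c=0.915, E_f_ratio=1e-5):
    """Monotonic 1-D tension response of the damage model: (stress, damage).

    Region (1): linear elastic below the onset strain eps0.
    Region (2): hardening line, damage from secant unloading to the origin.
    Region (3): residual stiffness E_f = E_f_ratio * E0 once D exceeds D_c.
    All parameter values are illustrative placeholders.
    """
    if eps <= eps0:                              # (1) linear elastic
        return E0 * eps, 0.0
    sigma = E0 * eps0 + E_hard * (eps - eps0)    # (2) hardening line
    D = 1.0 - sigma / (E0 * eps)                 # secant modulus (1-D)*E0
    if D >= D_c:                                 # (3) local tissue failure
        return E_f_ratio * E0 * eps, D
    return sigma, D

s_el, D_el = stress_1d(0.005)
assert D_el == 0.0 and abs(s_el - 60.0) < 1e-9            # still elastic
s_dam, D_dam = stress_1d(0.010)
assert 0.0 < D_dam < 0.915 and s_dam < 12_000.0 * 0.010   # degraded
s_fail, D_fail = stress_1d(0.200)
assert D_fail >= 0.915 and s_fail < 1.0                   # failed
```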
Fig. 1
Top: one-dimensional material model showing the (1) linear-elastic, (2) damage, and (3) fracture region. Pure damage-based unloading (dotted, red line) and tension–compression asymmetry are visible. Bottom: exemplary damage onset surface in principal stress space
Trabecular biopsies
For objective (2), the extended solver (ParOSolNL) was applied to 20 trabecular bone biopsies. The biopsies were taken from a previous study (Schwiedrzik et al. 2016): the data set consisted of 21 samples from 11 human donors. During experimental testing, 10 samples were loaded until failure in compression and 11 samples were loaded until failure in tension. The samples were cylindrical cores with 8 mm in diameter and 10 mm (tension samples) or 13 mm (compression samples) in height. In this study, one sample was excluded after visual inspection of the microstructure. Segmented $\mu$CT images at a resolution of $36\,\mu\text{m}$ were available from different locations (9 Femur, 2 Radius, 9 Vertebra). Biopsies from different anatomic sites were included because the aim is a framework that can be applied universally to any trabecular biopsy. An FE mesh was created by converting each voxel of the scans to a linear hexahedral element. Displacement boundary conditions were applied to mimic tension and compression experiments: Nodes on the top plane were displaced in axial direction in strain increments of 0.1%. Nodes on the bottom plane were fully fixed, and all lateral displacements on the top plane were constrained. Analyses were stopped at the first drop in apparent force. Post-ultimate tissue behavior was not suitably modeled due to the lack of a large deformation formulation and self-contact constraints.
Identification of material parameters
The suitable values for the material parameters of the damage model were identified (Table 1). The Poisson’s ratio $\nu$ and ${\zeta }_{0}$ were chosen from the literature (Schwiedrzik and Zysset 2015). Following (Schwiedrzik et al. 2013) and using the damage onset strains identified here, ${\zeta }_{0}=0.3$ corresponded to an ellipsoidal damage onset surface. A residual stiffness of bone tissue of ${E}_{\mathrm{f}}={10}^{-5}{E}_{0}$ was used. The residual modulus has only marginal effects on the results. ${E}_{\mathrm{f}}=0$ is also possible but may lead to slightly decreased performance. The tissue modulus ${E}_{0}$ is reported to vary greatly depending on bone type, anatomic location, and age (Carretta et al. 2013). In this study, a homogeneous tissue ${E}_{0}$ was calculated for each biopsy individually so that the apparent modulus matched the experiment. The remaining parameters (${\epsilon }_{0}^{±}$, ${D}_{\mathrm{c}}$, ${E}_{\mathrm{hard}}$) could not be taken directly from the literature since they depended on the material model and mesh resolution. Instead, the parameters were identified using two biopsy samples, one under compression and one under tension boundary conditions. The parameters were identified by repeatedly performing nonlinear simulations of the two samples. Depending on the simulation results, the parameters were adapted manually to best reproduce the maximum force of the experiments. Only two samples were chosen for the identification process because an optimization routine using all samples would have been computationally demanding and unique results could not be ensured. The results obtained with the identified parameters for all 20 samples justified this practical approach.
Table 1
Identified material parameter set for 36 μm mesh resolution

Predefined: $\nu = 0.3$ (–), ${\zeta }_{0} = 0.3$ (–)
Identified on 2 samples: ${\epsilon }_{0}^{+} = 0.68$ %, ${\epsilon }_{0}^{-} = 0.89$ %, ${D}_{\mathrm{c}} = 0.915$ (–), ${E}_{\mathrm{hard}} = 0.05$ GPa
Individual: ${E}_{0}$ (calibrated per sample)
Post-processing
The resulting apparent stress was defined as the sum of the axial reaction forces on the top plane divided by the initial cross-sectional area of the biopsy. Apparent strain was evaluated as the applied displacement divided by the initial height of the cylinders. The apparent yield stress, ${\sigma }_{\mathrm{y}}$, was identified via the 0.2% strain-offset criterion, and the maximum stress, ${\sigma }_{\mathrm{max}}$, was the maximum absolute stress in the apparent stress–strain curve. During post-processing, the hexahedral meshes were smoothed with ParaView for better visualization. Additionally, 2D projections of the damage zones were generated to compare the internal fracture patterns. Linear regression analyses were performed for tension and compression samples separately (including the two calibration samples). The intercept of the linear regression function was set to zero. The deviations between simulations and experiments were evaluated for apparent ${\sigma }_{\mathrm{y}}$ and ${\sigma }_{\mathrm{max}}$. The relative errors were obtained by scaling the deviations by the experimental value.
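The 0.2% strain-offset evaluation can be sketched in a few lines of numpy, assuming the apparent stress–strain curve is available as arrays (function name and the interpolation details are illustrative, not from the original post-processing scripts):

```python
import numpy as np

def offset_yield_point(strain, stress, modulus, offset=0.002):
    """Apparent 0.2% offset yield point: intersection of the stress-strain
    curve with a line of slope `modulus` shifted by `offset` strain.
    Assumes monotonically increasing strain (tension sign convention)."""
    strain = np.asarray(strain)
    stress = np.asarray(stress)
    line = modulus * (strain - offset)
    below = stress <= line            # True once the curve crosses the offset line
    i = int(np.argmax(below))
    if not below[i]:
        return None                   # no crossing within the recorded curve
    # linear interpolation between the two bracketing samples
    d0 = stress[i - 1] - line[i - 1]
    d1 = stress[i] - line[i]
    t = d0 / (d0 - d1) if d0 != d1 else 0.0
    eps_y = strain[i - 1] + t * (strain[i] - strain[i - 1])
    return eps_y, float(np.interp(eps_y, strain, stress))
```

For a compression curve, the same routine can be applied to the sign-flipped strain and stress arrays.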
A novel elasticity limit was defined in terms of the percentage of damaged elements ${\mathcal{N}}_{\mathrm{D}}$. ${\mathcal{N}}_{\mathrm{D}}$ was computed as the number of elements with $\left(D>0\right)$ divided by the total number of elements in the structure. The inelastic start point ${\epsilon }_{\mathrm{ie}}$ was determined by least-square fitting of the following piece-wise quadratic function ${\overline{\mathcal{N}}}_{\mathrm{D}}$ to the individual ${\mathcal{N}}_{\mathrm{D}}$ versus total strain curves from simulations (see “Appendix D”):
$$\overline{\mathcal{N}}_{\mathrm{D}}(\epsilon)=\begin{cases}0 & \epsilon \le \epsilon_{\mathrm{ie}}\\ b\,(\epsilon-\epsilon_{\mathrm{ie}})^{2} & \epsilon \ge \epsilon_{\mathrm{ie}}.\end{cases}$$
Note that this point is not the apparent 0.2% strain-offset yield point.
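The piece-wise quadratic fit does not require a general nonlinear optimizer: for each candidate ${\epsilon }_{\mathrm{ie}}$, the optimal $b$ follows in closed form from linear least squares, so a one-dimensional scan suffices. A sketch of this approach (a simplification assumed here; the paper does not specify its fitting algorithm):

```python
import numpy as np

def fit_inelastic_start(strain, nd):
    """Fit N_D(eps) = 0 for eps <= eps_ie, b*(eps - eps_ie)^2 otherwise,
    by scanning candidate eps_ie values taken from the strain samples and
    solving for b in closed form at each candidate."""
    best = (np.inf, None, None)
    for e_ie in strain:
        w = np.clip(strain - e_ie, 0.0, None) ** 2   # basis (eps - eps_ie)^2
        denom = (w * w).sum()
        b = (w * nd).sum() / denom if denom > 0 else 0.0
        r = ((nd - b * w) ** 2).sum()                # squared residual
        if r < best[0]:
            best = (r, float(e_ie), float(b))
    return best[1], best[2]                          # (eps_ie, b)
```

Restricting the candidates to the sampled strain values keeps the scan cheap; a finer candidate grid or a final local refinement could be added if sub-sample accuracy were needed.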
During post-processing, damaged elements were categorized by the stress at initial damage onset (Fig. 2). Compression damage was present if all principal stress components of an element were negative and tension if all principal stress components were positive. The remaining damaged elements were classified by the sign of the hydrostatic stress part (negative corresponded to ‘hydrostatic’ compression, positive to ‘hydrostatic’ tension).
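The classification rule above can be written down directly for a single element, assuming its (symmetric) centroid stress tensor is available; the function name and return labels are illustrative:

```python
import numpy as np

def damage_mode(stress_tensor):
    """Classify the initial damage mode of an element: 'tension' if all
    principal stresses are positive, 'compression' if all are negative,
    otherwise by the sign of the hydrostatic stress (trace/3, which has
    the same sign as the sum of the principal stresses)."""
    p = np.linalg.eigvalsh(stress_tensor)   # principal stresses
    if np.all(p > 0):
        return "tension"
    if np.all(p < 0):
        return "compression"
    return "hydrostatic tension" if p.sum() > 0 else "hydrostatic compression"
```

`eigvalsh` assumes a symmetric matrix, which holds for the Cauchy stress tensor used here.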
Fig. 2
Illustration of the damage mode classification: regions are highlighted on the initial damage onset surface in principal stress space in 3D (left) and on a 2D projection with ${\sigma }_{3}={\sigma }_{1}$ (right). Elements are damaged under local tension (red) or compression (blue) if all principal stress components are positive or negative, respectively. Additionally, hydrostatic damage modes are defined by the sign of the hydrostatic stress: positive hydrostatic stress corresponds to hydrostatic tension (light red) and negative to hydrostatic compression (light blue)
Results
The simulation time for an individual biopsy sample was between 0.4 and 2 h of wall-clock time on a standard shared-memory server (2 × 14 cores, Intel Xeon E5-2697 v2 @ 2.70 GHz, 384 GB RAM). The variation in simulation times was mainly due to the structure sizes; the biopsy models had 3–15 million DOF. ParOSolNL was very memory efficient; at most 3 GB of total RAM was required.
Agreement with experiments
Fig. 3
Linear regression analysis for compression (top) and tension (bottom): the apparent 0.2% yield stress (left) and maximum stress (right) obtained in the simulations (on the x-axis) are compared to the experimental values (on the y-axis, from Schwiedrzik et al. (2016))
The simulation results showed excellent correlation with experiments (Fig. 3). Specifically, apparent 0.2% yield stresses and maximum stresses correlated with a coefficient of determination of ${R}^{2}\ge 0.97$. Maximum stresses were generally overestimated in the simulations (slope of the regression was 0.89 in compression and 0.94 in tension). No correlations in apparent yield strains and ultimate strains were found.
Fig. 4
Selected stress–strain curves of experiments (black, solid, from Schwiedrzik et al. (2016)) and simulations (blue, dotted). The green and red rectangles mark the apparent 0.2% yield point and the maximum stress point, respectively. For these two points, the local damage pattern is shown (small pictures: bone structure in gray, damage in red)
Table 2
Average relative deviations in the apparent 0.2% yield stress (${\sigma }_{\mathrm{y}}$) and maximum stress (${\sigma }_{\mathrm{max}}$) obtained from simulations and experiments (from Schwiedrzik et al. (2016)) for tension and compression samples. Additionally, standard deviations of the relative errors are given
Tension
Compression
Apparent stress–strain curves showed good qualitative agreement with the experiments (Fig. 4). The local damage patterns differed significantly from sample to sample depending on the individual microstructure (smaller images in Fig. 4). High deviations between apparent ${\sigma }_{\mathrm{y}}$ and ${\sigma }_{\mathrm{max}}$ obtained from simulations and from experiments were observed for individual samples (Table 2). The maximum stress depended strongly on the critical damage ${D}_{\mathrm{c}}$. In a parameter study using the two calibration samples, $2%$ variation in ${D}_{\mathrm{c}}$ led to relative deviations in the maximum stress of approx. $15%$. The $0.2%$ yield point was nearly unaffected (deviations $<2%$). For more details, see “Appendix B”. The influence of the other calibrated material parameters was much smaller; $10%$ variation in a parameter resulted in less than $10%$ deviations in the apparent yield and maximum point.
Local damage pattern
Fig. 5
Development of local damage regions over the simulation (pseudo-) time. One sample under compression (left) and one under tension (right) are shown. The first columns give the local damage at the apparent 0.2% yield point and the third columns at the maximum point. The structures are shown in 3D (top row) and in two perpendicular projections (middle and bottom row). The bone structure is depicted in grayscales, the damage D in red
The local development of damage regions was observable due to the nonlinear nature of the simulations. Damage patterns differed depending on the individual microstructures. Two selected local results are given in Fig. 5. At the $0.2%$ yield point (left columns), some sizable damage regions already existed. As the applied load increased, initially damaged regions degraded further and the damage regions grew. The state at the maximum sustainable load is shown in the right columns of Fig. 5. In most samples, a diffuse damage pattern dominated.
Fig. 6
The percentage of elements that are damaged ($D>0$) shows a sudden increase in the inelastic start point ${\epsilon }_{\mathrm{ie}}$. The colored triangles denote ${\epsilon }_{\mathrm{ie}}$ of the individual samples and the dotted line marks the average ${\epsilon }_{\mathrm{ie}}$. The inelastic region starts later for compression (left) than for tension boundary conditions (right)
The local damage allowed the definition of a novel elasticity limit directly from simulation results. The inelastic start point ${\epsilon }_{\mathrm{ie}}$ was defined as the strain where a pronounced nonlinearity occurred, manifesting in an increase in damaged elements ${\mathcal{N}}_{\mathrm{D}}$ (Fig. 6). The individually fitted curves ${\overline{\mathcal{N}}}_{\mathrm{D}}$ and inelastic start points ${\epsilon }_{\mathrm{ie}}$ are depicted in Fig. 6. On average, ${\epsilon }_{\mathrm{ie}}$ was $0.27±0.04%$ for tension and $-0.34±0.04%$ for compression samples. No systematic difference between low- and high-density samples was found. For comparison, the apparent $0.2%$ yield strain was determined as $0.67±0.16%$ in tension and $-0.81±0.13%$ in compression. The $0.02%$ yield strain was $0.26±0.18%$ in tension and $-0.2±0.17%$ in compression.
Damage distributions revealed qualitative differences between the tension and the compression group (Fig. 7). At the maximum stress point, samples under tension showed a peak which was not present in damage distributions of compression samples. At this point, in samples under apparent compression, a larger amount of tension damage was present than vice versa ($2.61±0.71%$ compared to $0.12±0.06%$).
Fig. 7
Damage distribution of one femur biopsy under compression (the corresponding local damage pattern is shown in Fig. 5, left) and one femur biopsy under tension (local damage pattern see Fig. 5, right) at the maximum stress point. Contributions of the different initial damage modes are shown
Discussion
The aims of this study were (1) the development of an efficient, materially nonlinear $\mu \text{FE}$ solver including tissue-level failure and (2) the investigation of its potential based on trabecular bone biopsies. A simple, damage-based material model was successfully incorporated in ParOSol while preserving its good performance. Material parameters were identified and resulted in excellent correlation between the maximum stress in simulations and experiments. The development of local damage regions was observable due to the nonlinear nature of the simulations. This led to the definition of a new elasticity limit based on the evolution of the number of damaged elements. Damage distributions allowed more insight into the internal processes of the trabecular bone biopsies.
Regarding objective (1), the excellent parallel performance of the original solver was successfully carried over to the nonlinear material regime. Simulation times using ParOSolNL were 10 times lower than with ParFEAP (Schwiedrzik et al. 2016), which required solving times between 4 and 22 h for the same samples on the same machine. While many solvers can simulate models with a few million DOF, ParOSolNL has already been applied successfully to huge bone structures as well. For details on the performance, the reader is referred to Stipsitz et al. (2018). To summarize, bone simulations of more than 5 billion DOF were feasible on a standard HPC cluster. ParOSolNL used the computational resources efficiently and scaled well up to at least 1024 CPUs while maintaining a low memory footprint. No convergence issues were encountered for any sample. The geometric multigrid preconditioner used for solving the linear equations is robust against modulus jumps (Flaig 2012); thus, setting $E=0$ in failed elements does not deteriorate the performance. For solving the nonlinear problem, an incremental-adaptive procedure was used. Since the material formulation is not continuous at tissue failure, damage was not allowed to decrease once it exceeded the critical damage in any iteration.
The material parameters identified for objective (2) compare well to the parameters used in the literature (Table 3). A range of tissue moduli is reported here, since ${E}_{0}$ was identified for each biopsy individually. The tensile yield strain is in the range of values reported in the literature. The tension–compression asymmetry in the tissue yield strains is slightly higher. The value for the hardening modulus matches the one reported in Bayraktar et al. (2004). The material parameters were calibrated on two samples only. Although they led to good agreement of simulation results with experiments, it cannot be assumed that the parameters are generally valid or the best possible parameters. However, the goal of this work was not the identification of physical tissue properties but the development of a framework that can be universally applied to any trabecular biopsy. It is assumed that on the tissue level, i.e., at around 30 μm resolution, the tissue properties are the same irrespective of anatomic site. To account for stiffness variations, the results are scaled to match the experimental stiffness. The variations in stiffness could be caused, among other factors, by the degree of mineralization or by errors in representing the exact boundary conditions of the experiments.
Table 3
Trabecular tissue properties reported in the literature and identified in this study: initial tissue modulus ${E}_{0}$, initial tensile yield strain, yield strain asymmetry, and hardening modulus ${E}_{\text{hard}}$

Property | Literature | Reference(s) | This study
${E}_{0}$ (GPa) | 1–15 | Lucchinetti et al. (2000) | 7.3–13.5
Tensile yield strain (%) | 0.4–2.6 | Frank et al. (2018); Carretta et al. (2013) | 0.68
Yield strain asymmetry (–) | 0.4, 2/3; 0.62 | Schwiedrzik et al. (2016); Bayraktar et al. (2004) | 0.76
${E}_{\text{hard}}$ (GPa) | 0.05 | Bayraktar et al. (2004) | 0.05
Simulation results showed excellent correlation with experiments. The apparent $0.2%$ yield stress and ultimate stress fit well to the experiments for a wide range of different trabecular structures (different anatomic locations, bone densities) under axial tension and compression. This high correlation with the experiments confirms the simple material identification procedure. Good correlation in apparent-level yield stress is generally reported in the literature for nonlinear $\mu \text{FE}$ simulations (Schwiedrzik et al. 2016; Sanyal et al. 2012; Hambli 2013). However, most $\mu \text{FE}$ material models do not include tissue fracture. In one of the few exceptions, a comparable correlation for the maximum stress is reported (Hambli 2013). However, their solver was exclusively applied to small biopsies.
The maximum stress found in the simulations is very sensitive to small variations in the critical damage ${D}_{\mathrm{c}}$. Different values for ${D}_{\mathrm{c}}$ in tension and compression have been found to improve the results (see “Appendix B”). However, in each global loading condition, a mixture of different local loading conditions occurred. Thus, different global values for ${D}_{\mathrm{c}}$ for tension and compression samples are not consistently possible. In the material model, different values for ${D}_{\mathrm{c}}$ for local tensile or compressive loading would require a tensor formulation for the damage to account for mixed loading cases.
The ultimate strength of low-density samples under compression is systematically overestimated (a representative stress–strain curve is compression—R14 in Fig. 4). Slender structures under compression, which are common in low-density samples, are liable to buckling and extensive bending (Cowin 2001; Stölken and Kinney 2003). However, in this study, a linear geometric FE formulation is applied, which cannot reproduce these mechanisms. Thus, slender trabeculae seem to withstand much higher strains than physically possible, leading to an overestimation of the overall strength.
The location of damaged regions obtained in the simulations looks plausible. However, further experiments are required to validate the location of failure and to study the reliability of local results. Rather diffuse, non-localized damage was visible up to the ultimate point where a more localized failure occurred. Local information is not easily obtained in experiments, especially during loading (Carretta et al. 2013). Mostly, the crack pattern is studied by staining the structure (Moore and Gibson 2002) or by simultaneous $\mu$CT scanning (Thurner et al. 2006). With the recent developments in digital volume correlation, a direct local comparison between displacements in experiments and simulations could become possible soon (Costa et al. 2017).
A considerable number of elements was damaged already very early in the simulations: at the apparent $0.2%$ yield point, on average 2–6% of the elements were damaged. Thus, the novel elasticity limit determined directly from the simulations was lower than the apparent $0.2%$ yield point. The ${\epsilon }_{\mathrm{ie}}$ fits well with the physiologic strains reported in the literature (0.05–0.6%) (Yang et al. 2011; Di Palma et al. 2003). The elasticity limit reflected the tension–compression asymmetry found in bone due to the asymmetric tensile and compressive damage onset strains.
As expected, simulation results showed no systematic differences between low- and high-density samples. This agrees well with the assumption that strain at fracture is comparable in different bones (apart from the tension–compression asymmetry) while fracture stress can vary largely (Morgan and Keaveny 2001).
Damage distributions suggested systematic differences in the damage mechanism between external tension or compression loads. Under applied compression, a larger amount of tension damage was present than vice versa. One reason could be the higher compression than tension tissue yield strain. Additionally, it could indicate different predominant loading modes under tension and compression. Compression samples show mainly compression damage but also higher amounts of tension damage. This could be due to a mixture of local compression and bending. A similar bending behavior has been reported in Harrison et al. (2013) and Shi et al. (2010).
The good performance comes with a number of limitations due to the simple modeling approach. First, static analyses were performed which included material nonlinearity only; no large-deformation or contact mechanisms were applied. It is well known that a linear geometric formulation may lead to decisive errors in samples with a bone volume density of less than $20%$ (Bevill et al. 2006). This is in accordance with the high deviations found in this study for low-density samples under compression. In the future, it needs to be reconsidered whether an extension to a geometrically nonlinear formulation is possible without deteriorating the good performance. Second, the material model did not include plasticity and strain-rate dependency. Third, the results are mesh dependent (see “Appendix C”); thus, the material parameters identified here are only valid for the mesh resolution of 36 μm. Three aspects concerning mesh accuracy have to be discussed: (1) Meshing a structure with aligned hexahedral elements leads to ragged surfaces. Thus, the results, especially stresses, oscillate on curved surfaces (Guldberg et al. 1998). However, hexahedral elements enable an efficient and highly parallel HPC implementation with a low memory footprint. (2) Local continuum damage is known to be strongly mesh-size dependent since a strong localization of damage occurs. In this case, however, the diffuse damage behavior opposes this effect: the damage did not localize to single elements but formed larger damage regions. (3) The nonlinearity of the system makes the results strongly dependent on small structural deviations as introduced by coarsening the structure. Thus, local results should be viewed with caution. In the future, a local validation study has to be performed to check the local accuracy of the chosen approach.
The simple material model and FE formulation were chosen because the main focus of this work was the development of a fast and efficient solver which can be readily applied to large-scale biomechanical problems.
Conclusion
Although a very simple material model and algorithm were used, quite good agreement between simulations and experiments was achieved. The new framework, ParOSolNL, enables nonlinear simulations of large structures with suitably high resolution in reasonable simulation times. The development of damage regions can be traced in detail due to the nonlinear nature of the simulations. Additionally, a new elasticity limit is proposed which requires only information obtained directly from the simulations. Interesting differences in damage distributions between tension and compression were found. Further investigations of these differences in the course of future nonlinear applications, for instance on whole bones, may help to gain more insight into the internal mechanisms of bone failure.
Appendices
Appendix A: Implementation details
The nonlinear solver is an extension of the open-source solver ParOSol (Flaig 2012), which is able to solve linear-elastic FE problems in a highly parallel and efficient way. Additionally, it has a very low memory footprint, which is important for the application on standard high-performance clusters.
An incremental-iterative procedure was applied to include nonlinear material behavior into ParOSol. At the start of each increment, an explicit step was performed to obtain a start value for the damage of each element at the current loading state. The initial solution was then iteratively corrected until a converged solution was found. During these iterations, damage was allowed to decrease compared to the initial solution, but not below the solution of the previous increment. To assure convergence, elements that were once fractured ($D>{D}_{\mathrm{c}}$) remained fractured in all succeeding iterations.
The actual solving of the FE equations was performed by the linear solver within the original ParOSol. In each iteration (of each increment), the Young’s moduli in the structure were kept constant. The resulting linear system of equations was solved using the linear solver of the original ParOSol. For the solving process, fully integrated linear hexahedral elements were applied. For the evaluation of the damage onset criterion, the interpolated centroid stress of each element was used. The modulus was reduced element-by-element since this best fitted the design of the original ParOSol.
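The incremental-iterative scheme can be illustrated on a deliberately tiny stand-in problem: two unit-length springs in series under a prescribed total elongation, where the strain split depends on the damaged stiffnesses and therefore has to be found by fixed-point iteration. The damage law and all parameters below are stand-ins, not the paper's material model, and the explicit predictor step is simplified to starting from the last converged state:

```python
import numpy as np

def damage_law(strain, eps0=0.005, kappa=40.0):
    # Stand-in local damage law (not the paper's model):
    # linear damage growth beyond the onset strain, capped below 1.
    return float(np.clip(kappa * (abs(strain) - eps0), 0.0, 0.99))

def solve_increment(u_total, D_prev, k0=(1.0, 0.8), tol=1e-10, max_iter=500):
    """One load increment: start from the last converged damage state,
    then correct iteratively. Damage may change between iterations but
    never drops below the previous increment's value (no healing)."""
    D = list(D_prev)
    for _ in range(max_iter):
        k = [k0i * (1.0 - d) for k0i, d in zip(k0, D)]   # damaged stiffnesses
        flex = [1.0 / ki for ki in k]
        # springs in series carry equal force -> elongation split by flexibility
        u = [u_total * f / sum(flex) for f in flex]
        D_new = [max(damage_law(ui), dp) for ui, dp in zip(u, D_prev)]
        if max(abs(a - b) for a, b in zip(D, D_new)) < tol:
            return u, D_new
        D = D_new
    raise RuntimeError("increment did not converge")
```

Driving this toy model through increasing elongation increments reproduces the qualitative behavior described above: damage grows monotonically across increments and concentrates in the weaker spring.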
The solution of an increment was taken to be sufficiently converged if the change in damage was small enough. Two convergence criteria were defined: (1) the local change in damage, ${}^{\langle\mathrm{e}\rangle}R_{1}$, from iteration $(i-1)$ to iteration $(i)$ was defined as
$${}^{\langle\mathrm{e}\rangle}R_{1} = {}^{\langle\mathrm{e}\rangle}D_{n}^{(i)} - {}^{\langle\mathrm{e}\rangle}D_{n}^{(i-1)} \le 100\,\delta.$$
(2) The average change in damage, $R_{2}$, was
$$R_{2} = \frac{\sum_{\mathrm{all\ elements}} {}^{\langle\mathrm{e}\rangle}R_{1}}{N} < \delta.$$
Here, $\delta$ is the convergence tolerance, the sum runs over all elements, and $N$ is the number of elements whose damage changed from iteration $(i-1)$ to iteration $(i)$.
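A direct transcription of the two criteria, assuming the elementwise damage fields are available as arrays; taking absolute values of the local change is an assumption made here, since damage may also decrease between iterations:

```python
import numpy as np

def damage_converged(D_old, D_new, delta=1e-4):
    """Check both convergence criteria: every element's damage change
    stays below 100*delta, and the average change over the N elements
    whose damage changed stays below delta."""
    r1 = np.abs(np.asarray(D_new) - np.asarray(D_old))
    if not np.all(r1 <= 100.0 * delta):
        return False                      # criterion (1) violated locally
    changed = r1 > 0.0
    n = int(changed.sum())
    r2 = r1[changed].sum() / n if n else 0.0
    return bool(r2 < delta)               # criterion (2), averaged change
```

The value of `delta` is a placeholder; the paper does not report the tolerance used in ParOSolNL.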
Appendix B: ${D}_{\mathrm{c}}$ sensitivity
The maximum stress found in the simulations depended strongly on the choice of ${D}_{\mathrm{c}}$ (Fig. 8). The lower ${D}_{\mathrm{c}}$ was chosen, the earlier global failure occurred. However, the slope of the stress–strain curve was only changed minimally and primarily in the vicinity of the ultimate point. A lower value for ${D}_{\mathrm{c}}$ led to a smaller number of damaged elements at apparent fracture (local images in Fig. 8, right). The material degradation started in the same regions, independently of ${D}_{\mathrm{c}}$.
The strong influence of ${D}_{\mathrm{c}}$ was one reason for the poor prediction of the maximum stress values in compression samples. The maximum stress was systematically overestimated for the compression biopsies, especially in low-density samples. Decreasing ${D}_{\mathrm{c}}$ by $2%$ and re-simulating all compression samples led to reduced errors in the maximum stress (Table 4). The slope of the linear regression curve for the maximum stress reached nearly unity (1.02), while the coefficient of determination was not affected.
Table 4
Average relative deviations in the apparent $0.2%$ yield stress (${\sigma }_{\mathrm{y}}$) and maximum stress (${\sigma }_{\mathrm{max}}$) obtained from simulations with $2%$ reduced ${D}_{\mathrm{c}}$
(%)
Compression (reduced ${D}_{\mathrm{c}}$)
Fig. 8
The maximum stress point depends strongly on slight variations (here: 2%) in the critical damage ${D}_{\mathrm{c}}$ (left). The results for one vertebra biopsy under tension are shown. A higher ${D}_{\mathrm{c}}$ corresponds to more damaged elements at apparent failure (right)
Appendix C: Mesh resolution
Simulation results depended strongly on the mesh resolution. This mesh dependency stems from the purely local damage formulation of the material model. In Figs. 9 and 10, the influence of the mesh resolution can be observed. The number of damaged elements decreased with smaller voxel size. However, the location of maximum damage and fracture in the trabecular structures remained approximately the same. As expected, the width of the failed regions was much thinner when the bone structure was more finely resolved. Global results were strongly affected by the mesh resolution. Thus, the identified material parameters are only suitable for the resolution of 36 μm at which they were identified.
Fig. 9
Dependency of the simulation results on the mesh resolution: apparent stress–strain curve for one biopsy from a femur ($\text{BV}/\text{TV}=22.57%$) under compression with 36 μm and 12 μm resolution (left) and corresponding local results (middle and right) at the maximum stress point
Fig. 10
Dependency of the simulation results on the mesh resolution: apparent stress–strain curve for one biopsy from a vertebra ($\text{BV}/\text{TV}=11.41%$) under tension with 36 μm and 12 μm resolution (left) and corresponding local results (middle and right) at the maximum stress point
Appendix D: Elasticity limit
For the definition of an elasticity limit, the evolution of damage over the applied strain is used (Fig. 11).
Fig. 11
Evaluation of the elasticity limit for one individual sample under compression: the number of damaged elements is defined as the sum of all elements with $\left(D>0\right)$ divided by the total number of elements in the structure. A piece-wise quadratic function is fitted to the initial part of the curve. The obtained elasticity limit (blue triangle) is usually lower than the 0.2% yield point (green rectangle). At the 0.2% yield point, already a sizeable amount of damage is present (see small images)
Acknowledgements
Open access funding provided by TU Wien (TUW).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
References
1
Adams MF, Bayraktar HH, Keaveny TM, Papadopoulos P (2004) Ultrascalable implicit finite element analyses in solid mechanics with over a half a billion degrees of freedom. In: Proceedings of the 2004 ACM/IEEE conference on supercomputing. IEEE Computer Society, p 34
2
Baumann AP, Shi X, Roeder RK, Niebur GL. The sensitivity of nonlinear computational models of trabecular bone to tissue level constitutive model. Comput Methods Biomech Biomed Eng 2016. 19: 5, pp.465-473, doi: 10.1080/10255842.2015.1041022
3
Bayraktar HH, Gupta A, Kwon RY, Papadopoulos P, Keaveny TM. The modified super-ellipsoid yield criterion for human trabecular bone. J Biomech Eng 2004. 126: 6, pp.677-684, doi: 10.1115/1.1763177
4
Bevill G, Keaveny TM. Trabecular bone strength predictions using finite element analysis of micro-scale images at limited spatial resolution. Bone 2009. 44: 4, pp.579-584, doi: 10.1016/j.bone.2008.11.020
5
Bevill G, Eswaran SK, Gupta A, Papadopoulos P, Keaveny TM. Influence of bone volume fraction and architecture on computed large-deformation failure mechanisms in human trabecular bone. Bone 2006. 39: 6, pp.1218-1225, doi: 10.1016/j.bone.2006.06.016
6
Carretta R, Lorenzetti S, Müller R. Towards patient-specific material modeling of trabecular bone post-yield behavior. Int J Numer Methods Biomed Eng 2013. 29: 2, pp.250-272, doi: 10.1002/cnm.2516
7
8
Costa MC, Tozzi G, Cristofolini L, Danesi V, Viceconti M, Dall’Ara E. Micro finite element models of the vertebral body: validation of local displacement predictions. PLoS ONE 2017. 12: 7, pp.e0180151, doi: 10.1371/journal.pone.0180151
9
Cowin SC (ed) Bone mechanics handbook 2001. CRC Press, Boca Raton
10
Di Palma F, Douet M, Boachon C, Guignandon A, Peyroche S, Forest B, Alexandre C, Chamson A, Rattner A. Physiological strains induce differentiation in human osteoblasts cultured on orthopaedic biomaterial. Biomaterials 2003. 24: 18, pp.3139-3151, doi: 10.1016/S0142-9612(03)00152-2
11
Engelke K, Libanati C, Fuerst T, Zysset P, Genant HK. Advanced CT based in vivo methods for the assessment of bone density, structure, and strength. Curr Osteoporos Rep 2013. 11: 3, pp.246-255, doi: 10.1007/s11914-013-0147-2
12
Fields AJ, Nawathe S, Eswaran SK, Jekir MG, Adams MF, Papadopoulos P, Keaveny TM. Vertebral fragility and structural redundancy. J Bone Min Res 2012. 27: 10, pp.2152-2158, doi: 10.1002/jbmr.1664
13
Flaig C (2012) A highly scalable memory efficient multigrid solver for $\mu$-finite element analyses. Ph.D. thesis. Eidgenössische Technische Hochschule ETH Zürich
14
Flaig C, Arbenz P. A scalable memory efficient multigrid solver for micro-finite element analyses based on CT images. Parallel Comput 2011. 37: 12, pp.846-854, doi: 10.1016/j.parco.2011.08.001
15
Flaig C, Arbenz P (2012) A highly scalable matrix-free multigrid solver for $\mu$FE analysis based on a pointer-less octree. In: Lirkov I, Margenov S, Waśniewski J (eds) Large-scale scientific computing, LSSC 2011. Lecture notes in computer science, vol 7116. Springer, Berlin, pp 498–506. 10.1007/978-3-642-29843-1_56
16
Frank M, Marx D, Nedelkovski V, Fischer JT, Pahr DH, Thurner PJ. Dehydration of individual bovine trabeculae causes transition from ductile to quasi-brittle failure mode. J Mech Behav Biomed Mater 2018. 87: , pp.296-305, doi: 10.1016/J.JMBBM.2018.07.039
17
Guldberg RE, Hollister SJ, Charras GT. The accuracy of digital image-based finite element models. J Biomech Eng 1998. 120: 2, pp.289, doi: 10.1115/1.2798314
18
Hambli R. Micro-CT finite element model and experimental validation of trabecular bone damage and fracture. Bone 2013. 56: 2, pp.363-374, doi: 10.1016/J.BONE.2013.06.028
19
Harrison NM, McDonnell P, Mullins L, Wilson N, O’Mahoney D, McHugh PE. Failure modelling of trabecular bone using a non-linear combined damage and fracture voxel finite element approach. Biomech Model Mechanobiol 2013. 12: 2, pp.225-241, doi: 10.1007/s10237-012-0394-7
20
Keaveny TM, Morgan EF, Niebur GL, Yeh OC. Biomechanics of trabecular bone. Ann Rev Biomed Eng 2001. 3: , pp.307-333, doi: 10.1146/annurev.bioeng.3.1.307
21
Lucchinetti E, Thomann D, Danuser G. Micromechanical testing of bone trabeculae - potentials and limitations. J Mater Sci 2000. 35: 24, pp.6057-6065, doi: 10.1023/A:1026748913553
22
Moore TLA, Gibson LJ. Microdamage accumulation in bovine trabecular bone in uniaxial compression. J Biomech Eng 2002. 124: 1, pp.63, doi: 10.1115/1.1428745
23
Morgan EF, Keaveny TM. Dependence of yield strain of human trabecular bone on anatomic site. J Biomech 2001. 34: 5, pp.569-577, doi: 10.1016/S0021-9290(01)00011-2
24
Mueller TL, Christen D, Sandercott S, Boyd SK, van Rietbergen B, Eckstein F, Lochmüller EM, Müller R, van Lenthe GH. Computational finite element bone mechanics accurately predicts mechanical competence in the human radius of an elderly population. Bone 2011. 48: 6, pp.1232-1238, doi: 10.1016/J.BONE.2011.02.022
25
Nawathe S, Juillard F, Keaveny TM. Theoretical bounds for the influence of tissue-level ductility on the apparent-level strength of human trabecular bone. J Biomech 2013. 46: 7, pp.1293-1299, doi: 10.1016/j.jbiomech.2013.02.011
26
Nawathe S, Akhlaghpour H, Bouxsein ML, Keaveny TM. Microstructural failure mechanisms in the human proximal femur for sideways fall loading. J Bone Min Res 2014. 29: 2, pp.507-515, doi: 10.1002/jbmr.2033
27
Nawathe S, Yang H, Fields AJ, Bouxsein ML, Keaveny TM. Theoretical effects of fully ductile versus fully brittle behaviors of bone tissue on the strength of the human proximal femur and vertebral body. J Biomech 2015. 48: 7, pp.1264-1269, doi: 10.1016/j.jbiomech.2015.02.066
28
Niebur GL, Feldstein MJ, Yuen JC, Chen TJ, Keaveny TM. High-resolution finite element models with tissue strength asymmetry accurately predict failure of trabecular bone. J Biomech 2000. 33: 12, pp.1575-1583, doi: 10.1016/S0021-9290(00)00149-4
29
Panyasantisuk J, Pahr DH, Zysset PK. Effect of boundary conditions on yield properties of human femoral trabecular bone. Biomech Model Mechanobiol 2016. 15: 5, pp.1043-1053, doi: 10.1007/s10237-015-0741-6
30
van Rietbergen B, Ito K. A survey of micro-finite element analysis for clinical assessment of bone strength: the first decade. J Biomech 2015. 48: 5, pp.832-841, doi: 10.1016/J.JBIOMECH.2014.12.024
31
Sanyal A, Gupta A, Bayraktar HH, Kwon RY, Keaveny TM. Shear strength behavior of human trabecular bone. J Biomech 2012. 45: 15, pp.2513-2519, doi: 10.1016/J.JBIOMECH.2012.07.023
32
Schwiedrzik J, Gross T, Bina M, Pretterklieber M, Zysset P, Pahr D. Experimental validation of a nonlinear $\mu$FE model based on cohesive-frictional plasticity for trabecular bone. Int J Numer Methods Biomed Eng 2016. 32: 4, pp.e02739, doi: 10.1002/cnm.2739
33
Schwiedrzik JJ, Zysset PK. The influence of yield surface shape and damage in the depth-dependent response of bone tissue to nanoindentation using spherical and Berkovich indenters. Comput Methods Biomech Biomed Eng 2015. 18: 5, pp.492-505, doi: 10.1080/10255842.2013.818665
34
Schwiedrzik JJ, Wolfram U, Zysset PK. A generalized anisotropic quadric yield criterion and its application to bone tissue at multiple length scales. Biomech Model Mechanobiol 2013. 12: 6, pp.1155-1168, doi: 10.1007/s10237-013-0472-5
35
Shi X, Sherry Liu X, Wang X, Edward Guo X, Niebur GL. Type and orientation of yielded trabeculae during overloading of trabecular bone along orthogonal directions. J Biomech 2010. 43: 13, pp.2460-2466, doi: 10.1016/J.JBIOMECH.2010.05.032
36
Stipsitz M, Zysset P, Pahr DH (2018) An efficient solver for large-scale simulations of voxel-based structures using a nonlinear damage material model. In: Conference proceeding; 6th European conference on computational mechanics (ECCM), 7th European conference on computational fluid dynamics (ECFD 7), ECCM-ECFD, Glasgow
37
Stölken JS, Kinney JH. On the importance of geometric nonlinearity in finite-element simulations of trabecular bone failure. Bone 2003. 33: 4, pp.494-504, doi: 10.1016/S8756-3282(03)00214-X
38
Thurner P, Wyss P, Voide R, Stauber M, Stampanoni M, Sennhauser U, Müller R. Time-lapsed investigation of three-dimensional failure and damage accumulation in trabecular bone using synchrotron light. Bone 2006. 39: 2, pp.289-299, doi: 10.1016/J.BONE.2006.01.147
39
Verhulp E, Van Rietbergen B, Müller R, Huiskes R. Micro-finite element simulation of trabecular-bone post-yield behaviour—effects of material model, element size and type. Comput Methods Biomech Biomed Eng 2008. 11: 4, pp.389-395, doi: 10.1080/10255840701848756
40
Wolfram U, Gross T, Pahr DH, Schwiedrzik J, Wilke HJ, Zysset PK. Fabric-based TsaiWu yield criteria for vertebral trabecular bone in stress and strain space. J Mech Behav Biomed Mater 2012. 15: , pp.218-228, doi: 10.1016/J.JMBBM.2012.07.005
41
Yang PF, Brüggemann GP, Rittweger J. What do we currently know from in vivo bone strain measurements in humans?. J Musculoskelet NeuronalInteract 2011. 11: 1, pp.8-20
42
Zhou B, Wang J, Yu YE, Zhang Z, Nawathe S, Nishiyama KK, Rosete FR, Keaveny TM, Shane E, Guo XE. High-resolution peripheral quantitative computed tomography (HR-pQCT) can assess microstructural and biomechanical properties of both human distal radius and tibia: ex vivo computational and experimental validations. Bone 2016. 86: , pp.58-67, doi: 10.1016/J.BONE.2016.02.016
Citing articles via
https://www.researchpad.co/tools/openurl?pubtype=article&doi=10.1007/s10237-019-01254-x&title=Efficient materially nonlinear μFE solver for simulations of trabecular bone failure&author=Monika Stipsitz,Philippe K. Zysset,Dieter H. Pahr,&keyword=Nonlinear material,Micro finite element,Trabecular bone,Yield strength,&subject=Original Paper, | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 145, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7763640880584717, "perplexity": 2791.3889332123745}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991258.68/warc/CC-MAIN-20210517150020-20210517180020-00107.warc.gz"} |
http://events.berkeley.edu/index.php/?event_ID=111342&date=2017-09-12&tab=academic

## Commutative Algebra and Algebraic Geometry: Evolutions and Second Symbolic Powers
Seminar | September 12 | 5-6 p.m. | 939 Evans Hall
Jana Sotakova, IS MU
Department of Mathematics
Let K be a field and let R be a local K-algebra. An evolution of R is a surjective homomorphism T -> R of K-algebras such that the induced map between the modules of differentials is an isomorphism. We will see that the question of the existence of non-trivial evolutions is related to the Eisenbud-Mazur conjecture on (second) symbolic powers.
de@math.berkeley.edu
http://mathoverflow.net/questions/49918/who-uses-radicial-morphisms

In editing the algebraic geometry notes posted here, prompted by Brian Conrad, I am introducing the notion of radicial morphism. This seems to me to not be a notion that absolutely everyone should see in a first serious schemes course, given the volume of definitions a student must digest (even if this idea is wafer-thin). So I would like to make clear to the learner who this notion is for. But the problem is: I don't know. I have two bits of data: I've never used it professionally, and I know some arithmetic geometers (e.g. in my department) have used it. Thus I ask:
In the answers, I expect to hear multiple interesting answers to the implicit question:
What are radicial morphisms good for?
-
Your "This seems to me to not be a notion that absolutely everyone should see" had me rereading the phrase five times until it made sense. Good work :) – Mariano Suárez-Alvarez Dec 20 '10 at 0:09
I don't know, but I'm sure I'd prefer the term "universally injective" or even "geometrically injective" over radicial anything. – Allen Knutson Dec 20 '10 at 1:47
Anyone who uses etale topology or characteristic-$p$ geometry or char-$p$ alg. groups (e.g., ab. varieties or linear algebraic groups) in a serious way will find it useful. For example, "etale + radicial = open immersion" and "radicial integral surjections have no effect on the etale topos" (e.g., passing from sep. closed ground field to an alg. closed ground field when doing an etale cohom. or fundamental gp calculation). As Allen notes, it's often called "universally injective" (never heard of "geom. injective"), though if you do then you need to prove the lemma that justifies the name... – BCnrd Dec 20 '10 at 1:58
Allow me to add an example to BCnrd's comment (which I learned from Olsson). Let G be a group scheme of finite type over a field k of char. p, but not smooth over k. Let BG be the classifying stack of G. To compute the etale cohom of BG we'd like to use cohom descent, but pt --> BG is not a presentation (not smooth). The max reduced G_{red} is a group scheme, and is more likely to be smooth (and will be if we make a finite extension of k). The morphism BG_{red} --> BG is rep and radicial, so they have the same cohom, and we can apply descent. This works over a general base, using devissage. – shenghao Dec 20 '10 at 11:15
Dear shenghao: As tiny correction, for imperfect $k$, $G_{\rm{red}}$ may not be subgp; see Ex. A.8.3 in "Pseudo-reductive groups" for an example over any imperfect field (with $G$ finite). As you say, if we first make a suitable finite purely insep. (!) extn on $k$ and then kill the nilpotents we get a smooth subgp. That's good enough for purposes of etale cohomology, but for other purposes can be a nuisance. Here's a version avoiding change of $k$: the quotient of $G$ by inf'tml kernel of (perhaps not-flat!) $n$-fold rel. Frob. is smooth for suff. large $n$; see SGA3, VII_A, 8.3. – BCnrd Dec 20 '10 at 15:53
This isn't exactly the same thing, but it's obviously related (and its a special case of radicial), so it might be a place to look. Brian Conrad pointed this relation out to me several months ago (here on mathoverflow). Radicial morphisms are closely related to (but slightly weaker than) the following notion which appears in algebraic geometry and especially in commutative algebra:
An extension of rings $R \subseteq S$ is called weakly subintegral if the induced map on Spec is a bijection (this is where it differs from radicial) and if the extensions of residue fields is inseparable. This is the same thing as being a universal homeomorphism if I recall correctly.
Having no finite bijective (on Spec) birational radicial extensions is called being weakly normal. Weak normality probably has a lot of papers written about it over the years.
Especially in equal-characteristic zero, being weakly normal is also sometimes called being semi-normal (although semi-normality is a distinct notion in general). Both these conditions are used in the study of various moduli spaces of algebraic varieties.
-
A reasonable prerequisite for a course in Algebraic Geometry is a course in Galois Theory including some characteristic $p$ results. Every finite algebraic extension of fields $E|K$ (meaning $K \subset E$) factors trough an intermediate separable extension $E_s|K$ such that $E|E_s$ is purely inseparable. This theorem sheds light on the structure of field extensions.
I consider finite separable extensions of fields as the one important idea behind the concept of etale map. An etale map is a flat family of finite separable extensions. The counterpart of purely inseparable extensions in geometry are radical morphisms. The topological characterization as universally injective morphisms makes them interesting also. I would not discuss etale maps without treating radical morphisms.
Another reason is that the Frobenius maps (relative, absolute...) are the essential tools for understanding the cohomology of varieties in characteristic $p$ and this is probably the main example of radical morphisms. For these two reasons I would say that radical morphisms belong in an introductory course on schemes.
-
Grothendieck's version of the Ax-Grothendieck theorem is that if you have an $S$-scheme $X$ of finite type, then any radicial $S$-endomorphism $f: X\to X$ is surjective: EGA IV_3, Prop. (10.4.11).
Such a wonderfully unexpected theorem certainly justifies the investment in learning what radicial means...
https://www.eurotrib.com/story/2007/11/26/74510/565
## NIMBYism: a global obstacle to a renewable energy future
by a siegel Mon Nov 26th, 2007 at 07:45:10 AM EST
NIMBYism (Not In My Back Yard driven opposition to some form of change) is a challenge to moves to a sensible energy future not just in America but around the globe. Whether solar panels, drying clothes outdoors, white roofing, subways, or otherwise, a good number of paths toward a better energy future face opposition from those outraged over perceived impositions on their way of life, or at least their views in some way. Perhaps the most visible battles: over wind turbine installations.
Yesterday, the New York Times traveled to the Greek isles and a battle over a renewable energy future.
THE tiny Greek island of Serifos, a popular tourist destination, depends on its postcard views of sandy beaches, Cycladic homes and sunsets that blend sea and sky into a clean wash of color. So when a mining and energy company floated a plan earlier this year to build 87 industrial wind turbines on more than a third of the island, the Serifos mayor, Angeliki Synodinou, called it her "worst nightmare."
She imagined supersize wind towers looming over the island, destroying romantic vistas, their turbines chopping the quiet like a swarm of helicopters. The project is now stalled, and Ms. Synodinou doesn't regret it. "No one would come here," she said. "Our island would be destroyed."
One of the realities of the 21st century, NIMBYism is no longer a backyard activity. Greek opponents to wind turbines have easy (and immediate) access to battles over, for example, Cape Wind in Massachusetts. And, they have an active ally in the Industrial Wind Action Group (IWAG), ready to provide information and support to opponents of wind projects anywhere, anytime ... including in the New York Times
"These are not just one or two turbines spinning majestically in the blue sky and billowing clouds," said Lisa Linowes, executive director of Industrial Wind Action Group, an international advocacy group based in New Hampshire that opposes wind farms.
As an aside, for a moment, "Industrial" is a very carefully chosen part of the title, quite directly derived from the heavily funded ($3.3 million in 2005 alone, with one-third from fossil-fuel magnate Bill Koch) anti-Cape Wind efforts: "The phrase 'industrial' was the direct result of focus groups ... It frightened people who thought they lived in a pristine environment."

But, back to IWAG's complaints and comments. No, these are not isolated towers, as a modern industrial wind farm is likely to have 10s to 100s of wind turbines, spread over an extended area. And, yes, these turbines do have an impact. They can kill birds, although well-sited and modern turbines kill very few, far fewer than would be killed by the avoided fossil-fuel pollution and far-far fewer than opponents' language suggests/claims. Yes, turbines can cause noise. Far less than a diesel generator or, well, a gasoline fueled car driving by the house or, well, even the normal noise level of a modern office. And, yes, 100 meter high turbines are, well, big (actually, BIG) and can be intrusive on sightlines.

While many (most) view these spinning turbines as a welcome sight, a beautiful evocation of a cleaner, more prosperous future, there are those NIMBYists who call for a cleaner future, just as long as none of the cleaning is occurring in their back yards. They see the wind turbines, have their blood boiling in anger, and then flip the switch for fossil fuel powered electricity, blind to their direct link to the pollution of all of our backyards.

Now, the challenge I receive: would you take one of these in your backyard? Well, yes. (Actually, YES!!!) But, it seems to me that there is value in compensating people quite directly for this visual BY (back-yard) impact.
Near Serifos, on Skyros, a low-key isle known for its diminutive Skyrian horse, the construction company EN.TE.KA has partnered with a local monastery to build between 70 and 85 turbines on a barren stretch in the island's south owned by the Greek Orthodox Church. EN.TE.KA's managing director, Constantinos Philippidis, said the turbines were expected to bring in yearly revenues of at least 2.5 million euros (about $3.73 million) for the island.
Yes, assure that the local community receives a portion of the funds. But, as a step further, wind turbine products should provide some share of their generated electricity for free to those whose 'back yard' has a visual impact. (Honestly, a limited amount of energy so not as to discourage an energy smart / energy efficient future.)
The NY Times article is reasonably good, but it is frustrating that it doesn't cite from the serious literature developing around these issues, such as the 190 page Investigation into the Potential Impact of Wind Farms on Tourism in Scotland which found both positives and negatives, providing paths for controlling the second through thoughtful placement of wind turbines. And, around the world, actual impacts seem to be on the positive side of the equation. In Northern Greece, "the 41-turbine wind park on Panachaiko Mountain near the northern Peloponnesian city of Patras has even become a much-photographed landmark." This is typical of wind turbines around the world.
But, back to Serifos, where
opponents started rallying against the proposed wind farm this past summer, arguing that the turbines are unsightly and noisy.
It's an argument that irritates Mr. Tsipouridis of the Hellenic Wind Energy Association. "We're living in the most polluted era of humanity," he said, "and it's sheer hypocrisy to spend so much time talking about wind turbines' noise and aesthetics."
Sheer hypocrisy. Hmmm. I wonder whether Mr. Tsipouridis is being too polite. Jeff McIntire-Strasburg over at Sustainablog:
Wind energy opponents are a pretty stubborn lot, and I doubt anyone will convince them that wind turbines in the Greek islands would ultimately benefit residents and tourists. Given the most likely alternative of more coal power, it's a little hard to understand their thinking. As much of that coal likely has to be shipped to at least some islands, it's hard to imagine that wind wouldn't be a more cost-effective option in the long run.
Putting aside the direct financial cost of coal, and without even considering its 'external' costs, there is no question that coal's CO2 will waft over the islands, sooner or later.
Treehugger also has a post on the NYT stories: New York Times Trashes Wind Power. Twice.

This quote they've taken from the NYT is breathtakingly stupid:

Yet Sweden's gleaming wind park is entering service at a time when wind energy is coming under sharper scrutiny, not just from hostile neighbors, who complain that the towers are a blot on the landscape, but from energy experts who question its reliability as a source of power. For starters, the wind does not blow all the time. When it does, it does not necessarily do so during periods of high demand for electricity. That makes wind a shaky replacement for more dependable, if polluting, energy sources like oil, coal and natural gas. Moreover, to capture the best breezes, wind farms are often built far from where the demand for electricity is highest. The power they generate must then be carried over long distances on high-voltage lines, which in Germany and other countries are strained and prone to breakdowns.

Stop the presses! The wind does not blow all the time! World exclusive, must credit NYT! Developing...

So wind power requires a bit more investment in the energy grid. Big deal. I seem to remember a few news items here showing that wind power actually led to lower energy prices and that with a better developed grid, could cover most of the energy demand.
Aren't there proposals in development to store wind energy by raising weights within the turbine or compressing air? Anyway, is there anything uglier than a coal-fired or nuclear power station?
It's all a matter of taste. There can be no doubt, however, that roads, especially highways, are the most disruptive interventions we make in the countryside, as a commentator on treehugger noted.
When hiking across the Cévennes I met people opposed to a wind turbine project in some rather isolated mountains. One of their points was that building dozens of 100-meter high wind turbines indeed requires building a road. Which means, along with the destruction of nature this implies, easier access for 4-wheel drive cars later on... Un roi sans divertissement est un homme plein de misères ("A king without diversion is a man full of miseries")
Ouch. It's a funny world. Still, the dilemmas of wind power should really be manageable with just a bit of common sense.
To be fair, NYT also included a reason why it matters little that the wind does not blow all the time.

Of course, Sweden does not need to build wind parks to get wind power. It could simply buy more surplus wind power from Denmark, which it uses, as does Norway, to pump underground water into elevated reservoirs. The water is later released during periods of peak electric demand to drive hydroelectric stations.

In this way, hydro acts as a form of storage for wind energy -- addressing one of wind power's biggest shortcomings. Sweden's strength in hydro makes it a good candidate for greater development of wind power, according to analysts.

And being dependent on high-voltage lines is no news for a country with lots and lots of hydro in a sparsely populated area, i.e. the northern half of Sweden. Sweden's finest (and perhaps only) collaborative, leftist e-newspaper Synapze.se
And there's always The Great Battery of Kimberley "The future is already here -- it's just not very evenly distributed" William Gibson
I think the important thing is compensating local residents who have to live with these massive industrial (yes, industrial) installations. If I had bought a house on a remote Greek island, I didn't do it to have the calmness and serenity ruined by a dozen massive towers with immense spinning blades. Compensation is crucial. Peak oil is not an energy crisis. It is a liquid fuel crisis.
External costs must be internalised. Peak oil is not an energy crisis. It is a liquid fuel crisis.
how spinning wind turbines evoke anything but calm and serenity. They are grand, graceful and simply beautiful. And the absolute proof that wind turbines are a great sight is how companies with no obvious link to wind energy will find ways to put one on the cover of their annual report - i.e. there is no object with more positive symbolism - no image that people would rather see, as per the marketing and PR departments of all these corporations. And people with actual wind farms near their houses overwhelmingly agree. In the long run, we're all dead. John Maynard Keynes
Sure, they are positive symbols on the front of corporate reports, far less for those who get them built close to their homes. They are still a very large and very obvious intrusion into nature. But of course, if they are a positive external cost, maybe the people living close to them should actually pay the power company for the improved view? Though I'm not sure how well that would be received. You might find them beautiful, well actually so do I, granted that they stay far away from my back yard. Peak oil is not an energy crisis. It is a liquid fuel crisis.
I agree completely with Jerome on this. I took the train from Kyoto to the west coast of Japan a couple of years ago, and, across an enormously long lake (Ban, I think), there was a huge, gleaming white, graceful wind machine. It looked like it belonged just as much as the egrets in the rice paddy canals. In October I was driving in central Washington state, and, across the Columbia River, there was a double array of turbines with the blades turning in random orientations. It was a magnificent sight. The normal operating sound is sort of soothing. Can't wait to get one or more of "my own". paul spencer
by the head of RTE (the French network operator), which used to be nastily opposed to wind power, and which was surprisingly positive. Amongst tidbits: Wind power is well diversified in France (3 different climate areas) and thus available wind production is actually quite stable over time; wind kWh now pop up in the network all over the place, and actually help stabilise the network, by reducing the need to transport electricity around; in addition, wind turbines are actually quite sophisticated machines and can provide frequency stabilisation all over the place even when they don't produce; In the long run, we're all dead. John Maynard Keynes
Serifos sounds awfully like San Serriffe I'd never heard of it: are you being serios? "The future is already here -- it's just not very evenly distributed" William Gibson
https://www.mtsolitary.com/20210302200349-k-means-clustering/

## k-means clustering
$$k$$-means clustering is an unsupervised learning technique whereby $$n$$ data points are partitioned into $$k$$ clusters, with each data point assigned to the cluster whose mean is closest. This gives a Voronoi partitioning of the space.
### Problem description
Given $$n$$ observations $$x_1,\ldots,x_n$$ and a positive integer $$k$$, partition the observations into $$k$$ disjoint sets $$\mathbf{S}=S_1,\ldots,S_k$$ such that the total within-cluster sum of squares (variance) is minimal, i.e. $\mathrm{argmin}_{\mathbf{S}}\sum_{i=1}^k\sum_{x\in S_i}\left\|x-\mu_i\right\|^2$ where $$\mu_i$$ is the mean of all the points in $$S_i$$.
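As a concrete check of the objective, the within-cluster sum of squares can be computed directly (a minimal NumPy sketch; the function name and test data are illustrative, not part of the original notes):

```python
import numpy as np

def wcss(points, labels, k):
    """Total within-cluster sum of squares for a given partition."""
    total = 0.0
    for i in range(k):
        cluster = points[labels == i]
        if len(cluster) == 0:
            continue  # an empty cluster contributes nothing
        mu = cluster.mean(axis=0)  # centroid of cluster S_i
        total += ((cluster - mu) ** 2).sum()
    return total

# Two tight clusters around (0, 0) and (10, 10):
pts = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
good = np.array([0, 0, 1, 1])  # split matching the geometry
bad = np.array([0, 1, 0, 1])   # split cutting across both clusters
assert wcss(pts, good, 2) < wcss(pts, bad, 2)
```

The geometric partition scores 1.0 here versus 200.0 for the crossed one — exactly the quantity the argmin above minimizes over all partitions.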
### Algorithm
This optimisation problem is NP-hard in general.
#### Naive k-means
• Initialise a set of “means” $$m_1,\ldots,m_k$$. This can be done randomly or with some heuristic.
• At each “assignment step”, assign each observation to the cluster with the nearest mean, or more precisely, define a partition $$S_1,\ldots,S_k$$ where a point is assigned to $$S_i$$ if $$m_i$$ is the nearest mean (break ties arbitrarily).
• At each “update step”, recalculate the means as the centroids of the new clusters; repeat until the assignments no longer change (convergence).
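The steps above can be sketched in a few lines of NumPy (an illustrative implementation, not tuned for large data; initialisation here picks $$k$$ distinct observations at random):

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Naive k-means: alternate assignment and update steps."""
    rng = np.random.default_rng(seed)
    # Initialise the means as k distinct observations chosen at random.
    means = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assignment step: label each point with its nearest mean.
        dists = ((points[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each mean becomes its cluster's centroid
        # (empty clusters keep their old mean).
        new_means = np.array([points[labels == i].mean(axis=0)
                              if np.any(labels == i) else means[i]
                              for i in range(k)])
        if np.allclose(new_means, means):
            break  # converged: means (and hence assignments) are stable
        means = new_means
    return means, labels

pts = np.array([[0, 0], [0, 1], [1, 0],
                [10, 10], [10, 11], [11, 10]], dtype=float)
means, labels = kmeans(pts, k=2)
# The recovered means are the centroids of the two blobs.
order = np.argsort(means[:, 0])
assert np.allclose(means[order], [[1/3, 1/3], [31/3, 31/3]])
```

On this toy data any random initialisation from the observations converges to the two blobs; in general, though, the algorithm can get stuck in a local minimum, as the drawbacks below note.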
### Drawbacks
• $$k$$ must be chosen as an input parameter and finding the “right” value for a specific problem is nontrivial.
• The algorithm may converge to some local minimum which gives unintuitive clusters.
• It also assumes roughly “spherical”, well-separated clusters of similar size.
https://www.ocellaris.org/user_guide/input/mesh.html

# Mesh
You can specify simple geometries using the built-in FEniCS DOLFIN mesh generators, and you can also load a mesh from file. For realistic cases, using something like gmsh to generate meshes is recommended. The meshio program can be used to convert between different mesh file formats, and Ocellaris can also load these formats directly through meshio; see below.
## Simple geometries
Example: 2D rectangle
mesh:
    type: Rectangle
    Nx: 64
    Ny: 64
    diagonal: left/right # defaults to 'right'
    startx: 0 # defaults to 0
    endx: 2 # defaults to 1
    # you can also give starty and endy
Example: 3D box
mesh:
    type: Box
    Nx: 64
    Ny: 64
    Nz: 15
    startx: 0 # defaults to 0
    endx: 2 # defaults to 1
    # you can also give starty and endy, startz and endz
Example: 2D disc
mesh:
    type: UnitDisc
    N: 20
    degree: 1 # defaults to 1 (degree of mesh elements)
startx, starty, startz, endx, endy, endz
Geometry descriptions for the simple geometries. Default coordinate ranges are [0, 1] in each direction.
Nx, Ny, Nz, N
The number of mesh cells in each direction. N is used for the radial direction for UnitDisc meshes.
degree
Mesh cell degree. Using a higher order (order 2) will be slower to assemble; the default is order 1 (facets are straight lines or flat planes).
diagonal
The same options as accepted by FEniCS dolfin mesh generators: right, left, crossed, right/left, left/right. This controls how squares are split into triangles in 2D.
## Mesh file formats
Example: using meshio to load all its supported formats (RECOMMENDED)
mesh:
    type: meshio
    mesh_file: mesh.msh
    meshio_type: gmsh
The supported format specifiers in meshio as of January 2019 are (from the meshio source code on github): ansys, ansys-ascii, ansys-binary, gmsh, gmsh-ascii, gmsh-binary, gmsh2, gmsh2-ascii, gmsh2-binary, gmsh4, gmsh4-ascii, gmsh4-binary, med, medit, dolfin-xml, permas, moab, off, stl, stl-ascii, stl-binary, vtu-ascii, vtu-binary, vtk-ascii, vtk-binary, xdmf, exodus, abaqus, mdpa.
Example: legacy DOLFIN XML format
mesh:
    type: XML
    mesh_file: mesh.xml
    facet_region_file: regions.xml # not required
Ocellaris will look for the xml files first as absolute paths, then as paths relative to the current working directory and last as paths relative to the directory of the input file. If it cannot find the file in any of these places you will get an error message and Ocellaris will quit.
A sample mesh XML file and facet marker file are included in the demo/files directory: the mesh ocellaris_mesh.xml.gz and the facet regions ocellaris_facet_regions.xml.gz. You can load these files without unzipping them. The flow around Ocellaris demo shows how it is done.
Example: XDMF format
mesh:
    type: XDMF
    mesh_file: mesh.xdmf
Example: Ocellaris HDF5 restart file format
mesh:
    type: HDF5
    mesh_file: ocellaris_savepoint000010.h5
This will only load the mesh and (possibly) facet regions. You can also start the simulation from a restart file instead of an input file. Then the mesh and the function values from that save point are used, allowing you to restart the simulation more or less as if it had never been stopped.
## Moving the mesh
Ocellaris can move the mesh right after it has been created or read from file. To move the mesh in order to refine, skew, scale, rotate or translate it you must specify a C++ description of the mesh displacement from the initial position (which was specified in the input file or in the loaded mesh file).
An example is the following 140 meter long 2D wave tank, which is 10 m high. To refine the mesh in the y-direction such that it is finest around x[1] = 7 meters, where the free surface is to be located, a function is specified which is zero on the boundaries (to avoid changing the domain size) and non-zero in the interior in order to move the nodes closer to the free surface. No refinement is performed in the x-direction (x[0]).
mesh:
    type: Rectangle
    Nx: 140
    Ny: 20
    endx: 140
    endy: 20
    move: ['0', '0.0297619048*pow(x[1], 3) - 0.520833333*pow(x[1], 2) + 2.23214286*x[1] + 3.55271368e-15']
In order to develop and check the mesh refinement function it can be beneficial to generate and plot it, e.g. using matplotlib in Jupyter or similar interactive tools. The above refinement was developed using polynomial fitting in numpy:
from matplotlib import pyplot
import numpy

# Find a polynomial that refines the mesh
y_target = [0, 4, 7.5, 10]
dy_target = [0, 2.5, 0, 0]  # zero at the boundary
P = numpy.polyfit(y_target, dy_target, 3)

# Realise the polynomial
y = numpy.linspace(0, 10, 20)
dy = numpy.polyval(P, y)

# Plot the results
for ypos in (y + dy):
    pyplot.plot([0, 1], [ypos, ypos], '-k', lw=1)
pyplot.axhline(7, c='b', ls=':')
pyplot.axhline(6, c='b', ls=':', lw=1)
pyplot.axhline(8, c='b', ls=':', lw=1)

print('%.9g*pow(x[1], 3) + %.10g*pow(x[1], 2) + %.10g*x[1] + %.10g' % tuple(P))
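As a quick sanity check of the coefficients used in the move example above (assuming the 10 m tank height used in the fit), the displacement should vanish at both boundaries, so the domain size is preserved, and reproduce the targeted 2.5 m shift at y = 4:

```python
def displacement(y):
    # the displacement polynomial from the 'move' expression above
    return (0.0297619048 * y**3 - 0.520833333 * y**2
            + 2.23214286 * y + 3.55271368e-15)

assert abs(displacement(0.0)) < 1e-6         # bottom boundary does not move
assert abs(displacement(10.0)) < 1e-5        # top boundary does not move
assert abs(displacement(4.0) - 2.5) < 1e-6   # nodes near y=4 move up by 2.5 m
```

If the assertions fail after changing the fit, the domain size would be altered by the move, which is exactly the mistake this check guards against.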
For more complicated meshes it is recommended to perform mesh grading and other mesh operations in an external mesh generator such as gmsh.
There is also some (not much used, hence possibly buggy) support for ALE, where the mesh moves every timestep, but that is not covered by the mesh section of the input file.
https://brilliant.org/discussions/thread/natural-numbers/
# Natural numbers
I'm a little curious how you define the set of natural numbers, $$\mathbb{N}$$. That all the positive integers are members of $$\mathbb{N}$$ everyone agrees on, but what about $$0$$? What I learned in school here in Sweden is that $$\mathbb{N}=\left\{0,1,2,...\right\}$$, and then for the positive numbers $$\mathbb{Z}_{+}=\left\{1,2,3,...\right\}$$. But what I have heard from many other people from other countries is that they define the natural numbers without $$0$$, that is $$\mathbb{N}=\left\{1,2,3,...\right\}$$. Is one more correct than the other? Not that it really matters if you define your notation first, but it can be annoying sometimes. Also, any ideas on why you would prefer one way over the other?
I guess my question is really: would you define $$0$$ as a natural number or not, and why?
Note by Mattias Olla
4 years, 8 months ago
This is a well-known case where mathematical notation differs between regions. As you mentioned, $$\mathbb{N}$$ can have different meanings for different people, and there is no global standardization. As such, I tend to avoid using $$\mathbb{N}$$ where possible, and instead say "non-negative integers" or "positive integers".
Even though natural numbers are supposed to represent the counting number system, the concept of 0 as a number (as opposed to a placeholder) only came about in 9th century AD in India. That's pretty late, considering the amazing amount of math that happened before.
Staff - 4 years, 8 months ago
https://www.stata.com/meeting/6uk/ | » Home » Resources & support » Users Group meetings » 2000 UK Stata Users Group meeting
## 2000 UK Stata Users Group meeting
### 15 May 2000
Royal Statistical Society
12 Errol Street
London EC1Y 8LX
### Fitting complex random effect models with Stata using data augmentation: an application to the study of male and female fecundability
David Clayton (MRC Biostatistics Unit, Cambridge) and René Ecochard (DIM Hospices Civils de Lyon)
We discuss fitting of a complex random effect model using Stata to carry out block-wise Gibbs sampling within a multi-processor computing environment. The application involves a dataset concerning artificial insemination by donor (AID). Success or failure at each of 12,100 menstrual cycles is modelled with a mixed model with random effects due to woman, conception attempt within woman, semen donor, donation within donor, and the treating physician. Given the availability of software within Stata to fit a model with a single random effect, the full model can be fitted by an alternating imputation algorithm (Clayton and Rasbash, 1999) implemented with five copies of Stata running on separate processors and communicating via disk files. Each process fits one random effect plus all the fixed effects. The five processes may run in synchronous or asynchronous mode. Process synchronisation and file locking are implemented in a "toolkit" of Stata programs.
### Nonparametric regression modelling using MCMC methods
Gareth Ambler (Medical Statistics and Evaluation, Imperial College)
Nonparametric regression modelling may be used to estimate the relationship between a response and a predictor when one wants to make few assumptions about the form of the relationship. One approach is to estimate the regression function using piecewise polynomials that are non-zero only between adjacent knot points. A drawback of this approach is that the number and location of the knots usually have to be chosen.
Denison and colleagues (1998) suggested a methodology that does not require us to make this choice. They proposed treating the number and location of the knots as random variables and using MCMC simulation techniques to sample from their distribution. An average of the corresponding fits provides an estimate of the regression function.
I will describe bcf which implements this method and will illustrate its potential in both real and simulated data.
Denison, D., Mallick, B. K. and Smith, A. F. M. 1998. Automatic Bayesian curve fitting. Journal of the Royal Statistical Society, Series B 60, 333–350.
### Analysis of cancer survival with Stata
Andy Sloggett (Epidemiology and Population Health,
London School of Hygiene and Tropical Medicine)
Two years ago, at the 4th User Group meeting, I presented a purpose-written Stata routine for calculating relative survival in follow-up studies, usually cancer survival studies. The routine strel (name registered with Stata) has been further developed and also adapted for use with Stata v5 or v6. A brief re-cap will be presented.
Some comparisons with the hitherto "gold standard" routine written by Timo Hakulinen, and the advantages and disadvantages of each approach, will be presented. The Esteve methodology has some foibles which will be mentioned.
For many cancers the relative survival curve flattens to an asymptote after some years. When relative survival estimates are available for a series of times post-diagnosis the curve can be modelled with the Stata non-linear procedure, specifying a mixture model which provides the proportion "cured" — the proportion at asymptote, whose survival is no worse than the general population. With a bit of magic the procedure also provides the mean survival time of those who have died before "cure" was attained.
These two measures — proportion cured and mean survival of fatal cases — can give interesting extra insights into trends in cancer survival. Some results will be presented.
### Enhancing access to statistical software tools and datasets for research and instruction
Christopher F. Baum (Economics, Boston College)
Statistical software tools have become more extensible, readily permitting their users to extend functionality, while widespread access to the Internet has made it possible to exchange those materials within the research community. Stata has become particularly supportive of these trends with its .ado architecture, in which user commands properly installed are indistinguishable from built-in commands, and its net-aware facilities for installation and archive access (such as net describe and webseek).
This paper describes an initiative to enhance information flow in the discipline of economics — the RePEc project — which has been expanded from its original focus on preprints and published articles to incorporate "metadata", or bibliographic information, on "software components" such as user-authored additions to Stata. The use of a RePEc archive to house these metadata provides greater visibility for these materials, and integrates them into a broader set of software components that may be referenced to enhance Stata's facilities. The SSC-IDEAS archive provides Web browser access to over 400 Stata components, incorporating those published on Statalist, and is mirrored by the new webseek facility. The archive's Stata-oriented contents are accompanied by automatically generated package (.pkg) files that render them installable in web-aware Stata.
The RePEc metadata structures may be used to integrate a researcher's preprints, her software, and her datasets that are to be shared with the research community. This facility has clear advantages for instruction as well as research. This paper demonstrates how three sets of instructional data, made available by econometrics textbook authors, may be catalogued and made directly accessible within web-aware Stata for classroom use.
### A web-based "Survival Analysis using Stata" course to accompany a lecture course: what is it and was it worth doing?
Stephen Jenkins (Institute for Social and Economic Research,
University of Essex)
I teach a 10 hour lecture course on Survival Analysis to M.Sc. students in Economics (though others sit in on it too). This year, for the first time, the lectures were supplemented by web-based Survival Analysis with Stata materials.
Topics covered are:
• Introduction to Stata
• The shapes of hazard and survival functions
• Preparing survival time data for analysis and estimation
• Estimation of the empirical (KM) hazard and survivor functions
• Estimation: (i) continuous time models and
• Estimation: (ii) discrete time models.
Downloadable lessons provide worked examples plus exercises. This short talk reviews the advantages and disadvantages of this venture, and hopes to stimulate suggestions for improvements, as well as more general discussion about teaching methods.
### Confidence intervals for rank order statistics: Somers' D, Kendall's tau_a and their differences
Roger Newson (Department of Public Health Sciences, Guy's, King's and St Thomas' School of Medicine)
So-called "non-parametric" methods are in fact based on population parameters, which are zero under the null hypothesis. Two of these parameters are Kendall's tau_a and Somers' D, defined respectively by

$$\tau_{XY} = E[\operatorname{sign}(X_1 - X_2)\,\operatorname{sign}(Y_1 - Y_2)], \qquad D_{YX} = \tau_{XY} / \tau_{XX}$$

where $$(X_1, Y_1)$$ and $$(X_2, Y_2)$$ are sampled independently from the same bivariate population. If X is a binary variable, then Somers' D is the parameter tested by a Wilcoxon rank-sum test.
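To make the definitions above concrete, a plug-in (point) estimate of both parameters can be written in a few lines of Python. This is only an illustrative sketch and computes no confidence intervals:

```python
from itertools import combinations

def sign(v):
    return (v > 0) - (v < 0)

def kendall_tau_a(x, y):
    """Plug-in estimate of Kendall's tau_a: the average of
    sign(x_i - x_j) * sign(y_i - y_j) over all unordered pairs."""
    pairs = list(combinations(range(len(x)), 2))
    return sum(sign(x[i] - x[j]) * sign(y[i] - y[j]) for i, j in pairs) / len(pairs)

def somers_d(x, y):
    """Somers' D_YX = tau_XY / tau_XX, where tau_XX is simply the
    proportion of pairs with distinct x values."""
    return kendall_tau_a(x, y) / kendall_tau_a(x, x)

# A binary predictor that perfectly orders the outcome gives D = 1
print(somers_d([0, 0, 1, 1], [1, 2, 3, 4]))  # -> 1.0
```

Note how the denominator rescales tau so that a binary X predicting Y without error attains the maximum value of 1.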
It is more informative to have confidence limits for these parameters than P-values alone, for three main reasons:
1. It might discourage people from arguing that a high P-value proves a null hypothesis.
2. For continuous data, Kendall's tau_a is often related to the classical Pearson correlation by Greiner's relation $$\rho = \sin\left(\frac{\pi}{2}\tau\right)$$, so we can use Kendall's tau_a to define robust confidence limits for Pearson's correlation.
3. We might want to know confidence limits for differences between two Kendall's tau_a or Somers' D values, because a larger Kendall's tau_a or Somers' D cannot be secondary to a smaller one. That is to say, if Y is an outcome variable, and W and X are two competing predictor variables, then the difference $$\tau_{XY}-\tau_{WY}$$ is positive or negative, depending on the most likely direction of the difference between two Y-values, assuming that the larger of the two W-values is associated with the smaller of the two X-values. Therefore, if $$\tau_{XY}>\tau_{WY}>0$$, then the positive correlation of X with Y cannot be caused by a positive relationship of both variables with W.
The program somersd, which I have submitted to STB, calculates confidence intervals for Somers' D or Kendall's tau_a, using jackknife variances. There is a choice of transformations, including Fisher's z, Daniels' arcsine, Greiner's rho, and the z-transform of Greiner's rho. A cluster() option is available, intended for measuring intra-class correlation (such as exists between measurements on pairs of sisters). The estimation results are saved as for a model fit, so that differences can be estimated using lincom.
### Parametrizing Regression Models
Michael Hills (consultant) and David Clayton (MRC Biostatistics Unit, Cambridge)
The Mantel–Haenszel commands in Stata are still popular with epidemiologists even though they are less efficient than their maximum likelihood counterparts. The reason lies in the way the parameters are chosen: they show effects of variables of interest ("exposures") by potential confounding variables, possibly controlled for other stratifying variables, followed by combined effects based on the assumption of no interaction. In the conventional parametrization the parameters show the sizes of the interaction terms; these create confusion and fill the screen without being of any practical value.
We will present a series of linked commands which makes it possible to combine any single equation regression model with declared stratifying and confounding variables to produce maximum likelihood estimates of Mantel-Haenszel parameters. The output is based on the kind of table required for publications in the epidemiological literature. For much epidemiological analysis these commands could replace the use of xi.
### Quantile plots for right-censored data
Tony Brady (Medical Statistics and Evaluation, Imperial College) and Patrick Royston (MRC Clinical Trials Unit, London)
Parametric survival models make assumptions about the distribution of survival times that are not straightforward to check in practice. This might be one reason why Cox regression is so often used to analyse survival data despite some advantages of parametric models, such as the ability to make inferences directly about survival times in addition to the hazard. We propose a simple tool for checking the distribution of survival times, analogous to a normal plot. Quantiles of the (right) censored survival times are estimated using the method of Kaplan and Meier to account for censoring. These are plotted against quantiles of the proposed parametric survival distribution. Departure of the plotted points from the line of equality indicates departure from the proposed distribution. We will illustrate cqplot using simulated datasets from known survival distributions to show that it works in principle, and then go on to demonstrate its use on real data.
### Plotting and fitting univariate distributions with long or heavy tails
Nicholas J. Cox (Geography, University of Durham)
Distributions with long or heavy tails are commonplace and in many fields are more frequently encountered than (say) approximately Gaussian (normal) distributions. Data examples for this presentation come from environmental statistics, in which assessing the character of heavy tails of distributions for such variables as rainfall or river discharge is a central problem. I will survey some graphical and estimation programs written in the Stata language for such distributions. Some of these programs are of use for many kinds of data.
distplot and quantil2 (STB-51) show cumulative distribution functions, survival functions, or quantile functions. skewplot (SSC-IDEAS) is a Tukey-style plot for examining the degree and character of skewness. mexplot and hillplot are more specific to data on extreme events.
Last year Patrick Royston and I reported on a Stata program for calculating L-moments (lmoments, SSC-IDEAS). This approach will be revisited briefly and it will be shown how L-moments provide easily calculated parameter estimates for fitting distributions such as the generalised Pareto distribution and the generalised extreme value distribution. Quantile-quantile plots can also be produced easily given such estimates. In addition, plotting the third and fourth L-moments is an alternative to plotting skewness and kurtosis.
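For reference, the first two L-moments have simple plug-in (U-statistic) estimates; here is a sketch in plain Python, illustrative only and not the lmoments program:

```python
from itertools import combinations
from statistics import mean

def l1(x):
    # first L-moment: the ordinary mean (a measure of location)
    return mean(x)

def l2(x):
    # second L-moment: half the mean absolute difference of two
    # independent draws (a robust measure of scale)
    return 0.5 * mean(abs(a - b) for a, b in combinations(x, 2))

print(l1([1, 2, 3]), l2([1, 2, 3]))  # location and scale of a tiny sample
```

Because l2 is built from pairwise absolute differences rather than squared deviations, it is less sensitive to heavy tails than the standard deviation, which is what makes L-moments attractive for the distributions discussed above.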
### Report to users
William Gould (StataCorp, College Station, TX)
Here are some of the highlights from Bill Gould's report to users and the subsequent discussion of user "wishes and grumbles".
This summary was prepared by Nicholas Cox. Long-time Stata users will know that StataCorp does not make promises or predictions about what will appear when: it promises only to listen very carefully.
StataCorp has been suffering growing pains: the number of technical developers has doubled in the last year. In the short term, output of new code has slowed while developers come up to speed, but thereafter will come faster.
StataCorp will be moving into a new, larger custom-built building, probably early next year.
Sales have been good!
The web-aware features of Stata 6.0 have had a major impact. Since 6.0 was released in January 1999, there have been 51 updates to .ado files and 9 updates to executables. On average, an .ado file is updated every 2.7 days. (One insight into StataCorp's practices is that it takes about 1.5 weeks to certify a new executable.) The net command, which allows most users to update over the internet, enables quick bug fixes and addition of new features. The latter have included improvements in regression accuracy and a much revised xtgee. The new webseek command has led to greater trading of user-written programs.
The Stata Technical Bulletin is growing more slowly than Stata itself, despite improved quality. In due course, the STB will be made available on the web, but precisely in what way is not yet fixed.
Ventures like icd9 (STB-54) for handling disease codes are quite easy for StataCorp and apparently useful for large groups. Suggestions of others? [Audience members suggested zip and SIC.]
Net courses are going well. The latest course on Survival analysis is much more statistical than any previous net course, but has had very good feedback to date.
The programming language will remain stable into future releases. But more structured programming commands will be added, such as a foreach. Improving graphics is one major project under way. Requests from the audience included
• being able to link C code to Stata
• being able to use more than one data set within Stata
• GMM
• data paths (like ado paths)
• more flexible merge
• better error diagnostics, better debugging
### Scientific organizers
Nicholas J. Cox, Durham University
Patrick Royston, MRC Clinical Trials Unit
### Logistics organizers
Timberlake Consultants, the official distributor of Stata in the United Kingdom.
http://mathhelpforum.com/business-math/57274-sinking-fund-compounded-semiannually.html

# Math Help - Sinking Fund - Compounded SemiAnnually
1. ## Sinking Fund - Compounded SemiAnnually
Parents have set up a sinking fund in order to have $120,000 in 15 years for their children’s college education. How much should be paid semiannually into an account paying 6.8% compounded semiannually?

2. Originally Posted by ndcruz

Parents have set up a sinking fund in order to have $120,000 in 15 years for their children’s college education. How much should be paid semiannually into an account paying 6.8% compounded semiannually?
I replace PMT (payment) with R and FV (Future Value) with S in my formula.
The periodic payment R required to accumulate a sum of S dollars over n periods with interest charged at the rate of i per period is:
$R=\frac{iS}{(1+i)^n-1}$
S = $120,000
n = 30 (semi-annual periods in 15 yrs)
i = 0.068/2 = 0.034 (the semiannual rate, since interest is compounded twice per year)
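Plugging in the numbers, a short sketch (added here, not part of the original thread) evaluates the formula; note that i is the rate per semiannual period:

```python
def sinking_fund_payment(S, i, n):
    """Periodic payment R that accumulates to S over n periods at
    rate i per period: R = i*S / ((1 + i)**n - 1)."""
    return i * S / ((1 + i) ** n - 1)

# 6.8% compounded semiannually -> i = 0.034 per period, n = 30 periods
R = sinking_fund_payment(120000, 0.068 / 2, 30)
print(round(R, 2))  # a payment of roughly $2363 every half-year
```

As a cross-check, making that payment for 30 periods at 3.4% per period accumulates back to the $120,000 target.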
https://pysph.readthedocs.io/en/latest/using_pysph.html

# Using the PySPH library
In this document, we describe the fundamental data structures for working with particles in PySPH. Take a look at A more detailed tutorial for a tutorial introduction to some of the examples. For the experienced user, take a look at The PySPH framework for some of the internal code-generation details and if you want to extend PySPH for your application.
## Working With Particles
As an object oriented framework for particle methods, PySPH provides convenient data structures to store and manipulate collections of particles. These can be constructed from within Python and are fully compatible with NumPy arrays. We begin with a brief description for the basic data structures for arrays.
### C-arrays
The cyarray.carray.BaseArray class provides a typed array data structure called CArray. These are used throughout PySPH and are fundamentally very similar to NumPy arrays. The supported named types include IntArray, UIntArray, LongArray, FloatArray and DoubleArray.
Some simple commands to work with BaseArrays from the interactive shell are given below
>>> import numpy
>>> from cyarray.carray import DoubleArray
>>> array = DoubleArray(10) # array of doubles of length 10
>>> array.set_data( numpy.arange(10) ) # set the data from a NumPy array
>>> array.get(3) # get the value at a given index
>>> array.set(5, -1.0) # set the value at an index to a value
>>> array[3] # standard indexing
>>> array[5] = -1.0 # standard indexing
### ParticleArray
In PySPH, a collection of BaseArrays makes up what is called a ParticleArray. This is the main data structure that is used to represent particles and can be created from NumPy arrays like so:
>>> import numpy
>>> from pysph.base.utils import get_particle_array
>>> x, y = numpy.mgrid[0:1:0.1, 0:1:0.1] # create some data
>>> x = x.ravel(); y = y.ravel() # flatten the arrays
>>> pa = get_particle_array(name='array', x=x, y=y) # create the particle array
In the above, the helper function pysph.base.utils.get_particle_array() will instantiate and return a ParticleArray with properties x and y set from given NumPy arrays. In general, a ParticleArray can be instantiated with an arbitrary number of properties. Each property is stored internally as a cyarray.carray.BaseArray of the appropriate type.
By default, every ParticleArray returned using the helper function will have the following properties:
• x, y, z : Position coordinates (doubles)
• u, v, w : Velocity (doubles)
• h, m, rho : Smoothing length, mass and density (doubles)
• au, av, aw: Accelerations (doubles)
• p : Pressure (doubles)
• gid : Unique global index (unsigned int)
• pid : Processor id (int)
• tag : Tag (int)
The role of the particle properties like positions, velocities and other variables should be clear. These define either the kinematic or dynamic properties associated with SPH particles in a simulation.
In addition to scalar properties, particle arrays also support “strided” properties, i.e. properties with multiple elements associated with each particle. For example:

>>> pa.add_property(name='A', stride=2)
>>> pa.A
This will add a new property with name 'A' which has 2 elements associated with each particle. When one adds/removes particles this is taken into account automatically. When accessing such a property, one has to be careful though, as the underlying array is still stored as a one-dimensional array.
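The caution about the flat, one-dimensional storage can be illustrated with plain Python (independent of PySPH): with a stride of 2, element j of particle i lives at flat index i * stride + j:

```python
# Flat storage of a stride-2 property for 3 particles:
# [A_00, A_01, A_10, A_11, A_20, A_21]
A = [10, 11, 20, 21, 30, 31]
stride = 2

def get_component(A, i, j, stride):
    """Element j of particle i in the flat, one-dimensional storage."""
    return A[i * stride + j]

print(get_component(A, 1, 0, stride))  # -> 20 (first element of particle 1)
```

Indexing the raw array directly with a particle index alone would silently pick the wrong element, which is exactly why the flat layout requires care.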
PySPH introduces a global identifier for a particle which is required to be unique for that particle. This is represented with the property gid which is of type unsigned int. This property is used in the parallel load balancing algorithm with Zoltan.
The property pid for a particle is an integer that is used to identify the processor to which the particle is currently assigned.
The property tag is an integer that is used for any other identification. For example, we might want to mark all boundary particles with the tag 100. Using this property, we can delete all such particles as
>>> pa.remove_tagged_particles(tag=100)
This gives us a very flexible way to work with particles. Another way of deleting/extracting particles is by providing the indices (as a list, NumPy array or a LongArray) of the particles to be removed:
>>> indices = [1,3,5,7]
>>> pa.remove_particles( indices )
>>> extracted = pa.extract_particles(indices, props=['rho', 'x', 'y'])
A ParticleArray can be concatenated with another array to result in a larger array:
>>> pa.append_parray(another_array)
To set a given list of properties to zero:
>>> props = ['au', 'av', 'aw']
>>> pa.set_to_zero(props)
Properties in a particle array are automatically sized depending on the number of particles. There are times when fixed size properties are required. For example if the total mass or total force on a particle array needs to be calculated, a fixed size constant can be added. This can be done by adding a constant to the array as illustrated below:
>>> pa.add_constant('total_mass', 0.0)
>>> pa.add_constant('total_force', [0.0, 0.0, 0.0])
>>> print(pa.total_mass, pa.total_force)
In the above, the total_mass is a fixed DoubleArray of length 1 and the total_force is a fixed DoubleArray of length 3. These constants will never be resized as one adds or removes particles to/from the particle array. The constants may be used inside of SPH equations just like any other property.
The constants can also be set in the constructor of the ParticleArray by passing a dictionary of constants as a constants keyword argument. For example:
>>> pa = ParticleArray(
... name='test', x=x,
... constants=dict(total_mass=0.0, total_force=[0.0, 0.0, 0.0])
... )
Take a look at ParticleArray reference documentation for some of the other methods and their uses.
## Nearest Neighbour Particle Searching (NNPS)
To carry out pairwise interactions for SPH, we need to find the nearest neighbours for a given particle within a specified interaction radius. The NNPS object is responsible for handling these nearest neighbour queries for a list of particle arrays:
>>> from pysph.base import nnps
>>> pa1 = get_particle_array(...) # create one particle array
>>> pa2 = get_particle_array(...) # create another particle array
>>> particles = [pa1, pa2]
>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, radius_scale=2.0)
The above will create an NNPS object that uses the classical linked-list algorithm for nearest neighbour searches. The radius of interaction is determined by the argument radius_scale. The book-keeping cells have a length of $$\text{radius_scale} \times h_{\text{max}}$$, where $$h_{\text{max}}$$ is the maximum smoothing length of all particles assigned to the local processor.
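The cell-based bookkeeping can be illustrated in plain Python (a sketch of the linked-list idea, not PySPH's implementation; function names are hypothetical): particles are binned into square cells of side $$\text{radius_scale} \times h_{\text{max}}$$, so a neighbour query only has to inspect a particle's own cell and the adjacent ones.

```python
from collections import defaultdict

def build_cells(points, h_max, radius_scale=2.0):
    """Bin 2D points into square cells of side radius_scale * h_max."""
    cell = radius_scale * h_max
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // cell), int(y // cell))].append(i)
    return cells, cell

def nearest(points, cells, cell, i):
    """Indices within the interaction radius of particle i: only the
    3x3 block of cells around i's own cell needs to be searched."""
    xi, yi = points[i]
    cx, cy = int(xi // cell), int(yi // cell)
    out = []
    for ox in (-1, 0, 1):
        for oy in (-1, 0, 1):
            for j in cells.get((cx + ox, cy + oy), ()):
                if (points[j][0] - xi)**2 + (points[j][1] - yi)**2 <= cell*cell:
                    out.append(j)
    return out

points = [(0.10, 0.10), (0.15, 0.10), (0.90, 0.90)]
cells, cell = build_cells(points, h_max=0.05)   # cell side = 0.1
print(sorted(nearest(points, cells, cell, 0)))  # the far particle is excluded
```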
Note that the NNPS classes also support caching the neighbors computed. This is useful if one needs to reuse the same set of neighbors. To enable this, simply pass cache=True to the constructor:
>>> nps = nnps.LinkedListNNPS(dim=3, particles=particles, cache=True)
Since we allow a list of particle arrays, we need to distinguish between source and destination particle arrays in the neighbor queries.
Note
A destination particle is a particle belonging to that species for which the neighbors are sought.
A source particle is a particle belonging to that species which contributes to a given destination particle.
With these definitions, we can query for nearest neighbors like so:
>>> nbrs = UIntArray()
>>> nps.get_nearest_particles(src_index, dst_index, d_idx, nbrs)
where src_index, dst_index and d_idx are integers. This will return, for the d_idx particle of the dst_index particle array (species), nearest neighbors from the src_index particle array (species). Passing the src_index and dst_index every time is repetitive so an alternative API is to call set_context as done below:
>>> nps.set_context(src_index=0, dst_index=0)
If the NNPS instance is configured to use caching, then it will also pre-compute the neighbors very efficiently. Once the context is set one can get the neighbors as:
>>> nps.get_nearest_neighbors(d_idx, nbrs)
Where d_idx and nbrs are as discussed above.
If we want to re-compute the data structure for a new distribution of particles, we can call the NNPS.update() method:
>>> nps.update()
### Periodic domains
The constructor for the NNPS accepts an optional argument (DomainManager) that is used to delimit the maximum spatial extent of the simulation domain. Additionally, this argument is also used to indicate the extents for a periodic domain. We construct a DomainManager object like so
>>> from pysph.base.nnps import DomainManager
>>> domain = DomainManager(xmin, xmax, ymin, ymax, zmin, zmax,
periodic_in_x=True, periodic_in_y=True,
periodic_in_z=False)
where xmin … zmax are floating point arguments delimiting the simulation domain and periodic_in_x,y,z are bools defining the periodic axes.
When the NNPS object is constructed with this DomainManager, care is taken to create periodic ghosts for particles in the vicinity of the periodic boundaries. These ghost particles are given a special tag defined by ParticleTAGS
class ParticleTAGS:
    Local = 0
    Remote = 1
    Ghost = 2
Note
The Local tag is used for ordinary particles assigned and owned by a given processor. This is the default tag for all particles.
Note
The Remote tag is used for ordinary particles assigned to but not owned by a given processor. Particles with this tag are typically used to satisfy neighbor queries across processor boundaries in a parallel simulation.
Note
The Ghost tag is used for particles that are created to satisfy boundary conditions locally.
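The box-wrapping and ghost creation near a periodic boundary can be illustrated in one dimension (a sketch of the idea only, not PySPH's implementation; the function names are hypothetical):

```python
def box_wrap(x, xmin, xmax):
    """Wrap positions back into the periodic interval [xmin, xmax)."""
    L = xmax - xmin
    return [xmin + (xi - xmin) % L for xi in x]

def periodic_ghosts(x, tag, xmin, xmax, width, GHOST=2):
    """Copy particles within `width` of a periodic boundary to the far
    side of the domain, tagging the copies as Ghost (tag value 2)."""
    L = xmax - xmin
    gx, gtag = [], []
    for xi in x:
        if xi < xmin + width:           # near the left edge: ghost on the right
            gx.append(xi + L); gtag.append(GHOST)
        if xi > xmax - width:           # near the right edge: ghost on the left
            gx.append(xi - L); gtag.append(GHOST)
    return x + gx, tag + gtag

x, tag = periodic_ghosts([0.05, 0.5, 0.95], [0, 0, 0], 0.0, 1.0, width=0.1)
print(x, tag)  # two ghost copies appear past each boundary, tagged 2
```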
### Particle aligning
In PySPH, the ParticleArray aligns all particles upon a call to the ParticleArray.align_particles() method. The aligning is done so that all particles with the Local tag are placed first, followed by particles with other tags.
There is no preference given to the tags other than the fact that a particle with a non-zero tag is placed after all particles with a zero (Local) tag. Intuitively, the local particles represent real particles or particles that we want to do active computation on (destination particles).
The data attribute ParticleArray.num_real_particles returns the number of real or Local particles. The total number of particles in a given ParticleArray can be obtained by a call to the ParticleArray.get_number_of_particles() method.
The following is a simple example demonstrating this default behaviour of PySPH:
>>> x = numpy.array( [0, 1, 2, 3], dtype=numpy.float64 )
>>> tag = numpy.array( [0, 2, 0, 1], dtype=numpy.int32 )
>>> pa = utils.get_particle_array(x=x, tag=tag)
>>> print(pa.get_number_of_particles()) # total number of particles
4
>>> print(pa.num_real_particles)        # no. of particles with tag 0
2
>>> x, tag = pa.get('x', 'tag', only_real_particles=True) # get only real particles (tag == 0)
>>> print(x)
[0. 2.]
>>> print(tag)
[0 0]
>>> x, tag = pa.get('x', 'tag', only_real_particles=False) # get all particles
>>> print(x)
[0. 2. 1. 3.]
>>> print(tag)
[0 0 2 1]
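The alignment itself amounts to a stable partition on the tag, which reproduces the ordering seen above (a pure-Python sketch, not PySPH's implementation):

```python
def align(tag, *props):
    """Stable partition: indices with tag == 0 (Local) come first, the rest
    after, each group keeping its original relative order."""
    order = [i for i, t in enumerate(tag) if t == 0] + \
            [i for i, t in enumerate(tag) if t != 0]
    return tuple([p[i] for i in order] for p in (tag,) + props)

tag, x = align([0, 2, 0, 1], [0., 1., 2., 3.])
print(x, tag)  # [0.0, 2.0, 1.0, 3.0] [0, 0, 2, 1]
```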
We are now in a position to put all these ideas together and write our first SPH application.
## Parallel NNPS with PyZoltan
PySPH uses the Zoltan data management library for dynamic load balancing through a Python wrapper PyZoltan, which provides functionality for parallel neighbor queries in a manner completely analogous to NNPS.
Particle data is managed and exchanged in parallel via a subclass of the abstract base class ParallelManager. Continuing with our example, we can instantiate a ZoltanParallelManagerGeometric object as:
>>> ... # create particles
>>> from pysph.parallel import ZoltanParallelManagerGeometric
>>> pm = ZoltanParallelManagerGeometric(dim, particles, comm, radius_scale, lb_method)
The constructor for the parallel manager is quite similar to the NNPS constructor, with two additional parameters, comm and lb_method. The first is the MPI communicator object and the latter is the partitioning algorithm requested. The following geometric load balancing algorithms are supported:
• Recursive Coordinate Bisection (RCB)
• Recursive Inertial Bisection (RIB)
• Hilbert Space Filling Curves (HSFC)
The particle distribution can be updated in parallel by a call to the ParallelManager.update() method. Particles across processor boundaries that are needed for neighbor queries are assigned the tag Remote as shown in the figure:
Local and remote particles in the vicinity of a processor boundary (dashed line)
## Putting it together: A simple example
Now that we know how to work with particles, we will use the data structures to carry out the simplest SPH operation, namely, the estimation of particle density from a given distribution of particles.
We consider particles distributed on a uniform Cartesian lattice ( $$\Delta x = \Delta y = \Delta$$) in a doubly periodic domain $$[0,1]\times[0,1]$$.
The particle mass is set equal to the “volume” $$\Delta^2$$ associated with each particle and the smoothing length is taken as $$1.3\times \Delta$$. With this initialization, we have for the estimate of the particle density
$<\rho>_a = \sum_{b\in\mathcal{N}(a)} m_b W_{ab} \approx 1$
We will use the CubicSpline kernel, defined in pysph.base.kernels module. The code to set-up the particle distribution is given below
# PySPH imports
from cyarray.carray import UIntArray
from pysph.base import utils, nnps
from pysph.base.kernels import CubicSpline
from pysph.base.nnps import DomainManager

# NumPy
import numpy

# Create a particle distribution
dx = 0.01; dxb2 = 0.5 * dx
x, y = numpy.mgrid[dxb2:1:dx, dxb2:1:dx]
x = x.ravel(); y = y.ravel()
h = numpy.ones_like(x) * 1.3*dx
m = numpy.ones_like(x) * dx*dx

# Create the particle array
pa = utils.get_particle_array(x=x, y=y, h=h, m=m)

# Create the periodic DomainManager object and NNPS
domain = DomainManager(xmin=0., xmax=1., ymin=0., ymax=1.,
                       periodic_in_x=True, periodic_in_y=True)
nps = nnps.LinkedListNNPS(dim=2, particles=[pa], radius_scale=2.0, domain=domain)

# The SPH kernel. The dimension argument is needed for the correct normalization constant
k = CubicSpline(dim=2)
Note
Notice that the particles were created with an offset of $$\frac{\Delta}{2}$$. This is required since the NNPS object will box-wrap particles near periodic boundaries.
The NNPS object will create periodic ghosts for the particles along each periodic axis.
The ghost particles are assigned the tag value 2. For this example, periodic ghosts are created along each coordinate direction as shown in the figure.
### SPH Kernels
Pairwise interactions in SPH are weighted by the kernel $$W_{ab}$$. In PySPH, the pysph.base.kernels module provides a Python interface for these terms. The general definition for an SPH kernel is of the form:
class Kernel(object):
    def __init__(self, dim=1):
        self.dim = dim

    def kernel(self, xij=[0., 0, 0], rij=1.0, h=1.0):
        ...
        return wij

    def gradient(self, xij=[0., 0, 0], rij=1.0, h=1.0, grad=[0., 0., 0.]):
        ...
The kernel is an object with two methods kernel and gradient. $$\text{xij}$$ is the difference vector between the destination and source particle $$\boldsymbol{x}_{\text{i}} - \boldsymbol{x}_{\text{j}}$$ with $$\text{rij} = \sqrt{ \boldsymbol{x}_{ij}^2}$$. The gradient method accepts an additional argument that upon exit is populated with the kernel gradient values.
### Density summation
In the final part of the code, we iterate over all target or destination particles and compute the density contributions from neighboring particles:
nbrs = UIntArray()  # array for neighbors
x, y, h, m = pa.get('x', 'y', 'h', 'm', only_real_particles=False)  # source particles will include ghosts

for i in range(pa.num_real_particles):  # iterate over all local (real) particles
    xi = x[i]; yi = y[i]; hi = h[i]

    nps.get_nearest_particles(0, 0, i, nbrs)  # get neighbors
    neighbors = nbrs.get_npy_array()          # numpy array of neighbor indices

    rho = 0.0
    for j in neighbors:      # iterate over each neighbor
        xij = xi - x[j]      # interaction terms
        yij = yi - y[j]
        rij = numpy.sqrt(xij**2 + yij**2)
        hij = 0.5 * (hi + h[j])

        wij = k.kernel([xij, yij, 0.0], rij, hij)  # kernel interaction
        rho += m[j] * wij

    pa.rho[i] = rho  # density contribution for this destination
The average density computed in this manner can be verified as $$\rho_{\text{avg}} = 0.99994676895585222$$.
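Since the summation density on a uniform periodic lattice depends only on the ratio $$h/\Delta x$$, the quoted value can be cross-checked without PySPH using a brute-force pure-Python version (a sketch, not the PySPH implementation: a coarser lattice keeps the $$O(N^2)$$ pair loop cheap, the minimum-image convention stands in for the periodic ghosts, and the standard 2D cubic spline normalisation $$\sigma = 10/(7\pi h^2)$$ is assumed):

```python
import math

def cubic_spline_2d(rij, h):
    """Standard 2D cubic spline kernel with support radius 2h."""
    q = rij / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q <= 1.0:
        return sigma * (1.0 - 1.5*q*q + 0.75*q*q*q)
    elif q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

n = 20                       # 20 x 20 lattice on the doubly periodic unit square
dx = 1.0 / n; dxb2 = 0.5 * dx
h = 1.3 * dx; m = dx * dx    # same h/dx ratio and particle mass as in the text
pts = [(dxb2 + i*dx, dxb2 + j*dx) for i in range(n) for j in range(n)]

rho = []
for xi, yi in pts:
    s = 0.0
    for xj, yj in pts:
        ex, ey = xi - xj, yi - yj
        ex -= round(ex); ey -= round(ey)   # minimum-image periodic wrap
        s += m * cubic_spline_2d(math.hypot(ex, ey), h)
    rho.append(s)

rho_avg = sum(rho) / len(rho)
print(rho_avg)  # close to 0.999947, independent of the lattice resolution
```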
## Summary
In this document, we introduced the most fundamental data structures in PySPH for working with particles. With these data structures, PySPH can be used as a library for managing particles for your application.
If you are interested in the PySPH framework and want to try out some examples, check out A more detailed tutorial. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33217141032218933, "perplexity": 3258.164956330703}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347414057.54/warc/CC-MAIN-20200601040052-20200601070052-00097.warc.gz"} |
https://arxiv.org/abs/hep-ph/0309126 | hep-ph
Title: Summary of the Working Group on Spin Physics
Abstract: A summary is given of the experimental and theoretical results presented in the working group on spin physics. New data on inclusive and semi-inclusive deep-inelastic scattering, combined with theoretical studies of the polarized distribution functions of nucleons, were presented. Many talks addressed the relatively new subjects of transversity distributions and generalized parton distributions. These distributions can be studied by measuring single spin asymmetries, while partonic intrinsic motion and models of new spin dependent distribution and fragmentation functions are needed to obtain the corresponding theoretical description. These subjects are not only studied in deep-inelastic lepton scattering, but also in polarized proton-proton collisions at RHIC. A selection of results that have been obtained in these experiments together with several associated theoretical ideas are presented in this paper. In conclusion, a brief sketch is given of the prospects for experimental and theoretical studies of the spin structure of the nucleon in the coming years.
Comments: 9 pages, summary talk of the Working Group on Spin Physics at DIS2003, XI International Workshop on Deep Inelastic Scattering, St. Petersburg, 23-27 April 2003
Subjects: High Energy Physics - Phenomenology (hep-ph)
Cite as: arXiv:hep-ph/0309126 (or arXiv:hep-ph/0309126v1 for this version)
Submission history
From: Mauro Anselmino
[v1] Thu, 11 Sep 2003 13:55:20 GMT (9kb) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9037545323371887, "perplexity": 1958.0349406279286}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424090.53/warc/CC-MAIN-20170722182647-20170722202647-00574.warc.gz"} |
https://mnmeconomics.wordpress.com/category/macro/money/ | Archive
Archive for the ‘Money’ Category
Is zero inflation optimal?
Discussion on the ‘costs of inflation’ is a staple of A level Economics courses. You have probably heard these arguments before:
Shoe leather costs – as inflation rises and money becomes worth less, you have to take more trips to the bank so are wearing out your ‘shoe leather’ (really the cost here is the cost of your time rather than your shoes unless you are wearing really flimsy shoes).
Menu costs – as inflation rises and firms have to change their prices more often, they incur costs reprinting menus, catalogues etc.
Tax distortions – the classic case is ‘fiscal drag’ – governments usually set and fix tax bands at the start of the financial year, so if prices are rising sharply that has a distortionary effect, eg if higher rate income tax kicks in at £40000 a year, and you are earning £35000 a year in an environment where there is 20% inflation, your employer might give you a pay rise to keep pace with inflation putting you on £42000 – in real terms that is just the same as earning £35000 a year ago, but now you have suddenly become a ‘high rate’ tax payer. This also happens for capital gains tax, when people sell assets like houses that have appreciated in value due to high inflation, the tax paid becomes disproportionately high.
These answers will probably get you marks in an A level exam but in the modern economy these aren’t major issues unless you get into hyperinflation territory. There is so much electronic banking these days that shoe leather costs are minimised, and many businesses, especially in retail, change prices all the time anyway regardless of inflation, cycling offers and discounts to attract customers. Many do business on the internet as well where it’s easy to change your menus. So the menu costs are not massive. As for tax distortions they are a real nuisance but they could easily be dealt with if the government just changes their tax rules to be indexed to inflation. The fact they don’t is probably because the government is usually the one who benefits from fiscal drag anyway so they will just take the extra tax revenue (in an environment of high inflation they want to take all the benefits they can get).
Here are the big problems that come from inflation being high:
Money illusion – price signals break down when people lose their ability to judge and compare prices over time. This sounds like such a basic fact but it underpins the whole market economy. Lets say for example you decide to go shopping for some Levis jeans. Say last time you went looking for jeans was eight months ago, you saw Levis were around £80 so you will have this kind of idea in your mind as to the price of Levis. If we are in a low inflation environment (say 2%) and you see some Levis in a shop at £90 then you will have an instinctive feel for the fact that is probably pricey and you can get it elsewhere. If the market price was £80 eight months ago and inflation is 2% per year then that means that every month prices will be rising $1.02^\frac{1}{12}=1.00165$ in other words they are rising 0.165% per month, so after eight months you would expect the market price to be $80(1.00165^8 )=81.06$. This is basically a stable price, so your ‘feel’ for the market price as being £80 will be more or less right.
Now think of that scenario but in an environment where inflation is running at 60%. Say you see some jeans on offer for £105, is that a good deal or a bad deal. That is quite hard to do in your head – you have to make an ‘educated guess’ and you could be right or wrong.
If you actually had a calculator, and knew what you were doing in terms of compound interest, you could say that 60% inflation means $1.6^\frac{1}{12}=1.0399$ ie prices are rising 3.99% per month, so after eight months you would expect the market price to be $80(1.0399^8 )=109.44$. So in this case an offer of £105 is good, they are below the market price in real terms. But you aren’t going to do that in your head so you have lost the instinctive feel for prices.
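The compounding in both scenarios is easy to check with a short script (the figures match those above):

```python
def expected_price(p0, annual_inflation, months):
    """Price after `months` of compounding at a given annual inflation rate."""
    monthly = (1 + annual_inflation) ** (1 / 12)
    return p0 * monthly ** months

print(round(expected_price(80, 0.02, 8), 2))  # low inflation: 81.06
print(round(expected_price(80, 0.60, 8), 2))  # 60% inflation: 109.44
```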
This is what happens when inflation gets high, and the higher inflation is the more people lose their ability to compare prices, so it undermines the whole ability of firms to compete with each other on prices, and prices lose their ability to signal scarcity.
Inflation variability – generally low inflation means stable inflation and high inflation means variable inflation from one year to the next. Inflation in the UK (the RPI) from 1997 to 2005 was 3.3%, 2.4%, 2%, 2.7%, 1.3%, 2.9%, 2.6%, 3.2%, 2.4%. Compare this to the period between 1973 and 1981: 12%, 19.9%, 23.4%, 16.6%, 9.9%, 9.3%, 18.4%, 13%. Instead of varying by 1-2% per year, it is jumping around in variations of 5-10% per year. And this is the UK, which has traditionally been a low or medium inflation economy, the figures for some Latin American countries would dwarf that for variability. Variability is a problem because businesses need to plan for inflation when they set their wage levels, set their prices, sign contracts with suppliers etc. If you get it wrong by a percent or so its not too bad but if you think inflation will be 10% higher than it ends up being, and you have set your prices accordingly, you could be in for a big loss. The result is when inflation is high, firms do less investment and think more ‘short term’ rather than planning for the long term.
On the other hand there are some ways in which having some inflation is preferable to none.
Wage flexibility – generally wages are ‘sticky downwards’, which means that you very rarely get pay cuts in nominal terms, apart from in the more flexible parts of the private sector. Trying to cut workers’ wages in nominal terms stirs up a hornets nest of legal challenges and trade union action, so a pay freeze is about as strict as you usually get. Sometimes in a benevolent economic environment, especially when there is an unsustainable boom, wages can jump ahead of productivity, which means there will be problems down the line, not least in the context of international trade where a country with firms paying wages above its level of productivity will lose competitiveness to international competitors. So you need wages to adjust downwards, which is a slow and painful process – and the lower inflation is, the harder this is. You don’t actually need high inflation to carry out an adjustment, just a medium level. In the UK for instance, the government announced a three year public sector pay freeze in 2010. If inflation during that period runs at 4% then that means in real terms wages will fall by close to 12%, so that is an effective way of bringing wages back down in line with productivity. But if inflation was creeping around 1-2% it would take much longer to get the same adjustment.
Option of negative real interest rates - this is a big one when the economy is in trouble, particularly in the face of a liquidity trap. The issue here is that nominal interest rates can’t fall below 0%, but what influences investment is not nominal but real interest rates. As $r \approx i - \pi$, you can stimulate investment by pushing down real interest rates through higher inflation; when nominal interest rates are 0%, real interest rates can be $- \pi$. Again you don’t want to trigger hyperinflation in order to break out of a liquidity trap, but you might find inflation of 5-6% useful, rather than 1-2%.
Seignorage revenue - fraught with danger, this one, because it can easily be misused and trigger an inflationary spiral. However for a developing country where it is difficult to collect taxes, seignorage revenue can be an important part of government revenue.
So what is the optimal level of inflation? There isn’t a set guide and it depends on opinion. By and large most Western democracies aim for low levels around 1.5-2%. There are some that argue you should try to push inflation down to as close to 0% as possible, and there are others that argue some of the excessively tight monetary policy used to keep inflation low is counterproductive, and that there is nothing really wrong with inflation at 4-6%. Certainly the real negatives of inflation don’t kick in till you get higher rates (eg double figures).
The big danger with inflation though is its tendency to accelerate out of control, so once you get to 10%, it can quickly start to creep up towards 15-20% and beyond, where negatives do start to mount up.
Categories: Macro, Money
Seignorage – a tax on real money balances
Governments in most developed countries typically finance a budget deficit (the gap between government spending and tax receipts) through borrowing from the domestic or foreign private sector. Occasionally, if they can’t raise enough revenue through selling bonds to the private sector, they can get the central bank to print money to buy the bonds. This is debt monetization. This is quite rare in developed countries but it is more likely in developing countries where there can be crises (eg wars) that lead to a collapse in the ability to collect taxes, so deficits can rise beyond the government’s ability to raise revenue through borrowing. It can also happen when lenders fear sovereign default, and begin to demand rates of interest on government borrowing that the government cannot afford.
The revenue gained by the government by printing money is called seignorage. It is effectively a tax on real money balances.
The seignorage revenue received by the government is $Seignorage = \frac{\Delta M}{P}$. We can multiply the expression by $\frac{M}{M}$ just to give us an alternative way of writing it.
$Seignorage = \frac{\Delta M}{P}=\frac{\Delta M}{M}\frac{M}{P}$, in other words it is the rate of money growth multiplied by real money balances.
Inflation approximately equals nominal money growth minus output growth $\pi = g_M - g_Y$, so in the short run where there is no output growth, $\pi = g_M=\frac{\Delta M}{M}$.
So we can write the expression for seignorage as being $Seignorage = \pi \frac{M}{P}$, ie it is inflation multiplied by the amount of real money balances in the economy (hence it being a tax on real money balances).
This is a pretty effective tax because you can’t avoid it. Anybody who holds money effectively pays the tax because their money becomes worth a little bit less, but the government is getting free money to spend on its spending programmes.
There is a complicating factor here, because the demand for money declines as inflation rises. You can express money demand in the form $\frac{M}{P}=YL(i)$, ie the demand for real money balances is a function of income (or output), and peoples liquidity preference schedule (how downward sloping the money demand curve is). Note that it is a function that is increasing in terms of income (the richer people are the more money they want to hold) and declining in terms of nominal interest rate (when interest rates are higher, people want to hold less money and more bonds or other forms of illiquid assets).
You can write this in terms of real interest rates as $\frac{M}{P}=YL(r+ \pi^e)$.
Over time, income, the real interest rate, and expected inflation can all change. But it is useful to think of what would happen in the ‘super short run’ (like month by month timescales) when modelling seignorage, because the big risk with seignorage (as will be explained soon) is that it triggers very high inflation, where inflation changes very quickly over a time scale where the real interest rate and income are more or less static.
So to model this we will assume that income and real interest rate stay constant and the variable factor is expected inflation: $\frac{M}{P}=\bar{Y}L(\bar{r}+ \pi^e)$, where L (demand for money) and hence seignorage, is declining in $\pi^e$.
This implies during times of high inflation, money demand depends mainly on expected inflation. As expected inflation rises, money demand falls. This is basically because as money is losing its value quickly, you don’t want to hold it for very long – higher expected inflation increases the opportunity cost of holding money.
However, in practice the real interest rate may become very negative because the nominal interest rate does not keep up with inflation, so you are not always better off holding bonds either! And it is hard to put your money in bonds fast enough, it may not be practical. So instead people start bartering goods, they start demanding wages more often (eg twice a week), or they start using a hard currency (eg dollarization). As inflation rises rapidly, people do whatever they can to avoid holding cash, and demand for money collapses.
Think about what is going on here. On the one hand, increasing money growth, is increasing the rate of the inflation tax (so seignorage is rising), on the other hand, increasing money growth is increasing inflation and decreasing money demand and so the amount of real money balances being held in the economy (so seignorage is falling). There are two effects working against each other here.
We have two equations: $Seignorage = \frac{\Delta M}{M} \frac{M}{P}$ and $\frac{M}{P} = \bar{Y}L(\bar{r}+\pi^e)$.
We can combine them to get: $Seignorage = \frac{\Delta M}{M}\bar{Y}L(\bar{r}+\pi^e)$, where L (demand for money) and hence seignorage, is declining in $\pi^e$.
Now think about what would happen if we had constant money growth. Over time, inflationary expectations would adjust to the constant level of money growth, they would catch up with it, $\frac{\Delta M}{M}=\pi^e$, so $Seignorage = \frac{\Delta M}{M}\bar{Y}L\left(\bar{r} +\frac{\Delta M}{M}\right)$. Here $\frac{\Delta M}{M}$ enters the equation twice. Seignorage is increasing in it directly and declining in it indirectly via inflationary expectations and falling money demand. At the start the indirect effect is small but it becomes large quite quickly, eventually outweighing the direct effect. There is therefore a hump shape like a Laffer curve, in terms of the amount of ‘tax’ (seignorage) that can be collected in this way. There will be a rate of constant money growth which maximises the amount of seignorage revenue.
Example: suppose the economy has GDP of 100. The real money stock is given by the money demand equation $\frac{M}{P} =\bar{Y}[1-(\bar{r}+\pi^e)]$ where $\bar{Y}=100$ and $\bar{r}=0.03$ so $\frac{M}{P} =100[1-({0.03}+\pi^e)]$.
To find the rate of constant nominal money growth that would maximise seignorage we start with the equation $Seignorage = \frac{\Delta M}{M}100[1-({0.03}+\pi^e)]$ remembering that in the case of constant nominal money growth, $\frac{\Delta M}{M}=\pi^e$.
So $Seignorage = \frac{\Delta M}{M}100[1-(0.03+\frac{\Delta M}{M})]=\frac{\Delta M}{M}100[0.97-\frac{\Delta M}{M}]=97\frac{\Delta M}{M}-100(\frac{\Delta M}{M})^2$.
So to optimise this we differentiate $\frac{d(Seignorage)}{d\frac{\Delta M}{M}}=97-200\frac{\Delta M}{M}$ and set equal to zero so $0=97-200(\frac{\Delta M}{M}) \Rightarrow \frac{97}{200}=\frac{\Delta M}{M} \Rightarrow 0.485=\frac{\Delta M}{M}$.
This tells us that the optimising rate of nominal money growth is 48.5%. With this level of money growth we could raise seignorage revenue of $Seignorage = 97(0.485)-100(0.485)^2 = 23.523$ ie we can raise maximum income of 23.523 through seignorage. Given that GDP is 100, this means the maximum budget deficit we could cover through seignorage would be 23.523% through seignorage.
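The algebra can be confirmed numerically: scanning over money growth rates traces out the Laffer-style hump and peaks at 48.5% (a quick check of the worked example, using the same money demand parameters):

```python
def seignorage(g, Y=100.0, r=0.03):
    """Seignorage under constant money growth g, once expected inflation
    has caught up with it: g * Y * [1 - (r + g)]."""
    return g * Y * (1.0 - (r + g))

# Scan money growth rates in steps of 0.1% and pick the best
rates = [i / 1000.0 for i in range(0, 971)]
best = max(rates, key=seignorage)
print(best)  # 0.485
print(seignorage(best))  # roughly 23.52, matching the worked example
```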
Now what would happen if the budget deficit was actually 28%, ie we needed to raise income of 28 through seignorage? We can’t do this keeping money growth constant; the only way we could do this is to hike up nominal money growth above the level of expected inflation. We can get away with this in the short run, but inflationary expectations will catch up with our new level of money growth.
Remember that in our short run with no growth, $\pi^e = \frac{\Delta M}{M}$, so when we have money growth of 48.5%, we already have inflation of 48.5% (not a good situation to be in). So inflationary expectations will be 48.5%.
What rate of nominal money growth could get us seignorage revenue of 28 with inflationary expectations of 48.5%?
$28 = \frac{\Delta M}{M}100[1-(0.03+0.485)] \Rightarrow 28=\frac{\Delta M}{M}(48.5) \Rightarrow \frac{\Delta M}{M} = 0.5773$. So we need money growth of 57.73% which means inflation of 57.73%. In the short run we can get the required level of seignorage with this higher level of inflation, but what happens when inflationary expectations catch back up to the new level of money growth? We will have to do the equation again, with 57.73% as the new value for expected inflation. This will imply that we need an even higher rate of money growth, and hence inflation, to hit our seignorage target.
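Iterating this catch-up process makes the spiral explicit: each round, expected inflation rises to the previous round's money growth, which forces still faster money growth to keep raising 28, until money demand collapses entirely (a sketch of the worked example's logic):

```python
def required_growth(target, pi_e, Y=100.0, r=0.03):
    """Money growth needed to raise `target` in seignorage when expected
    inflation is pi_e: target = g * Y * [1 - (r + pi_e)]."""
    return target / (Y * (1.0 - (r + pi_e)))

g = 0.485          # start from the revenue-maximising rate
path = []
for _ in range(3):
    g = required_growth(28.0, g)   # expectations catch up, growth must rise
    path.append(g)
print([round(v, 4) for v in path])  # money growth accelerates every round
```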
The moral of the story here is that there is an optimal rate of seignorage revenue that you can generate (depending on the parameters of the money demand equation) through constant money growth.
If you want/need to raise seignorage revenue higher than that, you have to do it through increasing money growth. This means you are going to trigger an inflationary spiral, and this is how you end up with hyperinflation.
Categories: Macro, Money
The Quantity Theory of Money
The quantity theory of money basically explains how the quantity of money in the economy affects the price level. The quantity theory is usually explained simply in the identity MV = PT. This means quantity of money (M) multiplied by velocity (V) equals price level (P) multiplied by the number of transactions (T). The velocity of money describes the rate at which money circulates round the economy – for instance if you were to follow the life of one pound coin, velocity tells us how many times that pound changes hands in a given time period.
Because this is an identity, the left hand side always equals the right. Suppose you had an economy consisting solely of cars, in which 100 cars were bought and sold in a year at a price of £3000 each, and the overall quantity of money in the economy was £50000.
The number of transactions (T) is 100, the average price per transaction (P) is £3000 per car, so PT = £300000. M is £50000, so MV = PT means 50000V = 300000, so V = 6. This means the velocity of money is 6: each pound must change hands 6 times a year.
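As a quick sanity check, the car-economy numbers can be run straight through the identity (a trivial sketch, just restating the arithmetic above):

```python
# MV = PT for the car economy: 100 sales a year at £3000 each, money stock £50000
M = 50_000
P, T = 3_000, 100
V = (P * T) / M  # velocity implied by the identity
print(V)  # 6.0 -- each pound changes hands six times a year
```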
Usually when using the quantity theory of money model we make a couple of assumptions:
1 – The number of transactions is proportional to the total output in the economy. This is fairly logical – if the economy grows then more stuff is produced so more stuff changes hands. So we can roughly substitute output (Y) for transactions (T) and express the quantity theory identity in a more useful form: MV = PY. You can think of velocity now as being the income velocity (number of times a pound enters someone’s income in a given period of time).
2 – Velocity of money is constant. This is a simplification of reality because the money demand function will change depending on the interest rate. The velocity of money is related to the money demand function – by assuming the velocity is constant we are assuming a money demand function of $\frac{M}{P}=kY$ where k is a constant. This can be rewritten as $\frac{M}{k}=PY$ which is the same as MV = PY when V = (1/k). If k is large then people have a high demand for money and want to hold a lot of money for each pound of income, so money does not change hands very much and V is small. If k is small then people have a low demand for money and want to hold little money for each pound of income; in this case money changes hands quickly and V is large. Assuming k is constant means V is constant, which makes the identity a lot more informative: with one part fixed, we can see how the other elements of the identity adjust in response to changes.
So if we call the quantity theory of money $M\bar{V}=PY$ where V is now fixed, then it leads us to important conclusions:
– If the quantity of money in the economy (M) increases faster than output (Y) increases, then the price level (P) has to rise.
– If the quantity of money (M) increases slower than output (Y) increases, then the price level (P) has to fall.
In the real world prices tend to adjust upwards more easily than they adjust downwards (known as being sticky downwards) so if the price level cannot adjust downwards quickly enough, then output itself may decrease in response to an excessively slow increase in M. In this case you have a recession rooted in monetary factors – people are not spending due to a lack of available money to facilitate transactions, so demand drops. This type of recession can be addressed by increasing the money supply.
Categories: Macro, Money
The LM relation
The downward sloping money demand curve can be written like this: $\frac {M}{P}=YL(i)$. What this means is real money balances = the level of income (Y) in the economy multiplied by the liquidity preference (L). The liquidity preference is basically the steepness of the curve; it shows how much people prefer to hold bonds (or other illiquid assets) rather than money at a higher interest rate. If the liquidity preference schedule is steep, demand for money is less elastic with respect to the interest rate; if it is fairly flat, demand for money is elastic with respect to the interest rate (ie a small fall in the interest rate means a large increase in the amount of money people want to hold rather than holding bonds). The (i) in brackets after the L means that liquidity preference is a function of i, the nominal interest rate. It will of course depend negatively on i, because the higher the interest rate, the lower the demand for money.
L(i) determines the steepness of the curve, and Y determines its position: if you increase Y, you push the whole curve up. This is the principle behind the LM relation. We can basically think of a relationship between i, the nominal interest rate, and Y, the level of income in the economy. As Y increases, the money demand curve shifts up, so if you keep the money supply constant (the vertical line), the point of intersection on the diagram is higher. This means that the new interest rate which brings the money market into equilibrium (money supply = money demand) is higher. So if money supply is constant, increasing Y means you get an increase in i; decreasing Y means you get a decrease in i. This is the LM relation – and it is upward sloping:
Think of movements along the LM curve as being what happens when you shift money demand curve up and down in the money supply/money demand diagram. Higher Y means money demand shifts up so you get higher i in equilibrium, that’s a movement up the LM curve.
What shifts the LM curve up or down is movements in the money supply curve. If you shift the money supply curve out (to the right) on the money supply/money demand diagram then you will get a lower i in equilibrium, at all levels of income, that’s the equivalent to shifting the LM curve down. If you shift the money supply curve in (to the left) on the money supply/money demand diagram then you will get a higher i in equilibrium at all levels of income, that’s the equivalent to shifting the LM curve up.
Remember that prices also affect the position of the money supply curve, because we are thinking about real money balances (M/P). As P is on the denominator, an increase in P makes (M/P) smaller, so an increase in P (rising prices) has the same effect as a decrease in M (reducing nominal money supply), so it moves the money supply curve to the left.
So to sum up:
The LM curve is upward sloping, as income in the economy increases, the interest rate increases
The LM curve shifts down when there is an increase in nominal money supply (a monetary expansion) or a fall in prices (rare!)
The LM curve shifts up when there is a reduction in the nominal money supply (a monetary contraction) or a rise in prices (inflation)
The Central Bank can stop inflation from pushing the LM curve up and increasing interest rates, by increasing nominal money supply in a proportionate amount that keeps (M/P) constant when P is rising.
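To make the LM mechanics concrete, here is a small numeric sketch. The functional form $L(i)=e^{-\alpha i}$ and the value of $\alpha$ are my illustrative assumptions – the post leaves L(i) general – but any downward-sloping L(i) gives the same comparative statics.

```python
import math

def lm_rate(M, P, Y, alpha=10.0):
    """Equilibrium nominal rate i solving M/P = Y * exp(-alpha * i)."""
    return math.log(Y / (M / P)) / alpha

i0 = lm_rate(M=900, P=1.0, Y=1000)   # baseline
i1 = lm_rate(M=900, P=1.0, Y=1100)   # higher income: movement up the LM curve
i2 = lm_rate(M=1000, P=1.0, Y=1000)  # monetary expansion: LM shifts down
i3 = lm_rate(M=900, P=1.1, Y=1000)   # rising prices with fixed M: LM shifts up
assert i1 > i0 and i2 < i0 and i3 > i0
```

The three comparative statics match the summary list above: higher Y raises i, a monetary expansion lowers it, and inflation with an unchanged nominal money supply raises it.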
Categories: ISLM, Macro, Money
Instruments of monetary policy
The Central Bank has a few tools it can use to control the money supply.
The most well-known method in modern economies is open market operations. The Central Bank will hold a stock of illiquid assets like bonds. When it wants to decrease the supply of money in the economy, it will sell some of its bonds; when retail banks buy them, it simply deducts the value in cash from their reserve accounts at the Central Bank. That means those banks hold bonds where they used to have cash reserves, and the amount of high-powered money has fallen. Alternatively, if it wants to increase the amount of high-powered money, it can go to the markets and buy some bonds from retail banks. It pays for them by simply increasing the value of those banks’ reserve accounts held at the Central Bank, so bank reserves rise, and bank reserves are a component of high-powered money.
It can change the required reserve ratio, $\theta$, for instance by forcing retail banks to hold a higher proportion of their liabilities to depositors in the form of reserves. Raising $\theta$ lowers the money multiplier, $\frac{1}{c+\theta (1-c)}$, so any increase in high-powered money that the Central Bank introduces will have a smaller effect on the total amount of money in the economy; cutting $\theta$ raises the multiplier and magnifies the effect. Not all countries have a legally required reserve ratio for banks.
As the Central Bank is the ‘lender of last resort’ when banks are short on liquidity and need a short-term loan, it can control the rate at which it charges interest to other banks. This will obviously have a knock-on effect on banks’ market rates. If the Central Bank pushes up the rates at which it lends to other banks, they will charge their borrowers higher rates themselves. But if the Central Bank cuts its rates, other banks will take advantage of the ability to borrow cheaply from the Central Bank by cutting their own lending rates to compete with each other for potential borrowers, who are all chasing the most competitive rates.
Categories: Macro, Monetary Policy, Money
Fractional reserve banking
Fractional reserve banking is at the heart of the way money is created. It means that banks do not keep cash reserves equal to the balances of their depositors, instead they just hold a fraction of their depositors’ balances in reserve. This allows them to use the rest of the depositors’ money to make loans to others. It also relies on the gamble that depositors are not going to want to withdraw all their money at once but will only demand access to a small amount of their deposits at any one time.
For instance if you had a bank that had £1000 of deposits from customers, that £1000 is a liability to the bank because it owes £1000 to its customers should they wish to withdraw it. If the bank keeps £1000 in reserves then it is fully covered, but has no money free to make loans to others – this would be 100% reserve banking. If instead the bank said that it would operate a reserve ratio of 10% (or 0.1) then it would just keep £100 in reserves and then make £900 of loans to other customers. This £900 would then be an asset to the bank as it is owed by the customers to the bank – of course the bank would charge interest on this loan as well.
Now what would happen if that bank then received a new deposit of £100? It now has £1100 of deposits from customers and £200 in reserves. But if it is just keeping a reserve ratio of 0.1, then for £1100 of deposits it only needs to keep £110 in reserves, so it can reduce its reserves by lending out another £90. So from £100 deposit the bank creates £90 of new loans. The reserve ratio of 0.1 means that the bank lends out 0.9 of its new deposits.
If that £90 gets lent to a customer that goes and deposits it in their bank (or spends it and it ends up deposited in another customer’s bank). Their bank will keep 0.1 x £90 = £9 extra in reserves and lend out 0.9 x £90 = £81 to another customer.
And so the pattern repeats itself. That customer deposits the £81 which goes into another bank. When their bank gets it they keep back 0.1 x £81 = £8.10 in reserves and lend out 0.9 x £81 = £72.90. The original deposit of £100 is creating new loans through every round of lending and depositing but each round gets smaller.
Where will it end? We can think of it in terms of a geometric series. If we call the initial deposit $d$ and the reserve ratio $\theta$ then the rounds of spending look like this:
$d + d(1-\theta) + d(1-\theta)^2 + d(1-\theta)^3 +....d(1-\theta)^n = d[1 + (1-\theta) + (1-\theta)^2 + (1-\theta)^3 +....(1-\theta)^n ]$. This is a geometric series with geometric ratio $1-\theta$.
As it’s a geometric series it will sum to $d[\frac{1-(1-\theta)^{n+1}}{1-(1-\theta)}] = d[\frac{1-(1-\theta)^{n+1}}{\theta}]$ (see here for why). As $0<(1-\theta)<1$ then it means when $n \rightarrow \infty$, $(1-\theta)^{n+1} \rightarrow 0$ so $d[\frac{1-(1-\theta)^{n+1}}{\theta}] \rightarrow d[\frac{1}{\theta}]$.
This is the bank multiplier, $\frac{1}{\theta}$. The total amount of money created from the initial deposit is found by multiplying the initial deposit by $\frac{1}{\theta}$.
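The convergence of the lending rounds to $d\frac{1}{\theta}$ can be checked with a short simulation (a sketch; the function name is mine):

```python
def total_deposits(d, theta, rounds=10_000):
    """Sum the deposit rounds d, d(1-theta), d(1-theta)^2, ... directly."""
    total, dep = 0.0, d
    for _ in range(rounds):
        total += dep
        dep *= (1 - theta)  # each round, (1-theta) of the deposit is re-lent and redeposited
    return total

d, theta = 100.0, 0.1
print(round(total_deposits(d, theta), 6))  # 1000.0, i.e. d * (1/theta): the bank multiplier
```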
We can extend this model by taking account of the fact that customers will hold some of their wealth in cash, they won’t just deposit everything straight back into the bank.
Before making the model let’s lay out all the definitions we need.
$M$ is the total amount of money people have.
$D$ is the total amount customers deposit with banks.
$CU$ is the total amount of currency people hold in cash.
$c$ is the currency ratio, the proportion of their total money that they hold in cash, so $cM = CU$ and $(1-c)M = D$
$R$ is the cash reserves banks keep in order to have enough liquidity to cover the expected withdrawals of consumers.
$\theta$ is the reserve ratio, ie $R = \theta D$
Now extend the original model by saying there is a reserve ratio of 0.1 and a currency ratio of 0.4. The initial £100 again means the bank can lend out 0.9 x £100 = £90 to a customer.
That customer now has £90, but will hold 0.4 x £90 = £36 in cash and deposit 0.6 x £90 = £54 into a bank. The bank will then lend out 0.9 x £54 = £48.60 to another customer, who will deposit 0.6 x £48.60 = £29.16 into theirs.
The pattern continues as before but this time the multiplier is smaller. The geometric series is:
$d[1 + [(1-\theta)(1-c)] + [(1-\theta)(1-c)]^2 + [(1-\theta)(1-c)]^3 +....[(1-\theta)(1-c)]^n ]$, the geometric ratio this time is $(1-\theta)(1-c)$.
So this geometric series will sum to $d[\frac{1-[(1-\theta)(1-c)]^{n+1}}{1-[(1-\theta)(1-c)]}] = d[\frac{1-[(1-\theta)(1-c)]^{n+1}}{1-[1-\theta-c+\theta c]}]=d[\frac{1-[(1-\theta)(1-c)]^{n+1}}{\theta+c-\theta c}]$.
Again, as $0<\theta<1$ and $0<c<1$, then when $n \rightarrow \infty$, $[(1-\theta)(1-c)]^{n+1} \rightarrow 0$. So $d[\frac{1-[(1-\theta)(1-c)]^{n+1}}{\theta+c-\theta c}] \rightarrow d[\frac{1}{\theta+c-\theta c}]=d[\frac{1}{c+\theta(1-c)}]$
This is the money multiplier, $\frac{1}{c+\theta(1-c)}$. It is smaller than the bank multiplier.
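The same simulation idea verifies the money multiplier: with a currency ratio $c$, only $(1-c)$ of each loan is redeposited, so the per-round ratio is $(1-\theta)(1-c)$ (again a sketch; the function name is mine):

```python
def money_created(d, theta, c, rounds=10_000):
    """Sum the rounds d, d(1-theta)(1-c), d[(1-theta)(1-c)]^2, ..."""
    total, dep = 0.0, d
    for _ in range(rounds):
        total += dep
        dep *= (1 - theta) * (1 - c)  # re-lent, minus the share held back as cash
    return total

d, theta, c = 100.0, 0.1, 0.4
multiplier = 1 / (c + theta * (1 - c))
print(round(multiplier, 4))                  # 2.1739 -- smaller than the bank multiplier 1/theta = 10
print(round(money_created(d, theta, c), 4))  # 217.3913, i.e. d * multiplier
```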
Categories: Macro, Money
The money multiplier
If the interest rate is determined by the interaction between money supply and money demand, how can we model this in a simple way?
We can look at it through defining the concept of Central Bank money, or high-powered money (also known as the monetary base).
Firstly think in terms of assets and liabilities. Retail banks (the type individual consumers bank with) will hold as assets the loans they make (eg mortgages, personal loans), which need to be repaid to them; they will hold bonds and shares etc; and they will keep a certain amount of liquid assets (like cash) as reserves in their account with the Central Bank. They need these reserves to have liquidity available to meet the daily needs of customers’ withdrawals: when customers want to withdraw money they want cash, not bonds, so the bank needs to have a ready supply of it.
Retail banks effectively ‘bank’ with the Central Bank and have their own accounts there. This is what happens when you buy something with a debit card: say you bank with Barclays and the shop banks with NatWest. When you pay £30 for something on a debit card, the electronic transaction deducts £30 from Barclays’ account at the Central Bank and credits NatWest’s account at the Central Bank with £30. At the same time, Barclays will deduct £30 from your current account balance, while NatWest will credit the shop’s balance by £30.
The liabilities the retail banks have are the customers’ deposits. When you use your debit card or withdraw cash from the ATM, the bank has to give you money (up to the sum of your current account balance) so that is a liability to it – something it owes.
Now the Central Bank will also have assets and liabilities. It holds foreign exchange reserves, bonds, shares, gold etc as its assets, and its liabilities are the retail banks’ balances (their reserves) in their accounts with the Central Bank, plus the money the Central Bank has ‘issued’ (ie coins and notes in circulation).
So high-powered money, or the monetary base, is equal to retail banks’ reserves plus currency in circulation.
With this in mind we can make a few definitions to build a simplified model:
$M^d$, the total demand for money in the economy, is made up partly of currency $CU$ (coins and notes floating around in circulation) and partly of current account deposits $D$. If we denote the proportion of total money made up of currency as $c$ then the demand for currency is $CU^d = cM^d$ and the demand for current account deposits is $D^d = (1-c)M^d$.
$H^d$, the demand for high-powered money (or the monetary base), is equal to the demand for currency, $CU^d = cM^d$, plus the demand for retail bank reserves, $R^d$. Consumers are depositing an amount $D$ in current accounts with the retail banks, but the banks won’t hold all of this as reserves; they operate a system of fractional reserve banking. This means that they know (or are banking on the fact) that customers won’t suddenly all demand to withdraw their deposits at once, so they only need to keep a proportion of their total deposit liabilities to customers in reserve to meet day to day withdrawals…and they will use the rest to lend out to other people in order to make interest on it. So if we denote that proportion as the ‘reserve ratio’, $\theta$, then the demand for retail bank reserves will be $R^d=\theta D^d = \theta (1-c)M^d$
So putting that together, we have two expressions:
Total demand for money in the economy: $M^d = CU^d + D^d = cM^d + (1-c)M^d$
Demand for Central Bank money: $H^d = CU^d + R^d = cM^d + \theta (1-c)M^d$
From the second equation we can say that $H^d = M^d (c + \theta (1-c))$ So $M^d = \frac{1}{(c + \theta (1-c))}H^d$
This expression shows how total demand for money relates to demand for high-powered money: total demand for money will always be higher than demand for high-powered money.
But remember that the total demand for money is determined by two things, the interest rate and overall incomes in the economy. If we assume that incomes are fixed in the short run then it will be the interest rate that determines the demand for money, when interest rates are high there will be a lower demand for both parts of high-powered money, currency and retail bank reserves (because customers have lower demand for current account deposits when interest rates are high, they are instead putting their money in illiquid forms of saving like bonds).
The money market will head into equilibrium, ie the demand and supply for money will come into equilibrium because the interest rate will adjust to get there, just like any other market comes into equilibrium due to adjustments in the price. When we get to the equilibrium we can say that $M^s = M^d =M$ and $H^s = H^d =H$ so $M = \frac{1}{(c + \theta (1-c))}H$
Here we have a money multiplier of $\frac{1}{(c + \theta (1-c))}$
The multiplier shows us how changes in high-powered money translate into changes in the overall amount of money in the economy. This is where the name high-powered comes from, the monetary base is a type of money that has magnified effects on the overall amount of money: if you increase the monetary base by £1, you get an increase in the overall amount of money in the economy of £$\frac{1}{(c + \theta (1-c))}$
As an example, suppose the total amount of high powered money is £1000000, the reserve ratio is 0.1 and the proportion of money which people hold as currency as being 0.2, then the total amount of money is $M = \frac{1}{(0.2 + 0.1(1-0.2))}1000000 = 3571428.57$. Our multiplier here is $\frac{1}{(0.2 + 0.1(1-0.2))} = 3.571429$
The Central Bank controls the amount of high-powered money in the economy. If it decides to increase the amount of high-powered money by £10000, then the multiplier implies that it will increase the total amount of money by $3.571(10000)=35714.29$. The new total amount of money in the economy is $M = \frac{1}{(0.2 + 0.1(1-0.2))}1010000 = 3607142.86$
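Plugging the post’s numbers into the multiplier formula confirms the figures above (a trivial check; the function name is mine):

```python
def money_multiplier(c, theta):
    return 1 / (c + theta * (1 - c))

m = money_multiplier(c=0.2, theta=0.1)
print(round(m, 6))              # 3.571429
print(round(m * 1_000_000, 2))  # 3571428.57 -- total money for £1000000 of high-powered money
print(round(m * 10_000, 2))     # 35714.29   -- extra money created by a £10000 injection
```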
Categories: Macro, Money | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 95, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5268366932868958, "perplexity": 1364.307541846692}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170884.63/warc/CC-MAIN-20170219104610-00124-ip-10-171-10-108.ec2.internal.warc.gz"} |
# Here Be Dragons: Characterization of ACS/WFC Scattered Light Anomalies
Abstract
We present a study characterizing scattered light anomalies that occur near the edges of the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) images. The study is based on all full-frame WFC raw images ever produced by ACS. Using the 2MASS catalog, we identified stars that cause two particular scattered light artifacts known as “dragon’s breath” and edge glow. These artifacts are caused by stars located in narrow bands outside the ACS/WFC field of view. We have completed this study for the ACS F606W and F814W filters. The results for both filters are similar when expressed in total fluence, or flux multiplied by exposure time. We provide a map of risky areas around the ACS chips and an upper limit of magnitudes to be concerned about. We will use these results to develop interactive tools that will aid the astronomical community in the proposal process for ACS/WFC.
## Introduction
ACS/WFC images can suffer from a number of optical and scattered light anomalies. Most of the optical anomalies that affect ACS have been well characterized. Hardware, software, and optical anomalies are discussed in ISR 2008-01. This is not the case for the scattered light anomalies known as “dragon’s breath” and edge glow. Dragon’s breath is caused by reflections being scattered back to the detector. There is a knife-edged mask in front of the CCD that scatters light back to the detector when its back side is illuminated by reflections from the CCD surface. These phenomena were discovered in early testing of ACS and were mitigated by sharpening the knife edges and coating them black. However, when point sources fall on the edge of the mask, scattering still occurs (Hartig et al.).
Figure 1a: Dragon’s Breath
Figure 1b: Edge Glow
Although ACS was designed with a requirement limiting the amount of energy that may be contained in an anomalous feature relative to the object producing it, this scattering exceeds that limit by an order of magnitude (Hartig et al.). These anomalies can have potentially severe effects on data but can be avoided when designing observations.
In this report, we identify the upper right and lower left corners of the detector as the most severely affected regions of ACS. A range of magnitudes has also been determined in which stars may cause scattering. This information has been worked into an interactive tool to aid the community in the proposal process.
## Method
### Creating Guide Star Files
In November 2013, MAST deposited all full-frame WFC raw images into directories sorted by cycle and anneal date on ACS servers. For each full-array broad-band ACS/WFC exposure, we generated a catalog of stars from the Hubble Guide Star Catalog II (GSC II) within 3’ of the pointing. This catalog contains guide star ID, position, several photographic magnitudes, image file location, gain, and exposure time.
### Identifying Dragon’s Breath
We inspected all full-frame ACS/WFC images with exposure times longer than 350 seconds. We created distortion corrected FITS mosaics with an overlay of 2MASS objects for each frame that contained anomalies. Using these images we matched the “offending stars” off the field of view (FOV) with the scattered light on the detector. We also measured the size of each artifact by recording its linear extent in pixels. In more crowded fields it was important that features such as glint or diffraction spikes were not mistakenly identified as dragon’s breath. Edge glow required less nuance to identify, and was typically accompanied by a central spike which made selecting the correct offending star simple.
Figure 2: 2MASS stars overlayed on ACS/WFC full frame image. The stars are shown as solid white circles with sizes corresponding to their brightness.
Figure 3: A portion of an ACS/WFC image with the 2MASS overlay. The offending star associated with the larger dragon’s breath feature is marked in pink. The star marked in yellow is an example of a star on the edge of the chip, which is causing glint.
Once we created this catalog containing offending stars, their coordinates, and the size of the scattering, they were matched with guide stars in the previously created guide star files. The physical coordinates recorded from the ACS/WFC images were converted into RA and Dec, then matched to guide star coordinates in the corresponding guide star files with a 5 arcsecond threshold to account for possible alignment offsets. Mismatches are possible, but cases with two very close stars were discarded to mitigate this possibility.
With the guide stars identified by name, we were able to use the information from the guide star catalogs to determine the total ACS filter magnitude for each star. The guide star catalog uses F and J magnitudes. Due to the lack of J magnitude information for many of the stars, estimates were made based only on the F magnitude. The photographic F filter has an effective wavelength around 6700Å. Magnitudes in F606W or F814W are similar (to within less than a magnitude). We also normalized the magnitude of each star, based on the exposure time in seconds, to 500 seconds:
$$mag500=F_{mag}-2.5\log_{10}\left(\frac{exptime}{500}\right)$$
The resulting mag500 is a measure of the total fluence, or flux multiplied by exposure time:
$$flux500=10^{-0.4\times mag500}=flux\times\left(\frac{exptime}{500}\right)$$
The greater the fluence, the more charge deposited onto the CCD, and the greater the scattered light artifact for a given star position relative to the ACS/WFC detectors.
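The two normalizations are easy to compute directly. In this sketch the function names are mine; the formulas are the two equations above.

```python
import math

def mag500(F_mag, exptime):
    """Magnitude normalised to a 500 s exposure: F - 2.5*log10(exptime/500)."""
    return F_mag - 2.5 * math.log10(exptime / 500)

def flux500(F_mag, exptime):
    """Total fluence in flux units: 10**(-0.4*mag500) = flux * (exptime/500)."""
    return 10 ** (-0.4 * mag500(F_mag, exptime))

# Doubling the exposure time makes a star ~0.75 mag "brighter" in fluence terms
# and exactly doubles its fluence:
assert mag500(15.0, 1000) < 15.0
assert abs(flux500(15.0, 1000) - 2 * 10 ** (-0.4 * 15.0)) < 1e-12
```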
## Results
### How Dragon’s Breath Can Manifest
Through the cataloging process we discovered that dragon’s breath can appear in different forms. The name “dragon’s breath” comes from classic examples as seen in Figure 4a, which look like fire shooting onto the frame. The scattering can also take on more irregular forms, as seen in Figure 4b. Location can affect the size and shape of the anomalies.
Figure 4a: Two cases of classic Dragon’s Breath. This HST “preview” image was generated by MAST and is available at https://archive.stsci.edu/missions/hst/previews/JBPK/JBPK06H9Q.jpg
Figure 4b: Elongated scattering on the upper left side stretches to the center of the detector. This HST “preview” image was generated by MAST and is available at https://archive.stsci.edu/missions/hst/previews/J96G/J96G07JXQ.jpg
By inspecting individual stars that caused scattering in multiple exposures, we determined that small changes in star location can affect the size and shape of scattering. Take the particularly egregious example of a star in the corner of the chip in Figure 5. These two instances of scatter are caused by the same star in slightly different positions with respect to the detector. Figure 6 shows another instance of corner scattering which changes with the position of the star. This location dependency is not limited to the corners and can occur anywhere around the detector.
Figure 5a: Corner Scattering. Frame j8de59xwq.
Figure 5b: Corner Scattering. Frame j8de59y0q.
Figure 6: Corner Scattering four observations: j8zq07zvq, zyq, zwq, and a1q.
We also inspected cases where observations were made in both filters during the same visit. Differences in dragon's breath between the two filters studied are due to the stars being different magnitudes in the different filters. In the two examples below, images from the same visit are compared and small differences in the size and shape of the dragon's breath are apparent. The sizes of the circles in the plots below are different in each filter due to the plotting, not necessarily the magnitudes.
Figure 7a: In the F606W image on the left the normalized magnitude is 13.7 with a 510s exposure time. In the F814W image on the right the normalized magnitude is 14.1 with a 350s exposure time. The scattering in the F814W image is slightly smaller with differences in structure.
Figure 7b: In the F606W image on the left the normalized magnitude is 15.0 with a 707s exposure time. In the F814W image on the right the normalized magnitude is 15.8 with a 357s exposure time. The scattering in the F814W image has a slightly larger spread.
## Anomaly Map
We plotted the locations of all scattered light anomalies in both F606W and F814W around a footprint of the detector. The anomaly map in figure 8 shows that the scattered light occurs in a very thin band in the upper right and lower left corners of the detector. This positioning could be due to the fact that the WFC detectors are a rhombus shape, or it could be related to the positioning of the knife-edge mask above the detector. There are two clear loci. The outer locus is made up of dragon’s breath whereas the inner locus is predominantly edge glow.
Figure 8: Positions relative to the ACS/WFC detectors of stars which caused dragon’s breath or edge glow.
## Magnitudes
The anomalies we identified were due to stars with corrected magnitudes between 10 and 20. A histogram of the magnitudes of offending stars, plotted along with the magnitudes of all guide stars, is shown in figure 9.
We also compared the linear extent of dragon's breath in pixels to the magnitude of the star causing it. Figure 10 shows that there is a correlation between magnitude and scatter length.
## Web Interface
Using the catalog we created two interactive plots which allow users to explore our results. Each plot is similar in appearance to the anomaly map in figure 8. Each point represents a star in or near an ACS/WFC observation and the black lines represent the ACS/WFC chips. Users can hover over each point and see the image in which it caused scattering. Above the image there is also information about the star, including the guide star name, the name of the image, the filter used, the filter magnitude, and the magnitude corrected to 500 seconds. There is also a slider which selects stars based on their exposure-time-corrected magnitudes; it allows the user to view stars within 1 magnitude of the selected magnitude. This is a great tool for exploring examples of scattered light appearing in multiple dithered observations and in multiple filters, and of how magnitude can determine the length of the scattering.
## Conclusions and Future Work
Our anomaly map clearly shows that small displacements ($$\sim$$1 arcsecond) can prevent potentially severe scattered light in your observations. The locations prone to severe scattered light are apparently filter-independent. Two clear loci of scattering are evident in figure 8. The STScI ACS instrument team is making available the interactive web interface described above for further exploration of these results. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.5573018789291382, "perplexity": 1864.1826196856694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998440.47/warc/CC-MAIN-20190617063049-20190617085049-00236.warc.gz"} |
# Finding the Interval and Radius of Convergence
• May 10th 2010, 03:20 PM
p75213
Finding the Interval and Radius of Convergence
The equation is: $\sum_{n=1}^{\infty} \frac{x^n}{n+1}$
When the endpoint $x = 1$ is checked, $\sum_{n=1}^{\infty} \frac{1^n}{n+1}$ is rewritten as $\sum \frac{1}{n} - 1$. Can somebody show me the validity of this?
• May 10th 2010, 03:34 PM
lilaziz1
It really doesn't matter, because $\sum_{n=1}^{\infty} \frac{1}{n+1}$ behaves like $\sum_{n=1}^{\infty} \frac{1}{n}$ by the limit comparison test.
• May 10th 2010, 04:38 PM
p75213
That's true. But I would still like to know.
• May 10th 2010, 05:08 PM
lilaziz1
I don't think it can be $\sum_{n=1}^{\infty} \left(\frac{1}{n} - 1\right)$ because if you make it into an improper fraction, you get $\sum_{n=1}^{\infty} \frac{1-n}{n}$, which diverges.
• May 10th 2010, 08:08 PM
boardguy67
Hi p75213...are you a Stargate fan? (if you get that, you are :))
Anyway, your problem is in the form of the ratio test for convergence, which basically lets you compare your series to a geometric series to determine its behavior. I think you'll find your proof here.
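To spell the argument out (a sketch of the standard computation, not the content of the linked notes):

```latex
\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|
  = \lim_{n\to\infty}\left|\frac{x^{n+1}}{n+2}\cdot\frac{n+1}{x^n}\right|
  = |x|\lim_{n\to\infty}\frac{n+1}{n+2}
  = |x|
```

So the series converges for $|x| < 1$ and the radius of convergence is $R = 1$. At $x = 1$ the series becomes $\sum_{n=1}^{\infty}\frac{1}{n+1} = \sum_{m=2}^{\infty}\frac{1}{m} = \left(\sum_{m=1}^{\infty}\frac{1}{m}\right) - 1$, which is the harmonic series minus its first term and therefore diverges; that index shift is exactly the "$\frac{1}{n} - 1$" rewriting asked about. At $x = -1$ the series converges by the alternating series test, so the interval of convergence is $[-1, 1)$.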
http://www.math.scar.utoronto.ca/cal...ture43-rev.pdf
Hope that helps,
Be well,
T | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401745796203613, "perplexity": 1316.1745641853202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891750.87/warc/CC-MAIN-20180123052242-20180123072242-00257.warc.gz"} |
#### Sample records for fullerene derivatives protect
1. Biochemical activity of fullerenes and related derivatives
Huczko, A.; Lange, H.; Calko, E.
1999-01-01
Astonishing scientific interest, embodied in over 15,000 research articles so far, has followed the discovery of fullerenes in 1985. From new superconductors to a rich electrochemistry and reaction chemistry, fullerene nanostructures continue to excite the scientific world, and new findings continue at a record pace. This review presents many examples of the biochemical activities of fullerenes and derivatives, e.g. cytotoxic activity, selective DNA cleavage, and antiviral activity against HIV. We also present some results of our testing which show that, despite its chemical and biochemical activity, fullerene matter does not present any health hazard directly related to skin irritation and allergic risks. (author)
2. Synthesis and radiation resistance of fullerenes and fullerene derivatives
Shilin, V. A., E-mail: shilin@pnpi.spb.ru; Lebedev, V. T.; Sedov, V. P.; Szhogina, A. A. [St. Petersburg Nuclear Physics Institute, National Research Centre “Kurchatov Institute” (Russian Federation)
2016-07-15
The parameters of an electric-arc facility for the synthesis of fullerenes and endohedral metallofullerenes are optimized. The resistance of C60 and C70 fullerenes and C60(OH)30 and C70(OH)30 fullerenols against neutron irradiation is studied. It is established that the radiation resistance of the fullerenes is higher than that of the fullerenols, but the radiation resistance of the Gd@C2n endometallofullerenes is lower than that of the corresponding Gd@C2n(OH)38 fullerenols. The radiation resistance of mixtures of Me@C2n(OH)38 (Me = Gd, Tb, Sc, Fe, and Pr) endometallofullerenes with C60(OH)30 is determined. The factors affecting the radiation resistance of the fullerenes and fullerenols are discussed.
3. Exohedral and skeletal rearrangements in the molecules of fullerene derivatives
Ignat' eva, Daria V; Ioffe, I N; Troyanov, Sergey I; Sidorov, Lev N [Department of Chemistry, M.V. Lomonosov Moscow State University, Moscow (Russian Federation)
2011-07-31
The data on the migration of monoatomic addends, perfluoroalkyl and more complex organic groups in the molecules of fullerene derivatives published mainly in the last decade are analyzed. Skeletal rearrangements of the carbon cage occurring during chemical reactions are considered.
4. Fullerenes
Ehrenreich, Henry
1994-01-01
Fullerenes or "buckyballs," a new carbon-based family of materials, have fascinated the scientific community for the past few years. These materials are likely to find applications ranging from lubricants to batteries to biological magic bullets, which will be of great importance in the science and technology of the next century. This carefully edited volume, the first to include Frans Spaepen as co-editor, summarizes our present understanding in a series of didactic articles, which take the reader from the fundamentals to the present cutting-edge research. A general overview is followed by chapters devoted to synthesis and characterization of fullerenes and their derivatives, the novel structural properties of buckyballs, tubes, and buckyonions, a theoretical and experimental view of electrons and phonons, and finally to the fascinating superconducting properties of these materials. Key features: presents a systematic overview of the entire field; discusses synthesis, characterization, structure, and superconducting p...
5. Fullerene derivatives as components for 'plastic' photovoltaic cells
Hummelen, J.C.; Knol, J.; Kadish, KM; Ruoff, RS
1998-01-01
Derivatives of [60]fullerene, mixed with conducting polymers to yield donor-acceptor bulk-heterojunction (beta-junction) materials, are useful in 'plastic' photovoltaic devices. In order to enhance the charge carrier mobilities in the two individual interpenetrating networks, one important goal of
6. Fullerene Derivatives as Components for ‘Plastic’ Photovoltaic Cells
Knol, Joop; Hummelen, Jan C.
1998-01-01
Derivatives of [60]fullerene, mixed with conducting polymers to yield donor-acceptor bulk-heterojunction (β-junction) materials, are useful in ‘plastic’ photovoltaic devices. In order to enhance the charge carrier mobilities in the two individual interpenetrating networks, one important goal of our
7. Competitive photometric enzyme immunoassay for fullerene C60 and its derivatives using a fullerene conjugated to horseradish peroxidase
Hendrickson, Olga D.; Smirnova, Natalya I.; Zherdev, Anatoly V.; Dzantiev, Boris B.; Sveshnikov, Peter G.
2016-01-01
The article describes a highly sensitive single-step microplate enzyme immunoassay of the ELISA type for fullerene C60 and its derivatives. Monoclonal anti-fullerene antibodies and a conjugate between fullerene and horseradish peroxidase were used as specific reagents. A direct competitive ELISA was carried out that was based on antibodies immobilized in the wells of a microtiter plate, a peroxidase-labeled antigen, and detection via the dye formed from 3,3′,5,5′-tetramethylbenzidine and hydrogen peroxide. Both pristine fullerene C60 and its water-soluble forms can be determined. The detection limits are 1.5 ng·mL-1 for fullerene C60, and between 0.1 and 1.3 ng·mL-1 for its derivatives. This ELISA format allows for an almost two-fold reduction of the time needed for the assay in comparison to the indirect scheme with labeled antibodies. (author)
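Concentration read-out in a competitive ELISA of this kind typically goes through a sigmoid (four-parameter logistic, 4PL) calibration curve: the signal falls as analyte concentration rises, and unknowns are read off by inverting the fitted curve. A minimal sketch; the parameter values below are illustrative, not taken from the paper:

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = zero-dose signal, d = saturating-dose
    signal, c = IC50, b = slope. In a competitive format the signal
    decreases as analyte concentration x increases."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def invert_four_pl(y, a, b, c, d):
    """Read an unknown concentration off the calibration curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Illustrative parameters: IC50 = 1.5 ng/mL, signal from 2.0 down to 0.1
a, b, c, d = 2.0, 1.0, 1.5, 0.1
signal = four_pl(1.5, a, b, c, d)                     # at the IC50
print(round(signal, 3))                               # 1.05 = (a + d) / 2
print(round(invert_four_pl(signal, a, b, c, d), 3))   # 1.5
```

In practice the four parameters are fitted to the standards (e.g. by least squares) before the inversion step.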
8. In-Silico Study Of Water Soluble C60-Fullerene Derivatives And Different Drug Targets
2015-08-01
Full Text Available Fullerene C60 is a unique carbon molecule that adopts a spherical shape. It has been shown that fullerene and some of its derivatives act on several disease targets. Fullerene itself is insoluble in water, which hinders its application in the medical field. In this study, a literature search was performed and all derivatives were collected. The fullerene-binding proteins previously reported in the literature were also retrieved from the Protein Data Bank. Docking studies were performed with the fullerene derivatives and their binding proteins. The selected proteins include a voltage-gated potassium channel, estrogenic 17beta-hydroxysteroid dehydrogenase, and a monoclonal anti-progesterone antibody. The binding affinity and binding free energy were computed for these protein-fullerene derivative complexes. The binding affinity and binding free energy calculations for the co-crystal ligands were also carried out. The results show good fitting of the fullerene derivatives in the active sites of different proteins, with better binding affinities and binding free energies. The present study gives detailed information about the binding mode of C60 derivatives. The findings will be helpful in fullerene-based drug discovery and will facilitate the efforts of fighting many diseases.
9. Fullerene derivatives as electron acceptors for organic photovoltaic cells.
Mi, Dongbo; Kim, Ji-Hoon; Kim, Hee Un; Xu, Fei; Hwang, Do-Hoon
2014-02-01
Energy is currently one of the most important problems humankind faces. Depletion of traditional energy sources such as coal and oil results in the need to develop new ways to create, transport, and store electricity. In this regard, the sun, which can be considered as a giant nuclear fusion reactor, represents the most powerful source of energy available in our solar system. For photovoltaic cells to gain widespread acceptance as a source of clean and renewable energy, the cost per watt of solar energy must be decreased. Organic photovoltaic cells, developed in the past two decades, have potential as alternatives to traditional inorganic semiconductor photovoltaic cells, which suffer from high environmental pollution and energy consumption during production. Organic photovoltaic cells are composed of a blended film of a conjugated-polymer donor and a soluble fullerene-derivative acceptor sandwiched between a poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate)-coated indium tin oxide positive electrode and a low-work-function metal negative electrode. Considerable research efforts aim at designing and synthesizing novel fullerene derivatives as electron acceptors with up-raised lowest unoccupied molecular orbital energy, better light-harvesting properties, higher electron mobility, and better miscibility with the polymer donor for improving the power conversion efficiency of the organic photovoltaic cells. In this paper, we systematically review novel fullerene acceptors synthesized through chemical modification for enhancing the photovoltaic performance by increasing open-circuit voltage, short-circuit current, and fill factor, which determine the performance of organic photovoltaic cells.
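The three device figures of merit named above combine into the power conversion efficiency as PCE = Voc × Jsc × FF / Pin, with Pin the incident power. A minimal sketch with illustrative (not measured) device numbers:

```python
def pce(v_oc, j_sc_mA_cm2, ff, p_in_mW_cm2=100.0):
    """Power conversion efficiency from open-circuit voltage (V),
    short-circuit current density (mA/cm^2), and fill factor.
    Pin defaults to the AM1.5G standard of 100 mW/cm^2."""
    return v_oc * j_sc_mA_cm2 * ff / p_in_mW_cm2

# Illustrative polymer:fullerene cell numbers (hypothetical values):
print(round(pce(0.60, 10.0, 0.65) * 100, 1))  # 3.9 (% efficiency)
```

Raising the acceptor's LUMO increases Voc, while better light harvesting and mobility raise Jsc and FF, which is why all three appear in the design goals listed above.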
10. Polythiophenes and fullerene derivatives based donor-acceptor system: topography by atomic force microscopy
Marcakova, M. L.; Repovsky, D.; Cik, G.; Velic, D.
2017-01-01
The goal of this work is to examine the surface of a polythiophene/fullerene film in order to understand its structure. In this work polythiophene is used as the electron donor and a fullerene derivative is used as the electron acceptor. Atomic force microscopy (AFM) is an ideal method to study surfaces and nanostructures. Surfaces of fullerene C60, the fullerene derivative PCBM, polythiophene P12, and a mixture of P12 and PCBM are characterized. In all samples the average roughness, the arithmetic mean of deviations from the surface height, is determined, concluding that P12 and PCBM mix together well and form a film with a specific topography. (authors)
11. In vivo biology and toxicology of fullerenes and their derivatives
Nielsen, Gunnar Damgård; Roursgaard, Martin; Jensen, Keld Alstrup
2008-01-01
Fullerenes represent a group of nanoparticles discovered in 1985. They are spherical molecules consisting entirely of carbon atoms (C(x)) to which side chains can be added, furnishing compounds with widely different properties. Fullerenes interact with biological systems, for example, by enzyme i...
12. Synthesis and Photophysical Properties of Novel Fullerene Derivatives as Model Compounds for Bulk-Heterojunction PV Cells
Hal, P.A. van; Langeveld-Voss, B.M.W.; Peeters, E.; Janssen, R.A.J.; Knol, J.; Hummelen, J.C.
2000-01-01
Covalent and well-defined oligomer-fullerene donor-acceptor molecular structures can serve as important model systems for plastic PV cells, based on interpenetrating networks of conjugated polymers and fullerene derivatives. Two series of [60]fullerene-oligomer dyads and triads were prepared and
13. Soluble fullerene derivatives : The effect of electronic structure on transistor performance and air stability
Ball, James M.; Bouwer, Ricardo K.M.; Kooistra, Floris B.; Frost, Jarvist M.; Qi, Yabing; Buchaca Domingo, Ester; Smith, Jeremy; de Leeuw, Dago M.; Hummelen, Jan C.; Nelson, Jenny; Kahn, Antoine; Stingelin, Natalie; Bradley, Donal D.C.; Anthopoulos, Thomas D.
2011-01-01
The family of soluble fullerene derivatives comprises a widely studied group of electron transporting molecules for use in organic electronic and optoelectronic devices. For electronic applications, electron transporting (n-channel) materials are required for implementation into organic
14. 2D-QSAR study of fullerene nanostructure derivatives as potent HIV-1 protease inhibitors
Barzegar, Abolfazl; Jafari Mousavi, Somaye; Hamidi, Hossein; Sadeghi, Mehdi
2017-09-01
The protease of human immunodeficiency virus 1 (HIV-PR) is an essential enzyme for antiviral treatments. Carbon nanostructures of fullerene derivatives have nanoscale dimensions, with a diameter comparable to the diameter of the active site of HIV-PR, which would in turn inhibit HIV. In this research, two-dimensional quantitative structure-activity relationships (2D-QSAR) of fullerene derivatives against HIV-PR activity were employed as a powerful tool for elucidating the relationships between structure and experimental observations. The QSAR study of 49 fullerene derivatives was performed by employing stepwise-MLR, GAPLS-MLR, and PCA-MLR models for variable (descriptor) selection and model construction. QSAR models were obtained with high ability to predict the activity of the fullerene derivatives against HIV-PR, with correlation coefficients (R2_training) of 0.942, 0.89, and 0.87, and R2_test values of 0.791, 0.67, and 0.674 for the stepwise-MLR, GAPLS-MLR, and PCA-MLR models, respectively. The leave-one-out cross-validated correlation coefficient (R2_CV) and Y-randomization methods confirmed the models' robustness. The descriptors indicated that HIV-PR inhibition depends on the van der Waals volumes, polarizability, bond order between two atoms, and electronegativities of the fullerene derivatives. The 2D-QSAR simulation, without needing the geometry of the receptor's active site, resulted in useful descriptors mainly denoting "C60 backbone-functional groups" and "C60 functional groups" properties. Both properties in fullerene refer to ligand fitness and improved van der Waals interactions with the HIV-PR active site. Therefore, the QSAR models can be used in the search for novel HIV-PR inhibitors based on fullerene derivatives.
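The leave-one-out cross-validated correlation coefficient mentioned above can be computed for any least-squares model with a short loop; the sketch below uses synthetic descriptors (standing in for quantities such as van der Waals volume and polarizability), not the paper's 49-compound data set:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated R^2 (often called Q^2 in QSAR) for
    ordinary least squares with an intercept: refit the model with each
    sample held out, accumulate the prediction error (PRESS)."""
    Xd = np.column_stack([np.ones(len(y)), X])  # add intercept column
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
        press += (y[i] - Xd[i] @ beta) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

# Synthetic example: activity depends linearly on two descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=30)
print(loo_q2(X, y) > 0.9)  # True: a genuine relationship survives LOO
```

A Q^2 well below the training R^2 is the usual sign of an overfitted QSAR model, which is why both are reported.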
15. Thermodynamics of association of water soluble fullerene derivatives
SONANKI KESHRI
2017-08-31
Aug 31, 2017 ... Entropic and enthalpic contributions to the association of solute molecules are calculated ... The association of fullerene in aqueous media is ... The main mechanism accounting for the stabilization of the ...
16. Preparation and tribology properties of water-soluble fullerene derivative nanoball
Guichang Jiang
2017-02-01
Water-soluble fullerene derivatives were synthesized via radical polymerization. They are completely soluble in water, yielding a clear brown solution. The products were characterized by FTIR, UV-Vis, 1H-NMR, 13C-NMR, GPC, TGA, and SEM. Four-ball tests show that the addition of a certain concentration of the fullerene derivatives to the base stock (a 2 wt.% triethanolamine aqueous solution) can effectively increase both the load-carrying capacity (PB value) and the resistance to wear. SEM observations confirm that the additive results in a reduced diameter of the wear scar and decreased wear.
17. Synthetic strategies for modifying dielectric properties and the electron mobility of fullerene derivatives
Jahani Bahnamiri, Fatemeh
2016-01-01
The goal of this PhD research project was to develop fullerene derivatives with enhanced dielectric properties for photovoltaic applications. Organic solar cells suffer from relatively low power conversion efficiency mainly due to charge recombination, which stems from the low dielectric constant of
18. Growth and Potential Damage of Human Bone-Derived Cells on Fresh and Aged Fullerene C60 Films
Jiri Vacik
2013-04-01
Fullerenes are nanoparticles composed of carbon atoms arranged in a spherical hollow cage-like structure. Numerous studies have evaluated the therapeutic potential of fullerene derivatives against oxidative stress-associated conditions, including the prevention or treatment of arthritis. On the other hand, fullerenes are not only able to quench, but also to generate harmful reactive oxygen species. The reactivity of fullerenes may change in time due to the oxidation and polymerization of fullerenes in an air atmosphere. In this study, we therefore tested the dependence between the age of fullerene films (from one week to one year) and the proliferation, viability and metabolic activity of human osteosarcoma cells (lines MG-63 and U-2 OS). We also monitored potential membrane and DNA damage and morphological changes of the cells. After seven days of cultivation, we did not observe any cytotoxic morphological changes, such as enlarged cells or cytosolic vacuole formation. Furthermore, there was no increased level of DNA damage. The increasing age of the fullerene films did not cause enhancement of cytotoxicity. On the contrary, it resulted in an improvement in the properties of these materials, which are more suitable for cell cultivation. Therefore, fullerene films could be considered as a promising material with potential use as a bioactive coating of cell carriers for bone tissue engineering.
19. Growth and potential damage of human bone-derived cells on fresh and aged fullerene c60 films.
Kopova, Ivana; Bacakova, Lucie; Lavrentiev, Vasily; Vacik, Jiri
2013-04-26
Fullerenes are nanoparticles composed of carbon atoms arranged in a spherical hollow cage-like structure. Numerous studies have evaluated the therapeutic potential of fullerene derivatives against oxidative stress-associated conditions, including the prevention or treatment of arthritis. On the other hand, fullerenes are not only able to quench, but also to generate harmful reactive oxygen species. The reactivity of fullerenes may change in time due to the oxidation and polymerization of fullerenes in an air atmosphere. In this study, we therefore tested the dependence between the age of fullerene films (from one week to one year) and the proliferation, viability and metabolic activity of human osteosarcoma cells (lines MG-63 and U-2 OS). We also monitored potential membrane and DNA damage and morphological changes of the cells. After seven days of cultivation, we did not observe any cytotoxic morphological changes, such as enlarged cells or cytosolic vacuole formation. Furthermore, there was no increased level of DNA damage. The increasing age of the fullerene films did not cause enhancement of cytotoxicity. On the contrary, it resulted in an improvement in the properties of these materials, which are more suitable for cell cultivation. Therefore, fullerene films could be considered as a promising material with potential use as a bioactive coating of cell carriers for bone tissue engineering.
20. Discriminating between Different Heavy Metal Ions with Fullerene-Derived Nanoparticles
Erica Ciotta
2018-05-01
A novel type of graphene-like nanoparticle, synthesized by oxidation and unfolding of C60 buckminsterfullerene, showed multiple and reproducible sensitivity to Cu2+, Pb2+, Cd2+, and As(III) through different degrees of fluorescence quenching or, in the case of Cd2+, through a remarkable fluorescence enhancement. Most importantly, only for Cu2+ and Pb2+ did the fluorescence intensity variations come with distinct modifications of the optical absorption spectrum. A time-resolved fluorescence study confirmed that the common origin of these diverse behaviors lies in complexation of the metal ions by fullerene-derived carbon layers, even though further studies are required for a complete explanation of the involved processes. Nonetheless, the different response of fluorescence and optical absorbance towards distinct cationic species makes it possible to discriminate between the presence of Cu2+, Pb2+, Cd2+, and As(III) through two simple optical measurements. To this end, the use of a three-dimensional calibration plot is discussed. This property makes fullerene-derived nanoparticles a promising material in view of the implementation of a selective, colorimetric/fluorescent detection system.
1. Tuning Fullerene Intercalation in a Poly (thiophene) derivative by Controlling the Polymer Degree of Self-Organisation
Paternò, G. M.; Skoda, M. W. A.; Dalgliesh, Robert; Cacialli, F.; Sakai, V. García
2016-10-01
Controlling the nanoscale arrangement in polymer-fullerene organic solar cells is of paramount importance to boost the performance of such a promising class of photovoltaic diodes. In this work, we use a pseudo-bilayer system made of poly(2,5-bis(3-hexadecylthiophen-2-yl)thieno[3,2-b]thiophene) (PBTTT) and [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) to acquire a more complete understanding of the diffusion and intercalation of the fullerene derivative within the polymer layer. By exploiting morphological and structural characterisation techniques, we observe that if we increase the film solidification time the polymer develops a higher crystalline order and, as a result, does not allow fullerene molecules to intercalate between the polymer side-chains. Gaining insight into the detailed fullerene intercalation mechanism is important for the development of organic photovoltaic diodes (PVDs).
2. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives
Sikorska, Celina; Puzyn, Tomasz
2015-11-01
The capability of reproducing the open circuit voltages (Voc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltages (Voc), whereas the use of the B3LYP/6-31G(d) approach has been proven to assure highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). In order to express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), and that they significantly overestimate the energy of the highest occupied molecular orbital (E_HOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications.
3. The performance of selected semi-empirical and DFT methods in studying C60 fullerene derivatives
Sikorska, Celina; Puzyn, Tomasz
2015-01-01
The capability of reproducing the open circuit voltages (Voc) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open circuit voltages (Voc), whereas the use of the B3LYP/6-31G(d) approach has been proven to assure highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). In order to express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), and that they significantly overestimate the energy of the highest occupied molecular orbital (E_HOMO). The use of the B3LYP functional, however, is recommended for studying methanofullerenes, which closely resemble the structure of PCBM, and for their modifications. (paper)
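The PCA step used in these records reduces a molecules-by-descriptors matrix to a few principal components so that structurally similar derivatives cluster together. A minimal numpy sketch on a synthetic descriptor table (a stand-in for the 19 derivatives, not the papers' actual descriptors):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project a (molecules x descriptors) matrix onto its first
    principal components via SVD of the mean-centered data. Returns
    the scores and the explained-variance ratios of all components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, (S ** 2) / np.sum(S ** 2)

# Synthetic table: 19 "fullerene derivatives" x 4 molecular descriptors
rng = np.random.default_rng(1)
X = rng.normal(size=(19, 4))
scores, explained = pca_scores(X)
print(scores.shape)                         # (19, 2)
print(bool(explained[0] >= explained[1]))   # True: PCs ordered by variance
```

Plotting the two score columns against each other gives the similarity map the abstracts describe.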
4. Novel Terthiophene-Substituted Fullerene Derivatives as Easily Accessible Acceptor Molecules for Bulk-Heterojunction Polymer Solar Cells
Filippo Nisic
2014-01-01
Five fulleropyrrolidines and methanofullerenes, bearing one or two terthiophene moieties, have been prepared in a convenient way and well characterized. These novel fullerene derivatives are characterized by good solubility and by better harvesting of the solar radiation with respect to traditional PCBM. In addition, they have a relatively high LUMO level and a low band gap that can be easily tuned by an adequate design of the link between the fullerene and the terthiophene. Preliminary results show that they are potential acceptors for the creation of efficient bulk-heterojunction solar cells based on donor polymers containing thiophene units.
5. Highly selective reactions of C(60)Cl(6) with thiols for the synthesis of functionalized [60]fullerene derivatives
Khakina, Ekaterina A; Yurkova, Anastasiya A; Peregudov, Alexander S; Troyanov, Sergey I; Trush, Vyacheslav V; Vovk, Andrey I; Mumyatov, Alexander V; Martynenko, Vyacheslav M; Balzarini, Jan; Troshin, Pavel A
2012-01-01
Chlorofullerene C(60)Cl(6) undergoes highly selective reactions with thiols forming compounds C(60)[SR](5)H with high yields. These reactions open up straightforward synthetic routes to many functionalized fullerene derivatives, e.g. water-soluble compounds showing interesting biological activities.
6. The Activity of [60]Fullerene Derivatives Bearing Amine and Carboxylic Solubilizing Groups against Escherichia coli: A Comparative Study
Dmitry G. Deryabin
2014-01-01
We report a comparative investigation of the antibacterial activity of two water-soluble fullerene derivatives bearing protonated amine (AF) and deprotonated carboxylic (CF) groups appended to the fullerene cage via organic linkers. The negatively charged fullerene derivative CF showed no tendency to bind to the bacterial cells and, consequently, no significant antibacterial activity. In contrast, the compound AF, loaded with cationic groups, showed strong and partially irreversible binding to the negatively charged Escherichia coli K12 TG1 cells and to human erythrocytes, which also possess a negative zeta potential. Adsorption of AF on the bacterial surface was visualized by atomic force microscopy, revealing the formation of specific clusters (AF aggregates) surrounding the bacterial cell. Incubation of E. coli K12 TG1 with AF led to a dose-dependent bactericidal effect with LD50 = 79.1 µM. The presence of human erythrocytes in the test medium decreased the AF antibacterial activity. Thus we reveal that the water-soluble cationic fullerene derivative AF possesses promising antibacterial activity, which might be utilized in the development of novel types of chemical disinfectants.
7. The topology of fullerenes
Schwerdtfeger, Peter; Wirz, Lukas; Avery, James Emil
2014-01-01
Fullerenes are carbon molecules that form polyhedral cages. Their bond structures are exactly the planar cubic graphs that have only pentagon and hexagon faces. Strikingly, a number of chemical properties of a fullerene can be derived from its graph structure. A rich mathematics of cubic planar graphs ... In this paper, we present a general overview of recent topological and graph theoretical developments in fullerene research over the past two decades, describing both solved and open problems.
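The pentagon/hexagon constraint on fullerene graphs has a classic consequence worth spelling out: Euler's formula forces every fullerene, regardless of size, to contain exactly 12 pentagonal faces. A quick numeric check:

```python
def pentagon_count(n_vertices):
    """For a cubic planar graph with only pentagonal and hexagonal faces:
    E = 3V/2 (every vertex has degree 3), F = 2 - V + E (Euler's formula),
    and counting edge incidences gives 5p + 6h = 2E with p + h = F.
    Eliminating h yields p = 6F - 2E, which is always 12."""
    V = n_vertices
    E = 3 * V // 2
    F = 2 - V + E
    return 6 * F - 2 * E

print(pentagon_count(60))   # 12 (C60: 12 pentagons, 20 hexagons)
print(pentagon_count(70))   # 12 (C70, and every other fullerene)
```

This is one of the simplest examples of a chemical property (the fixed pentagon count) read straight off the graph structure.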
8. Nanostructured diamine-fullerene derivatives: computational density functional theory study and experimental evidence for their formation via gas-phase functionalization.
Contreras-Torres, Flavio F; Basiuk, Elena V; Basiuk, Vladimir A; Meza-Laguna, Víctor; Gromovoy, Taras Yu
2012-02-16
Nanostructured derivatives of fullerene C(60) are used in emerging applications of composite matrices, including protective and decorative coatings, superadsorbent materials, thin films, lightweight high-strength fiber-reinforced materials, etc. In this study, quantum chemical calculations and experimental studies were performed to analyze the diamine-fullerene derivatives prepared by the gas-phase, solvent-free functionalization technique. In particular, the aliphatic 1,8-diaminooctane and the aromatic 1,5-diaminonaphthalene, diamines that are volatile in vacuum, were studied. We addressed two alternative mechanisms of the amination reaction, via polyaddition and cross-linking of C(60) with diamines, using the pure GGA BLYP, PW91, and PBE functionals; further validation calculations were performed using the semiempirical dispersion GGA B97-D functional, which contains parameters that have been specially adjusted to give a more realistic treatment of dispersion contributions. In addition, we looked for experimental evidence of the covalent functionalization by using laser desorption/ionization time-of-flight mass spectrometry, thermogravimetric analysis, and atomic force microscopy.
9. MAPLE prepared heterostructures with oligoazomethine: Fullerene derivative mixed layer for photovoltaic applications
Stanculescu, A.; Rasoga, O.; Socol, M.; Vacareanu, L.; Grigoras, M.; Socol, G.; Stanculescu, F.; Breazu, C.; Matei, E.; Preda, N.; Girtan, M.
2017-09-01
Mixed layers of azomethine oligomers containing 2,5-diamino-3,4-dicyanothiophene as the central unit and triphenylamine (LV5) or carbazole (LV4) at both ends as donors, and the fullerene derivative [6,6]-phenyl-C61 butyric acid butyl ester ([C60]PCB-C4) as acceptor, have been prepared by Matrix Assisted Pulsed Laser Evaporation (MAPLE) on glass/ITO and Si substrates. The effect of the weight ratio between donor and acceptor (1:1; 1:2) and of the solvent type (chloroform, dimethylsulphoxide) on the optical (UV-vis transmission/absorption, photoluminescence) and morphological properties of the LV4 (LV5):[C60]PCB-C4 mixed layers has been evidenced. Dark and under-illumination I-V characteristics of the heterostructures realized with these mixed layers sandwiched between ITO and Al electrodes have revealed solar-cell behavior for the heterostructures prepared with both LV4 and LV5 using chloroform as the matrix solvent. The solar cell structure realized with oligomer LV5, glass/ITO/LV5:[C60]PCB-C4 (1:1), has shown the best parameters.
10. Fullerene and oxidative stress
M. A. Orlova
2012-01-01
The superfamily of fullerene derivatives attracts serious attention as antiviral and anticancer agents and as drug-delivery carriers. A large number of such fullerene C60 derivatives have been obtained to date. However, there is an obvious deficit of information about the causes and mechanisms of the immediate and long-term consequences of their effects in vivo, which is a true obstacle on the way to their practical medical use. First, this concerns their impact on the regulation of proliferation, apoptosis, and necrosis. The type of fullerene nanoparticle functionalization, their size, and surface nanopathology are of great importance for promoting either cytoprotective or cytotoxic effects. One of the main effects of fullerenes on living systems is the induction of reactive oxygen species (ROS) formation. This lecture provides a modern concept analysis of the effects of fullerenes on ROS formation and on the modulation of proliferation and apoptosis in normal and tumor cells.
11. Quasi 2D Mesoporous Carbon Microbelts Derived from Fullerene Crystals as an Electrode Material for Electrochemical Supercapacitors.
Tang, Qin; Bairi, Partha; Shrestha, Rekha Goswami; Hill, Jonathan P; Ariga, Katsuhiko; Zeng, Haibo; Ji, Qingmin; Shrestha, Lok Kumar
2017-12-27
Fullerene C60 microbelts were fabricated using the liquid-liquid interfacial precipitation method and converted into quasi-2D mesoporous carbon microbelts by heat treatment at elevated temperatures of 900 and 2000 °C. The carbon microbelts obtained by heat treatment of the fullerene C60 microbelts at 900 °C showed excellent electrochemical supercapacitive performance, exhibiting high specific capacitances of ca. 360 F g-1 (at 5 mV s-1) and 290 F g-1 (at 1 A g-1) because of the enhanced surface area and the robust mesoporous framework structure. Additionally, the heat-treated carbon microbelts showed good rate performance, retaining 49% of the capacitance at a high current density of 10 A g-1. The carbon belts exhibit superior cyclic stability: no capacity loss was observed even after 10 000 charge/discharge cycles. These results demonstrate that quasi-2D mesoporous carbon microbelts derived from a π-electron-rich carbon source, fullerene C60 crystals, could be used as a new candidate material for electrochemical supercapacitor applications.
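The gravimetric capacitances quoted in this entry follow from the standard galvanostatic discharge relation C = I·Δt/(m·ΔV). A minimal sketch with illustrative numbers (the discharge time and electrode mass below are assumptions chosen to reproduce the reported ~290 F g-1, not data from the paper):

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Gravimetric capacitance C = I * dt / (m * dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Hypothetical discharge: 1 mA for 290 s over a 1 V window with 1 mg of
# active material corresponds to 1 A/g and ~290 F/g.
c = specific_capacitance(current_a=1e-3, discharge_time_s=290.0,
                         mass_g=1e-3, voltage_window_v=1.0)
print(round(c))  # 290
```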
12. A zeta potential value determines the aggregate's size of penta-substituted [60]fullerene derivatives in aqueous suspension whereas positive charge is required for toxicity against bacterial cells.
Deryabin, Dmitry G; Efremova, Ludmila V; Vasilchenko, Alexey S; Saidakova, Evgeniya V; Sizova, Elena A; Troshin, Pavel A; Zhilenkov, Alexander V; Khakina, Ekaterina A; Khakina, Ekaterina E
2015-08-08
The cause-effect relationships between the physicochemical properties of amphiphilic [60]fullerene derivatives and their toxicity against bacterial cells have not yet been clarified. In this study, we report how differences in the chemical structure of the organic addends in 10 originally synthesized penta-substituted [60]fullerene derivatives modulate their zeta potential and aggregate size in salt-free and salt-added aqueous suspensions, as well as how these physicochemical characteristics affect the bioenergetics of freshwater Escherichia coli and marine Photobacterium phosphoreum bacteria. Dynamic light scattering, laser Doppler micro-electrophoresis, agarose gel electrophoresis, atomic force microscopy, and a bioluminescence inhibition assay were used to characterize the fullerene aggregation behavior in aqueous solution and the interaction with the bacterial cell surface, following zeta potential changes and toxic effects. Dynamic light scattering indicated the formation of self-assembled [60]fullerene aggregates in aqueous suspensions. Measurement of the zeta potential of the particles revealed that they have different surface charges. The relationship between these physicochemical characteristics was presented as an exponential regression that correctly described the dependence of the aggregate size of the penta-substituted [60]fullerene derivatives in salt-free aqueous suspension on the zeta potential value. The prevalence of DLVO-related effects was shown in salt-added aqueous suspension, which decreased the zeta potential values and affected the aggregation of the [60]fullerene derivatives differently for individual compounds. A bioluminescence inhibition assay demonstrated that the toxic effect of the [60]fullerene derivatives against E. coli cells was strictly determined by a positive zeta potential value, being weakened against P. phosphoreum cells in an aquatic system of high salinity. Atomic force microscopy data suggested that the
13. Charge-associated effects of fullerene derivatives on microbialstructural integrity and central metabolism
Tang, Yinjie J.; Ashcroft, Jared M.; Chen, Ding; Min, Guangwei; Kim, Chul; Murkhejee, Bipasha; Larabell, Carolyn; Keasling, Jay D.; Chen,Fanqing Frank
2007-01-23
The effects of four types of fullerene compounds (C60, C60-OH, C60-COOH, C60-NH2) were examined on two model microorganisms (Escherichia coli W3110 and Shewanella oneidensis MR-1). Positively charged C60-NH2 at concentrations as low as 10 mg/L inhibited growth and reduced substrate uptake for both microorganisms. Scanning electron microscopy (SEM) revealed damage to cellular structures. Neutrally charged C60 and C60-OH had mild negative effects on S. oneidensis MR-1, whereas the negatively charged C60-COOH did not affect either microorganism's growth. The effect of fullerene compounds on global metabolism was further investigated using [3-13C]L-lactate isotopic labeling, which tracks perturbations to metabolic reaction rates in bacteria by examining the change in the isotopic labeling pattern in the resulting metabolites (often amino acids) [1-3]. The 13C isotopomer analysis from all fullerene-exposed cultures revealed no significant differences in isotopomer distributions from unstressed cells. This result indicates that microbial central metabolism is robust to environmental stress inflicted by fullerene nanoparticles. In addition, although C60-NH2 compounds caused mechanical stress on the cell wall or membrane, both S. oneidensis MR-1 and E. coli W3110 can efficiently alleviate such stress by cell aggregation and precipitation of the toxic nanoparticles. The results presented here favor the hypothesis that fullerenes cause more membrane stress [4-6] than perturbation to energy metabolism [7].
14. Radiological protection optimization using derivatives
Freitas Acosta Perez, C. de; Sordi, G.M.A.A.
2006-01-01
The aim of this paper is to provide a different approach to the integral cost-benefit and extended cost-benefit analyses used in decision-aiding techniques. In ICRP Publication 55 the annual protection cost is envisaged as a set of points, each representing an option, linked by straight lines. The detriment cost function is considered a linear function whose angular coefficient is determined by the alpha value. In this paper the uranium-mine example of ICRP Publication 55 is used, but a potential curve is introduced both in the integral cost-benefit analysis and in the extended cost-benefit analysis, to which the individual dose distribution attribute is added. The result is obtained using derivatives. The detriment cost Y is not necessary because the alpha value is known: the derivative dY/dS is the alpha value itself, so attention is directed to the derivative -dX/dS at the points that, together with the alpha value, identify the optimum option. The results make clear that the prevailing factor in selecting the optimum option is the imputed alpha value, and that a single alpha value, as suggested now, probably has little efficiency in the optimization process. By obtaining a curve for the alpha value and using the derivative technique introduced in this paper, the analytical solution becomes more convenient and reliable than the one used now. (authors)
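The derivative condition described in this entry can be illustrated with a power-law protection-cost curve X(S) = k/S^a: at the optimum, the marginal protection cost -dX/dS equals the marginal detriment cost, the alpha value. A toy sketch with assumed k, a, and alpha (purely illustrative, not the paper's values):

```python
def optimum_collective_dose(k, a, alpha):
    """Solve -dX/dS = alpha for the protection cost X(S) = k / S**a.

    -dX/dS = a*k / S**(a+1), so the optimum is S* = (a*k/alpha) ** (1/(a+1)).
    """
    return (a * k / alpha) ** (1.0 / (a + 1))

# Toy numbers: X = 4/S (k=4, a=1) and alpha = 1 give S* = 2, the dose at
# which spending one more cost unit on protection saves exactly one unit
# of detriment cost.
print(optimum_collective_dose(4.0, 1.0, 1.0))  # 2.0
```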
15. The performance of selected semi-empirical and DFT methods in studying C₆₀ fullerene derivatives.
Sikorska, Celina; Puzyn, Tomasz
2015-11-13
The capability of reproducing the open-circuit voltages (V(oc)) of 15 representative C60 fullerene derivatives was tested using selected quantum mechanical methods (B3LYP, PM6, and PM7) together with two one-electron basis sets. Certain theoretical treatments (e.g. PM6) were found to be satisfactory for preliminary estimates of the open-circuit voltage (V(oc)), whereas the B3LYP/6-31G(d) approach has been proven to assure highly accurate results. We also examined the structural similarity of 19 fullerene derivatives by employing principal component analysis (PCA). To express the structural features of the studied compounds we used molecular descriptors calculated with semi-empirical (PM6 and PM7) and density functional (B3LYP/6-31G(d)) methods separately. In performing PCA, we noticed that the semi-empirical methods (i.e. PM6 and PM7) seem satisfactory for molecules in which one can distinguish the aromatic and the aliphatic parts in the cyclopropane ring of PCBM (phenyl-C61-butyric acid methyl ester), and that they significantly overestimate the energy of the highest occupied molecular orbital (E(HOMO)). The use of the B3LYP functional, however, is recommended for studying methanofullerenes that closely resemble the structure of PCBM, and for their modifications.
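PCA, as used in this entry, rotates a matrix of correlated molecular descriptors into orthogonal components ranked by variance. A stdlib-only sketch for the two-descriptor case, using the closed-form eigenvalues of the 2x2 covariance matrix (the descriptor values below are made up, not the paper's PM6/B3LYP descriptors):

```python
import math

def pca_2d_variances(xs, ys):
    """Eigenvalues (PC variances) of the 2x2 sample covariance of (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(max(tr * tr / 4 - det, 0.0))
    return tr / 2 + root, tr / 2 - root  # PC1 variance >= PC2 variance

# Toy E_HOMO-like and E_LUMO-like descriptors (eV) for 5 molecules:
e1 = [-6.1, -6.0, -5.9, -6.2, -5.8]
e2 = [-3.7, -3.6, -3.5, -3.8, -3.4]
v1, v2 = pca_2d_variances(e1, e2)
# The two toy descriptors are perfectly correlated, so essentially all
# variance falls on PC1 and PC2 carries none.
```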
16. Self assembly of amphiphilic C60 fullerene derivatives into nanoscale supramolecular structures
Casscells S Ward
2007-08-01
Abstract Background The amphiphilic fullerene monomer (AF-1) consists of a "buckyball" cage to which a Newkome-like dendrimer unit and five lipophilic C12 chains, positioned octahedrally to the dendrimer unit, are attached. In this study, we report a novel fullerene-based liposome, termed a 'buckysome', that is water soluble and forms stable spherical nanometer-sized vesicles. Cryogenic electron microscopy (cryo-EM), transmission electron microscopy (TEM), and dynamic light scattering (DLS) studies were used to characterize the different supramolecular structures readily formed from the fullerene monomers under varying pH, aqueous solvents, and preparative conditions. Results Electron microscopy results indicate the formation of bilayer membranes with a width of ~6.5 nm, consistent with previously reported molecular dynamics simulations. Cryo-EM indicates the formation of large (400 nm diameter) multilamellar, liposome-like vesicles and unilamellar vesicles in the size range of 50–150 nm diameter. In addition, complex networks of cylindrical, tube-like aggregates with varying lengths and packing densities were observed. Under controlled experimental conditions, high concentrations of spherical vesicles could be formed. In vitro results suggest that these supramolecular structures impose little to no toxicity. Cytotoxicity of 10–200 μM buckysomes was assessed in various cell lines. Ongoing studies are aimed at understanding cellular internalization of these nanoparticle aggregates. Conclusion In this study, we have designed a core platform based on a novel amphiphilic fullerene nanostructure, which readily assembles into supramolecular structures. This delivery vector might provide promising features such as ease of preparation, long-term stability, and controlled release.
17. Fullerene derivatives and fullerene superconductors
Wang, H.H.; Schlueter, J.A.; Cooper, A.C.
1993-01-01
A series of 1:1 C60 cycloaddition adducts, C60A (A = anthracene, butadiene, cyclopentadiene, and methylcyclopentadiene), has been synthesized. The products are cleanly separated and characterized by TGA, 1H-NMR, IR, and mass spectrometry. Among these adducts, C60(methylcyclopentadiene) showed the highest thermal stability and was doped with three equivalents of rubidium. The resulting Rb3C60(MeCp) is a semiconductor but can be thermally converted to the superconductor Rb3C60 through a retro-Diels-Alder reaction. A one-step doping process to prepare Rb3C60 crystals has been developed; the optimal doping condition occurs at ~300 °C. High superconducting shielding fractions between 60 and 90% and sharp transition widths (ΔT(10-90) between 4 and 0.7 K) were measured for these samples.
18. Fullerene and apoptosis
M. A. Orlova
2013-01-01
The superfamily of fullerene derivatives attracts serious attention as antiviral and anticancer agents and as drug-delivery carriers. A large number of such fullerene C60 derivatives have been obtained to date. However, there is an obvious deficit of information about the causes and mechanisms of the immediate and long-term consequences of their effects in vivo, which is a true obstacle on the way to their practical medical use. First, this concerns their impact on the regulation of proliferation, apoptosis, and necrosis. The type of fullerene nanoparticle functionalization, their size, and surface nanopathology are of great importance for promoting either cytoprotective or cytotoxic effects. This lecture provides a modern concept analysis of the effects of fullerenes on the apoptosis pathway in normal and tumor cells.
19. Fullerenes and disk-fullerenes
Deza, M; Dutour Sikirić, M; Shtogrin, M I
2013-01-01
A geometric fullerene, or simply a fullerene, is the surface of a simple closed convex 3-dimensional polyhedron with only 5- and 6-gonal faces. Fullerenes are geometric models for chemical fullerenes, which form an important class of organic molecules. These molecules have been studied intensively in chemistry, physics, crystallography, and so on, and their study has led to the appearance of a vast literature on fullerenes in mathematical chemistry and combinatorial and applied geometry. In particular, several generalizations of the notion of a fullerene have been given, aiming at various applications. Here a new generalization of this notion is proposed: an n-disk-fullerene. It is obtained from the surface of a closed convex 3-dimensional polyhedron which has one n-gonal face and all other faces 5- and 6-gonal, by removing the n-gonal face. Only 5- and 6-disk-fullerenes correspond to geometric fullerenes. The notion of a geometric fullerene is therefore generalized from spheres to compact simply connected two-dimensional manifolds with boundary. A two-dimensional surface is said to be unshrinkable if it does not contain belts, that is, simple cycles consisting of 6-gons each of which has two neighbours adjacent at a pair of opposite edges. Shrinkability of fullerenes and n-disk-fullerenes is investigated. Bibliography: 87 titles
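The combinatorics behind the geometric fullerenes of this entry follow from Euler's formula V - E + F = 2 for a 3-regular polyhedron with only 5- and 6-gonal faces: every classical fullerene C_n has exactly 12 pentagons and n/2 - 10 hexagons. A short sketch verifying the bookkeeping:

```python
def fullerene_face_counts(n):
    """Pentagon/hexagon counts of a C_n fullerene (3-regular, faces of size 5, 6).

    Euler: V - E + F = 2 with V = n and E = 3n/2 gives F = n/2 + 2;
    combining 5p + 6h = 2E with p + h = F forces p = 12, h = n/2 - 10.
    (Such polyhedra exist for every even n >= 20 except n = 22.)
    """
    if n < 20 or n % 2:
        raise ValueError("fullerenes need an even vertex count n >= 20")
    edges = 3 * n // 2
    faces = n // 2 + 2
    pentagons = 12
    hexagons = faces - pentagons
    assert 5 * pentagons + 6 * hexagons == 2 * edges  # each edge borders 2 faces
    return pentagons, hexagons

print(fullerene_face_counts(60))  # (12, 20)
```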
1. Drawing a different picture with pencil lead as matrix-assisted laser desorption/ionization matrix for fullerene derivatives.
Nye, Leanne C; Hungerbühler, Hartmut; Drewello, Thomas
2018-02-01
Inspired by reports on the use of pencil lead as a matrix-assisted laser desorption/ionization (MALDI) matrix, paving the way towards matrix-free MALDI, the present investigation evaluates its use with organic fullerene derivatives. Currently, this class of compounds is best analysed using the electron-transfer matrix trans-2-[3-(4-tert-butylphenyl)-2-methyl-2-propenylidene]malononitrile (DCTB), which was employed as the standard here. The suitability of pencil lead was additionally compared to direct (i.e. no matrix) laser desorption/ionization mass spectrometry. DCTB was identified as by far the gentler method, producing spectra with abundant molecular-ion signals and much reduced fragmentation. Analytically, pencil lead was found to be ineffective as a matrix; however, it appears to be an extremely easy and inexpensive means of producing sodium and potassium adducts.
2. Program Fullerene
Wirz, Lukas; Peter, Schwerdtfeger,; Avery, James Emil
2013-01-01
Fullerene (version 4.4) is a general-purpose open-source program that can generate any fullerene isomer, perform topological and graph-theoretical analysis, and calculate a number of physical and chemical properties. The program creates symmetric planar drawings of the fullerene graph, an…-Fowler, and Brinkmann-Fowler vertex insertions. The program is written in standard Fortran and C++, and can easily be installed in a Linux or UNIX environment.
3. Fullerene-catalyzed reduction of azo derivatives in water under UV irradiation
Guo, Yong; Li, Wengang; Yan, Jingjing; Moosa, Basem; Amad, Maan H.; Werth, Charles; Khashab, Niveen M.
2012-01-01
Metal-free fullerene (C60) was found to be an effective catalyst for the reduction of azo groups in basic aqueous solution under UV irradiation in the presence of NaBH4. NaBH4 by itself is not sufficient to reduce the azo dyes without the assistance of a metal catalyst such as Pd or Ag. Experimental and theoretical results suggest that C60 catalyzes this reaction by using its vacant orbital to accept the electron in the bonding orbital of the azo dyes, which leads to activation of the N=N bond. UV irradiation increases the ability of C60 to interact with electron-donor moieties in azo dyes. Filling a vacancy: experimental and theoretical methods have been combined to show that C60-catalyzed reductions of azo compounds form aromatic amines under UV irradiation (see scheme). The results show that C60 acts as an electron acceptor to catalyze the reduction of azo compounds, and the role of UV irradiation is to increase the ability of C60 to interact with electron-donor moieties in azo compounds. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
4. PDI Derivative through Fine-Tuning Molecular Structure for Fullerene-Free Organic Solar Cells
Sun, Hua
2017-08-10
A perylenediimide (PDI)-based small-molecule (SM) acceptor with both an extended π-conjugation and a three-dimensional structure concurrently is critical for achieving high-performance PDI-based fullerene-free organic solar cells (OSCs). In this work, we designed and synthesized a novel PDI-based SM acceptor possessing both characteristics by fusing PDI units with a spiro core of 4,4'-spirobi[cyclopenta[2,1-b;3,4-b']dithiophene] (SCPDT) through the -position of the thiophene rings. Compared with the previously reported acceptor SCPDT-PDI4, in which the PDI units and SCPDT are not fused, FSP shows enhanced strong absorption in the range 350–520 nm and a raised LUMO energy level. OSCs based on a PTB7-Th donor and the FSP acceptor were fabricated and achieved a power conversion efficiency of up to 8.89% with DPE as an additive. Efficient and complementary photoabsorption, favorable phase separation, and balanced carrier mobilities in the blend film account for the high photovoltaic performance. This study offers an effective strategy to design high-performance PDI-based acceptors.
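Power conversion efficiency figures like the 8.89% in this entry are defined as PCE = (Jsc · Voc · FF) / Pin under standard illumination. A minimal sketch; the individual Jsc, Voc, and FF values below are assumptions chosen to give a comparable PCE, not parameters reported in the paper:

```python
def power_conversion_efficiency(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    """PCE (%) from short-circuit current density (mA/cm^2), open-circuit
    voltage (V), and fill factor, under AM1.5G illumination (100 mW/cm^2)."""
    return 100.0 * jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2

# Illustrative (assumed) device parameters yielding ~8.89%:
pce = power_conversion_efficiency(jsc_ma_cm2=15.6, voc_v=0.95, ff=0.60)
print(round(pce, 2))  # 8.89
```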
6. Incorporation in Langmuir-Blodgett films of an amphiphilic derivative of fullerene C60 and oligo-para-phenylenevinylene
Alvarez-Venicio, V. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (UNAM), Circuito Exterior, CU, C.P. 04510, D.F. (Mexico); Gutierrez-Nava, M. [CIATEQ, A.C., Centro de Tecnologia Avanzada, Circuito de la Industria Poniente Lote: 11, Mza. 3, No. 11, Colonia Parque Industrial Ex Hacienda Dona Rosa, Lerma C.P. 52004, Estado de Mexico (Mexico); Amelines-Sarria, O. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (UNAM), Circuito Exterior, CU, C.P. 04510, D.F. (Mexico); Alvarez-Zauco, E. [Facultad de Ciencias, UNAM, Circuito Exterior, C.U., C.P. 04510, D.F. (Mexico); Basiuk, V.A. [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (UNAM), Circuito Exterior, CU, C.P. 04510, D.F. (Mexico); Carreon-Castro, M.P., E-mail: pilar@nucleares.unam.mx [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico (UNAM), Circuito Exterior, CU, C.P. 04510, D.F. (Mexico)
2012-12-30
Langmuir (L) and Langmuir-Blodgett (LB) films of a fullerene C60-oligo-para-phenylenevinylene (OPV) derivative with six C12H25 aliphatic chains were characterized. For the Langmuir films, isotherms of surface pressure versus molecular area, compression/expansion cycles (hysteresis curves), and Brewster-angle microscopy images were obtained. We performed molecular mechanics and density functional theory calculations to determine the molecular and electronic structure of our compound at a water-air interface. We found agreement between the experimental and theoretical values for the molecular surface area. LB films of up to ten layers were obtained on glass substrates and were characterized by ultraviolet-visible spectroscopy. We observed that the absorbance at a wavelength of 326 nm grows almost linearly as a function of the number of layers. Films on glass-indium tin oxide were characterized by atomic force microscopy. We also observed a uniform deposition over the whole area of the scanned substrate. We demonstrated that the fullerene C60-OPV derivative is able to form both L and LB films, its aliphatic chains preventing fullerene aggregation. We suggest that, due to its electron-acceptor properties, the C60-OPV derivative could be used for organic-photovoltaic and organic-electronic applications. - Highlights: • We performed isotherm and hysteresis studies of a fullerene derivative compound. • We found that the theoretical and experimental molecular areas agree. • We deposited Langmuir-Blodgett (LB) films on glass-indium tin oxide. • LB films were characterized using UV-visible spectroscopy. • We observed the morphology of the LB films through atomic force microscopy.
7. Electrochemical Properties of Boron-Doped Fullerene Derivatives for Lithium-Ion Battery Applications.
Sood, Parveen; Kim, Ki Chul; Jang, Seung Soon
2018-03-19
The high electron affinity of fullerene C60, coupled with the rich chemistry of carbon, makes it a promising material for cathode applications in lithium-ion batteries. Since boron has one electron fewer than carbon, the presence of boron on C60 cages is expected to generate electron deficiency in C60 and thereby enhance its electron affinity. Using density functional theory (DFT), we studied the redox potentials and electronic properties of C60 and C59B. We found that doping C60 with one boron atom results in a substantial increase in redox potential from 2.462 V to 3.709 V, which was attributed to the formation of an open-shell system. We also investigated the redox and electronic properties of C59B functionalized with various redox-active oxygen-containing functional groups (OCFGs). When functionalization with OCFGs is combined with boron doping, the enhancement of the redox potential is reduced, which is mainly attributed to the open-shell structure changing to a closed-shell one. Nevertheless, the redox potentials are still higher than that of pristine C60. From the observation that the lowest unoccupied molecular orbital of closed-shell OCFG-functionalized C59B correlates well with the redox potential, it was confirmed that the spin state must be considered to understand the relationship between electronic structure and redox properties. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
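DFT redox potentials like those in this entry come from the thermodynamic relation V = -ΔG/(nF); with ΔG in eV per formula unit this reduces to V = -ΔG(eV)/n, shifted against a reference electrode. A minimal sketch; the ΔG value and the ~1.4 V absolute-potential shift for Li/Li+ are assumptions for illustration, not numbers taken from the paper:

```python
def redox_potential(delta_g_ev, n_electrons, v_reference=1.4):
    """Redox potential (V) vs. a reference electrode from the reaction free
    energy in eV: V = -dG/(n*F) - V_ref.  The default 1.4 V is roughly the
    absolute potential of Li/Li+ (an assumption here, not from the paper)."""
    return -delta_g_ev / n_electrons - v_reference

# Illustrative one-electron reduction with dG = -5.109 eV:
print(round(redox_potential(-5.109, 1), 3))  # 3.709
```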
8. Fullerene-biomolecule conjugates and their biomedicinal applications.
Yang, Xinlin; Ebrahimi, Ali; Li, Jie; Cui, Quanjun
2014-01-01
Fullerenes are among the strongest antioxidants and are characterized as "radical sponges." The research on biomedicinal applications of fullerenes has achieved significant progress since the landmark publication by Friedman et al in 1993. Fullerene-biomolecule conjugates have become an important area of research during the past 2 decades. By a thorough literature search, we attempt to update the information about the synthesis of different types of fullerene-biomolecule conjugates, including fullerene-containing amino acids and peptides, oligonucleotides, sugars, and esters. Moreover, we also discuss in this review recently reported data on the biological and pharmaceutical utilities of these compounds and some other fullerene derivatives of biomedical importance. While within the fullerene-biomolecule conjugates, in which fullerene may act as both an antioxidant and a carrier, specific targeting biomolecules conjugated to fullerene will undoubtedly strengthen the delivery of functional fullerenes to sites of clinical interest.
9. Investigation of Annealing and Blend Concentration Effects of Organic Solar Cells Composed of Small Organic Dye and Fullerene Derivative
Yasser A. M. Ismail
2011-01-01
We have fabricated bulk-heterojunction organic solar cells by spin coating, using coumarin 6 (C6), a small organic dye, for light harvesting and electron donation, with the fullerene derivative [6,6]-phenyl-C61 butyric acid methyl ester (PCBM) acting as the electron acceptor. We investigated the effects of thermal annealing and blend concentration on light harvesting, photocurrent, and the performance parameters of the solar cells. In this work, we introduce an experimental method by which the variation in the contact between the active layer and the cathode caused by thermal annealing after cathode deposition can easily be detected. We show unusual behavior of solar cells composed of small organic molecules under thermal annealing at different conditions; this behavior is uncommon for polymer solar cells. From this work we try to understand the device physics and to establish a relationship between the production parameters and the performance parameters of solar cells based on small organic molecules.
10. Photocatalytic activity enhancement by electron irradiation of fullerene derivative-TiO2 nanoparticles under visible light illumination
Cho, Sung Oh; Yoo, Seung Hwa; Lee, Dong Hoon
2011-01-01
Photocatalytic decomposition of aqueous organic pollutants has attracted much interest due to its simple, low-cost, and clean procedure, requiring only sunlight and a photocatalyst. TiO2 nanoparticle-based systems in particular have been extensively studied and commercialized for real-life applications. However, TiO2 has a critical disadvantage: because of its large band gap of 3.2 eV it absorbs only the ultraviolet region of the solar spectrum. Extensive studies have been performed to extend the light absorption of TiO2 to the visible region of the solar spectrum, by doping TiO2 with metal or non-metal elements or by attaching small-band-gap semiconductors to TiO2. In this study, the fullerene derivative 1-(3-carboxypropyl)-1-phenyl[6,6]C61 (PCBA) was attached to the surface of TiO2 nanoparticles, and its photocatalytic activity was evaluated by the decomposition of methyl orange under visible light. Furthermore, the enhancement of the photocatalytic activity of these nanoparticles by electron irradiation is discussed.
11. Fullerene derivative-doped zinc oxide nanofilm as the cathode of inverted polymer solar cells with low-bandgap polymer (PTB7-Th) for high performance.
Liao, Sih-Hao; Jhuo, Hong-Jyun; Cheng, Yu-Shan; Chen, Show-An
2013-09-14
Modification of a ZnO cathode by doping it with a hydroxyl-containing fullerene derivative, giving a ZnO-C60 cathode, provides a fullerene-derivative-rich surface and enhanced electron conduction. Inverted polymer solar cells with the ZnO-C60 cathode display markedly improved power conversion efficiency compared to those with a pristine ZnO cathode, especially when the active layer includes the low-bandgap polymer PTB7-Th. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
12. Study of the Cytotoxic Effects of the New Synthetic Isothiocyanate CM9 and Its Fullerene Derivative on Human T-Leukemia Cells
Elena De Gianni
2015-02-01
One important strategy for developing effective anticancer agents is based on natural products. Many active phytochemicals are in human clinical trials and have long been used, alone and in association with conventional anticancer drugs, for the treatment of various types of cancer. A great number of in vitro, in vivo, and clinical reports document the multi-target anticancer activities of isothiocyanates and of compounds characterized by a naphthalenetetracarboxylic diimide scaffold. In order to search for new anticancer agents with a better pharmaco-toxicological profile, we investigated hybrid compounds obtained by inserting isothiocyanate group(s) on a naphthalenetetracarboxylic diimide scaffold. Moreover, since water-soluble fullerene derivatives can cross cell membranes, thus favoring the delivery of anticancer therapeutics, we explored the cytostatic and cytotoxic activity of hybrid compounds conjugated with fullerene. We studied their cytostatic and cytotoxic effects on a human T-lymphoblastoid cell line using different flow cytometric assays. In order to better understand their pharmaco-toxicological potential, we also analyzed their genotoxicity. Our global results show that the synthesized compounds significantly reduced the viability of leukemia cells; however, conjugation with a non-toxic vector did not increase their anticancer potential. This opens an interesting research path for certain fullerene properties.
13. Construction of a zinc porphyrin-fullerene-derivative based nonenzymatic electrochemical sensor for sensitive sensing of hydrogen peroxide and nitrite.
Wu, Hai; Fan, Suhua; Jin, Xiaoyan; Zhang, Hong; Chen, Hong; Dai, Zong; Zou, Xiaoyong
2014-07-01
Enzymatic sensors possess high selectivity but suffer from some limitations such as instability, complicated modified procedure, and critical environmental factors, which stimulate the development of more sensitive and stable nonenzymatic electrochemical sensors. Herein, a novel nonenzymatic electrochemical sensor is proposed based on a new zinc porphyrin-fullerene (C60) derivative (ZnP-C60), which was designed and synthesized according to the conformational calculations and the electronic structures of two typical ZnP-C60 derivatives of para-ZnP-C60 (ZnP(p)-C60) and ortho-ZnP-C60 (ZnP(o)-C60). The two derivatives were first investigated by density functional theory (DFT) and ZnP(p)-C60 with a bent conformation was verified to possess a smaller energy gap and better electron-transport ability. Then ZnP(p)-C60 was entrapped in tetraoctylammonium bromide (TOAB) film and modified on glassy carbon electrode (TOAB/ZnP(p)-C60/GCE). The TOAB/ZnP(p)-C60/GCE showed four well-defined quasi-reversible redox couples with extremely fast direct electron transfer and excellent nonenzymatic sensing ability. The electrocatalytic reduction of H2O2 showed a wide linear range from 0.035 to 3.40 mM, with a high sensitivity of 215.6 μA mM(-1) and a limit of detection (LOD) as low as 0.81 μM. The electrocatalytic oxidation of nitrite showed a linear range from 2.0 μM to 0.164 mM, with a sensitivity of 249.9 μA mM(-1) and a LOD down to 1.44 μM. Moreover, the TOAB/ZnP(p)-C60/GCE showed excellent stability and reproducibility, and good testing recoveries for analysis of the nitrite levels of river water and rainwater. The ZnP(p)-C60 can be used as a novel material for the fabrication of nonenzymatic electrochemical sensors.
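The figures quoted above (sensitivity in μA mM⁻¹, linear range, limit of detection) imply a simple linear calibration between catalytic current and analyte concentration. The sketch below is a hypothetical helper, not the authors' code; only the two sensitivity values are taken from the abstract.

```python
# Hypothetical helper illustrating a linear sensor calibration i = S * c.
# The sensitivities (215.6 and 249.9 uA/mM) are quoted from the abstract;
# the function and example currents are illustrative only.

def concentration_from_current(i_uA, sensitivity_uA_per_mM):
    """Invert a linear calibration i = S * c to estimate concentration in mM."""
    return i_uA / sensitivity_uA_per_mM

# H2O2 channel (S = 215.6 uA/mM, linear up to 3.40 mM):
print(concentration_from_current(107.8, 215.6))   # about 0.5 mM
# Nitrite channel (S = 249.9 uA/mM):
print(concentration_from_current(24.99, 249.9))   # about 0.1 mM
```

Within the stated linear ranges this one-line inversion is all a nonenzymatic amperometric readout requires; outside them the response is no longer proportional and the model does not apply.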
14. Recent progresses in application of fullerenes in cosmetics.
Lens, Marko
2011-08-01
The cosmetic industry is a fast-growing industry with the continuous development of new active ingredients for skin care products. Fullerene C(60) and its derivatives have been the subject of intensive research in the last few years. Fullerenes display a wide range of different biological activities. Strong antioxidant capacity and effective quenching of reactive oxygen species (ROS) make fullerenes suitable active compounds in the formulation of skin care products. Published evidence on biological activities of fullerenes relevant for their application in cosmetics and examples of published patents are presented. Recent trends in the use of fullerenes in topical formulations and patents are reviewed. Future investigations covering the application of fullerenes in skin care are discussed.
15. Fullerene-Related Nanocarbons and Their Applications
Geng, Junfeng; Miyazawa, Kun'ichi; Hu, Zheng
2012-01-01
From the vast amount of research that has been conducted over the last two decades, it is now apparent that these nanomaterials, notably carbon nanotubes, carbon-based nanoparticles, graphene, fullerene and fullerene derivatives, promise very distinct applications and will add great value to industries...
16. Nonlinear optical and optical limiting properties of fullerene, multi-walled carbon nanotubes, graphene and their derivatives with oxygen-containing functional groups
Zhang, Xiao-Liang; Li, Xiao-Chun; Liu, Zhi-Bo; Yan, Xiao-Qing; Tian, Jian-Guo; Chen, Yong-Sheng
2015-01-01
Nonlinear optical (NLO) properties and the optical limiting effect of fullerene (C60), multi-walled carbon nanotubes (MWNTs), reduced graphene oxide (RGO) and their oxygenated derivatives were investigated by the open-aperture Z-scan technique with nanosecond pulses at 532 nm. C60 functionalized by oxygen-containing functional groups exhibits weaker NLO properties than pristine C60. Graphene oxide (GO), with many oxygen-containing functional groups, also shows weaker NLO properties than RGO. This can be attributed to the disruption of the conjugated structures of C60 and graphene by oxygen-containing functional groups. However, MWNTs and their oxygenated derivatives exhibit comparable NLO properties due to the small weight ratio of these oxygen-containing groups. To investigate the correlation between structure and NLO response for these carbon nanomaterials with different dimensions, nonlinear scattered signal spectra versus input fluence were also measured. (paper)
17. Physical properties of organic fullerene cocrystals
Macovez, Roberto
2017-12-01
The basic facts and fundamental properties of binary fullerene cocrystals are reviewed, focusing especially on solvates and salts of Buckminsterfullerene (C60), and hydrates of hydrophilic C60 derivatives. The examined properties include the lattice structure and the presence of orientational disorder and/or rotational dynamics (of both fullerenes and cocrystallizing moieties), thermodynamic properties such as decomposition enthalpies, and charge transport properties. Both thermodynamic properties and molecular orientational disorder shed light on the extent of intermolecular interactions in these binary solid-state systems. Comparison is carried out also with pristine fullerite and with the solid phases of functionalized C60. Interesting experimental findings on binary fullerene cocrystals include the simultaneous occurrence of rotations of both constituent molecular species, crystal morphologies reminiscent of quasi-crystalline behaviour, the observation of proton conduction in hydrate solids of hydrophilic fullerene derivatives, and the production of super-hard carbon materials by application of high pressures on solvated fullerene crystals.
18. Electronic structure of the boron fullerene B14 and its silicon derivatives B13Si(+), B13Si(-) and B12Si2: a rationalization using a cylinder model.
Van Duong, Long; Nguyen, Minh Tho
2016-06-29
Geometric and electronic structures of the boron cluster B14 and its silicon derivatives B13Si(+), B13Si(-), and B12Si2 were determined using DFT calculations (TPSSh/6-311+G(d)). The B12Si2 fullerene, which is formed by substituting two B atoms at two apex positions of the B14 fullerene by two Si atoms, was also found as the global minimum structure. We demonstrated that the electronic structure and orbital configuration of these small fullerenes can be predicted by the wavefunctions of a particle on a cylinder. The early appearance of high angular node MOs in B14 and B12Si2 can be understood by this simple model. Replacement of one B atom at a top position of B14 by one Si atom, followed by the addition or removal of one electron does not lead to a global minimum fullerene structure for the anion B13Si(-) and cation B13Si(+). The early appearance of the 5σ1 orbital in B13Si(+) causes a lower stability for the fullerene-type structure.
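The cylinder model invoked above has a closed-form spectrum that is easy to tabulate. In reduced units (taking ħ²/2mₑ = 1), a particle on a cylinder of radius R and length L has energies E(n, m) = (πn/L)² + (m/R)², with n = 1, 2, ... counting longitudinal half-waves and m = 0, ±1, ±2, ... the angular momentum around the axis. The sketch below is an illustration of this textbook model only; R and L are arbitrary values, not parameters fitted to B14 or B12Si2, and this is not the authors' calculation.

```python
import math
from itertools import product

def cylinder_levels(R, L, n_max=3, m_max=3):
    """Tabulate E(n, m) = (pi*n/L)**2 + (m/R)**2 in reduced units
    (hbar**2 / 2m_e = 1), sorted by increasing energy."""
    levels = []
    for n, m in product(range(1, n_max + 1), range(-m_max, m_max + 1)):
        E = (math.pi * n / L) ** 2 + (m / R) ** 2
        levels.append((E, n, m))
    return sorted(levels)

# Arbitrary illustrative dimensions (not fitted to any boron fullerene):
for E, n, m in cylinder_levels(R=1.0, L=2.0)[:6]:
    print(f"E = {E:.3f}  (n={n}, m={m})")
```

The printout makes the qualitative point of the model visible: every level with m ≠ 0 comes as a degenerate ±m pair, and for a short, wide cylinder high-|m| (high angular node) levels can drop below higher-n longitudinal levels, which is the "early appearance" of high angular node MOs mentioned in the abstract.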
19. Correction: Electronic structure of the boron fullerene B14 and its silicon derivatives B13Si+, B13Si- and B12Si2: a rationalization using a cylinder model.
Van Duong, Long; Nguyen, Minh Tho
2016-08-28
Correction for 'Electronic structure of the boron fullerene B14 and its silicon derivatives B13Si+, B13Si- and B12Si2: a rationalization using a cylinder model' by Long Van Duong et al., Phys. Chem. Chem. Phys., 2016, 18, 17619-17626.
20. Radiation Protection Using Carbon Nanotube Derivatives
Conyers, Jodie L., Jr.; Moore, Valerie C.; Casscells, S. Ward
2010-01-01
1. Diazo compounds in the chemistry of fullerenes
Tuktarov, Airat R; Dzhemilev, Usein M
2010-01-01
Experimental and theoretical data on the reactions of different diazo compounds (diazomethane, its derivatives, cyclic diazo compounds and diazocarbonyl compounds) with fullerenes are summarized. The structures and stereochemistry of cycloadducts formed in these reactions are considered.
4. Characterization of the Structural, Mechanical, and Electronic Properties of Fullerene Mixtures: A Molecular Simulations Description
Tummala, Naga Rajesh; Aziz, Saadullah; Coropceanu, Veaceslav; Bredas, Jean-Luc
2017-01-01
We investigate mixtures of fullerenes and fullerene derivatives, the most commonly used electron accepting materials in organic solar cells, by using a combination of molecular dynamics and density functional theory methods. Our goal is to describe
5. Production of anti-fullerene C60 polyclonal antibodies and study of their interaction with a conjugated form of fullerene
Hendrickson, O. D., E-mail: odhendrick@gmail.com; Fedyunina, N. S. [Russian Academy of Sciences, Institute of Biochemistry (Russian Federation); Martianov, A. A. [Moscow State University (Russian Federation); Zherdev, A. V.; Dzantiev, B. B. [Russian Academy of Sciences, Institute of Biochemistry (Russian Federation)
2011-09-15
The aim of this study was to produce anti-fullerene C60 antibodies for the development of detection systems for fullerene C60 derivatives. To produce anti-fullerene C60 antibodies, conjugates of the fullerene C60 carboxylic derivative with thyroglobulin, soybean trypsin inhibitor, and bovine serum albumin were synthesized by carbodiimide activation and characterized. Immunization of rabbits with the conjugates led to the production of polyclonal anti-fullerene antibodies. The specificity of the immune response to fullerene was investigated. An indirect competitive immunoenzyme assay was developed for the determination of conjugated fullerene, with detection limits of 0.04 ng/mL (calculated for coupled C60) and 0.4 ng/mL (according to total fullerene-protein concentration).
6. Rigid rod spaced fullerene as building block for nanoclusters
By using phenylacetylene-based rigid-rod linkers (PhA), we have successfully synthesized two fullerene derivatives, C60-PhA and C60-PhA-C60. The absorption spectral features of C60, as well as those of the phenylacetylene moiety, are retained in the monomeric forms of these fullerene derivatives, ruling out the possibility ...
7. Enhanced superconductivity of fullerenes
Washington, II, Aaron L.; Teprovich, Joseph A.; Zidan, Ragaiy
2017-06-20
Methods for enhancing characteristics of superconductive fullerenes and devices incorporating the fullerenes are disclosed. Enhancements can include increase in the critical transition temperature at a constant magnetic field; the existence of a superconducting hysteresis over a changing magnetic field; a decrease in the stabilizing magnetic field required for the onset of superconductivity; and/or an increase in the stability of superconductivity over a large magnetic field. The enhancements can be brought about by transmitting electromagnetic radiation to the superconductive fullerene such that the electromagnetic radiation impinges on the fullerene with an energy that is greater than the band gap of the fullerene.
8. Fullerene solubility-current density relationship in polymer solar cells
Renz, Joachim A.; Gobsch, Gerhard; Hoppe, Harald; Troshin, Pavel A.; Razumov, V.F.
2008-01-01
During the last decade polymer solar cells have undergone a steady increase in overall device efficiency. To date, essential efficiency improvements of polymer-fullerene solar cells require the development of new materials. Whilst most research efforts aim at an improved or spectrally extended absorption of the donor polymer, not so much attention has been paid to the fullerene properties themselves. We have investigated a number of structurally related fullerenes in order to study the relationship between chemical structure and the resulting polymer-fullerene bulk heterojunction photovoltaic properties. Our study reveals a clear connection between the fullerene solubility as a material property on the one hand and the solar cell's short-circuit photocurrent on the other. The tendency of the less soluble fullerene derivatives to aggregate accounted for the smaller current densities in the respective solar cells. Once a minimum solubility of approx. 25 mg/ml in chlorobenzene was overcome by the fullerene derivative, the short-circuit current density reached a plateau of about 8-10 mA/cm2. Thus the solubility of the fullerene derivative directly influences the blend morphology and is an important parameter for efficient polymer-fullerene bulk heterojunction solar cell operation. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (Abstract Copyright [2008], Wiley Periodicals, Inc.)
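The reported trend, a short-circuit current that rises with fullerene solubility and then saturates once a threshold near 25 mg/ml in chlorobenzene is passed, can be caricatured as a piecewise function. The functional form and numbers below are purely illustrative, not a fit to the paper's data.

```python
# Caricature of the solubility/photocurrent relationship described above:
# linear rise below the solubility threshold, plateau above it.
# j_plateau and s_threshold are rough values read from the abstract;
# the linear branch is an assumption for illustration only.

def jsc_estimate(solubility_mg_ml, j_plateau=9.0, s_threshold=25.0):
    """Piecewise-linear sketch of Jsc (mA/cm2) vs fullerene solubility (mg/ml)."""
    if solubility_mg_ml >= s_threshold:
        return j_plateau
    return j_plateau * solubility_mg_ml / s_threshold

print(jsc_estimate(10.0))  # rising branch: aggregation-limited current
print(jsc_estimate(40.0))  # plateau: solubility no longer limiting
```

The shape, not the slope, is the point: above the threshold, further solubility gains buy no extra photocurrent, which is why the abstract treats ~25 mg/ml as a design minimum rather than a target to maximize.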
9. Ultrathin Carbon with Interspersed Graphene/Fullerene-like Nanostructures: A Durable Protective Overcoat for High Density Magnetic Storage.
Dwivedi, Neeraj; Satyanarayana, Nalam; Yeo, Reuben J; Xu, Hai; Ping Loh, Kian; Tripathy, Sudhiranjan; Bhatia, Charanjit S
2015-06-25
One of the key issues for future hard disk drive technology is to design and develop ultrathin overcoats. Forming carbon overcoats (COCs) having interspersed nanostructures by the filtered cathodic vacuum arc (FCVA) process can be an effective approach to achieve the desired target. In this work, by employing a novel bi-level surface modification approach using FCVA, the formation of a high-sp(3)-bonded ultrathin (~1.7 nm) amorphous carbon overcoat with interspersed graphene/fullerene-like nanostructures, grown on magnetic hard disk media, is reported. The in-depth spectroscopic and microscopic analyses by high resolution transmission electron microscopy, scanning tunneling microscopy, time-of-flight secondary ion mass spectrometry, and Raman spectroscopy support the observed findings. Despite a reduction of ~37% in COC thickness, the FCVA-processed thinner COC (~1.7 nm) shows promising functional performance in terms of lower coefficient of friction (~0.25), higher wear resistance, lower surface energy, excellent hydrophobicity and similar or better oxidation/corrosion resistance than current commercial COCs of thickness ~2.7 nm. The surface and tribological properties of the FCVA-deposited COC were further improved after deposition of a lubricant layer.
10. Oscillations of spherical fullerenes interacting with graphene sheet
2017-01-01
In the present study, the oscillations of spherical fullerenes in the vicinity of a fully constrained graphene sheet are investigated. Using the continuous approximation and the Lennard-Jones potential, the van der Waals (vdW) potential energy and interaction forces are obtained. The equation of motion is derived and directly solved based on the actual force distribution between the fullerene molecules and the graphene sheet. Numerical results are obtained, showing that the oscillation is sensitive to the size of the fullerene as well as the distance between the center of the fullerene and the graphene sheet.
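The paper integrates the equation of motion under the actual continuum vdW force between sphere and sheet. As a much simpler stand-in for that kind of calculation, the toy 1-D sketch below integrates a point mass in a generic 12-6 Lennard-Jones potential with velocity Verlet; epsilon, sigma, the mass and the time step are arbitrary reduced units, not the paper's fullerene-graphene parameters.

```python
# Toy 1-D oscillation in a 12-6 Lennard-Jones potential, integrated with
# velocity Verlet. Illustrative reduced units only (eps = sig = m = 1);
# not the continuum vdW force of the fullerene/graphene study.

def lj_force(z, eps=1.0, sig=1.0):
    """Force F = -dU/dz for U(z) = 4*eps*((sig/z)**12 - (sig/z)**6)."""
    return 4.0 * eps * (12.0 * sig**12 / z**13 - 6.0 * sig**6 / z**7)

def integrate(z0, v0=0.0, m=1.0, dt=1e-3, steps=20000):
    """Velocity-Verlet trajectory of the separation z(t), released at z0."""
    z, v = z0, v0
    traj = []
    for _ in range(steps):
        a = lj_force(z) / m
        z += v * dt + 0.5 * a * dt * dt
        a_new = lj_force(z) / m
        v += 0.5 * (a + a_new) * dt
        traj.append(z)
    return traj

# Released at rest away from the potential minimum at z = 2**(1/6)*sig,
# the mass oscillates back and forth about that minimum.
traj = integrate(z0=1.3)
print(min(traj), max(traj))
```

Because the LJ well is strongly anharmonic, the oscillation is not symmetric about the minimum, and its amplitude and period depend on the release point; this is the same qualitative sensitivity to geometry that the abstract reports for the fullerene-sheet system.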
11. Electron energy-loss spectroscopy on fullerenes and fullerene compounds
Armbruster, J.
1996-03-01
A few years ago, a new form of pure carbon, the fullerenes, was discovered, which shows many fascinating properties. Within this work the spatial and electronic structure of selected fullerene compounds has been investigated by electron energy-loss spectroscopy in transmission. Phase-pure samples of alkali-intercalated fullerides AxC60 (A = Na, K, Cs) have been prepared using vacuum distillation. Measurements of K3C60 show a dispersion of the charge-carrier plasmon close to zero. This can be explained by calculations which take into account both band-structure and local-field (inhomogeneity) effects. The importance of the molecular structure can also be seen from the A4C60 compounds, where the non-metallic properties are explained by a splitting of the t1u- and t1g-derived bands that is caused by electron-correlation and Jahn-Teller effects. First measurements of the electronic structure of NaxC60 (x>6) are presented and reveal a complete charge transfer from the sodium atoms but an incomplete transfer onto the C60 molecules. This behaviour can be explained by taking into account additional electronic states that are situated between the sodium atoms in the octahedral sites and are predicted by calculations using the local density approximation. The crystal structure of the higher fullerenes C76 and C84 is found to be face-centered cubic.
12. Tuning the Properties of Polymer Bulk Heterojunction Solar Cells by Adjusting Fullerene Size to Control Intercalation
Cates, Nichole C.; Gysel, Roman; Beiley, Zach; Miller, Chad E.; Toney, Michael F.; Heeney, Martin; McCulloch, Iain; McGehee, Michael D.
2009-01-01
We demonstrate that intercalation of fullerene derivatives between the side chains of conjugated polymers can be controlled by adjusting the fullerene size, and compare the properties of intercalated and nonintercalated poly(2,5-bis(3-hexadecylthiophen-2-yl)thieno[3,2-b]thiophene) (pBTTT):fullerene blends. The intercalated blends, which exhibit optimal solar-cell performance at 1:4 polymer:fullerene by weight, have better photoluminescence quenching and lower absorption than the nonintercalated blends, which optimize at 1:1. Understanding how intercalation affects performance will enable more effective design of polymer:fullerene solar cells. © 2009 American Chemical Society.
14. Terrestrial and extraterrestrial fullerenes
Heymann, D.; Jenneskens, L.W.; Jehlicka, J; Koper, C.; Vlietstra, E. [Rice Univ, Houston, TX (United States). Dept. of Earth Science
2003-07-01
This paper reviews reports of occurrences of fullerenes in circumstellar media, interstellar media, meteorites, interplanetary dust particles (IDPs), lunar rocks, hard terrestrial rocks from Shunga (Russia), Sudbury (Canada) and Mitov (Czech Republic), coal, terrestrial sediments from the Cretaceous-Tertiary boundary and Permian-Triassic boundary, fulgurite, ink sticks, dinosaur eggs, and a tree char. The occurrences are discussed in the context of known and postulated processes of fullerene formation, including the suggestion that some natural fullerenes might have formed from biological (algal) remains.
15. Polymer-fullerene bulk heterojunction solar cells
Janssen, RAJ; Hummelen, JC; Saricifti, NS
Nanostructured phase-separated blends, or bulk heterojunctions, of conjugated polymers and fullerene derivatives form a very attractive approach to large-area, solid-state organic solar cells. The key feature of these cells is that they combine easy processing from solution on a variety of
16. Fullerenes and nanostructured plastic solar cells
Knol, Joop; Hummelen, Jan C.; Kuzmany, H; Fink, J; Mehring, M; Roth, S
1998-01-01
We report on the present status of the plastic solar cell and on the design of fullerene derivatives and pi-conjugated donor molecules that can function as acceptor-donor pairs and (supra-)molecular building blocks in organized, nanostructured interpenetrating networks, forming a
17. Electronic properties of fullerenes
Kuzmany, H [ed.; Vienna Univ. (Austria). Inst. fuer Festkoerperphysik; Fink, J [ed.; Kernforschungszentrum Karlsruhe GmbH (Germany). Inst. fuer Nukleare Festkoerperphysik; Mehring, M [ed.; Stuttgart Univ. (Germany). Physikalisches Teilinstitut 2; Roth, S [ed.; Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany)
1993-01-01
Since 1991, research in the field of organic carbon materials has developed at a rapid pace due to the advent of the fullerenes and related materials. These forms of carbon are considered as a missing link between the previously discussed electroactive polymers and the oxidic superconductors. It was therefore challenging to select this topic for an international winter school in Kirchberg. Although still in its infancy, research on the physics and chemistry of fullerenes and related compounds has already led to a wealth of results, which was reflected in the wide range of topics covered and the numerous discussions which emerged at the meeting. For C60 itself, preparation methods and crystal growth techniques continue to evolve, while the understanding of the electronic and structural properties of its solid state continues to pose challenges to experimental and theoretical physicists. The ever-expanding range of higher fullerenes and related materials, such as nanotubes and onions, poses a daunting but exciting task for researchers. For synthetic chemists, fullerenes represent the basis of a whole new range of synthetic compounds. The prospect of a periodic table of endohedral fullerene complexes has been discussed, and exohedrally complexed metal-fullerenes have already attracted the attention of physicists. The first endohedral materials are now available. (orig.)
19. Fullerene Derived Molecular Electronic Devices
Menon, Madhu; Srivastava, Deepak; Saini, Subbash
1998-01-01
Carbon nanotube junctions have recently emerged as excellent candidates for use as building blocks in the formation of nanoscale electronic devices. While a simple joint of two dissimilar tubes can be generated by the introduction of a pair of heptagon-pentagon defects in an otherwise perfect hexagonal graphene sheet, more complex joints require other mechanisms. In this work we explore the structural and electronic properties of complex 3-point junctions of carbon nanotubes using a generalized tight-binding molecular-dynamics scheme.
20. Polyethene with pendant fullerene moieties
Zhang, XC; Sieval, AB; Hummelen, JC; Hessen, B; Zhang, Xiaochun
2005-01-01
Polyethene with fullerene moieties pendant on short-chain branches was prepared by the catalytic copolymerisation of ethene and a fullerene-containing vinylic comonomer, yielding polyethene copolymers containing up to 25 wt% of C60.
1. Radiation Protection Using Single-Wall Carbon Nanotube Derivatives
Tour, James M.; Lu, Meng; Lucente-Schultz, Rebecca; Leonard, Ashley; Doyle, Condell Dewayne; Kosynkin, Dimitry V.; Price, Brandi Katherine
2011-01-01
This invention is a means of radiation protection, or cellular oxidative stress mitigation, via a sequence of quenching radical species using nano-engineered scaffolds, specifically single-wall carbon nanotubes (SWNTs) and their derivatives. The material can be used as a means of radiation protection by reducing the number of free radicals within, or nearby, organelles, cells, tissue, organs, or living organisms, thereby reducing the risk of damage to DNA and other cellular components (i.e., RNA, mitochondria, membranes, etc.) that can lead to chronic and/or acute pathologies, including but not limited to cancer, cardiovascular disease, immuno-suppression, and disorders of the central nervous system. In addition, this innovation could be used as a prophylactic or antidote for accidental radiation exposure, during high-altitude or space travel where exposure to radiation is anticipated, or to protect from exposure from deliberate terrorist or wartime use of radiation-containing weapons.
2. Transmutation of fullerenes.
Cross, R James; Saunders, Martin
2005-03-09
Fullerenes were pyrolyzed by subliming them into a stream of flowing argon gas and then passing them through an oven heated to approximately 1000 degrees C. C(76), C(78), and C(84) all readily lost carbons to form smaller fullerenes. In the case of C(78), some isomerization was seen. Pyrolysis of (3)He@C(76) showed that all or most of the (3)He was lost during the decomposition. C(60) passes through the apparatus with no decomposition and no loss of helium.
3. Fullerenes and fulleranes in circumstellar envelopes
2016-01-01
Three decades of search have recently led to convincing discoveries of cosmic fullerenes. The presence of C60 and C60+ in both circumstellar and interstellar environments suggests that these molecules and their derivatives can be efficiently formed in circumstellar envelopes and survive in harsh conditions. Detailed analysis of the infrared bands from fullerenes and their connections with the local properties can provide valuable information on the physical conditions and chemical processes that occurred in the late stages of stellar evolution. The identification of C60+ as the carrier of four diffuse interstellar bands (DIBs) suggests that fullerene-related compounds are abundant in interstellar space and are essential for resolving the DIB mystery. Experiments have revealed a high hydrogenation rate when C60 is exposed to atomic hydrogen, motivating the attempt to search for cosmic fulleranes. In this paper, we present a short review of current knowledge of cosmic fullerenes and fulleranes and briefly discuss the implications on circumstellar chemistry. (paper)
4. Perhydropolysilazane derived silica coating protecting Kapton from atomic oxygen attack
Hu Longfei [China Academy of Aerospace Aerodynamics, Beijing 100074 (China); Li Meishuan, E-mail: mshli@imr.ac.cn [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, Shenyang 110016 (China); Xu Caihong; Luo Yongming [Institute of Chemistry, Chinese Academy of Sciences, Beijing 100080 (China)
2011-11-30
By using the surface sol-gel method with perhydropolysilazane (PHPS) as a precursor, a silica coating was prepared on a Kapton substrate as an atomic oxygen (AO) protective coating. The AO exposure tests were conducted in a ground-based simulator. It is found that the erosion yield of Kapton decreases by about three orders of magnitude after the superficial application of the coating. After AO exposure, the surface of the coating is smooth and uniform; no surface-shrinkage-induced cracks or undercutting erosion are observed. This is because, during AO exposure, the PHPS is oxidized directly to form SiO2 without intermediate reaction processes, so the surface shrinkage and cracking tendency are suppressed. Meanwhile, this PHPS-derived silica coating also presents a self-healing effect due to the oxidation of free Si. Compared with other kinds of silica or organic polymer coatings, this PHPS-derived silica coating exhibits superior AO erosion resistance.
6. Recent advances in fullerene superconductivity
2002-01-01
Superconducting transition temperatures in bulk chemically intercalated fulleride salts reach 33 K at ambient pressure, and in hole-doped C60 derivatives in field-effect-transistor (FET) configurations they reach 117 K. These advances pose important challenges for our understanding of high-temperature superconductivity in these highly correlated organic metals. Here we review the structures and properties of intercalated fullerides, paying particular attention to the correlation between superconductivity and interfullerene separation, orientational order/disorder, valence state, orbital degeneracy, low-symmetry distortions, and metal-C60 interactions. The metal-insulator transition at large interfullerene separations is discussed in detail. An overview is also given of the exploding field of gate-induced superconductivity of fullerenes in FET electronic devices.
7. Synthetic Strategies towards Fullerene-Rich Dendrimer Assemblies
Jean-François Nierengarten
2012-02-01
The sphere-shaped fullerene has attracted considerable interest, not least due to the peculiar electronic properties of this carbon allotrope and the fascinating materials emanating from fullerene-derived structures. The rapid development and tremendous advances in organic chemistry nowadays allow the modification of C60 to a great extent by pure chemical means. It is therefore not surprising that the fullerene moiety has also become part of dendrimers. Initially, fullerenes were examined at the center of the dendritic structure, mainly aimed at possible shielding effects exerted by the dendritic environment and light-harvesting effects due to multiple chromophores located at the periphery of the dendrimer. In recent years, many research efforts have also been devoted to fullerene-rich nanohybrids containing multiple C60 units in the branches and/or as surface functional groups. In this review, synthetic efforts towards the construction of dendritic fullerene-rich nanostructures have been compiled and are summarized herein.
8. Importance of the Donor:Fullerene intermolecular arrangement for high-efficiency organic photovoltaics
Graham, Kenneth; Cabanetos, Clement; Jahnke, Justin P.; Idso, Matthew N.; El Labban, Abdulrahman; Ngongang Ndjawa, Guy Olivier; Heumueller, Thomas; Vandewal, Koen; Salleo, Alberto; Chmelka, Bradley F.; Amassian, Aram; Beaujuge, Pierre; McGehee, Michael D.
2014-01-01
The performance of organic photovoltaic (OPV) material systems is hypothesized to depend strongly on the intermolecular arrangements at the donor:fullerene interfaces. A review of some of the most efficient polymers utilized in polymer:fullerene PV devices, combined with an analysis of reported polymer donor materials wherein the same conjugated backbone was used with varying alkyl substituents, supports this hypothesis. Specifically, the literature shows that higher-performing donor-acceptor type polymers generally have acceptor moieties that are sterically accessible for interactions with the fullerene derivative, whereas the corresponding donor moieties tend to have branched alkyl substituents that sterically hinder interactions with the fullerene. To further explore the idea that the most beneficial polymer:fullerene arrangement involves the fullerene docking with the acceptor moiety, a family of benzo[1,2-b:4,5-b]dithiophene-thieno[3,4-c]pyrrole-4,6-dione polymers (PBDTTPD derivatives) was synthesized and tested in a variety of PV device types with vastly different aggregation states of the polymer. In agreement with our hypothesis, the PBDTTPD derivative with a more sterically accessible acceptor moiety and a more sterically hindered donor moiety shows the highest performance in bulk-heterojunction, bilayer, and low-polymer-concentration PV devices where fullerene derivatives serve as the electron-accepting materials. Furthermore, external quantum efficiency measurements of the charge-transfer state and solid-state two-dimensional (2D) 13C{1H} heteronuclear correlation (HETCOR) NMR analyses support that a specific polymer:fullerene arrangement is present for the highest-performing PBDTTPD derivative, in which the fullerene is in closer proximity to the acceptor moiety of the polymer. This work demonstrates that the polymer:fullerene arrangement and resulting intermolecular interactions may be key factors in determining the performance of OPV material systems.
10. Pericytes derived from adipose-derived stem cells protect against retinal vasculopathy.
Thomas A Mendel
Retinal vasculopathies, including diabetic retinopathy (DR), threaten the vision of over 100 million people. Retinal pericytes are critical for microvascular control, supporting retinal endothelial cells via direct contact and paracrine mechanisms. With pericyte death or loss, endothelial dysfunction ensues, resulting in hypoxic insult, pathologic angiogenesis, and ultimately blindness. Adipose-derived stem cells (ASCs) differentiate into pericytes, suggesting they may be useful as a protective and regenerative cellular therapy for retinal vascular disease. In this study, we examine the ability of ASCs to differentiate into pericytes that can stabilize retinal vessels in multiple pre-clinical models of retinal vasculopathy. We found that ASCs express pericyte-specific markers in vitro. When injected intravitreally into the murine eye subjected to oxygen-induced retinopathy (OIR), ASCs were capable of migrating to and integrating with the retinal vasculature. Integrated ASCs maintained marker expression and pericyte-like morphology in vivo for at least 2 months. ASCs injected after OIR vessel destabilization and ablation enhanced vessel regrowth (16% reduction in avascular area). ASCs injected intravitreally before OIR vessel destabilization prevented retinal capillary dropout (53% reduction). Treatment of ASCs with transforming growth factor beta (TGF-β1) enhanced hASC pericyte function, in a manner similar to native retinal pericytes, with increased marker expression of smooth muscle actin, cellular contractility, endothelial stabilization, and microvascular protection in OIR. Finally, injected ASCs prevented capillary loss in the diabetic retinopathic Akimba mouse (79% reduction 2 months after injection). ASC-derived pericytes can integrate with retinal vasculature, adopting both pericyte morphology and marker expression, and provide functional vascular protection in multiple murine models of retinal vasculopathy. The pericyte phenotype demonstrated
11. Geochemie fullerenů (Geochemistry of Fullerenes)
Frank, Otakar; Jehlička, J.; Vítek, P.; Juha, Libor; Hamplová, Věra; Pokorná, Zdeňka
2010-01-01
Roč. 104, č. 8 (2010), s. 762-769 ISSN 0009-2770 R&D Projects: GA ČR GA205/07/0772; GA MŠk LC510; GA MŠk(CZ) LC528 Institutional research plan: CEZ:AV0Z40400503; CEZ:AV0Z10100520 Keywords: geochemistry * fullerenes * geological materials Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 0.620, year: 2010
12. The impact of electrostatic interactions on ultrafast charge transfer at Ag29 nanoclusters–fullerene and CdTe quantum dots–fullerene interfaces
Ahmed, Ghada H.; Parida, Manas R.; Tosato, Alberto; AbdulHalim, Lina G.; Usman, Anwar; Alsulami, Qana; Banavoth, Murali; Alarousu, Erkki; Bakr, Osman; Mohammed, Omar F.
2015-01-01
investigate the electrostatic interactions between the positively charged fullerene derivative C60-(N,N dimethylpyrrolidinium iodide) (CF) employed as an efficient molecular acceptor and two different donor molecules: Ag29 nanoclusters (NCs) and CdTe quantum
13. Fulereno[C60]: química e aplicações (Fullerene C60: chemistry and applications)
Leandro José dos Santos
2010-01-01
Fullerene chemistry has become a very active research field in the last two decades, largely because of the exceptional properties of the C60 molecule and the variety of fullerene derivatives that appear to be possible. In this review, a general analysis of fullerene C60 reactivity is performed. The principal methods for the covalent modification of this fascinating carbon cage are presented. The prospects of using fullerene derivatives as medicinal drugs and as photoactive materials in light-converting devices are demonstrated.
14. An analytical method for determination of fullerenes and functionalized fullerenes in soils with high performance liquid chromatography and UV detection
Carboni, Andrea; Emke, Erik; Parsons, John R.; Kalbitz, Karsten; Voogt, Pim de
2014-01-01
Highlights: •A total of eight fullerenes can be analyzed in a single run with HPLC-UV. •The method allows the analysis of fullerenes in soil at relatively low concentrations. •The method developed is robust, highly reproducible and relatively efficient. •The method can be applied to the study of the environmental fate and toxicology of fullerenes. -- Abstract: Fullerenes are carbon-based nanomaterials expected to play a major role in emerging nanotechnology and produced at an increasing rate for industrial and household applications. In the last decade a number of novel compounds (i.e. fullerene derivatives) have been introduced into the market, and specific analytical methods are needed for analytical purposes as well as for environmental and safety issues. In the present work eight fullerenes (C60 and C70) and functionalized fullerenes (C60 and C70 exohedral derivatives) were selected and a novel liquid chromatographic method was developed for their analysis with UV absorption as the method of detection. The resulting HPLC-UV method is the first one suitable for the analysis of all eight compounds. This method was applied to the analysis of fullerenes added to clayish, sandy and loess top-soils at concentrations of 20, 10 and 5 μg kg−1 and extracted with a combination of sonication and shaking extraction. The analytical method limits of detection (LoD) and limits of quantification (LoQ) were in the range of 6–10 μg L−1 and 15–24 μg L−1, respectively, for the analytical solutions. The extraction from soil was highly reproducible, with recoveries ranging from 47 ± 5 to 71 ± 4%, whereas LoD and LoQ for all soils tested were 3 μg kg−1 and 10 μg kg−1, respectively. No significant difference in extraction performance was observed between the different soil matrices or between the different concentrations. The developed method can be applied to the study of the fate and toxicity of fullerenes in complex matrices.
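The LoD and LoQ figures quoted above are conventionally derived from the calibration curve. A minimal Python sketch, assuming the common ICH convention (LoD = 3.3σ/S, LoQ = 10σ/S, with σ the residual standard deviation and S the slope of the calibration line); the paper's exact procedure is not stated, and the calibration data below are hypothetical:

```python
def fit_line(x, y):
    """Ordinary least squares fit y = slope*x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def lod_loq(conc, signal):
    """LoD = 3.3*sigma/S and LoQ = 10*sigma/S from the residual standard
    deviation sigma and slope S of the calibration line."""
    slope, intercept = fit_line(conc, signal)
    n = len(conc)
    resid = [s - (slope * c + intercept) for c, s in zip(conc, signal)]
    sigma = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical UV-detector calibration: concentration (ug/L) vs. peak area
conc = [5.0, 10.0, 20.0, 40.0, 80.0]
area = [12.1, 24.5, 48.2, 97.0, 193.5]
lod, loq = lod_loq(conc, area)
```

Because both limits share the same σ/S factor, their ratio is fixed at 10/3.3 ≈ 3 regardless of the calibration data.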
15. Molecular understanding of the open-circuit voltage of polymer: Fullerene solar cells
Yamamoto, Shunsuke; Orimo, Akiko; Benten, Hiroaki; Ito, Shinzaburo [Department of Polymer Chemistry, Graduate School of Engineering, Kyoto University, Katsura, Nishikyo, Kyoto (Japan); Ohkita, Hideo [Japan Science and Technology Agency (JST), PRESTO, Saitama (Japan); Department of Polymer Chemistry, Graduate School of Engineering, Kyoto University, Katsura, Nishikyo, Kyoto (Japan)
2012-02-15
The origin of the open-circuit voltage (VOC) was studied for polymer solar cells based on blends of poly(3-hexylthiophene) (P3HT) and seven fullerene derivatives with different LUMO energy levels and side chains. The temperature dependence of the J-V characteristics was analyzed with an equivalent circuit model. As a result, VOC increased with decreasing saturation current density J0 of the device. Furthermore, J0 depended on the activation energy EA for J0, which is related to the HOMO-LUMO energy gap between P3HT and the fullerene. Interestingly, the pre-exponential term J00 for J0 was larger for pristine fullerenes than for substituted fullerene derivatives, suggesting that the electronic coupling between molecules also has a substantial impact on VOC. This is probably because the recombination is a non-diffusion-limited reaction depending on electron transfer at the P3HT/fullerene interface. In summary, the origin of VOC is ascribed not only to the relative HOMO-LUMO energy gap but also to the electronic couplings between fullerene/fullerene and polymer/fullerene. (Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
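The VOC–J0–EA relationships described above can be illustrated with the ideal-diode equation. A sketch assuming the textbook forms J0 = J00·exp(−EA/kBT) and VOC = (nkBT/q)·ln(Jsc/J0 + 1); this is not the paper's fitted model, and all numerical values below are hypothetical:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def saturation_current(j00, ea_ev, temp_k):
    """J0 = J00 * exp(-EA / kT), the Arrhenius form discussed above."""
    return j00 * math.exp(-ea_ev / (K_B * temp_k))

def open_circuit_voltage(jsc, j0, temp_k, n=1.0):
    """Ideal-diode estimate VOC = n*kT/q * ln(Jsc/J0 + 1), in volts
    (kT is in eV, so dividing by the elementary charge is implicit)."""
    return n * K_B * temp_k * math.log(jsc / j0 + 1.0)

# Hypothetical numbers for illustration: raising EA (a wider effective
# HOMO-LUMO gap) lowers J0 and therefore raises VOC.
jsc = 10e-3  # A/cm^2
v_low_gap = open_circuit_voltage(jsc, saturation_current(1e5, 1.0, 300.0), 300.0)
v_high_gap = open_circuit_voltage(jsc, saturation_current(1e5, 1.1, 300.0), 300.0)
```

With these forms, ΔVOC ≈ ΔEA/q, so the 0.1 eV increase in activation energy raises VOC by about 0.1 V; lowering J00 (weaker interfacial electronic coupling) has the same qualitative effect.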
16. Fabrication of fullerene nanostructures in mixed films and devices utilizing fullerene nanostructures
Zhong, Yufei; Amassian, Aram; Tajima, Keisuke
2017-01-01
Embodiments provide methods for controlling crystallization of fullerene compounds in mixed films comprising one or more polymers. Methods can include depositing fullerene mixed films comprising one or more polymers on crystalline fullerene
17. Molecular design of novel fullerene-based acceptors for enhancing the open circuit voltage in polymer solar cells
2017-12-01
Organic solar cells, especially bulk-heterojunction polymer solar cells (PSCs), are the most successful structures for applications in renewable energy. The dramatic improvement in the performance of PSCs has increased demand for new conjugated polymer donors and fullerene derivative acceptors. In the present study, quantum chemical calculations were performed for several representative fullerene derivatives in order to determine their frontier orbital energy levels and electronic structures, thereby helping to enhance their performance in PSC devices. We found correlations between the theoretical lowest unoccupied molecular orbital levels and the electrophilicity index of various fullerenes on the one hand and the experimental open circuit voltage of photovoltaic devices based on the poly(3-hexylthiophene) (P3HT):fullerene blend on the other. These structure-descriptor correlations may facilitate screening for the best fullerene acceptor for the P3HT donor. Thus, we considered fullerenes with new functional groups and predicted the output factors for the corresponding P3HT:fullerene blend devices. The results showed that fullerene derivatives based on thieno-o-quinodimethane-C60 with a methoxy group will have enhanced photovoltaic properties. Our results may facilitate the design of new fullerenes and the development of favorable acceptors for use in photovoltaic applications.
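The descriptor–VOC correlation described above amounts to checking a linear trend between a computed quantity and a measured voltage. A minimal sketch of that check; the (LUMO, VOC) pairs below are hypothetical, for illustration only:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical (acceptor LUMO energy in eV, measured VOC in V) pairs:
# a shallower (less negative) LUMO should track a higher voltage.
lumo = [-3.91, -3.80, -3.72, -3.65]
voc = [0.58, 0.66, 0.75, 0.84]
r = pearson_r(lumo, voc)
```

A correlation coefficient near 1 for such pairs is what justifies using the computed descriptor to screen candidate acceptors before fabricating devices.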
18. Exciton and Hole-Transfer Dynamics in Polymer: Fullerene Blends
van Loosdrecht P. H. M.
2013-03-01
Ultrafast hole-transfer dynamics from a fullerene derivative to the polymer in bulk heterojunction blends are studied with visible-pump/IR-probe spectroscopy. The hole-transfer process is found to occur within 50/300 fs next to the interface, while a longer 15-ps component is attributed to exciton diffusion towards the interface in PC71BM domains. The high polaron generation efficiency in P3HT blends indicates excellent intercalation between the polymer and the fullerene even at the highest PC71BM concentration, thereby yielding valuable information on the blend morphology.
19. Superconducting Fullerene Nanowhiskers
Yoshihiko Takano
2012-04-01
We synthesized superconducting fullerene nanowhiskers (C60NWs) by potassium (K) intercalation. They showed large superconducting volume fractions, as high as 80%. The superconducting transition temperature at 17 K was independent of the K content (x) in the range between 1.6 and 6.0 in K-doped C60 nanowhiskers (KxC60NWs), while the superconducting volume fractions changed with x. The highest shielding fraction, corresponding to a full shielding volume, was observed for K3.3C60NW after heating at 200 °C. On the other hand, that of a K-doped fullerene (K-C60) crystal was less than 1%. We report the superconducting behaviors of our newly synthesized KxC60NWs in comparison to those of KxC60 crystals, which show superconductivity at 19 K in K3C60. The lattice structures are also discussed, based on x-ray diffraction (XRD) analyses.
20. Polymer solar cells based on poly(3-hexylthiophene) and fullerene: Pyrene acceptor systems
Cominetti, Alessandra; Pellegrino, Andrea; Longo, Luca [Research Center for Renewable Energies and Environment, Istituto Donegani, Eni S.p.A, Via Fauser 4, IT-28100 Novara (Italy); Po, Riccardo, E-mail: riccardo.po@eni.com [Research Center for Renewable Energies and Environment, Istituto Donegani, Eni S.p.A, Via Fauser 4, IT-28100 Novara (Italy); Tacca, Alessandra; Carbonera, Chiara; Salvalaggio, Mario [Research Center for Renewable Energies and Environment, Istituto Donegani, Eni S.p.A, Via Fauser 4, IT-28100 Novara (Italy); Baldrighi, Michele; Meille, Stefano Valdo [Dipartimento di Chimica, Materiali e Ingegneria Chimica “G. Natta”, Politecnico di Milano, via Mancinelli 7, IT-20131 Milano (Italy)
2015-06-01
The replacement of widely used fullerene derivatives, e.g. [6,6]-phenyl-C61-butyric acid methyl ester (PCBM), with unfunctionalized C60 and C70 is an effective approach to reduce the costs of organic photovoltaics. However, solubility issues of these compounds have always represented an obstacle to their use. In this study, bulk-heterojunction solar cells made of poly(3-hexylthiophene) donor polymer, C60 or C70 acceptors and a pyrene derivative (1-pyrenebutyric acid butyl ester) are reported. Butyl 1-pyrenebutyrate limits the aggregation of fullerenes and improves the active layer morphology, plausibly due to the formation of pyrene-fullerene complexes which, in the case of pyrene-C70, were also obtained in a crystalline form. Maximum power conversion efficiencies of 1.54% and 2.50% have been obtained using, respectively, C60 or C70 as the acceptor. Quantum mechanical modeling provides additional insight into the formation of plausible supramolecular structures via π-π interactions and into the redox behaviour of pyrene-fullerene systems. - Highlights: • Pyrene derivatives favour the dispersion of unfunctionalized fullerenes. • Polymer solar cells with a pyrene:C60 adduct as acceptor have efficiencies of 1.54%. • When C60 is substituted with C70 the efficiency is increased to 2.50%. • DFT calculations support the plausibility of the formation of pyrene:fullerene adducts. • The use of unfunctionalized fullerenes may decrease the costs of polymer solar cells.
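Power conversion efficiencies like those reported above follow the standard definition PCE = Jsc × Voc × FF / Pin, with Pin = 100 mW/cm² under AM1.5G illumination. A small sketch; the Jsc, Voc and FF values below are hypothetical, chosen only to land near the reported 1.54% and 2.50%, and are not taken from the paper:

```python
def pce(jsc_ma_cm2, voc_v, ff, pin_mw_cm2=100.0):
    """Power conversion efficiency in percent from the standard cell
    parameters: short-circuit current density Jsc (mA/cm^2),
    open-circuit voltage Voc (V), fill factor FF (dimensionless),
    and incident power Pin (mW/cm^2, 100 for AM1.5G)."""
    return jsc_ma_cm2 * voc_v * ff / pin_mw_cm2 * 100.0

# Hypothetical parameter sets sized to reproduce efficiencies in the
# range reported for the pyrene:C60 and pyrene:C70 devices:
eff_c60 = pce(5.5, 0.56, 0.50)
eff_c70 = pce(8.0, 0.57, 0.55)
```

The formula makes explicit why the C70 device wins here: stronger visible absorption raises Jsc, and the product of all three parameters sets the efficiency.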
1. Glycofullerenes: Sweet fullerenes vanquish viruses
Vidal, Sébastien
2016-01-01
Fullerene-based dendritic structures coated with 120 sugars can be made in high yields in a relatively short sequence of reactions. The mannosylated compound is shown to inhibit Ebola infection in cells more efficiently than monofullerene-based glycoclusters.
2. Fullerene-based materials for solar cell applications: design of novel acceptors for efficient polymer solar cells--a DFT study.
Mohajeri, Afshan; Omidvar, Akbar
2015-09-14
Fossil fuel alternatives, such as solar energy, are moving to the forefront in a variety of research fields. Polymer solar cells (PSCs) hold promise for their potential to be used as low-cost and efficient solar energy converters. PSCs have been commonly made from bicontinuous polymer:fullerene composites or so-called bulk heterojunctions. The conjugated polymer donors and the fullerene derivative acceptors are the key materials for high performance PSCs. In the present study, we have performed density functional theory calculations to investigate the electronic structures and magnetic properties of several representative C60 fullerene derivatives, seeking ways to improve their efficiency as acceptors of photovoltaic devices. In our survey, we have successfully correlated the LUMO energy level as well as chemical hardness, hyper-hardness, nucleus-independent chemical shift, and static dipole polarizability of PC60BM-like fullerene derivative acceptors with the experimental open circuit voltage of the photovoltaic device based on the P3HT:fullerene blend. The obtained structure-property correlations allow finding the best fullerene acceptor match for the P3HT donor. For this purpose, four new fullerene derivatives are proposed and the output parameters for the corresponding P3HT-based devices are predicted. It is found that the proposed fullerene derivatives exhibit better photovoltaic properties than the traditional PC60BM acceptor. The present study opens the way for manipulating fullerene derivatives and developing promising acceptors for solar cell applications.
3. Fullerene C70 decorated TiO2 nanowires for visible-light-responsive photocatalyst
Cho, Er-Chieh; Ciou, Jing-Hao; Zheng, Jia-Huei; Pan, Job; Hsiao, Yu-Sheng; Lee, Kuen-Chan; Huang, Jen-Hsien
2015-01-01
Highlights: • TiO2 nanowires decorated with C60 and C70 derivatives have been synthesized. • The fullerenes impede charge recombination due to their high electron affinity. • The fullerenes expand the utilization of solar light from UV to visible light. • The modified TiO2 has great biocompatibility. - Abstract: In this study, we have synthesized C60- and C70-modified TiO2 nanowires (NWs) through interfacial chemical bonding. The results indicate that the fullerenes (C60 and C70 derivatives) can act as sinks for photogenerated electrons in TiO2 when the fullerene/TiO2 is illuminated under ultraviolet (UV) light. Therefore, in comparison to the pure TiO2 NWs, the modified TiO2 NWs display a higher photocatalytic activity under UV irradiation. Moreover, the fullerenes can also function as a sensitizer to TiO2, which expands the utilization of solar light from UV to visible light. The results reveal that the C70/TiO2 NWs show a significant photocatalytic activity for the degradation of methylene blue (MB) in the visible light region. To better understand the mechanism responsible for the effect of fullerenes on the photocatalytic properties of TiO2, electron-only devices and photoelectrochemical cells based on fullerenes/TiO2 were also fabricated and evaluated.
4. Comparative computational study of interaction of C60-fullerene and tris-malonyl-C60-fullerene isomers with lipid bilayer: relation to their antioxidant effect.
Marine E Bozdaganyan
Oxidative stress induced by excessive production of reactive oxygen species (ROS) has been implicated in the etiology of many human diseases. It has been reported that fullerenes and some of their derivatives, carboxyfullerenes, exhibit a strong free radical scavenging capacity. The permeation of C60-fullerene and its amphiphilic derivatives, C3-tris-malonic-C60-fullerene (C3) and D3-tris-malonyl-C60-fullerene (D3), through a lipid bilayer mimicking the eukaryotic cell membrane was studied using molecular dynamics (MD) simulations. The free energy profiles along the normal to a bilayer composed of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) were calculated for C60, C3 and D3. We found that C60 molecules, alone or in clusters, spontaneously translocate to the hydrophobic core of the membrane and stay inside the bilayer for the whole simulation time. The incorporation of a cluster of fullerenes inside the bilayer changes the properties of the bilayer and leads to its deformation. In simulations of the tris-malonic fullerenes we discovered that both isomers, C3 and D3, adsorb at the surface of the bilayer, but only C3 tends to be buried in the area of the lipid headgroups, forming hydrophobic contacts with the lipid tails. We hypothesize that such a position has implications for the ROS scavenging mechanism in specific cell compartments.
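Free energy profiles of the kind computed above are, in the simplest unbiased-sampling limit, Boltzmann inversions of the position distribution along the bilayer normal, F(z) = −kBT ln ρ(z); production MD studies typically use biased methods such as umbrella sampling with WHAM instead. A toy sketch of the unbiased case, with hypothetical samples:

```python
import math

KT = 0.596  # kT in kcal/mol at ~300 K

def free_energy_profile(z_samples, bin_width=0.2):
    """Boltzmann inversion of a position histogram, F(z) = -kT ln(rho(z)),
    shifted so the global minimum is zero. Returns {bin center: F}."""
    counts = {}
    for z in z_samples:
        b = round(z / bin_width)  # integer bin index avoids float-key issues
        counts[b] = counts.get(b, 0) + 1
    total = sum(counts.values())
    f = {b * bin_width: -KT * math.log(c / total) for b, c in counts.items()}
    f_min = min(f.values())
    return {z: v - f_min for z, v in sorted(f.items())}

# Toy trajectory: a hydrophobic solute that prefers the bilayer center (z = 0)
samples = [0.0] * 70 + [0.2] * 20 + [1.0] * 8 + [2.0] * 2
profile = free_energy_profile(samples)
```

The most-visited bin defines the free energy minimum, and sparsely visited bins (here, positions far from the bilayer center) map to higher free energy, mirroring C60's preference for the hydrophobic core.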
5. The Role of Electron Affinity in Determining Whether Fullerenes Catalyze or Inhibit Photooxidation of Polymers for Solar Cells
Hoke, Eric T.
2012-05-21
Understanding the stability and degradation mechanisms of organic solar materials is critically important to achieving long device lifetimes. Here, an investigation of the photodegradation of polymer:fullerene blend films exposed to ambient conditions for a variety of polymer and fullerene derivative combinations is presented. Despite the wide range in polymer stabilities to photodegradation, the rate of irreversible polymer photobleaching in blend films is found to consistently and dramatically increase with decreasing electron affinity of the fullerene derivative. Furthermore, blends containing fullerenes with the smallest electron affinities photobleached at a faster rate than films of the pure polymer. These observations can be explained by a mechanism where both the polymer and fullerene donate photogenerated electrons to diatomic oxygen to form the superoxide radical anion, which degrades the polymer. © 2012 WILEY-VCH Verlag GmbH & Co.
6. Olefin cross metathesis based de novo synthesis of a partially protected L-amicetose and a fully protected L-cinerulose derivative
Bernd Schmidt
2014-05-01
Cross metathesis of a lactate-derived allylic alcohol and acrolein is the entry point to a de novo synthesis of 4-benzoate-protected L-amicetose and a cinerulose derivative protected at C5 and C1.
7. Comparing the Device Physics and Morphology of Polymer Solar Cells Employing Fullerenes and Non-Fullerene Acceptors
Bloking, Jason T.
2014-04-23
There is a need to find electron acceptors for organic photovoltaics that are not based on fullerene derivatives since fullerenes have a small band gap that limits the open-circuit voltage (VOC), do not absorb strongly and are expensive. Here, a phenylimide-based acceptor molecule, 4,7-bis(4-(N-hexyl-phthalimide)vinyl)benzo[c]1,2,5-thiadiazole (HPI-BT), that can be used to make solar cells with VOC values up to 1.11 V and power conversion efficiencies up to 3.7% with two thiophene polymers is demonstrated. An internal quantum efficiency of 56%, compared to 75-90% for polymer-fullerene devices, results from less efficient separation of geminate charge pairs. While favorable energetic offsets in the polymer-fullerene devices due to the formation of a disordered mixed phase are thought to improve charge separation, the low miscibility (<5 wt%) of HPI-BT in polymers is hypothesized to prevent the mixed phase and energetic offsets from forming, thus reducing the driving force for charges to separate into the pure donor and acceptor phases where they can be collected. A small molecule electron acceptor, 4,7-bis(4-(N-hexyl-phthalimide)vinyl)benzo[c]1,2,5-thiadiazole (HPI-BT), achieves efficiencies of 3.7% and open-circuit voltage values of 1.11 V in bulk heterojunction (BHJ) devices with polythiophene donor materials. The lower internal quantum efficiency (56%) in these non-fullerene acceptor devices is attributed to an absence of the favorable energetic offsets resulting from nanoscale mixing of donor and acceptor found in comparable fullerene-based devices. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
8. Preparation and protection of silver nanoparticles with chitosan derivative
Nguyen Thi Kim Cuc; Cao Van Du; Nguyen Cuu Khoa; Tran Ngoc Quyen
2013-01-01
In this paper, a nano-silver solution is prepared and stabilized with chitosan dihydroxyphenyl acetamide (CDHPA). Chitosan is a natural carbohydrate polymer derived from chitin that has biodegradable, biocompatible, antibacterial and antifungal properties, so conjugation of the polymer with silver nanoparticles could be expected to enhance the bactericidal features of the obtained product. Chemical and physical methods were used to characterize the chitosan derivative, such as transmission spectroscopy (UV-Vis), IR spectroscopy and nuclear magnetic resonance (1H-NMR). The morphology of the obtained nano-silver particles was observed by transmission electron microscopy (TEM). (author)
9. Profitable Innovation Without Patent Protection: The Case of Derivatives.
Helios Herrera; Enrique Schroth
2003-01-01
Investment banks find it profitable to invest in the development of innovative derivative securities even without being able to preclude early competition from other investment banks using patents. To explain this, we assume that the developer can learn from the first issues of the innovative financial product and is able to become the expert issuer by the time imitation enters the market. We show how this becomes an informational first-mover advantage that turns innovators into the market le...
10. Fullerenes doped with metal halides
Martin, T.P.; Heinebrodt, M.; Naeher, U.; Goehlich, H.; Lange, T.; Schaber, H.
1993-01-01
The cage-like structure of fullerenes is a challenge to every experimentalist: to put something inside, that is, to dope the fullerenes. In fact, the research team that first identified C60 as a football-like molecule quickly succeeded in trapping metal atoms inside and in shrinking the cage around the atom by photofragmentation. In this paper we report the results of ''shrink-wrapping'' fullerenes around metal halide molecules. Of special interest is the critical size (the minimum number of carbon atoms) that can still enclose the dopant. A rough model for the space available inside a carbon cage gives good agreement with the measured shrinking limits. (author). 8 refs, 6 figs
11. Superconductivity in doped fullerenes
Hebard, A.F.
1992-01-01
While there is not complete agreement on the microscopic mechanism of superconductivity in alkali-metal-doped C60, further research may well lead to the production of analogous materials that lose resistance at even higher temperatures. Carbon 60 is a fascinating and arrestingly beautiful molecule. With 12 pentagonal and 20 hexagonal faces symmetrically arrayed in a soccer-ball-like structure that belongs to the icosahedral point group, Ih, its high symmetry alone invites special attention. The publication in September 1990 of a simple technique for manufacturing and concentrating macroscopic amounts of this new form of carbon announced to the scientific community that enabling technology had arrived. Macroscopic amounts of C60 (and the higher fullerenes, such as C70 and C84) can now be made with an apparatus as simple as an arc furnace powered with an arc welding supply. Accordingly, chemists, physicists and materials scientists have joined forces in an explosion of effort to explore the properties of this unusual molecular building block. 23 refs., 6 figs
13. Protective effects of a coumarin derivative in diabetic rats.
Bucolo, Claudio; Ward, Keith W; Mazzon, Emanuela; Cuzzocrea, Salvatore; Drago, Filippo
2009-08-01
Retinal microvascular cells play a crucial role in the pathogenesis of diabetic retinopathy. The endothelial effects of cloricromene, a novel coumarin derivative, on diabetic retinopathy induced by streptozotocin (STZ) in the rat were investigated. Cloricromene (10 mg/kg intraperitoneally) was administered daily in diabetic rats, and 60 days later the eyes were enucleated for localization of nitrotyrosine, ICAM-1, VEGF, ZO-1, occludin, claudin-5, and VE-cadherin by immunohistochemical analysis. The effect of treatment was also evaluated by measuring TNF-α, ICAM-1, VEGF, and eNOS protein levels in the retina with the respective ELISA kits. Blood-retinal barrier (BRB) integrity was also evaluated with Evans blue. Increased amounts of cytokines, adhesion molecules, and nitric oxide synthase were observed in the retina. Cloricromene treatment significantly lowered retinal TNF-α, ICAM-1, VEGF, and eNOS. Furthermore, immunohistochemical analysis for VEGF, ICAM-1, nitrotyrosine (a marker of peroxynitrite), and tight junctions revealed positive staining in the retina from STZ-treated rats. The degree of staining for VEGF, ICAM-1, nitrotyrosine, and tight junctions was markedly reduced in tissue sections obtained from diabetic rats treated with cloricromene. Treatment with cloricromene suppressed diabetes-related BRB breakdown by 45%. This study provides the first evidence that the new coumarin derivative cloricromene attenuates the degree of inflammation, preserving the BRB in diabetic rats.
14. Packing and Disorder in Substituted Fullerenes
Tummala, Naga Rajesh
2016-07-15
Fullerenes are ubiquitous as electron-acceptor and electron-transport materials in organic solar cells. Recent synthetic strategies to improve the solubility and electronic characteristics of these molecules have translated into a tremendous increase in the variety of derivatives employed in these applications. Here, we use molecular dynamics (MD) simulations to examine the impact of going from mono-adducts to bis- and tris-adducts on the structural, cohesive, and packing characteristics of [6,6]-phenyl-C60-butyric acid methyl ester (PCBM) and indene-C60. The packing configurations obtained at the MD level then serve as input for density functional theory calculations that examine the solid-state energetic disorder (distribution of site energies) as a function of chemical substitution. The variations in structural and site-energy disorders reflect the fundamental materials differences among the derivatives and impact the performance of these materials in thin-film electronic devices.
15. Broadband electroluminescence in fullerene crystals
Werner, A.T.; Anders, J.; Byrne, H.J.; Maser, W.K.; Kaiser, M.; Mittelbach, A.; Roth, S.
1993-01-01
The observation of electroluminescence from crystalline fullerenes is described. A broad-band emission spectrum, extending from 400 nm to 1100 nm, is observed. The spectrum has a primary maximum at 920 nm and a weaker feature centered on 420 nm. The spectral characteristics are independent of the applied field and the longer wavelength region is identical to that measured in the high-excitation-density photoluminescence spectrum. In addition, the electroluminescence intensity increases with the cube of the injection current, strengthening the association to the nonlinear phenomena observed in the highly excited state of fullerenes. (orig.)
16. Enhancement of device performance of organic solar cells by an interfacial perylene derivative layer
Kim, Inho; Haverinen, Hanna M.; Li, Jian; Jabbour, Ghassan E.
2010-01-01
We report that device performance of organic solar cells consisting of zinc phthalocyanine and fullerene (C60) can be enhanced by insertion of a perylene derivative interfacial layer between fullerene and bathocuproine (BCP) exciton blocking layer
17. Co-Exposure with Fullerene May Strengthen Health Effects of Organic Industrial Chemicals
Lehto, M.; Karilainen, T.; Rog, T.
2014-01-01
In vitro toxicological studies together with atomistic molecular dynamics simulations show that occupational co-exposure with C-60 fullerene may strengthen the health effects of organic industrial chemicals. The chemicals studied are acetophenone, benzaldehyde, benzyl alcohol, m-cresol, and toluene...... which can be used with fullerene as reagents or solvents in industrial processes. Potential co-exposure scenarios include a fullerene dust and organic chemical vapor, or a fullerene solution aerosolized in workplace air. Unfiltered and filtered mixtures of C-60 and organic chemicals represent different...... co-exposure scenarios in in vitro studies where acute cytotoxicity and immunotoxicity of C-60 and organic chemicals are tested together and alone by using human THP-1-derived macrophages. Statistically significant co-effects are observed for an unfiltered mixture of benzaldehyde and C-60 that is more...
18. Nature of the Binding Interactions between Conjugated Polymer Chains and Fullerenes in Bulk Heterojunction Organic Solar Cells
Ravva, Mahesh Kumar; Wang, Tonghui; Bredas, Jean-Luc
2016-01-01
Blends of π-conjugated polymers and fullerene derivatives are ubiquitous as the active layers of organic solar cells. However, a detailed understanding of the weak noncovalent interactions at the molecular level between the polymer chains
19. Stable Au–C bonds to the substrate for fullerene-based nanostructures
Taras Chutora
2017-05-01
Full Text Available We report on the formation of fullerene-derived nanostructures on Au(111) at room temperature and under UHV conditions. After low-energy ion sputtering of fullerene films deposited on Au(111), bright spots appear at the herringbone corner sites when measured using a scanning tunneling microscope. These features are stable at room temperature against diffusion on the surface. We carry out DFT calculations of fullerene molecules having one missing carbon atom to simulate the vacancies in the molecules resulting from the sputtering process. These modified fullerenes have an adsorption energy on the Au(111) surface that is 1.6 eV higher than that of C60 molecules. This increased binding energy arises from the saturation by the Au surface of the bonds around the molecular vacancy defect. We therefore interpret the observed features as adsorbed fullerene-derived molecules with C vacancies. This provides a pathway for the formation of fullerene-based nanostructures on Au at room temperature.
20. Garlic-Derived Organic Polysulfides and Myocardial Protection123
Bradley, Jessica M; Organ, Chelsea L; Lefer, David J
2016-01-01
For centuries, garlic has been shown to exert substantial medicinal effects and is considered to be one of the best disease-preventative foods. Diet is important in the maintenance of health and prevention of many diseases including cardiovascular disease (CVD). Preclinical and clinical evidence has shown that garlic reduces risks associated with CVD by lowering cholesterol, inhibiting platelet aggregation, and lowering blood pressure. In recent years, emerging evidence has shown that hydrogen sulfide (H2S) has cardioprotective and cytoprotective properties. The active metabolite in garlic, allicin, is readily degraded into organic diallyl polysulfides that are potent H2S donors in the presence of thiols. Preclinical studies have shown that enhancement of endogenous H2S has an impact on vascular reactivity. In CVD models, the administration of H2S prevents myocardial injury and dysfunction. It is hypothesized that these beneficial effects of garlic may be mediated by H2S-dependent mechanisms. This review evaluates the current knowledge concerning the cardioprotective effects of garlic-derived diallyl polysulfides. PMID:26764335
1. Pyrrolidinium fullerene induces apoptosis by activation of procaspase-9 via suppression of Akt in primary effusion lymphoma
Watanabe, Tadashi [Department of Cell Biology, Kyoto Pharmaceutical University, Misasagi-Shichonocho 1, Yamashinaku, Kyoto 607-8412 (Japan); Nakamura, Shigeo [Department of Chemistry, Nippon Medical School, 1-7-1 Kyonan-cho, Musashino, Tokyo 180-0023 (Japan); Ono, Toshiya; Ui, Sadaharu [Department of Biotechnology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Kofu 400-8511 (Japan); Yagi, Syota; Kagawa, Hiroki [Department of Cell Biology, Kyoto Pharmaceutical University, Misasagi-Shichonocho 1, Yamashinaku, Kyoto 607-8412 (Japan); Watanabe, Hisami [Center of Molecular Biosciences, Tropical Biosphere Research Center, University of the Ryukyus, 1 Senbaru, Nishihara-cho, Okinawa 903-0213 (Japan); Ohe, Tomoyuki; Mashino, Tadahiko [Department of Pharmaceutical Sciences, Faculty of Pharmacy, Keio University, 1-5-30 Shibakoen, Minato-ku, Tokyo 105-8512 (Japan); Fujimuro, Masahiro, E-mail: fuji2@mb.kyoto-phu.ac.jp [Department of Cell Biology, Kyoto Pharmaceutical University, Misasagi-Shichonocho 1, Yamashinaku, Kyoto 607-8412 (Japan)
2014-08-15
Highlights: • Seven fullerenes were evaluated in terms of their cytotoxic effects on B-lymphomas. • Pyrrolidinium fullerene induced apoptosis of KSHV-infected B-lymphoma PEL cells. • The activation of Akt is essential for PEL cell survival. • Pyrrolidinium fullerene activated caspase-9 by inactivating Akt in PEL cells. • Pyrrolidinium fullerene has potential as a novel drug for the treatment of PEL. - Abstract: Primary effusion lymphoma (PEL) is a subtype of non-Hodgkin’s B-cell lymphoma and an aggressive neoplasm caused by Kaposi’s sarcoma-associated herpesvirus (KSHV) in immunosuppressed patients. In general, PEL cells are derived from post-germinal center B-cells and are infected with KSHV. To evaluate potential novel anti-tumor compounds against KSHV-associated PEL, seven water-soluble fullerene derivatives were evaluated as drug candidates for the treatment of PEL. Herein, we discovered a pyrrolidinium fullerene derivative, 1,1,1′,1′-tetramethyl [60]fullerenodipyrrolidinium diiodide, which induced apoptosis of PEL cells via a novel mechanism: activation of caspase-9 through suppression of the caspase-9 phosphorylation that normally keeps procaspase-9 inactive. Pyrrolidinium fullerene treatment significantly reduced the viability of PEL cells compared with KSHV-uninfected lymphoma cells, and induced apoptosis of PEL cells by activating caspase-9 via procaspase-9 cleavage. Pyrrolidinium fullerene additionally reduced the phosphorylation of Ser473 of Akt and of Ser196 of procaspase-9. Ser473-phosphorylated Akt (i.e., activated Akt) phosphorylates Ser196 in procaspase-9, causing inactivation of procaspase-9. We also demonstrated that Akt inhibitors suppressed the proliferation of PEL cells compared with KSHV-uninfected cells. Our data therefore suggest that Akt activation is essential for cell survival in PEL and that a pyrrolidinium fullerene derivative induced apoptosis by activating caspase-9 via suppression of Akt in PEL cells. In addition, we evaluated
3. Fullerene genesis by ion beams
Gamaly, E.G.; Chadderton, L.T.; Commonwealth Scientific and Industrial Research Organization, Lindfield, NSW
1995-01-01
Clearly detectable quantities of molecular fullerene (C 60 ), the most recently discovered allotrope of carbon, have been observed in graphite following irradiation with heavy projectile ions at energies of about 1 GeV using high pressure chromatography. Similar experiments using lower ion energies gave no corresponding signal, indicating an absence of fullerene. This clear difference suggests that there exists an energy threshold for fullerene genesis. Beginning with a microscopic description of deposition and transfer of energy from the ion to the target, a theoretical model is developed for interpretation of these and similar experiments. An important consequence is a description of the formation of large carbon clusters in the hot dense 'primeval soup' of single carbon atoms by means of random 'sticky' collisions. The ion energy threshold is seen as arising, physically, from a balance in the competition between the rate of primary energy deposition and the rate of system cooling. Rate equations for the basic clustering process allow calculations of the time-dependent number densities for the different carbon clusters produced. An important consequence of the theory is that it is established that the region for the specific phase transition from graphite to fullerene lies in the same pressure regime on the phase diagram as does the corresponding transition for graphite to diamond. (author)
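The growth of large carbon clusters by random "sticky" collisions, as described above, is conventionally written as Smoluchowski-type rate equations for the time-dependent number densities. The sketch below is a generic constant-kernel version with explicit Euler time stepping, assumed purely for illustration; the paper's actual kernel, cooling terms, and carbon densities are not reproduced here.

```python
# Sketch of cluster-growth rate equations of the Smoluchowski type:
#   dn_k/dt = (1/2) * sum_{i+j=k} K * n_i * n_j  -  n_k * sum_j K * n_j
# A constant sticking kernel K and explicit Euler time stepping are assumed
# here purely for illustration; the actual kernel and cooling term used in
# the paper are not reproduced.

def step(n, K, dt):
    """Advance number densities n (index = cluster size, n[0] unused) one Euler step."""
    kmax = len(n) - 1
    dn = [0.0] * (kmax + 1)
    total = sum(n[1:])
    for k in range(1, kmax + 1):
        gain = 0.5 * sum(K * n[i] * n[k - i] for i in range(1, k))
        loss = K * n[k] * total
        dn[k] = gain - loss
    return [x + dt * d for x, d in zip(n, dn)]

# Start from monomers only (arbitrary units) and watch dimers and trimers appear.
n = [0.0, 1.0] + [0.0] * 9   # cluster sizes 1..10 tracked
for _ in range(100):
    n = step(n, K=0.1, dt=0.05)
print([round(x, 4) for x in n[1:4]])  # monomer decays; n_2, n_3 grow from zero
```

Truncating the size ladder at 10 loses mass to untracked sizes; a production calculation would use a size- and temperature-dependent kernel and a conservative integrator.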
4. Fabrication of fullerene nano-structures in mixed films and devices utilizing fullerene nano-structures
Zhong, Yufei
2017-04-06
Embodiments provide methods for controlling crystallization of fullerene compounds in mixed films comprising one or more polymers. Methods can include depositing fullerene mixed films comprising one or more polymers on crystalline fullerene substrates and annealing the deposited mixed films. Methods can further include one or more of exposing the annealed mixed film to UV light, and washing the annealed mixed film with a solvent. Fullerene compounds can include one or more of PCBM, PCBNB, and PCBA.
5. Characterizing the Polymer:Fullerene Intermolecular Interactions
Sweetnam, Sean
2016-02-02
Polymer:fullerene solar cells depend heavily on the electronic coupling of the polymer and fullerene molecular species from which they are composed. The intermolecular interaction between the polymer and fullerene tends to be strong in efficient photovoltaic systems, as evidenced by efficient charge transfer processes and by large changes in the energetics of the polymer and fullerene when they are molecularly mixed. Despite the clear presence of these strong intermolecular interactions between the polymer and fullerene, there is not a consensus on the nature of these interactions. In this work, we use a combination of Raman spectroscopy, charge transfer state absorption, and density functional theory calculations to show that the intermolecular interactions do not appear to be caused by ground state charge transfer between the polymer and fullerene. We conclude that these intermolecular interactions are primarily van der Waals in nature. © 2016 American Chemical Society.
6. Diels-Alder adducts of C-60 and esters of 3-(1-indenyl)-propionic acid: alternatives for [60]PCBM in polymer:fullerene solar cells
Sieval, Alexander B.; Treat, Neil D.; Rozema, Desiree; Hummelen, Jan C.; Stingelin, Natalie
2015-01-01
A series of new, easily synthesized C-60-fullerene derivatives is introduced that allow for optimization of the interactions between rr-P3HT and the fullerene by systematic variation of the size of the ester group. Two compounds gave overall cell efficiencies of 4.8%, clearly outperforming [60]PCBM
7. Human umbilical cord blood-derived stem cells and brain-derived neurotrophic factor protect injured optic nerve: viscoelasticity characterization
Xue-man Lv
2016-01-01
Full Text Available The optic nerve is a viscoelastic, solid-like biomaterial. Its normal stress relaxation and creep properties enable the nerve to resist constant strain and protect it from injury. We hypothesized that stress relaxation and creep properties of the optic nerve change after injury. Moreover, human brain-derived neurotrophic factor or umbilical cord blood-derived stem cells may restore these changes to normal. To validate this hypothesis, a rabbit model of optic nerve injury was established using a clamp approach. At 7 days after injury, the vitreous body received a one-time injection of 50 µg human brain-derived neurotrophic factor or 1 × 10^6 human umbilical cord blood-derived stem cells. At 30 days after injury, stress relaxation and creep properties of the optic nerve that received treatment had recovered greatly, with pathological changes in the injured optic nerve also noticeably improved. These results suggest that human brain-derived neurotrophic factor or umbilical cord blood-derived stem cell intervention promotes viscoelasticity recovery of injured optic nerves, and thereby contributes to nerve recovery.
8. Production of metal fullerene surface layer from various media in the process of steel carbonization
KUZEEV Iskander Rustemovich
2018-04-01
Full Text Available Studies devoted to the production of a metal fullerene layer in steels by introducing carbon from organic and inorganic media were performed. Barium carbonate was used as the inorganic medium and petroleum pitch as the organic medium. In order to generate the required amount of fullerenes during carbonization of steel samples, an optimal temperature mode was found. At higher temperatures, absorption and cohesive effects become less important and destruction of polymeric carbon structures becomes more important. The lower temperature bound is set by the petroleum pitch softening temperature and its transition to a low-viscosity state, which enhances molecular mobility and improves the possibility of diffusion to the metal surface. Identification of fullerenes in the surface-modified layer was carried out by IR Fourier spectrometry and high-performance liquid chromatography. It was found that the nanocarbon structures formed during carbonization in barium carbonate and petroleum pitch media possess different morphology. In carbonization from the carbonate medium, the main role in fullerene synthesis belongs to the catalytic effect of the surface, with generation of endohedral derivatives in the surface layer; in carbonization from the pitch medium, fullerenes are formed during crystallization of the latter, and the crystallization centers are of fullerene type. Based on theoretical data and data of spectral and chromatographic analysis, optimal conditions of metal fullerene layer formation in barium carbonate and petroleum pitch media were determined. Low cohesion of the layer modified in the barium carbonate medium with the metal basis was discovered, caused by limited carbon diffusion in the volume of α-Fe. According to the detected mechanism of fullerene formation on the steel surface in a gaseous medium, fullerenes are formed on catalytic centers – iron atoms, forming thin metal
9. Vibrational Spectra of Tetrahedral Fullerenes.
Cheng; Li; Tang
1999-01-01
From the topological structures of the following classes of tetrahedral fullerenes-(1) Cn(h, h; -i, i), Cn(h, 0; -i, 2i), Cn(2h + i, -h + i; i, i), Cn(h - i, h + 2i; -i, 2i), and Cn(h, i; 0, i) for Td symmetry; (2) Cn(h, k; k, h), Cn(h, k; -h - k, k), and Cn(h, k; -h, h + k) for Th symmetry; (3) Cn(h, k; i, j) for T symmetry-we have obtained theoretically the formulas for the numbers of their IR and Raman active modes for all of the tetrahedral fullerenes through the decomposition of their nuclear motions into irreducible representations by means of group theory. Copyright 1999 Academic Press.
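The group-theoretical bookkeeping described in this record — decomposing the nuclear motions into irreducible representations to count IR- and Raman-active modes — uses the standard reduction formula n_i = (1/h) Σ_R g(R) χ(R) χ_i(R). A minimal sketch for the Td point group follows; the character values are the standard Td table, and the small Td molecule CH4 is used as an assumed illustrative example rather than one of the tetrahedral fullerenes treated in the paper.

```python
# Reduction of a reducible representation into irreps via the standard formula
#   n_i = (1/h) * sum_over_classes g(R) * chi(R) * chi_i(R)
# in the Td point group. The character values are the standard Td table; the
# 15-dimensional Cartesian (3N) representation of CH4 is used as a small
# illustrative Td system -- an assumed example, not one from the paper above.

# Classes of Td: E, 8C3, 3C2, 6S4, 6sigma_d
class_sizes = [1, 8, 3, 6, 6]
order = sum(class_sizes)  # h = 24

irreps = {
    "A1": [1,  1,  1,  1,  1],
    "A2": [1,  1,  1, -1, -1],
    "E":  [2, -1,  2,  0,  0],
    "T1": [3,  0, -1,  1, -1],
    "T2": [3,  0, -1, -1,  1],
}

def reduce_rep(chi):
    """Return {irrep: multiplicity} for a reducible character chi."""
    return {
        name: sum(g * c * ci for g, c, ci in zip(class_sizes, chi, chars)) // order
        for name, chars in irreps.items()
    }

# Characters of the Cartesian representation of CH4 under each class of Td
chi_3N = [15, 0, -1, -1, 3]
decomp = reduce_rep(chi_3N)
print(decomp)  # {'A1': 1, 'A2': 0, 'E': 1, 'T1': 1, 'T2': 3}
# Subtracting translations (T2) and rotations (T1) leaves vibrations A1 + E + 2T2:
# the T2 modes are IR active; A1, E and T2 modes are Raman active.
```

The same reduction, applied to the much larger 3N representations of the fullerene classes listed above, is what yields the paper's closed-form counts of IR and Raman active modes.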
10. Photodiodes based on fullerene semiconductor
Voz, C.; Puigdollers, J.; Cheylan, S.; Fonrodona, M.; Stella, M.; Andreu, J.; Alcubilla, R.
2007-01-01
Fullerene thin films have been deposited by thermal evaporation on glass substrates at room temperature. A comprehensive optical characterization was performed, including low-level optical absorption measured by photothermal deflection spectroscopy. The optical absorption spectrum reveals a direct bandgap of 2.3 eV and absorption bands at 2.8 and 3.6 eV, which are related to the creation of charge-transfer excitons. Various photodiodes on indium-tin-oxide coated glass substrates were also fabricated, using different metallic contacts in order to compare their respective electrical characteristics. The influence of a poly(3,4-ethylenedioxythiophene) poly(styrenesulfonate) buffer layer between the indium-tin-oxide electrode and the fullerene semiconductor is also demonstrated. These results are discussed in terms of the workfunction for each electrode. Finally, the behaviour of the external quantum efficiency is analyzed for the whole wavelength spectrum
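The band energies quoted above translate into absorption wavelengths via the standard conversion λ[nm] = 1239.84 / E[eV]; the wavelengths below are derived here for orientation only and are not stated in the record itself.

```python
# Standard photon energy to wavelength conversion, lambda[nm] = 1239.84 / E[eV],
# applied to the band energies quoted above. The wavelengths are computed here
# for illustration only and are not stated in the record itself.
HC_EV_NM = 1239.84  # h*c in eV*nm

for E in (2.3, 2.8, 3.6):  # direct gap and charge-transfer exciton bands, eV
    print(f"{E} eV -> {HC_EV_NM / E:.0f} nm")
```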
12. Photophysics of fullerenes: Thermionic emission
Compton, R.N. [Univ. of Tennessee, Knoxville, TN (United States)]|[Oak Ridge National Lab., TN (United States); Tuinman, A.A. [Univ. of Tennessee, Knoxville, TN (United States); Huang, J. [Ames Lab., IA (United States)
1996-09-01
Multiphoton ionization of fullerenes using long-pulse length lasers occurs mainly through vibrational autoionization. In many cases the laser ionization can be described as thermionic in analogy to the boiling off of electrons from a filament. Thermionic emission manifests itself as a delayed emission of electrons following pulsed laser excitation. Klots has employed quasiequilibrium theory to calculate rate constants for thermionic emission from fullerenes which seem to quantitatively account for the observed delayed emission times and the measured electron energy distributions. The theory of Klots also accounts for the thermionic emission of C{sub 60} excited by a low power CW Argon Ion laser. Recently Klots and Compton have reviewed the evidence for thermionic emission from small aggregates where mention was also made of experiments designed to determine the effects of externally applied electric fields on thermionic emission rates. The authors have measured the fullerene ion intensity as a function of the applied electric field and normalized this signal to that produced by single photon ionization of an atom in order to correct for all collection efficiency artifacts. The increase in fullerene ion signal relative to that of Cs{sup +} is attributed to field enhanced thermionic emission. From the slope of the Schottky plot they obtain a temperature of approximately 1,000 K. This temperature is comparable to but smaller than that estimated from measurements of the electron kinetic energies. This result for field enhanced thermionic emission is discussed further by Klots and Compton. Thermionic emission from neutral clusters has long been known for autodetachment from highly excited negative ions. Similarly, electron attachment to C{sub 60} in the energy range from 8 to 12 eV results in C{sub 60} anions with lifetimes in the range of microseconds. Quasiequilibrium theory (QET) calculations are in reasonable accord with these measurements.
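The Schottky-plot analysis mentioned above rests on the field-induced barrier lowering sqrt(e³F/4πε₀), which makes ln(I) linear in sqrt(F) with a slope of sqrt(e³/4πε₀)/(k_B T). The arithmetic for extracting a temperature from such a fit can be sketched as follows; the slope value is a hypothetical fit result chosen only to land in the ~1,000 K regime reported, not a number taken from the measurements.

```python
# Field-enhanced ("Schottky") thermionic emission: the barrier lowering
# sqrt(e^3 F / (4 pi eps0)) makes ln(I) linear in sqrt(F), with slope
# sqrt(e^3 / (4 pi eps0)) / (kB * T), so a fitted slope yields the temperature.
# The slope below is a hypothetical fit result chosen for illustration, not a
# number taken from the measurements described above.
import math

e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
kB   = 1.380649e-23       # Boltzmann constant, J/K

schottky = math.sqrt(e**3 / (4 * math.pi * eps0))  # J per sqrt(V/m)

slope = 4.4e-4            # assumed slope of ln(I) vs sqrt(F), in 1/sqrt(V/m)
T = schottky / (kB * slope)
print(f"inferred emitter temperature: {T:.0f} K")  # on the order of 1000 K
```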
13. The quest for inorganic fullerenes
Pietsch, Susanne; Dollinger, Andreas; Strobel, Christoph H.; Ganteför, Gerd, E-mail: gerd.gantefoer@uni-konstanz.de, E-mail: ydkim91@skku.edu [Department of Physics, University of Konstanz, D-78457 Konstanz (Germany); Park, Eun Ji; Kim, Young Dok, E-mail: gerd.gantefoer@uni-konstanz.de, E-mail: ydkim91@skku.edu [Department of Chemistry, Sungkyunkwan University, 440-746 Suwon (Korea, Republic of); Seo, Hyun Ook [Center for Free-Electron Laser Science/DESY, D-22607 Hamburg (Germany); Idrobo, Juan-Carlos [Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Pennycook, Stephen J. [Department of Materials Science and Engineering, National University of Singapore, Singapore 117575 (Singapore)
2015-10-07
Experimental results of the search for inorganic fullerenes are presented. Mo{sub n}S{sub m}{sup −} and W{sub n}S{sub m}{sup −} clusters are generated with a pulsed arc cluster ion source equipped with an annealing stage. This is known to enhance fullerene formation in the case of carbon. Analogous to carbon, the mass spectra of the metal chalcogenide clusters produced in this way exhibit a bimodal structure. The species in the first maximum at low mass are known to be platelets. Here, the structure of the species in the second maximum is studied by anion photoelectron spectroscopy, scanning transmission electron microscopy, and scanning tunneling microscopy. All experimental results indicate a two-dimensional structure of these species and disagree with a three-dimensional fullerene-like geometry. A possible explanation for this preference of two-dimensional structures is the ability of a two-element material to saturate the dangling bonds at the edges of a platelet by excess atoms of one element. A platelet consisting of a single element only cannot do this. Accordingly, graphite and boron might be the only materials forming nano-spheres because they are the only single-element materials assuming two-dimensional structures.
14. Fullerenes as a new type of ligands for transition metals
Sokolov, V.I.
2007-01-01
Fullerenes are considered as ligands in transition metal π-complexes. The following aspects are discussed: metals able to form π-complexes with fullerenes (Zr, V, Ta, Mo, W, Re, Ru, etc.); haptic numbers; homo- and heteroligand complexes; and ligand compatibility with fullerenes for different metals, including fullerenes with a disturbed conjugation structure.
15. Fullerenic structures and such structures tethered to carbon materials
Goel, Anish; Howard, Jack B.; Vander Sande, John B.
2010-01-05
The fullerenic structures include fullerenes having molecular weights less than that of C.sub.60 with the exception of C.sub.36 and fullerenes having molecular weights greater than C.sub.60. Examples include fullerenes C.sub.50, C.sub.58, C.sub.130, and C.sub.176. Fullerenic structure chemically bonded to a carbon surface is also disclosed along with a method for tethering fullerenes to a carbon material. The method includes adding functionalized fullerene to a liquid suspension containing carbon material, drying the suspension to produce a powder, and heat treating the powder.
16. A plasma arc reactor for fullerene research
Anderson, T. T.; Dyer, P. L.; Dykes, J. W.; Klavins, P.; Anderson, P. E.; Liu, J. Z.; Shelton, R. N.
1994-12-01
A modified Krätschmer-Huffman reactor for the mass production of fullerenes is presented. Fullerene mass production is fundamental for the synthesis of higher and endohedral fullerenes. The reactor employs mechanisms for continuous graphite-rod feeding and in situ slag removal. Soot collects into a Soxhlet extraction thimble which serves as a fore-line vacuum pump filter, thereby easing fullerene separation from soot. Thermal gravimetric analysis (TGA) for yield determination is reported. This TGA method is faster and uses smaller samples than Soxhlet extraction methods which rely on aromatic solvents. Production of 10 g of soot per hour is readily achieved utilizing this reactor. Fullerene yields of 20% are attained routinely.
17. The first stable lower fullerene: C36
Piskoti, C.; Zettl, A.
1998-01-01
A new pure carbon material, presumably composed of molecules of thirty-six carbon atoms, has been synthesized and isolated in milligram quantities. It appears as though these molecules have a closed cage structure, making them the smallest members of a new class of molecules known as fullerenes, the most notable of which is the soccer-ball-shaped C 60 . However, unlike other known fullerenes, any closed, fullerene-like C 36 cage will necessarily contain fused pentagon rings. Therefore, this molecule apparently violates the isolated pentagon rule, a criterion which requires isolated pentagons for stability in fullerene molecules. Striking parallels between this problem and the synthesis of other fused five-membered ring systems will be discussed. Also, it will be shown that certain biological structures known as clathrin behave in a manner which gives excellent predictions about fullerenes and nanotubes. These predictions help to explain the presence of abundant quantities of C 36 in arced graphite soot. copyright 1998 American Institute of Physics
18. A Molecular-Scale Understanding of Cohesion and Fracture in P3HT:Fullerene Blends
Tummala, Naga Rajesh
2015-04-21
Quantifying cohesion and understanding fracture phenomena in thin-film electronic devices are necessary for improved materials design and processing criteria. For organic photovoltaics (OPVs), the cohesion of the photoactive layer portends its mechanical flexibility, reliability, and lifetime. Here, the molecular mechanism for the initiation of cohesive failure in bulk heterojunction (BHJ) OPV active layers derived from the semiconducting polymer poly-(3-hexylthiophene) [P3HT] and two mono-substituted fullerenes is examined experimentally and through molecular-dynamics simulations. The results detail how, under identical conditions, cohesion significantly changes due to minor variations in the fullerene adduct functionality, an important materials consideration that needs to be taken into account across fields where soluble fullerene derivatives are used.
19. Seaweed Polysaccharides and Derived Oligosaccharides Stimulate Defense Responses and Protection Against Pathogens in Plants
Alejandra Moenne
2011-11-01
Full Text Available Plants interact with the environment by sensing “non-self” molecules called elicitors, derived from pathogens or other sources. These molecules bind to specific receptors located in the plasma membrane and trigger defense responses leading to protection against pathogens. In particular, it has been shown that cell wall and storage polysaccharides from green, brown and red seaweeds (marine macroalgae), corresponding to ulvans, alginates, fucans, laminarin and carrageenans, can trigger defense responses in plants, enhancing protection against pathogens. In addition, oligosaccharides obtained by depolymerization of seaweed polysaccharides also induce protection against viral, fungal and bacterial infections in plants. In particular, most seaweed polysaccharides and derived oligosaccharides trigger an initial oxidative burst at the local level and the activation of salicylic acid (SA), jasmonic acid (JA) and/or ethylene signaling pathways at the systemic level. The activation of these signaling pathways leads to increased expression of genes encoding: (i) Pathogenesis-Related (PR) proteins with antifungal and antibacterial activities; (ii) defense enzymes such as phenylalanine ammonia lyase (PAL) and lipoxygenase (LOX), which determine accumulation of phenylpropanoid compounds (PPCs) and oxylipins with antiviral, antifungal and antibacterial activities; and (iii) enzymes involved in synthesis of terpenes, terpenoids and/or alkaloids having antimicrobial activities. Thus, seaweed polysaccharides and their derived oligosaccharides induce the accumulation of proteins and compounds with antimicrobial activities that determine, at least in part, the enhanced protection against pathogens in plants.
20. The primary study on protective effects of vanillin derivative on cell injury induced by radiation
Zheng Hong; Wang Siying; Yan Yuqian; Wang Lin; Xu Qinzhi; Cong Jianbo; Zhou Pingkun
2008-01-01
In this paper, the protective effects of the vanillin derivative VND3207 on cell injury induced by radiation were studied by the methods of the methyl thiazolyl tetrazolium colorimetric assay (MTT) and electron spin resonance (ESR). At first, the MTT method was used to evaluate the cytotoxicity of vanillin derivatives (VND3202-VND3209) in HFS cells. Then, the MTT method was used to measure the proliferation activity of HeLa cells with 2 Gy irradiation treated with vanillin derivatives and to measure the proliferation of AHH-1 cells treated with VND3207 before exposure to 4 Gy irradiation. And ESR detected the antioxidation activity of vanillin and VND3207. The results showed that VND3207 and VND3206 presented no toxicity below 50 μmol/L, and VND3207 and VND3209 had no proliferative effects on HeLa cells while VND3206 could expedite the tumor cell proliferation at 30 μmol/L; by contrast, VND3208 showed increased radiosensitivity of the HeLa cells. For the AHH-1 cells exposed to 4 Gy irradiation, VND3207 presented protective effects against radiation injury. ESR results also suggested that VND3207 could scavenge free radicals. Its effect was far more potent than that of vanillin. From this study we primarily screened out the vanillin derivative VND3207, which has protective effects on cell injury induced by radiation, and provided data for future research work. (authors)
1. Iron-fullerene mixture plasma
Biri, S.; Fekete, E.
2004-01-01
Complete text of publication follows. In many laboratories new materials useful for nanotechnology and medical applications are searched for and studied. In the ECR laboratory one of our future goals is to produce endohedral fullerene molecules (e.g. Fe@C60) in large quantity. If this comes true, it will be possible to make building blocks for nanoparts, an ultra-contrast medium for MRI, and a magnetic nano-particle for treatment of cancer. For this experiment some modifications were carried out on the ATOMKI-ECRIS [1]. The waveguide of the 14.5 GHz microwave generator was divided in order to couple very low powers (1 watt or less) into the plasma. The C60 component of the plasma was produced by using a simple oven. Among known methods (oven, sputtering, electron bombardment, compounds containing Fe), we chose the evaporation of ferrocene [Fe(C5H5)2] powder to introduce Fe atoms into the plasma. The ferrocene chamber was connected to one of the two gas feeding lines and the evaporation rate was controlled by a needle valve. The extraction voltage had to be kept as low as 600 V because of the low mass-energy product of our bending magnet. First we developed independently the rough working conditions for single-charged dense iron and fullerene plasmas. Then a clean fullerene plasma was made. The temperature of the oven was about 450 °C. The bending magnet was set to the C60 peak (M=720) and about 50-100 nA intensity of the single-charged fullerene peak was obtained. Then the magnet was set to the position of the sought Fe@C60 or FeC60 peak (M=776) and the ferrocene valve was opened. A very difficult and long tuning followed. Finally we found a new large peak with higher mass than C60. In Figure 1 the centre of the new big peak on the right side is located at M=776, which corresponds to FeC60 and/or Fe@C60 molecules. The peak is wide and shows some structure. We think it may contain impurities attached to the C58, C59, C60 and FeC60 molecules. We
2. Machine Phase Fullerene Nanotechnology: 1996
Globus, Al; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
NASA has used exotic materials for spacecraft and experimental aircraft to good effect for many decades. In spite of many advances, transportation to space still costs about $10,000 per pound. Drexler has proposed a hypothetical nanotechnology based on diamond and investigated the properties of such molecular systems. These studies and others suggest enormous potential for aerospace systems. Unfortunately, methods to realize diamondoid nanotechnology are at best highly speculative. Recent computational efforts at NASA Ames Research Center, and computation and experiment elsewhere, suggest that a nanotechnology of machine phase functionalized fullerenes may be synthetically relatively accessible and of great aerospace interest. Machine phase materials are (hypothetical) materials consisting entirely or in large part of microscopic machines. In a sense, most living matter fits this definition. To begin investigation of fullerene nanotechnology, we used molecular dynamics to study the properties of carbon nanotube based gears and gear/shaft configurations. Experiments on C60 and quantum calculations suggest that benzyne may react with carbon nanotubes to form gear teeth. Han has computationally demonstrated that molecular gears fashioned from (14,0) single-walled carbon nanotubes and benzyne teeth should operate well at 50-100 gigahertz. Results suggest that rotation can be converted to rotating or linear motion, and linear motion may be converted into rotation. Preliminary results suggest that these mechanical systems can be cooled by a helium atmosphere. Furthermore, Deepak has successfully simulated using helical electric fields generated by a laser to power fullerene gears once a positive and negative charge have been added to form a dipole. Even with mechanical motion, cooling, and power, creating a viable nanotechnology requires support structures, computer control, a system architecture, a variety of components, and some approach to manufacture. Additional
3. Fullerene C60
Koruga, D; Hameroff, S; Sundareshan, M [Univ. of Arizona, Tucson, AZ (United States)]; Withers, J; Loutfy, R [MER Corp., Tucson, AZ (United States)]
1993-01-01
This book, one of the first to be published in the exciting field of fullerenes, includes a short history of scientific discovery, as well as one possible answer to the question: for what purposes can C60 be utilized. The book opens with a review of the life of Buckminster Fuller. Modern history of fivefold symmetry and the icosahedron began between 1984 and 1985, when Shechtman and his research team opened a new branch in crystallography (fivefold symmetry) and when the Kroto/Smalley research team discovered the C60 molecule (truncated icosahedron). Production of solid C60 by the Huffman/Krätschmer research team in 1990 provided a new stimulus for research by producing C60 in macroscopic amounts for use by the scientific and technological community. This achievement led to developments such as Koruga's August 1992 creation of the dimer C116 using scanning tunneling engineering and Loutfy's hydrogenation of C60 and construction of the first Ni/C60 rechargeable batteries in December 1992. New inventions based on C60 will continue to be forthcoming, particularly in the areas of superconductivity, quantum devices, and molecular electronic devices. Discovery of the C60 molecule (Kroto/Smalley), production of solid C60 (Huffman/Krätschmer) and technological inventions such as C116 (Koruga) have been chance discoveries. A short history of these discoveries is detailed in the book along with the results of the authors' Fullerene research efforts, including atomic resolution images of Fullerene C60, Ni/C60 batteries, nanotechnology of C60, comparison of C60 with biological systems, and others.
As Fullerene C60 will require control engineering, an overview of control systems, in particular, general and optimal control of the Schroedinger equation, is contained. Some experimental and theoretical work of other researchers is also presented. 140 figs., 4 tabs., 342 refs.
4. Boron Fullerenes: A First-Principles Study
Gonzalez Szwacki, Nevill
2007-01-01
Full Text Available Abstract: A family of unusually stable boron cages was identified and examined using a first-principles local-density functional method. The structure of the fullerenes is similar to that of the B12 icosahedron and consists of six crossing double-rings. The energetically most stable fullerene is made up of 180 boron atoms. A connection between the fullerene family and its precursors, boron sheets, is made. We show that the most stable boron sheets are not necessarily precursors of very stable boron cages. Our finding is a step forward in the understanding of the structure of the recently produced boron nanotubes.
5. Co-exposure with fullerene may strengthen health effects of organic industrial chemicals
Maili Lehto
Full Text Available In vitro toxicological studies together with atomistic molecular dynamics simulations show that occupational co-exposure with C60 fullerene may strengthen the health effects of organic industrial chemicals. The chemicals studied are acetophenone, benzaldehyde, benzyl alcohol, m-cresol, and toluene, which can be used with fullerene as reagents or solvents in industrial processes. Potential co-exposure scenarios include a fullerene dust and organic chemical vapor, or a fullerene solution aerosolized in workplace air. Unfiltered and filtered mixtures of C60 and organic chemicals represent different co-exposure scenarios in in vitro studies where acute cytotoxicity and immunotoxicity of C60 and organic chemicals are tested together and alone by using human THP-1-derived macrophages.
Statistically significant co-effects are observed for an unfiltered mixture of benzaldehyde and C60 that is more cytotoxic than benzaldehyde alone, and for a filtered mixture of m-cresol and C60 that is slightly less cytotoxic than m-cresol. Hydrophobicity of chemicals correlates with co-effects when secretion of pro-inflammatory cytokines IL-1β and TNF-α is considered. Complementary atomistic molecular dynamics simulations reveal that C60 co-aggregates with all chemicals in aqueous environment. Stable aggregates have a fullerene-rich core and a chemical-rich surface layer, and while essentially all C60 molecules aggregate together, a portion of organic molecules remains in water.
6. Transformation of methano[60]fullerenes into dihydrofullerofuranes induced by electron transfer
Yanilkin, V.V.; Toropchina, A.V.; Morozov, V.I.; Nastapova, N.V.; Gubskaya, V.P.; Sibgatullina, F.G.; Azancheev, N.M.; Efremov, Yu.Ya.; Nuretdinov, I.A.
2004-01-01
The electrochemical reduction of methano[60]fullerenes (61-acetyl-61-(diethoxyphosphoryl)methano-60-fullerene 1, 61-acetyl-61-(diisopropoxyphosphoryl)methano-60-fullerene 2, 61-(2,2-diethoxyacetyl)-61-(diethoxyphosphoryl)methano-60-fullerene 3, 61-phenyl-61-(1,2-dioxo-3,3-dimethylbutyl)methano-60-fullerene 4) in o-dichlorobenzene-DMF (3:1 v/v)/0.1 M Bu4NBF4 on a glass-carbon electrode proceeds in a few steps. The reversible transfer of the first electron results in the formation of radical anions registered by the ESR method. The subsequent reduction proceeds differently because of the various stability of the anionic intermediates. The radical anions of the methanofullerenes 3 and 4 are less stable than the radical anions of compounds 1 and 2, and less stable than the radical anions of methanofullerenes which contain an ester and/or a phosphonate group. The opening of a cyclopropane ring occurs during the stage of the formation of radical trianions of methanofullerenes 1, 2.
The same process for compounds 3, 4 proceeds slowly in radical anions and fast in dianions. The opening of the cyclopropane ring for all compounds is not accompanied by the elimination of the methano group and results in the formation of dihydrofullerenofurane derivatives. The transformation of methanofullerene 3 induced by single electron transfer proceeds via a chain reaction mechanism.
7. Characterization of the Structural, Mechanical, and Electronic Properties of Fullerene Mixtures: A Molecular Simulations Description
Tummala, Naga Rajesh
2017-10-06
We investigate mixtures of fullerenes and fullerene derivatives, the most commonly used electron-accepting materials in organic solar cells, by using a combination of molecular dynamics and density functional theory methods. Our goal is to describe how mixing affects the molecular packing, mechanical properties, and electronic parameters (site energy disorder, electronic couplings) of interest for solar-cell applications. Specifically, we consider mixtures of: (i) C60 and C70; (ii) C60, C70, and C84; and (iii) PC61BM and PC71BM.
8. Hydrogenated fullerenes in space: FT-IR spectra analysis
El-Barbary, A. A.
2016-01-01
Fullerenes and hydrogenated fullerenes are found in circumstellar and interstellar environments. But the structures responsible for the bands detected in interstellar and circumstellar space are not completely understood so far. For that purpose, the aim of this article is to provide all possible infrared spectra for C20 and C60 fullerenes and their hydrogenated fullerenes. Density functional theory (DFT) is applied using the B3LYP exchange functional with basis set 6-31G(d,p). Fourier transform infrared spectroscopy (FT-IR) is found to be capable of distinguishing between fullerenes, mono-hydrogenated fullerenes and fully hydrogenated fullerenes.
In addition, deposition of one hydrogen atom outside the fully hydrogenated fullerenes is found to be distinguishable through the formation of an H2 molecule with a peak around 4440 cm-1. However, deposition of one hydrogen atom inside the fully hydrogenated fullerenes cannot be distinguished. The obtained spectral structures are analyzed and compared with available experimental results.
9. Derivation of Intervention Levels for Protection of the Public in a Radiological Emergency in Korea
Lee, Jong Tai; Lee, Goan Yup; Khang, Byung Oui; Oh, Ki Hoon; Kim, Chang Kyu
2001-01-01
Intervention levels for protection of the public in a radiological emergency are theoretically derived by the cost-benefit approach with the concept of justification and optimization. Intervention levels on sheltering, evacuation, temporary relocation and permanent resettlement for protection of the public are estimated with the cost of protective countermeasures and the value of dose averted, which are site-specific parameters. As a result, it is confirmed that IAEA guidelines for intervention levels are applicable to a radiological emergency in Korea. Optimum ranges of 5-10 mSv/2 days for sheltering, 25-130 mSv/week for evacuation, 15-90 mSv/month for temporary relocation and 600-3,500 mSv/lifetime for permanent resettlement are also provided as intervention levels. The result can be applied as useful data to update intervention levels under this theoretical background in Korea.
10. Supramolecular Control of Oligothienylenevinylene-Fullerene Interactions: Evidence for a Ground-State EDA Complex
McClenaghan, N.D.; Grote, Z.; Darriet, K.; Zimine, M.Y.; Williams, R.M.; De Cola, L.; Bassani, D.M.
2005-01-01
Complementary hydrogen-bonding interactions between a barbituric acid-substituted fullerene derivative (1) and a corresponding receptor (2) bearing thienylenevinylene units are used to assemble a 1:1 supramolecular complex (K = 5500 M-1).
Due to the close proximity of the redox-active moieties within
11. Nonlinear absorption of fullerene- and nanotubes-doped liquid crystal systems
Kamanina, N.; Reshak, Ali H; Vasiliev, P.Y.; Vangonen, A. I.; Studeonov, V. I.; Usanov, Y. E.; Ebothe, J.; Gondek, E.; Wojcik, W.; Danel, A.
2009-01-01
Vol. 41, No. 3 (2009), pp. 391-394. ISSN 1386-9477. Institutional research plan: CEZ:AV0Z60870520. Keywords: nonlinear absorption properties; organic electrooptical systems; liquid crystal; fullerenes; nanotubes; PVK derivatives. Subject RIV: BO - Biophysics. Impact factor: 1.177, year: 2009
12. Characterizing the Polymer:Fullerene Intermolecular Interactions
Sweetnam, Sean; Vandewal, Koen; Cho, Eunkyung; Risko, Chad; Coropceanu, Veaceslav; Salleo, Alberto; Bredas, Jean-Luc; McGehee, Michael D.
2016-01-01
the polymer and fullerene, there is not a consensus on the nature of these interactions. In this work, we use a combination of Raman spectroscopy, charge transfer state absorption, and density functional theory calculations to show that the intermolecular
13. Packing and Disorder in Substituted Fullerenes
Tummala, Naga Rajesh; Elroby, Shaaban Ali Kamel; Aziz, Saadullah G.; Risko, Chad; Coropceanu, Veaceslav; Bredas, Jean-Luc
2016-01-01
Fullerenes are ubiquitous as electron-acceptor and electron-transport materials in organic solar cells. Recent synthetic strategies to improve the solubility and electronic characteristics of these molecules have translated into a tremendous
14. Adsorption of amino acids by fullerenes and fullerene nanowhiskers
Hashizume, Hideo; Hirata, Chika; Fujii, Kazuko; Miyazawa, Kun'ichi
2015-12-01
We have investigated the adsorption of some amino acids and an oligopeptide by fullerene (C60) and fullerene nanowhiskers (FNWs). C60 and FNWs hardly adsorbed amino acids. Most of the amino acids used have a hydrophobic side chain. Ala and Val, with an alkyl chain, were not adsorbed by the C60 or FNWs.
Trp, Phe and Pro, with a cyclic structure, were not adsorbed by them either. The aromatic group of C60 did not interact with the side chain. The carboxyl or amino group, with the frame structure of an amino acid, has a positive or negative charge in solution. It is likely that the C60 and FNWs would not prefer the charged carboxyl or amino group. Tri-Ala was adsorbed slightly by the C60 and FNWs. The carboxyl or amino group is not close to the center of the methyl group of Tri-Ala. One of the methyl groups in Tri-Ala would interact with the aromatic structure of the C60 and FNWs. We compared our results with the theoretical interaction of 20 bio-amino acids with C60. The theoretical simulations showed the bonding distance between C60 and an amino acid and the dissociation energy. The dissociation energy was shown to increase in the order, Val < Phe < Pro < Asp < Ala < Trp < Tyr < Arg < Leu. However, the simulation was not consistent with our experimental results. The structure of albumin was changed a little by C60. In our study Try and Tyr were hardly adsorbed by C60 and FNWs. These amino acids did not show a different adsorption behavior compared with other amino acids. The adsorptive behavior of mono-amino acids might be different from that of polypeptides.
15. Adsorption of amino acids by fullerenes and fullerene nanowhiskers
Hashizume, Hideo; Hirata, Chika; Fujii, Kazuko; Miyazawa, Kun'ichi
2015-01-01
We have investigated the adsorption of some amino acids and an oligopeptide by fullerene (C60) and fullerene nanowhiskers (FNWs). C60 and FNWs hardly adsorbed amino acids. Most of the amino acids used have a hydrophobic side chain. Ala and Val, with an alkyl chain, were not adsorbed by the C60 or FNWs. Trp, Phe and Pro, with a cyclic structure, were not adsorbed by them either. The aromatic group of C60 did not interact with the side chain. The carboxyl or amino group, with the frame structure of an amino acid, has a positive or negative charge in solution. It is likely that the C60 and FNWs would not prefer the charged carboxyl or amino group. Tri-Ala was adsorbed slightly by the C60 and FNWs.
The carboxyl or amino group is not close to the center of the methyl group of Tri-Ala. One of the methyl groups in Tri-Ala would interact with the aromatic structure of the C60 and FNWs. We compared our results with the theoretical interaction of 20 bio-amino acids with C60. The theoretical simulations showed the bonding distance between C60 and an amino acid and the dissociation energy. The dissociation energy was shown to increase in the order, Val < Phe < Pro < Asp < Ala < Trp < Tyr < Arg < Leu. However, the simulation was not consistent with our experimental results. The adsorption of albumin (a protein) by C60 showed an effect on the side chains of Try and Trp. The structure of albumin was changed a little by C60. In our study Try and Tyr were hardly adsorbed by C60 and FNWs. These amino acids did not show a different adsorption behavior compared with other amino acids. The adsorptive behavior of mono-amino acids might be different from that of polypeptides. (paper)
16. Ability of Fullerene to Accumulate Hydrogen
Bubenchikov, Mikhail A
2016-01-01
Full Text Available In the present paper, using a modification of the LJ-potential and the continuum approach, we define C60-H2 (He) potentials, as well as the interaction energy of two fullerene particles. The proposed approach allows us to calculate interactions between carbon structures of any character (wavy graphenes, nanotubes, etc.). The obtained results allowed us to localize global sorption zones both inside the particle and on the outer surface of the fullerene.
17. Neuron-derived IgG protects neurons from complement-dependent cytotoxicity
Zhang, Jie; Niu, Na; Li, Bingjie; McNutt, Michael A
2013-12-01
Passive immunity of the nervous system has traditionally been thought to be predominantly due to the blood-brain barrier. This concept must now be revisited based on the existence of neuron-derived IgG.
The conventional concept is that IgG is produced solely by mature B lymphocytes, but it has now been found to be synthesized by murine and human neurons. However, the function of this endogenous IgG is poorly understood. In this study, we confirm IgG production by rat cortical neurons at the protein and mRNA levels, with 69.0 ± 5.8% of cortical neurons IgG-positive. Injury to primary-culture neurons was induced by complement, leading to increases in IgG production. Blockage of neuron-derived IgG resulted in more neuronal death and early apoptosis in the presence of complement. In addition, FcγRI was found in microglia and astrocytes. Expression of FcγRI in microglia was increased by exposure to neuron-derived IgG. Release of NO from microglia triggered by complement was attenuated by neuron-derived IgG, and this attenuation could be reversed by IgG neutralization. These data demonstrate that neuron-derived IgG is protective of neurons against injury induced by complement and microglial activation. IgG appears to play an important role in maintaining the stability of the nervous system.
18. Enthalpies of sublimation of fullerenes by thermogravimetry
Martínez-Herrera, Melchor; Campos, Myriam; Torres, Luis Alfonso; Rojas, Aarón, E-mail: arojas@cinvestav.mx
2015-12-20
Highlights: • Enthalpies of sublimation of fullerenes were measured by thermogravimetry. • Results for enthalpies of sublimation are comparable with data reported in the literature. • The not previously reported enthalpy of sublimation of C78 is supplied in this work. • Enthalpies of sublimation show a strong dependence on the number of carbon atoms in the cluster. • Enthalpies of sublimation are congruent with dispersion forces ruling the cohesion of solid fullerene. Abstract: The enthalpies of sublimation of fullerenes, as measured in the interval of 810-1170 K by thermogravimetry and applying the Langmuir equation, are reported.
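The Langmuir approach mentioned in this abstract can be sketched as follows (a standard formulation of the method; the symbols and the vaporization coefficient α are assumed here, not taken from the abstract). The vapor pressure p is obtained from the rate of mass loss of the sample, and the enthalpy of sublimation then follows from the temperature dependence of p via the Clausius-Clapeyron relation:

```latex
% Langmuir equation: vapor pressure from the free-evaporation mass-loss rate
% (m = sample mass, t = time, A = evaporation area, M = molar mass of the
%  fullerene, alpha = vaporization coefficient, R = gas constant, T = temperature)
p \;=\; \frac{1}{\alpha A}\,\frac{\mathrm{d}m}{\mathrm{d}t}\,
        \sqrt{\frac{2\pi R T}{M}}

% Clausius-Clapeyron: plotting ln p against 1/T over the measured interval
% gives a straight line whose slope yields the enthalpy of sublimation
\ln p \;=\; -\,\frac{\Delta_{\mathrm{sub}}H}{R}\,\frac{1}{T} \;+\; C
```

With mass-loss data recorded over 810-1170 K, a single linear fit of ln p versus 1/T is enough to extract Δ_sub H at the mean temperature of the interval.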
The detailed experimental procedure and its application to fullerenes C60, C70, C76, C78 and C84 are supplied. The accuracy and uncertainty associated with the experimental results of the enthalpy of sublimation of these fullerenes show that the reliability of the measurements is comparable to that of other indirect high-temperature methods. The results also indicate that the enthalpy of sublimation increases proportionally to the number of carbon atoms in the cluster, but there is also a strong correlation between the enthalpy of sublimation and the polarizability of each fullerene.
19. Photoinduced energy and electron transfer in fullerene-oligothiophene-fullerene triads
Hal, Paul A. van; Knol, Joop; Langeveld-Voss, Bea M.W.; Meskers, Stefan C.J.; Hummelen, J.C.; Janssen, René A.J.
2000-01-01
A series of fullerene-oligothiophene-fullerene (C60-nT-C60) triads with n = 3, 6, or 9 thiophene units has been synthesized, and their photophysical properties have been studied using photoinduced absorption and fluorescence spectroscopy in solution and in the solid state as thin films. The results
20. Atomic nitrogen encapsulated in fullerenes: realization of a chemical Faraday cage
Lips, K.
2000-01-01
Fullerenes, C60 and C70, are ideal containers for atomic nitrogen. We will show by electron paramagnetic resonance (EPR) experiments that nitrogen in C60 keeps its atomic ground-state configuration and resides in the center of the cage. This is the first time that atomic nitrogen has been stabilized at ambient conditions. The inert shell of the fullerene protects the highly reactive nitrogen from undergoing chemical reactions with the surroundings. The fullerene cage is the chemical analogue of the Faraday cage in the case of electrical fields, i.e. it shields off the chemical reactivity. As for the free nitrogen atom, the spins of the three p-electrons of nitrogen in C60 are parallel (S = 3/2) and the atom has spherical symmetry.
Due to the center position of nitrogen in C60, extremely sharp EPR lines are observed. This reflects the absence of a strong host-guest interaction and shows that the individuality of nitrogen in the fullerenes is preserved. Further evidence for the almost interaction-free suspension of nitrogen in the fullerene cages is provided by g-factor measurements. These investigations show that magnetic shielding by the host molecules can account for the observed differences between N@C60 and N@C70. The fullerene cage can be chemically modified without destroying the endohedral complex. The chemical modifications change the symmetry of the molecule, which is observed through an additional fine structure in the EPR spectrum. Influences of the modifications on the stability of N@C60 will be discussed. (orig.)
1. Production of Endohedral Fullerenes by Ion Implantation
Diener, M.D.; Alford, J. M.; Mirzadeh, S.
2007-05-31
The empty interior cavity of fullerenes has long been touted for containment of radionuclides during in vivo transport, during radioimmunotherapy (RIT) and radioimaging for example. As the chemistry required to open a hole in a fullerene is complex and exceedingly unlikely to occur in vivo, and the conformational stability of the fullerene cage is absolute, atoms trapped within fullerenes can only be released during extremely energetic events. Encapsulating radionuclides in fullerenes could therefore potentially eliminate undesired toxicity resulting from leakage and catabolism of radionuclides administered with other techniques. At the start of this project, however, methods for production of transition metal and p-electron metal endohedral fullerenes were completely unknown, and only one method for production of endohedral radiofullerenes was known.
They therefore investigated three different methods for the production of therapeutically useful endohedral metallofullerenes: (1) implantation of ions using the high intensity ion beam at the Oak Ridge National Laboratory (ORNL) Surface Modification and Characterization Research Center (SMAC) and fullerenes as the target; (2) implantation of ions using the recoil energy following alpha decay; and (3) implantation of ions using the recoil energy following neutron capture, using ORNL's High Flux Isotope Reactor (HFIR) as a thermal neutron source. While they were unable to obtain evidence of successful implantation using the ion beam at SMAC, recoil following alpha decay and neutron capture were both found to be economically viable methods for the production of therapeutically useful radiofullerenes. In this report, the procedures for preparing fullerenes containing the isotopes 212Pb, 212Bi, 213Bi, and 177Lu are described. None of these endohedral fullerenes had ever previously been prepared, and all of these radioisotopes are actively under investigation for RIT. Additionally, the chemistry for
2. Fullerene surfactants and their use in polymer solar cells
Jen, Kwan-Yue; Yip, Hin-Lap; Li, Chang-Zhi
2015-12-15
Fullerene surfactant compounds useful as an interfacial layer in polymer solar cells to enhance solar cell efficiency. Polymer solar cell including a fullerene surfactant-containing interfacial layer intermediate the cathode and active layer.
3. Derived limits for radiological protection against ionizing radiation based on ICRP-60 recommendations
Jang, Si Young; Lee, Byung Soo
2000-01-01
In Korea the dose limits are reduced and are set at the ICRP-60 limits. However, derived limits tabulated as MPC in air and water are still specified in Notice No. 98-12. There are some discrepancies between the primary dose limits and the MPCs in air and water.
Therefore, in order to accept the ICRP-60 recommendations fully, derived limits such as ALI, DAC and ECL for radiological protection against ionizing radiation based on ICRP-60 recommendations were calculated using modified versions of the methods of 10 CFR Part 20, together with the dose limits and committed effective dose coefficients of the Basic Safety Standards of the IAEA. The derived limits in this study were also compared with those prescribed in 10 CFR Part 20 as well as the MPCs of Notice No. 98-12 in order to analyze the impact of implementing derived limits on nuclear facilities. ECLs in air and water for the control of radioactive discharge into the environment in this study are shown to have lower values (i.e. more conservative), for the most part, than those in Notice No. 98-12. Especially for uranium elements, ECLs in water are approximately two orders of magnitude lower than those in Notice No. 98-12. (author)
4. Facile preparation of amine and amino acid adducts of [60]fullerene using chlorofullerene C60Cl6 as a precursor
Kornev, Alexey B; Khakina, Ekaterina A; Troyanov, Sergey I; Kushch, Alla A; Peregudov, Alexander; Vasilchenko, Alexey; Deryabin, Dmitry G; Martynenko, Vyacheslav M; Troshin, Pavel A
2012-06-04
We report a general synthetic approach to the preparation of highly functionalized amine and amino acid derivatives of [60]fullerene starting from readily available chlorofullerene C60Cl6. The synthesized water-soluble amino acid derivative of C60 demonstrated pronounced antiviral activity, while the cationic amine-based compound showed strong antibacterial action in vitro.
5. Non-fullerene electron acceptors for organic photovoltaic devices
Jenekhe, Samson A.; Li, Haiyan; Earmme, Taeshik; Ren, Guoqiang
2017-11-07
Non-fullerene electron acceptors for highly efficient organic photovoltaic devices are described.
The non-fullerene electron acceptors have an extended, rigid, π-conjugated electron-deficient framework that can facilitate exciton and charge delocalization. The non-fullerene electron acceptors can physically mix with a donor polymer and facilitate improved electron transport. The non-fullerene electron acceptors can be incorporated into organic electronic devices, such as photovoltaic cells.
6. Comparison of the Protective Efficacy of DNA and Baculovirus-Derived Protein Vaccines for EBOLA Virus in Guinea Pigs
Mellquist-Riemenschneider, Jenny L; Garrison, Aura R; Geisbert, Joan B; Saikh, Kamal U; Heidebrink, Kelli D
2003-01-01
.... Previously, a priming dose of a DNA vaccine expressing the glycoprotein (GP) gene of MARV followed by boosting with recombinant baculovirus-derived GP protein was found to confer protective immunity to guinea pigs (Hevey et al., 2001...
7. Protection of vanillin derivative VND3207 on plasmid DNA damage induced by different LET ionizing radiation
Xu Huihui; Wang Li; Sui Li; Guan Hua; Wang Yu; Liu Xiaodan; Zhang Shimeng; Xu Qinzhi; Wang Xiao; Zhou Pingkun
2011-01-01
Objective: To evaluate the radioprotective effect of the vanillin derivative VND3207 on DNA damage induced by different LET ionizing radiation. Methods: Plasmid DNA in liquid was irradiated by 60Co γ-rays, protons or 7Li heavy ions with or without VND3207. The conformational changes of the plasmid DNA were assessed by agarose gel electrophoresis and quantification was done using a gel imaging system. Results: The DNA damage induced by protons and 7Li heavy ions was much more serious as compared with that by 60Co γ-rays, and the vanillin derivative VND3207 could efficiently decrease the DNA damage induced by all three types of irradiation sources, which was expressed as a significantly reduced ratio of the open circular form (OC) of plasmid DNA. The radioprotective effect of VND3207 increased with increasing drug concentration.
The protective efficiencies of 200 μmol/L VND3207 were 85.3% (t=3.70, P=0.033), 73.3% (t=10.58, P=0.017) and 80.4% (t=8.57, P=0.008) against DNA damage induced by 50 Gy of γ-rays, protons and 7 Li heavy ions, respectively. It seemed that the radioprotection of VND3207 was more effective against DNA damage induced by the high-LET heavy ions than against that induced by protons. Conclusions: VND3207 has a protective effect against the genotoxicity of different LET ionizing radiation, especially for γ-rays and 7 Li heavy ions. (authors) 8. Contrasting bonding behavior of thiol molecules on carbon fullerene structures Mixteco-Sanchez, J.C.; Guirado-Lopez, R.A. 2003-01-01 We have performed semiempirical as well as ab initio density-functional theory (DFT) calculations at T=0 to analyze the equilibrium configurations and electronic properties of spheroidal C 60 as well as of cylindrical armchair (5,5) and (8,8) fullerenes passivated with SCH 3 and S(CH 2 ) 2 CH 3 thiols. Our structural results reveal that the lowest-energy configurations of the adsorbates strongly depend on their chain length and on the structure of the underlying substrate. In the low-coverage regime, both SCH 3 and S(CH 2 ) 2 CH 3 molecules prefer to organize into a molecular cluster on one side of the C 60 surface, thus providing a less protective organic coating for the carbon structure. However, with an increasing number of adsorbed thiols, a transition to a more uniform distribution is obtained, which actually takes place for six and eight adsorbed molecules when using S(CH 2 ) 2 CH 3 and SCH 3 chains, respectively. In contrast, for the tubelike arrangements in the low-coverage regime, a quasi-one-dimensional zigzag organization of the adsorbates along the tubes is always preferred.
The sulfur-fullerene bond is considerably strong and is at the origin of outward and lateral displacements of the carbon atoms, leading to the stabilization of three-membered rings on the surface (spheroidal structures) as well as to sizable nonuniform radial deformations (cylindrical configurations). The electronic spectrum of our thiol-passivated fullerenes shows strong variations in the energy difference between the highest occupied and lowest unoccupied molecular orbitals as a function of the number and distribution of adsorbed thiols, thus opening the possibility to manipulate the transport properties of these compounds by means of selective adsorption mechanisms. 9. Fullerene C{sub 70} decorated TiO{sub 2} nanowires for visible-light-responsive photocatalyst Cho, Er-Chieh [Department of Clinical Pharmacy, School of Pharmacy, College of Pharmacy, Taipei Medical University, Taipei 110, Taiwan (China); Ciou, Jing-Hao [Department of Fragrance and Cosmetic Science, Kaohsiung Medical University, Kaohsiung 80708, Taiwan (China); Zheng, Jia-Huei; Pan, Job [Department of Clinical Pharmacy, School of Pharmacy, College of Pharmacy, Taipei Medical University, Taipei 110, Taiwan (China); Hsiao, Yu-Sheng, E-mail: yshsiao@mail.mcut.edu.tw [Department of Materials Engineering, Ming Chi University of Technology, New Taipei City 24301, Taiwan (China); Lee, Kuen-Chan, E-mail: kclee@kmu.edu.tw [Department of Fragrance and Cosmetic Science, Kaohsiung Medical University, Kaohsiung 80708, Taiwan (China); Huang, Jen-Hsien, E-mail: 295604@cpc.com.tw [Department of Green Material Technology, Green Technology Research Institute, CPC Corporation, Kaohsiung 30010, Taiwan (China) 2015-11-15 Graphical abstract: - Highlights: • TiO{sub 2} nanowires decorated with C{sub 60} and C{sub 70} derivatives have been synthesized. • The fullerenes impede charge recombination due to their high electron affinity. • The fullerenes expand the utilization of solar light from UV to visible light.
• The modified TiO{sub 2} has great biocompatibility. - Abstract: In this study, we have synthesized C{sub 60} and C{sub 70}-modified TiO{sub 2} nanowires (NWs) through interfacial chemical bonding. The results indicate that the fullerenes (C{sub 60} and C{sub 70} derivatives) can act as sinks for photogenerated electrons in TiO{sub 2} while the fullerene/TiO{sub 2} is illuminated under ultraviolet (UV) light. Therefore, in comparison to the pure TiO{sub 2} NWs, the modified TiO{sub 2} NWs display a higher photocatalytic activity under UV irradiation. Moreover, the fullerenes can also function as a sensitizer to TiO{sub 2}, which expands the utilization of solar light from UV to visible light. The results reveal that the C{sub 70}/TiO{sub 2} NWs show a significant photocatalytic activity for degradation of methylene blue (MB) in the visible light region. To better understand the mechanism responsible for the effect of fullerenes on the photocatalytic properties of TiO{sub 2}, electron-only devices and photoelectrochemical cells based on fullerenes/TiO{sub 2} are also fabricated and evaluated. 10. Cytoprotective dibenzoylmethane derivatives protect cells from oxidative stress-induced necrotic cell death. Hegedűs, Csaba; Lakatos, Petra; Kiss-Szikszai, Attila; Patonay, Tamás; Gergely, Szabolcs; Gregus, Andrea; Bai, Péter; Haskó, György; Szabó, Éva; Virág, László 2013-06-01 Screening of a small in-house library of 1863 compounds identified 29 compounds that protected Jurkat cells from hydrogen peroxide-induced cytotoxicity. Of the cytoprotective compounds, eleven proved to possess antioxidant activity (ABTS radical scavenger effect) and two were found to inhibit poly(ADP-ribosyl)ation (PARylation), a cytotoxic pathway operating in severely injured cells. Four cytoprotective dibenzoylmethane (DBM) derivatives were investigated in more detail as they did not scavenge hydrogen peroxide, nor did they inhibit PARylation.
These compounds protected cells from necrotic cell death, while caspase activation, a parameter of apoptotic cell death, was not affected. Hydrogen peroxide activated extracellular signal regulated kinase (ERK1/2) and p38 MAP kinases but not c-Jun N-terminal kinase (JNK). The cytoprotective DBMs suppressed the activation of ERK1/2 but not that of p38. Cytoprotection was confirmed in another cell type (A549 lung epithelial cells), indicating that the cytoprotective effect is not cell type specific. In conclusion, we identified DBM analogs as a novel class of cytoprotective compounds inhibiting ERK1/2 kinase and protecting from necrotic cell death by a mechanism independent of poly(ADP-ribose) polymerase inhibition. Copyright © 2013 Elsevier Ltd. All rights reserved. 11. Natural bisbenzylisoquinoline derivatives protect zebrafish lateral line sensory hair cells from aminoglycoside toxicity Matthew Kruger 2016-03-01 Moderate to severe hearing loss affects 360 million people worldwide and most often results from damage to sensory hair cells. Hair cell damage can result from aging, genetic mutations, excess noise exposure, and certain medications including aminoglycoside antibiotics. Aminoglycosides are effective at treating infections associated with cystic fibrosis and other life-threatening conditions such as sepsis, but cause hearing loss in 20-30% of patients. It is therefore imperative to develop new therapies to combat hearing loss and allow safe use of these potent antibiotics. We approach this drug discovery question using the larval zebrafish lateral line because zebrafish hair cells are structurally and functionally similar to mammalian inner ear hair cells and respond similarly to toxins. We screened a library of 502 natural compounds in order to identify novel hair cell protectants.
Our screen identified four bisbenzylisoquinoline derivatives: berbamine, E6 berbamine, hernandezine, and isotetrandrine, each of which robustly protected hair cells from aminoglycoside-induced damage. Using fluorescence microscopy and electrophysiology, we demonstrated that the natural compounds confer protection by reducing antibiotic uptake into hair cells and showed that hair cells remain functional during and after incubation in E6 berbamine. We also determined that these natural compounds do not reduce antibiotic efficacy. Together, these natural compounds represent a novel source of possible otoprotective drugs that may offer therapeutic options for patients receiving aminoglycoside treatment. 12. INFRARED STUDY OF FULLERENE PLANETARY NEBULAE García-Hernández, D. A.; Acosta-Pulido, J. A.; Manchado, A.; Villaver, E.; García-Lario, P.; Stanghellini, L.; Shaw, R. A.; Cataldo, F. 2012-01-01 We present a study of 16 planetary nebulae (PNe) where fullerenes have been detected in their Spitzer Space Telescope spectra. This large sample of objects offers a unique opportunity to test conditions of fullerene formation and survival under different metallicity environments because we are analyzing five sources in our own Galaxy, four in the Large Magellanic Cloud (LMC), and seven in the Small Magellanic Cloud (SMC). Among the 16 PNe studied, we present the first detection of C 60 (and possibly also C 70 ) fullerenes in the PN M 1–60 as well as of the unusual ∼6.6, 9.8, and 20 μm features (attributed to possible planar C 24 ) in the PN K 3–54. Although selection effects in the original samples of PNe observed with Spitzer may play a potentially significant role in the statistics, we find that the detection rate of fullerenes in C-rich PNe increases with decreasing metallicity (∼5% in the Galaxy, ∼20% in the LMC, and ∼44% in the SMC) and we interpret this as a possible consequence of the limited dust processing occurring in Magellanic Cloud (MC) PNe. 
CLOUDY photoionization modeling matches the observed IR fluxes with central stars that display a rather narrow range in effective temperature (∼30,000-45,000 K), suggesting a common evolutionary status of the objects and similar fullerene formation conditions. Furthermore, the data suggest that fullerene PNe likely evolve from low-mass progenitors and are usually of low excitation. We do not find a metallicity dependence on the estimated fullerene abundances. The observed C 60 intensity ratios in the Galactic sources confirm our previous finding in the MCs that the fullerene emission is not excited by the UV radiation from the central star. CLOUDY models also show that line- and wind-blanketed model atmospheres can explain many of the observed [Ne III]/[Ne II] ratios using photoionization, suggesting that possibly the UV radiation from the central star, and not shocks, is triggering the decomposition 13. Micelle-encapsulated fullerenes in aqueous electrolytes Ala-Kleme, T., E-mail: timo.ala-kleme@utu.fi [Department of Chemistry, University of Turku, 20014 Turku (Finland); Maeki, A.; Maeki, R.; Kopperoinen, A.; Heikkinen, M.; Haapakka, K. [Department of Chemistry, University of Turku, 20014 Turku (Finland) 2013-03-15 Different micellar particles Mi(M{sup +}) (Mi=Triton X-100, Triton N-101 R, Triton CF-10, Brij-35, M{sup +}=Na{sup +}, K{sup +}, Cs{sup +}) have been prepared in different aqueous H{sub 3}BO{sub 3}/MOH background electrolytes. It has been observed that these particles can be used to disperse the highly hydrophobic spherical [60]fullerene (1) and ellipsoidal [70]fullerene (2). This dispersion is realised as either micelle-encapsulated monomers Mi(M{sup +})1{sub m} and Mi(M{sup +})2{sub m} or water-soluble micelle-bound aggregates Mi(M{sup +})1{sub agg} and Mi(M{sup +})2{sub agg}, where especially the hydration degree and polyoxyethylene (POE) thickness of the micellar particle seems to play a role of vital importance. 
Further, the encapsulation microenvironment of 1{sub m} was found to depend strongly on the selected monovalent electrolyte cation, i.e., the higher the cationic solvation number, the more hydrophobic the microenvironment in which the encapsulated 1{sub m} is accommodated. - Highlights: • Different micellar particles are used to disperse [60]fullerene and [70]fullerene. • Fullerene monomers or aggregates are dispersed, either encaged by or bound to micelles. • The key factors are the hydration degree and polyoxyethylene thickness of the micelle. 14. Development of derived limits for radiological protection against ionizing radiation based on ICRP-60 recommendations Jang, S. Y.; Lee, B. S. 1999-01-01 Derived limits such as the Annual Limit on Intake (ALI), Derived Air Concentration (DAC) and Effluent Concentration Limit (ECL) for radiological protection against ionizing radiation based on ICRP-60 recommendations were calculated using the dose limits and committed effective dose coefficients of the Basic Safety Standards of the IAEA (i.e. Safety Series 115; BSS-96). Derived limits for occupational exposure were derived using the methodologies of ICRP-61 and the dose limit stated in ICRP-60. ECLs in air and water for the control of radioactive discharge into the environment were derived using the methodologies of 10 CFR part 20 and the dose limit stated in ICRP-60. In order to analyze the impact of implementing the derived limits on nuclear facilities, the derived values in this study were compared with those prescribed in 10 CFR part 20 as well as the Maximum Permissible Concentrations (MPC) of Notice No. 98-12 of the Ministry of Science and Technology (MOST). According to the comparison results, ECLs in air and water for the control of radioactive discharge into the environment in this study are shown to have lower values (i.e. more conservative), for the most part, than those in Notice No. 98-12.
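The derived-limit methodology summarized in the record above reduces to simple ratios once the annual dose limit and a committed effective dose coefficient are fixed. A minimal sketch, assuming the standard ICRP-60-era relationships ALI = dose limit / e(50) and DAC = ALI / 2.4e3 m³ of air breathed per working year (2000 h × 1.2 m³/h); the dose coefficient used below is a placeholder for illustration, not an authoritative BSS-96 value:

```python
# Sketch of the derived-limit ratios used for ALI and DAC.
# Constants follow the common ICRP-60-era conventions; the example
# dose coefficient is hypothetical, not taken from BSS-96 tables.

DOSE_LIMIT_SV = 0.02      # occupational annual effective dose limit, Sv (20 mSv)
AIR_PER_YEAR_M3 = 2.4e3   # 2000 working hours x 1.2 m3/h breathing rate

def ali_bq(e50_sv_per_bq: float) -> float:
    """Annual Limit on Intake (Bq) from a committed effective dose coefficient."""
    return DOSE_LIMIT_SV / e50_sv_per_bq

def dac_bq_per_m3(e50_sv_per_bq: float) -> float:
    """Derived Air Concentration (Bq/m3) from the same coefficient."""
    return ali_bq(e50_sv_per_bq) / AIR_PER_YEAR_M3

# Hypothetical inhalation dose coefficient of 1e-8 Sv/Bq:
print(ali_bq(1e-8))         # ~2.0e6 Bq
print(dac_bq_per_m3(1e-8))  # ~8.3e2 Bq/m3
```

An effluent concentration limit follows the same pattern with the public dose limit and the relevant annual intake volume in place of the occupational constants.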
These differences are due to the reduction of the dose limit, the adoption of a weighting factor for age-dependency in dose coefficients, and the application of the new respiratory tract model and biokinetic model. In particular, for uranium elements (i.e., 235 U, 238 U, etc.), which are the governing nuclides in the nuclear fuel industry, ECLs in water are approximately two orders of magnitude lower than those in Notice No. 98-12. These differences are attributable to the adoption of a weighting factor for age-dependency in dose coefficients, the newly recommended dose coefficients for the ingestion pathway, and the reduction of the dose limit. It was found that the differences in ECLs in water for uranium elements originated mostly from the ingestion dose coefficients recommended by BSS-96. (author). 6 refs., 2 tabs., 5 figs 15. Endothelium-Derived 5-Methoxytryptophan Protects Endothelial Barrier Function by Blocking p38 MAPK Activation. Ling-Yun Chu The endothelial junction is tightly controlled to restrict the passage of blood cells and solutes. Disruption of endothelial barrier function by bacterial endotoxins, cytokines or growth factors results in inflammation and vascular damage leading to vascular diseases. We have identified 5-methoxytryptophan (5-MTP) as an anti-inflammatory factor by metabolomic analysis of conditioned medium of human fibroblasts. Here we postulated that endothelial cells release 5-MTP to protect the barrier function. Conditioned medium of human umbilical vein endothelial cells (HUVECs) prevented endothelial hyperpermeability and VE-cadherin downregulation induced by VEGF, LPS and cytokines. We analyzed the metabolomic profile of HUVEC conditioned medium and detected 5-MTP but not melatonin, serotonin or their catabolites, which was confirmed by enzyme-linked immunosorbent assay. Addition of synthetic pure 5-MTP preserved VE-cadherin and maintained barrier function despite challenge with pro-inflammatory mediators.
Tryptophan hydroxylase-1, an enzyme required for 5-MTP biosynthesis, was downregulated in HUVECs by pro-inflammatory mediators, accompanied by a reduction of 5-MTP. 5-MTP protected VE-cadherin and prevented endothelial hyperpermeability by blocking p38 MAPK activation. A chemical inhibitor of p38 MAPK, SB202190, exhibited a protective effect similar to that of 5-MTP. To determine whether 5-MTP prevents vascular hyperpermeability in vivo, we evaluated the effect of 5-MTP administration on LPS-induced murine microvascular permeability with Evans blue. 5-MTP significantly prevented Evans blue dye leakage. Our findings indicate that 5-MTP is a new class of endothelium-derived molecules which protects endothelial barrier function by blocking p38 MAPK. 16. Molecular Mechanisms Responsible for Neuron-Derived Conditioned Medium (NCM)-Mediated Protection of Ischemic Brain. Chi-Hsin Lin The protective value of neuron-derived conditioned medium (NCM) in cerebral ischemia and the underlying mechanism(s) responsible for NCM-mediated brain protection against cerebral ischemia were investigated in this study. NCM was first collected from neuronal cultures growing under the in vitro ischemic condition (glucose, oxygen and serum deprivation, or GOSD) for 2, 4 or 6 h. Through the focal cerebral ischemia (bilateral CCAO/unilateral MCAO) animal model, we discovered that ischemia/reperfusion (I/R)-induced brain infarction was significantly reduced by NCM, given directly into the cisterna magna at the end of 90 min of CCAO/MCAO. Immunoblocking and chemical blocking strategies were applied in the in vitro ischemic studies to show that NCM supplementation could protect microglia, astrocytes and neurons from GOSD-induced cell death, in a growth factor (TGFβ1, NT-3 and GDNF) and p-ERK dependent manner. Brain injection with TGFβ1, NT-3, GDNF and an ERK agonist (DADS), alone or in combination, therefore also significantly decreased the infarct volume of the ischemic brain.
Moreover, NCM could inhibit ROS but stimulate IL-1β release from GOSD-treated microglia and limit the infiltration of IL-1β-positive microglia into the core area of the ischemic brain, revealing the anti-oxidant and anti-inflammatory activities of NCM. Overall, NCM-mediated brain protection against cerebral ischemia has been demonstrated for the first time in S.D. rats, due to its anti-apoptotic, anti-oxidant and potentially anti-glutamate activities (NCM-induced IL-1β can inhibit glutamate-mediated neurotoxicity) and its restriction of the infiltration of inflammatory microglia into the core area of the ischemic brain. The therapeutic potentials of NCM, TGFβ1, GDNF, NT-3 and DADS in the control of cerebral ischemia in humans have therefore been suggested and require further investigation. 17. Graphene macro-assembly-fullerene composite for electrical energy storage Campbell, Patrick G.; Baumann, Theodore F.; Biener, Juergen; Merrill, Matthew; Montalvo, Elizabeth; Worsley, Marcus A.; Biener, Monika M.; Hernandez, Maira Raquel Ceron 2018-01-16 Disclosed here is a method for producing a graphene macro-assembly (GMA)-fullerene composite, comprising providing a GMA comprising a three-dimensional network of graphene sheets crosslinked by covalent carbon bonds, and incorporating at least 20 wt. % of at least one fullerene compound into the GMA based on the initial weight of the GMA to obtain a GMA-fullerene composite. Also described are a GMA-fullerene composite so produced, an electrode comprising the GMA-fullerene composite, and a supercapacitor comprising the electrode and optionally an organic or ionic liquid electrolyte in contact with the electrode. 18. C{sub 60} fullerene decoration of carbon nanotubes Demin, V. A., E-mail: victordemin88@gmail.com [Russian Academy of Sciences, Emanuel Institute of Biochemical Physics (Russian Federation); Blank, V. D.; Karaeva, A. R.; Kulnitskiy, B. A.; Mordkovich, V. Z.
[Technological Institute for Superhard and Novel Carbon Materials (Russian Federation); Parkhomenko, Yu. N. [National University of Science and Technology MISiS (Russian Federation); Perezhogin, I. A.; Popov, M. Yu. [Technological Institute for Superhard and Novel Carbon Materials (Russian Federation); Skryleva, E. A. [National University of Science and Technology MISiS (Russian Federation); Urvanov, S. A. [Technological Institute for Superhard and Novel Carbon Materials (Russian Federation); Chernozatonskii, L. A. [Russian Academy of Sciences, Emanuel Institute of Biochemical Physics (Russian Federation) 2016-12-15 A new all-carbon nanocomposite material is synthesized by the immersion of carbon nanotubes in a fullerene solution in carbon disulfide. The presence of a dense layer of fullerene molecules on the outer nanotube surface is demonstrated by TEM and XPS. The fullerenes are redistributed over the nanotube surface under prolonged exposure to an electron beam, which points to the existence of a molecular bond between the nanotube and the fullerenes. Theoretical calculations show that the formation of a fullerene shell begins with the attachment of one C{sub 60} molecule to a defect on the nanotube surface. 19. Fascinating serendipity some adventures in fullerene chemistry Braun, T.; Rauch, H. 2001-01-01 The lecture is divided into four chapters. Chapter one gives a short overview of the notion of serendipity and the serendipitous discovery of the fullerenes, the third allotropic form of carbon, and tries to highlight why this discovery can be considered a revolution in chemistry. The second and third chapters present some results of the author's research group. Neutron irradiation of C 60 in a nuclear reactor also made possible the serendipitous discovery of a new procedure for the synthesis of endohedral C 60 compounds, exemplified by the synthesis of many endohedral radio-fullerenes of the *X@C 60 type.
The fourth chapter of the lecture deals with 'Capture-captive chemistry' as a new typology for molecular containers including fullerenes. (author) 20. Site specific atomic polarizabilities in endohedral fullerenes and carbon onions Zope, Rajendra R., E-mail: rzope@utep.edu; Baruah, Tunna [Department of Physics, The University of Texas at El Paso, El Paso, Texas 79958 (United States); Computational Science Program, The University of Texas at El Paso, El Paso, Texas 79958 (United States); Bhusal, Shusil; Basurto, Luis [Department of Physics, The University of Texas at El Paso, El Paso, Texas 79958 (United States); Jackson, Koblar [Physics Department and Science of Advanced Materials Ph.D. Program, Central Michigan University, Mt. Pleasant, Michigan 48859 (United States) 2015-08-28 We investigate the polarizability of trimetallic nitride endohedral fullerenes by partitioning the total polarizability into site specific components. This analysis indicates that the polarizability of the endohedral fullerene is essentially due to the outer fullerene cage and has insignificant contribution from the encapsulated unit. Thus, the outer fullerene cages effectively shield the encapsulated clusters and behave like Faraday cages. The polarizability of endohedral fullerenes is slightly smaller than the polarizability of the corresponding bare carbon fullerenes. The application of the site specific polarizabilities to C{sub 60}@C{sub 240} and C{sub 60}@C{sub 180} onions shows that, compared to the polarizability of isolated C{sub 60} fullerene, the encapsulation of the C{sub 60} in C{sub 240} and C{sub 180} fullerenes reduces its polarizability by 75% and 83%, respectively. The differences in the polarizability of C{sub 60} in the two onions is a result of differences in the bonding (intershell electron transfer), fullerene shell relaxations, and intershell separations. 
The site specific analysis further shows that the outer atoms in a fullerene shell contribute most to the fullerene polarizability. 3. Laser controlled magnetism in hydrogenated fullerene films Makarova, Tatiana L.; Shelankov, Andrei L.; Kvyatkovskii, Oleg E.; Zakharova, Irina B.; Buga, Sergei G.; Volkov, Aleksandr P. 2011-01-01 Room temperature ferromagnetic-like behavior in fullerene photopolymerized films treated with monatomic hydrogen is reported. The hydrogen treatment controllably varies the paramagnetic spin concentration and laser induced polymerization transforms the paramagnetic phase to a ferromagnetic-like one. Excess laser irradiation destroys magnetic ordering, presumably due to structural changes, which was continuously monitored by Raman spectroscopy.
We suggest an interpretation of the data based on first-principles density-functional spin-unrestricted calculations which show that the excess spin from mono-atomic hydrogen is delocalized within the host fullerene and the laser-induced polymerization promotes spin exchange interaction and spin alignment in the polymerized phase. 4. Water around fullerene shape amphiphiles: A molecular dynamics simulation study of hydrophobic hydration Varanasi, S. R., E-mail: s.raovaranasi@uq.edu.au, E-mail: guskova@ipfdd.de; John, A. [Institut Theorie der Polymere, Leibniz-Institut für Polymerforschung Dresden e.V., Hohe Straße 6, Dresden D-01069 (Germany); Guskova, O. A., E-mail: s.raovaranasi@uq.edu.au, E-mail: guskova@ipfdd.de [Institut Theorie der Polymere, Leibniz-Institut für Polymerforschung Dresden e.V., Hohe Straße 6, Dresden D-01069 (Germany); Dresden Center for Computational Materials Science (DCMS), Technische Universität Dresden, Dresden D-01069 (Germany); Sommer, J.-U. [Institut Theorie der Polymere, Leibniz-Institut für Polymerforschung Dresden e.V., Hohe Straße 6, Dresden D-01069 (Germany); Dresden Center for Computational Materials Science (DCMS), Technische Universität Dresden, Dresden D-01069 (Germany); Institut für Theoretische Physik, Technische Universität Dresden, Zellescher Weg 17, Dresden D-01069 (Germany) 2015-06-14 Fullerene C{sub 60} sub-colloidal particle with diameter ∼1 nm represents a boundary case between small and large hydrophobic solutes on the length scale of hydrophobic hydration. In the present paper, a molecular dynamics simulation is performed to investigate this complex phenomenon for bare C{sub 60} fullerene and its amphiphilic/charged derivatives, so called shape amphiphiles. Since most of the unique properties of water originate from the pattern of hydrogen bond network and its dynamics, spatial, and orientational aspects of water in solvation shells around the solute surface having hydrophilic and hydrophobic regions are analyzed. 
Dynamical properties such as translational-rotational mobility, reorientational correlation and occupation time correlation functions of water molecules, and diffusion coefficients are also calculated. Slower dynamics of solvent molecules—water retardation—in the vicinity of the solutes is observed. Both the topological properties of hydrogen bond pattern and the “dangling” –OH groups that represent surface defects in water network are monitored. The fraction of such defect structures is increased near the hydrophobic cap of fullerenes. Some “dry” regions of C{sub 60} are observed which can be considered as signatures of surface dewetting. In an effort to provide molecular level insight into the thermodynamics of hydration, the free energy of solvation is determined for a family of fullerene particles using thermodynamic integration technique. 5. Fullerene C70 as a p-type donor in organic photovoltaic cells Zhuang, Taojun; Wang, Xiao-Feng; Sano, Takeshi; Kido, Junji; Hong, Ziruo; Li, Gang; Yang, Yang 2014-01-01 Fullerenes and their derivatives have been widely used as n-type materials in organic transistor and photovoltaic devices. Though it is believed that they shall be ambipolar in nature, there have been few direct experimental proofs for that. In this work, fullerene C 70 , known as an efficient acceptor, has been employed as a p-type electron donor in conjunction with 1,4,5,8,9,11-hexaazatriphenylene hexacarbonitrile as an electron acceptor in planar-heterojunction (PHJ) organic photovoltaic (OPV) cells. High fill factors (FFs) of more than 0.70 were reliably achieved with the C 70 layer even up to 100 nm thick in PHJ cells, suggesting the superior potential of fullerene C 70 as the p-type donor in comparison to other conventional donor materials. The optimal efficiency of these unconventional PHJ cells was 2.83% with a short-circuit current of 5.33 mA/cm 2 , an open circuit voltage of 0.72 V, and a FF of 0.74. 
The results in this work unveil the potential of fullerene materials as donors in OPV devices, and provide alternative approaches towards future OPV applications. 7. Functionalized Fullerene Targeting Human Voltage-Gated Sodium Channel, hNav1.7.
Hilder, Tamsyn A; Robinson, Anna; Chung, Shin-Ho 2017-08-16 Mutations of hNav1.7 that enhance its activity contribute to severe neuropathic pain. Only a small number of hNav1.7-specific inhibitors have been identified, most of which interact with the voltage-sensing domain of the voltage-activated sodium ion channel. In our previous computational study, we demonstrated that a [Lys6]-C84 fullerene binds tightly (affinity of 46 nM) to NavAb, the voltage-gated sodium channel from the bacterium Arcobacter butzleri. Here, we extend this work and, using molecular dynamics simulations, demonstrate that the same [Lys6]-C84 fullerene binds strongly (2.7 nM) to the pore of a modeled human sodium ion channel, hNav1.7. In contrast, the fullerene binds only weakly to a mutated model of hNav1.7 (I1399D) (14.5 mM) and a model of the skeletal muscle channel hNav1.4 (3.7 mM). Comparison of one representative sequence from each of the nine human sodium channel isoforms shows that only hNav1.7 possesses residues that are critical for binding the fullerene derivative and blocking the channel pore. 8. Novel vanillin derivatives: Synthesis, anti-oxidant, DNA and cellular protection properties. Scipioni, Matteo; Kay, Graeme; Megson, Ian; Kong Thoo Lin, Paul 2018-01-01 Antioxidants have been the subject of intense research interest mainly due to their beneficial properties associated with human health and wellbeing. Phenolic molecules, such as the naturally occurring resveratrol and vanillin, are well known for their anti-oxidant properties, providing a starting point for the development of new antioxidants. Here we report, for the first time, the synthesis of a number of new vanillin derivatives through the reductive amination reaction between vanillin and a selection of amines. All the compounds synthesised exhibited strong antioxidant properties in DPPH, FRAP and ORAC assays, with compounds 1b and 2c being the most active.
The latter also demonstrated the ability to protect plasmid DNA from oxidative damage in the presence of the radical initiator AAPH. At the cellular level, neuroblastoma SH-SY5Y cells were protected from oxidative damage (H2O2, 400 μM) with both 1b and 2c. The presence of a tertiary amino group, along with the number of vanillin moieties in the molecule, contributes to the antioxidant activity. Furthermore, the delocalization of the nitrogen lone pair and the presence of an electron-donating substituent enhance the antioxidant properties of this new class of compounds. In our opinion, the vanillin derivatives 1b and 2c described in this work can provide a viable platform for the development of antioxidant-based therapeutics. Copyright © 2017 Elsevier Masson SAS. All rights reserved. 9. Protective effect of bile acid derivatives in phalloidin-induced rat liver toxicity Herraez, Elisa; Macias, Rocio I.R.; Vazquez-Tato, Jose; Hierro, Carlos; Monte, Maria J.; Marin, Jose J.G. 2009-01-01 Phalloidin causes severe liver damage characterized by marked cholestasis, which is due in part to irreversible polymerization of actin filaments. Liver uptake of this toxin through the transporter OATP1B1 is inhibited by the bile acid derivative BALU-1, which does not inhibit the sodium-dependent bile acid transporter NTCP. The aim of the present study was to investigate whether BALU-1 prevents liver uptake of phalloidin without impairing endogenous bile acid handling and hence may have protective effects against the hepatotoxicity induced by this toxin. In anaesthetized rats, i.v. administration of BALU-1 increased bile flow more than taurocholic acid (TCA). Phalloidin administration decreased basal (-60%) and TCA-stimulated bile flow (-55%) without impairing bile acid output. Phalloidin-induced cholestasis was accompanied by liver necrosis, nephrotoxicity and haematuria. In BALU-1-treated animals, phalloidin-induced cholestasis was partially prevented.
Moreover, haematuria was not observed, which was consistent with histological evidence that BALU-1 prevented injury of liver and kidney tissue. HPLC-MS/MS analysis revealed that BALU-1 was secreted in bile mainly in non-conjugated form, although a small proportion ( TCA > DHCA > UDCA. In conclusion, BALU-1 is able to protect against phalloidin-induced hepatotoxicity, probably due to an inhibition of the liver uptake and an enhanced biliary secretion of this toxin. 10. Radiological protection optimization derived from radiation-induced lesions in interventional cardiology findings Vano, E.; Arranz, L.; Sastre, J.M.; Ferrer, N. 1997-01-01 Interventional cardiology is one of the specialties in which patients receive the highest radiation doses from X-ray systems used for diagnostic purposes; it is therefore also a specialty with high occupational radiation risk. In recent years, several cases of radiation-induced lesions in patients, arising from new complex interventional procedures, have been described. As a consequence, different rules for avoiding this kind of incident have been recommended by international organisations and regulatory bodies. Nevertheless, relatively little attention has been devoted to evaluating the occupational risks, which are inevitably also high in these facilities. In this work, some cases of radiation-induced skin lesions in patients undergoing cardiac ablation procedures are described. Radiological protection considerations of interest to regulatory bodies are presented that help minimize the probability of such incidents, with regard to the X-ray equipment as well as to operating procedures and the level of radiation-protection training of the medical specialists. (author) 11. A suggested approach for deriving risk criteria in radiation protection and land use planning Cameron, R.F.; Corran, E.R.
1994-01-01 In radiation protection, tolerability has been determined by setting a limit on the dose received, recognizing that there is an unavoidable background level of radiation to which we are all exposed. This dose is sometimes associated with a cancer fatality coefficient to convert the dose to a probability of fatality, but it is recognised that fatality is not immediate but arises (if at all) many years after the exposure. In other hazardous industries, tolerability is based on satisfying annual fatality risk limits for the number of immediate fatalities. These limits vary with the type of land use proposed. This raises the questions of how such risks should be compared and, in particular, whether there is a basis for common risk measures to be derived. Unless this can be done, inappropriate comparisons will continue to be made. In this paper, a method is suggested for deriving measures of risk to individuals and to communities, both for activities involving radiation exposure and for accidents with other hazardous materials. The method is based on taking account of the difference between continuing releases and accidental transient releases. It is argued that for continuous releases, the lifetime risk is the most appropriate parameter both for radiation and hazardous material exposure. For accident situations, both individual and societal risk curves can be drawn which take account of the difference between acute and latent fatalities. Some problems associated with societal risk curves are discussed and suggestions for their use are given. 11 refs., 3 figs 12. Adiponectin and plant-derived mammalian adiponectin homolog exert a protective effect in murine colitis Arsenescu, Violeta 2011-04-11 Background: Hypoadiponectinemia has been associated with states of chronic inflammation in humans. Mesenteric fat hypertrophy and low adiponectin have been described in patients with Crohn's disease.
We investigated whether adiponectin and the plant-derived homolog, osmotin, are beneficial in a murine model of colitis. Methods: C57BL/6 mice were injected (i.v.) with an adenoviral construct encoding the full-length murine adiponectin gene (AN+DSS) or a reporter-LacZ (Ctr and V+DSS groups) prior to DSS colitis protocol. In another experiment, mice with DSS colitis received either osmotin (Osm+DSS) or saline (DSS) via osmotic pumps. Disease progression and severity were evaluated using body weight, stool consistency, rectal bleeding, colon lengths, and histology. In vitro experiments were carried out in bone marrow-derived dendritic cells. Results: Mice overexpressing adiponectin had lower expression of proinflammatory cytokines (TNF, IL-1β), adipokines (angiotensin, osteopontin), and cellular stress and apoptosis markers. These mice had higher levels of IL-10, alternative macrophage marker, arginase 1, and leukoprotease inhibitor. The plant adiponectin homolog osmotin similarly improved colitis outcome and induced robust IL-10 secretion. LPS induced a state of adiponectin resistance in dendritic cells that was reversed by treatment with PPARγ agonist and retinoic acid. Conclusion: Adiponectin exerted protective effects during murine DSS colitis. It had a broad activity that encompassed cytokines, chemotactic factors as well as processes that assure cell viability during stressful conditions. Reducing adiponectin resistance or using plant-derived adiponectin homologs may become therapeutic options in inflammatory bowel disease. © 2011 Springer Science+Business Media, LLC. 13. Optical limiting properties of fullerenes and related materials Riggs, Jason Eric Optical limiting properties of fullerene C60 and different C60 derivatives (methano-, pyrrolidino-, and amino-) towards nanosecond laser pulses at 532 nm were studied. 
The results show that optical limiting responses of the C60 derivatives are similar to those of the parent C60 despite their different linear absorption and emission properties. For C60 and the derivatives in room-temperature solutions of varying concentrations and optical path length, the optical limiting responses are strongly concentration dependent. The concentration dependence is not due to any optical artifacts since the results obtained under the same experimental conditions for reference systems show no such dependence. Similarly, optical limiting results of fullerenes are strongly dependent on the medium viscosity, with responses in viscous media weaker than that in room-temperature solutions. The solution concentration and medium viscosity dependencies are not limited to fullerenes. In fact, the results from a systematic investigation of several classes of nonlinear absorptive organic dyes show that the optical limiting responses are also concentration and medium viscosity dependent. Interestingly, however, such dependencies are uniquely absent in the optical limiting responses of metallophthalocyanines. In classical photophysics, the strong solution concentration and medium viscosity dependencies are indicative of significant contributions from photoexcited-state bimolecular processes. Thus, the experimental results are discussed in terms of a significantly modified five-level reverse saturable absorption mechanism. Optical limiting properties of single-walled and multiple-walled carbon nanotubes toward nanosecond laser pulses at 532 nm were also investigated. When suspended in water, the single-walled and multiple-walled carbon nanotubes exhibit essentially the same optical limiting responses, and the results are also comparable with those of carbon black aqueous suspension. For 14. 
Nature of the Binding Interactions between Conjugated Polymer Chains and Fullerenes in Bulk Heterojunction Organic Solar Cells Ravva, Mahesh Kumar 2016-10-24 Blends of π-conjugated polymers and fullerene derivatives are ubiquitous as the active layers of organic solar cells. However, a detailed understanding of the weak noncovalent interactions between the polymer chains and fullerenes at the molecular level is still lacking; such an understanding could help in the design of more efficient photoactive layers. Here, using a combination of long-range corrected density functional theory calculations and molecular dynamics simulations, we report a thorough characterization of the nature of binding between fullerenes (C60 and PC61BM) and poly(benzo[1,2-b:4,5-b′]dithiophene–thieno[3,4-c]pyrrole-4,6-dione) (PBDTTPD) chains. We illustrate the variations in binding strength when the fullerenes dock on the electron-rich vs electron-poor units of the polymer as well as the importance of the role played by the polymer and fullerene side chains and the orientations of the PC61BM molecules with respect to the polymer backbones. 15. Electronic stopping in ion-fullerene collisions Schlathölter, T.A.; Hadjar, O.; Hoekstra, R.A.; Morgenstern, R.W.H. The electronic friction experienced by a multiply charged ion interacting with the valence electrons of a single fullerene is an important aspect of the collision dynamics. It manifests itself in a considerable loss of projectile kinetic energy transferred to the target, resulting in excitation. The 16. Study of the Si fullerene cage isomers Fthenakis, Z.G.; Havenith, R.W.A.; Menon, M.; Fowler, P.W. 2005-01-01 We present the results of a study on the structural and electronic properties of the Si38 fullerene isomers, which are constructed by making all possible permutations among their pentagons and hexagons. These structures were firstly fully optimized with a tight-binding molecular dynamics method and 17.
Thiamakrocykly pro komplexaci fullerenů (Thiamacrocycles for the Complexation of Fullerenes) Holý, Petr; Buchta, Michal; Rybáček, Jiří; Závada, Jiří 2009-01-01 Roč. 5, č. 9 (2009), s. 186-187 ISSN 1336-7242. [Zjazd chemikov /61./. 07.09.2009-11.09.2009, Tatranské Matliare] R&D Projects: GA AV ČR IAA400550704 Institutional research plan: CEZ:AV0Z40550506 Keywords: macrocycles * alkylation * fullerenes Subject RIV: CC - Organic Chemistry 18. Spectroscopy on Polymer-Fullerene Photovoltaic Cells Dyakonov, V.; Riedel, I.; Godovsky, D.; Parisi, J.; Ceuster, J. De; Goovaerts, E.; Hummelen, J.C. 2000-01-01 We investigate the electrical transport properties of ITO/conjugated polymer-fullerene/Al photovoltaic cells and the role of defect states with current-voltage studies, admittance spectroscopy, and the electron spin resonance technique. In the temperature range 293-40 K, the characteristic step in the 19. Fullerene monolayer formation by spray coating Červenka, Jiří; Flipse, C.F.J. 2010-01-01 Roč. 21, č. 6 (2010), 065302/1-065302/7 ISSN 0957-4484 Institutional research plan: CEZ:AV0Z10100521 Keywords: monolayer * spray coating * fullerene * atomic force microscopy * scanning tunnelling microscopy * electronic structure * graphite * gold Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 3.644, year: 2010 20. Fullerenes, PAHs, Amino Acids and High Energy Astrophysics Susana Iglesias-Groth 2014-12-01 Full Text Available We present theoretical, observational and laboratory work on the spectral properties of fullerenes and hydrogenated fullerenes. Fullerenes in their various forms (individual, endohedral, hydrogenated, etc.) can contribute to the UV bump in the extinction curves measured in many lines of sight of the Galaxy. They can also produce a large number of absorption features in the optical and near infrared which could be associated with diffuse interstellar bands. We summarise recent laboratory work on the spectral characterisation of fullerenes and hydrogenated fullerenes (for a range of temperatures).
The recent detection of mid-IR bands of fullerenes in various astrophysical environments (planetary nebulae, reflection nebulae) provides additional evidence for a link between fullerene families and diffuse interstellar bands. We describe recent observational work on near-IR bands of C60+ in a protoplanetary nebula which supports fullerene formation during the post-AGB phase. We also report on the survival of fullerenes under irradiation by high-energy particles and gamma photons, and laboratory work to explore the chemical reactions that take place when fullerenes are exposed to these radiations in the presence of water, ammonia and other molecules, as a potential path to form amino acids. 1. Compositional and electric field dependence of the dissociation of charge transfer excitons in alternating polyfluorene copolymer/fullerene blends Veldman, D.; Ipek, Ö.; Meskers, S.C.J.; Sweelssen, J.; Koetse, M.M.; Veenstra, S.C.; Kroon, J.M.; Bavel, S.S. van; Loos, J.; Janssen, R.A.J. 2008-01-01 The electro-optical properties of thin films of electron donor-acceptor blends of a fluorene copolymer (PF10TBT) and a fullerene derivative (PCBM) were studied. Transmission electron microscopy shows that in these films nanocrystalline PCBM clusters are formed at high PCBM content. For all 2. Search for fullerenes in stone meteorites Oester, M. Y.; Kuechl, D.; Sipiera, P. P.; Welch, C. J. 1994-07-01 The possibility of identifying fullerenes in stony meteorites became apparent from a paper given by Radicati de Brozolo. In this paper it was reported that fullerenes were present in the debris resulting from a collision between a micrometeoroid and an orbiting satellite. This fact generated sufficient curiosity to initiate a search for the presence of fullerenes in various stone meteorites. In the present study, seven ordinary chondrites (al-Ghanim L6 (find), Dimmitt H4 (find), Lazbuddie LL5 (find), New Concord H5 (fall), Silverton H4 (find), Springlake L6 (find), and Umbarger L3/6 (find)), four carbonaceous chondrites (ALH 83100 C2 (find), ALH 83108 C30 (find), Allende CV3 (fall), and Murchison CM2 (fall)), and one achondrite (Monticello How (find)) were analyzed for the presence of fullerenes. The analytical procedure employed was as follows: 100 mg of meteorite was ground up with a mortar and pestle; 10 mL of toluene was then added and the mixture was refluxed for 90 min; this mixture was then filtered through a short column of silica; a 50 microliter sample was then analyzed by high-pressure liquid chromatography (HPLC) using a Buckyclutcher I column with a mobile phase consisting of equal volumes of toluene and hexane at a flow rate of 1.00 mL per minute, with detection at 330 and 600 nm. Three of the meteorites, Allende, Murchison, and al-Ghanim, gave HPLC traces containing peaks with retention times similar to that of an authentic fullerene C60 standard. However, further analysis using an HPLC instrument equipped with a diode-array detector failed to confirm any of the substances detected in the three meteorites as C60. Additional analyses will be conducted to identify what the HPLC traces actually represent. 3. Erythropoietin-derived nonerythropoietic peptide ameliorates experimental autoimmune neuritis by inflammation suppression and tissue protection. Yuqi Liu Full Text Available Experimental autoimmune neuritis (EAN) is an autoantigen-specific T-cell-mediated disease model for human demyelinating inflammatory disease of the peripheral nervous system. Erythropoietin (EPO) has been known to promote EAN recovery, but its haematopoiesis-stimulating effects may limit its clinical application. Here we investigated the effects and potential mechanisms of an EPO-derived nonerythropoietic peptide, ARA 290, in EAN. Exogenous ARA 290 intervention greatly improved EAN recovery, improved nerve regeneration and remyelination, and suppressed nerve inflammation. Furthermore, haematopoiesis was not induced by ARA 290 during EAN treatment.
ARA 290 intervention suppressed lymphocyte proliferation and altered helper T cell differentiation by inducing an increase of Foxp3+/CD4+ regulatory T cells and IL-4+/CD4+ Th2 cells and a decrease of IFN-γ+/CD4+ Th1 cells in EAN. In addition, ARA 290 inhibited inflammatory macrophage activation and promoted its phagocytic activity. In vitro, ARA 290 was shown to promote Schwann cell proliferation and inhibit its inflammatory activation. In summary, our data demonstrated that ARA 290 could effectively suppress EAN by attenuating inflammation and exerting direct cell protection, indicating that ARA 290 could be a potent candidate for the treatment of autoimmune neuropathies. 4. Combinations of Ashwagandha leaf extracts protect brain-derived cells against oxidative stress and induce differentiation. Navjot Shah Full Text Available Ashwagandha, a traditional Indian herb, has been known for a variety of therapeutic activities. We earlier demonstrated anticancer activities in the alcoholic and water extracts of the leaves that were mediated by activation of tumor suppressor functions and oxidative stress in cancer cells. Low doses of these extracts were shown to possess neuroprotective activities in in vitro and in vivo assays. We used cultured glioblastoma and neuroblastoma cells to examine the effect of the extracts (alcoholic and water) as well as their bioactive components for neuroprotective activities against oxidative stress. Various biochemical and imaging assays on the marker proteins of glial and neuronal cells were performed along with their survival profiles in control, stressed and recovered conditions. We found that the extracts and one of the purified components, withanone, when used at a low dose, protected the glial and neuronal cells from oxidative as well as glutamate insult, and induced their differentiation per se.
Furthermore, the combinations of extracts and the active component were highly potent, endorsing the therapeutic merit of the combinational approach. Ashwagandha leaf-derived bioactive compounds have neuroprotective potential and may serve as a supplement for brain health. 5. Memory operation mechanism of fullerene-containing polymer memory Nakajima, Anri, E-mail: anakajima@hiroshima-u.ac.jp; Fujii, Daiki [Research Institute for Nanodevice and Bio Systems, Hiroshima University, 1-4-2 Kagamiyama, Higashihiroshima, Hiroshima 739-8527 (Japan)] 2015-03-09 The memory operation mechanism in fullerene-containing nanocomposite gate insulators was investigated while varying the kind of fullerene in a polymer gate insulator. It was clarified what kinds of traps, and which positions in the nanocomposite, store the injected electrons or holes. The reason for the difference in the ease of programming was clarified by taking into account the charging energy of an injected electron. The dependence of the carrier dynamics on the kind of fullerene molecule was investigated. A nonuniform distribution of injected carriers occurred after application of a large-magnitude programming voltage, due to the width distribution of the polystyrene barrier between adjacent fullerene molecules. Through these investigations, we demonstrated a nanocomposite gate with fullerene molecules having excellent retention characteristics and programming capability. This will lead to the realization of practical organic memories with fullerene-containing polymer nanocomposites. 6. DF-1, A Nontoxic Carbon Fullerene Based Antioxidant, is Effective as a Biomedical Countermeasure Against Radiation Theriot, Corey A.; Casey, Rachael; Conyers, Jodie; Wu, Honglu 2010-01-01 A long-term goal of radiation research is the mitigation of inherent risks of radiation exposure.
Thus the study and development of safe agents, whether biomedical or dietary, that act as effective radioprotectors is an important step in accomplishing this long-term goal. Some of the most effective agents to date have been aminothiols and their derivatives. Unfortunately, most of these agents have side effects such as nausea, vomiting, hypotension, weakness, and fatigability. For example, nausea and emesis occur in most patients treated with WR-2721 (Amifostine), requiring the use of effective antiemetics, with hypotension being the dose-limiting side effect in patients treated. Clearly, the need for a radioprotector that is both effective and safe still exists. Development of biocompatible nano-materials for radioprotection is a promising emerging technology that could be exploited to address the need to minimize biological effects when exposure is unavoidable. Testing free radical scavenging nanoparticles for potential use in radioprotection is exciting and highly relevant. Initial investigations presented here demonstrate the ability of a particular functionalized carbon fullerene nanoparticle, (DF-1), to act as an effective radioprotector. DF-1 was first identified as the most promising candidate in a screen of several functionalized carbon fullerenes based on lack of toxicity and antioxidant therapeutic potential against oxidative injuries (i.e. organ reperfusion and ionizing radiation). Subsequently, DF-1 has been shown to reduce chromosome aberration yield and cell death, as well as overall ROS levels in human lymphocytes and fibroblasts after exposure to gamma radiation and energetic protons while demonstrating no associated toxicity. The dose-reducing factor of DF-1 at LD50 is nearly 2.0 for gamma radiation. In addition, DF-1 treatment also significantly prevented cell cycle arrest after exposure. Finally, DF-1 markedly attenuated COX2 upregulation in cell 7. 
Generation, Characterization and Applications of Fullerenes Liu, Shengzhong A contact-arc sputtering configuration has been adopted and optimized in order to generate fullerene-containing soot. Several stages of design improvements have made our equipment more effective in terms of yield and production rate. Upon modification of Wudl's Soxhlet separation procedure, we have been able to significantly speed up C60 separation and higher-fullerene enrichment. At least ten more separable HPLC peaks after C84 have been observed for the first time. Preliminary laser desorption time-of-flight mass spectra suggest that our enriched higher-fullerene sample possibly contains C86, C88, C90, C92, C94 and C96, in addition to the previously isolated smaller fullerenes C60, C70, C76, C78(D2), C78(C2v) and C84. Among these, C86, C88 and C92 show up for the first time in separable amounts, and the controversial species C94 appears to be present too. HPLC has been successfully used for higher-fullerene separation, pure C76 and C84 samples having been obtained so far. Fullerene decomposition (especially of higher fullerenes) in the column has been clearly identified. Well-defined HPLC peaks indicate that the oxidation process may follow certain “well defined” routes. A yellow epoxide band containing various oxides of C60 has been extracted and characterized using mass spectrometry. Characterizations of pure C60 and C70 include HPLC, mass spectrometry, vibrational IR and Raman spectroscopy, STM, TEM, etc. Our Raman measurements completed the full assignment of C60 fundamental modes and supplied more structural information on C70. STM imaging supplied clear pictures of both C60 and C70 molecular topologies. Especially for C70, both the long and the short axes of the molecule have been clearly resolved.
TEM observations involving imaging, diffraction and electron energy loss spectroscopy of crystalline C60 and C70 were performed. The room temperature lattice 8. Multiply-negatively charged aluminium clusters and fullerenes Walsh, Noelle 2008-07-15 Multiply negatively charged aluminium clusters and fullerenes were generated in a Penning trap using the 'electron-bath' technique. Aluminium monoanions were generated using a laser vaporisation source. After this, two-, three- and four-times negatively charged aluminium clusters were generated for the first time. This research marks the first observation of tetra-anionic metal clusters in the gas phase. Additionally, doubly-negatively charged fullerenes were generated. The smallest fullerene dianion observed contained 70 atoms. (orig.) 9. The role of fullerene shell upon stuffed atom polarization potential Amusia, M. Ya.; Chernysheva, L. V. 2015-01-01 We have demonstrated that the polarization of the fullerene shell considerably alters the polarization potential of an atom stuffed inside a fullerene. This essentially affects the electron elastic scattering phases as well as the corresponding cross-sections. We illustrate the general trend with concrete examples of electron scattering by endohedrals of neon and argon. To obtain the presented results, we have suggested a simplified approach that permits incorporating the effect of fullerenes pol... 10. Analytical and molecular dynamics studies on the impact loading of single-layered graphene sheet by fullerene Hosseini-Hashemi, Shahrokh; Sepahi-Boroujeni, Amin; Sepahi-Boroujeni, Saeid 2018-04-01 The normal impact performance of a system comprising a fullerene molecule and a single-layered graphene sheet is studied in the present paper. Firstly, through a mathematical approach, a new contact law is derived to describe the overall non-bonding interaction forces of the “hollow indenter-target” system.
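The solution strategy described for this impact problem, deriving a non-bonding contact force and then integrating the equations of motion numerically, can be illustrated with a toy one-degree-of-freedom sketch: a point mass hitting a fixed, purely repulsive wall, integrated with velocity Verlet. The r⁻⁹ potential and all parameters below are illustrative stand-ins (in reduced units), not the contact law derived in the paper:

```python
# Toy 1-DOF analogue of the fullerene-graphene impact problem: a point
# mass approaches a fixed repulsive wall and rebounds. The potential
# U(z) = a / z^9 and every parameter here are illustrative, not taken
# from the paper's derived contact law.

def wall_force(z, a=1e-3):
    # F = -dU/dz for U(z) = a / z^9 (purely repulsive)
    return 9.0 * a / z**10

def simulate(z0=2.0, v0=-1.0, m=1.0, dt=1e-4, steps=60000):
    z, v = z0, v0
    f = wall_force(z)
    for _ in range(steps):               # velocity-Verlet integration
        z += v * dt + 0.5 * (f / m) * dt * dt
        f_new = wall_force(z)
        v += 0.5 * (f + f_new) / m * dt
        f = f_new
    return z, v

z_end, v_end = simulate()
# the mass turns around near z ~ 0.5 and leaves with outgoing speed
# close to the incoming speed (the interaction is conservative)
```

Because the toy interaction is purely conservative there is no energy absorption here; capturing the trapping and energy-absorption behavior reported in the abstract requires the flexible sheet degrees of freedom that the paper's full model includes.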
Preliminary verifications show that the derived contact law gives a reliable picture of the force field of the system, in good agreement with the results of molecular dynamics (MD) simulations. Afterwards, the equation of transverse motion of the graphene sheet is utilized on the basis of both the nonlocal theory of elasticity and the assumptions of classical plate theory. Then, to derive the dynamic behavior of the system, a set comprising the proposed contact law and the equations of motion of both the graphene sheet and the fullerene molecule is solved numerically. In order to evaluate the outcomes of this method, the problem is also modeled by MD simulation. Despite intrinsic differences between the analytical and MD methods, as well as various errors arising from the transient nature of the problem, acceptable agreement is established between the analytical and MD outcomes. As a result, the proposed analytical method can be reliably used to address similar impact problems. Furthermore, it is found that a single-layered graphene sheet is capable of trapping fullerenes approaching with low velocities. Otherwise, in the case of rebound, the sheet effectively absorbs a predominant portion of the fullerene's energy. 11. Supramolecular solubilization of fullerenes and radio-fullerenes in aqueous media Braun, T. 1999-01-01 In this paper we deal with the supramolecular complexation of fullerenes C60 and C70, some functionalized fullerenes, and the dumbbell-structured C120 dimer, with two host molecules, namely γ-cyclodextrin (GCD) and sulfocalix[8]arene, in order to make them soluble in water. Previous investigations by others have shown that the reactions of the mentioned fullerenes with cyclodextrins and calixarenes are very slow and tedious in the liquid phase as a result of solvation effects. We therefore decided to pursue the supramolecular complexation as solid-solid reactions, using mechanochemical activation in a ball mill.
A mechanochemical treatment was used to enhance chemical reactivity in solid-solid reactions, in which GCD gives a 2:1 host-guest complex with C60. The calix[8]arene complex with the C60 molecule has been prepared. The sulfonated form of the host is well soluble in water. Endohedral radio-fullerenes of the X@C60 type (where *X is a rare-gas radionuclide, e.g. Ar, Xe, Kr) were prepared by nuclear recoil after neutron irradiation, a method developed by the author. The endohedrally labelled fullerenes were then mechanochemically complexed into a labelled supramolecular complex with cyclodextrin and calixarene hosts. (author) 12. Szeged Matrix Property Indices as Descriptors to Characterize Fullerenes Jäntschi Lorentz 2016-12-01 Full Text Available Fullerenes are a class of carbon allotropes organized as closed cages or tubes of carbon atoms. Fullerenes with a small number of atoms have not been frequently investigated. This paper presents a detailed treatment of total strain energy as a function of structural features extracted from isomers of the C40 fullerene using Szeged Matrix Property Indices (SMPI). The paper has a two-fold structure. First, the total strain energy of the C40 fullerene isomers (40 structures) was linked with SMPI descriptors under two scenarios, one incorporating just the SMPI descriptors and the other also containing five calculated properties (dipole moment, scf-binding-energy, scf-core-energy, scf-electronic-energy, and heat of formation). Second, the best-performing models identified for the C40 fullerene family, or the descriptors of these models, were used to predict the total strain energy of C42 fullerene isomers. The obtained results show that the inclusion of properties in the pool of descriptors reduced the number of accurate linear models. One property, namely scf-binding-energy, proved to make a significant contribution to the total strain energy of C40 fullerene isomers.
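The model building described here, linking total strain energy to descriptor columns through linear models, is ordinary least-squares multiple regression. A minimal sketch with synthetic data standing in for the SMPI descriptors (the actual descriptor values and fitted coefficients are not reproduced in the abstract):

```python
# Ordinary least-squares regression of a target property (here a
# synthetic stand-in for total strain energy) on descriptor columns
# (stand-ins for SMPI descriptors). Illustrative data only.
import numpy as np

rng = np.random.default_rng(0)
n_isomers, n_desc = 40, 4                  # 40 C40 isomers, 4 descriptors
X = rng.normal(size=(n_isomers, n_desc))
true_coef = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_coef + 10.0 + rng.normal(scale=0.01, size=n_isomers)

A = np.column_stack([X, np.ones(n_isomers)])   # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit
r2 = 1 - ((y - A @ coef)**2).sum() / ((y - y.mean())**2).sum()
# coef[:4] recovers true_coef closely; r2 is close to 1
```

Prediction for a new family (the C42 isomers in the abstract) then amounts to evaluating the fitted model on descriptor values computed for the new structures, which is where the "fair" rather than excellent transferability reported in the abstract shows up.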
However, the top three best-performing models contain just SMPI descriptors. A model with four descriptors proved to be the most accurate and showed fair ability to predict the same property for C42 fullerene isomers when the descriptors identified on C40 were used as the predictors for the C42 isomers. 13. Hydrogenated fullerenes in space: FT-IR spectra analysis El-Barbary, A. A. [Physics Department, Faculty of Education, Ain-Shams University, Cairo, Egypt; Physics Department, Faculty of Science, Jazan University, Jazan (Saudi Arabia)] 2016-06-10 Fullerenes and hydrogenated fullerenes are found in circumstellar and interstellar environments. However, the structures responsible for the bands detected in interstellar and circumstellar space are not yet completely understood. For that purpose, the aim of this article is to provide all possible infrared spectra for the C20 and C60 fullerenes and their hydrogenated fullerenes. Density functional theory (DFT) is applied using the B3LYP exchange functional with the 6-31G(d,p) basis set. Fourier transform infrared spectroscopy (FT-IR) is found to be capable of distinguishing between fullerenes, mono-hydrogenated fullerenes and fully hydrogenated fullerenes. In addition, deposition of one hydrogen atom outside a fully hydrogenated fullerene can be distinguished by the formation of an H2 molecule, with a peak around 4440 cm−1. However, deposition of one hydrogen atom inside a fully hydrogenated fullerene cannot be distinguished. The obtained spectral structures are analyzed and compared with available experimental results. 14. Deriving freshwater quality criteria of sulphocyanic sodium for the protection of aquatic life in China 1998-01-01 The freshwater quality criteria of sulphocyanic sodium (NaSCN) were studied on the basis of the features of the aquatic biota in China, and with reference to U.S. EPA's guidelines.
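Under those guidelines, the final criteria reduce to simple arithmetic on the measured toxicity values: the criterion maximum concentration (CMC) is half the final acute value (FAV), and the final chronic value (FCV) is the FAV divided by a mean acute-to-chronic ratio (ACR). A hedged sketch using the values reported below for NaSCN, with the simplifying assumption that the geometric mean is taken over just the two quoted extreme ACRs:

```python
import math

fav = 2.699             # final acute value for NaSCN, mg/L (from the abstract)
acrs = [5.96, 19.1]     # quoted extreme acute-to-chronic ratios

cmc = fav / 2.0                         # criterion maximum concentration
acr_gm = math.sqrt(acrs[0] * acrs[1])   # geometric mean of the two ratios
fcv = fav / acr_gm                      # final chronic value

print(cmc, fcv)  # ~1.349 and ~0.2530 mg/L, matching the reported criteria
```

That this two-value geometric mean reproduces the reported FCV suggests it is consistent with the study's derivation, but the full EPA procedure averages species-level ACRs.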
Acute tests were performed on twelve different domestic species to determine 48h-EC50/96h-EC50 (or 96h-LC50) values for NaSCN. A 21-d survival-reproduction test with Daphnia magna, a 60-d fry-juvenile part life stage test with Carassius auratus gibelio and a 96-h growth inhibition test with Lemna minor were also conducted to estimate lower chronic limit/upper chronic limit values. In the acute tests, D. magna was the most sensitive species to NaSCN, followed by Tilapia mossambia, Cyprinus carpio and C. auratus gibelio in turn. The final acute value of NaSCN was 2.699 mg/L. In the chronic tests, reproduction of daphnids was significantly reduced by NaSCN at 1.0 mg/L. Acute-to-chronic ratios ranged from 5.96 to 19.1. A final chronic value of 0.2530 mg/L was obtained, and a final plant value was 1346 mg/L. A criterion maximum concentration (1.349 mg/L) and a criterion continuous concentration (0.2530 mg/L) were derived, respectively. The results of this study may provide useful data to derive national WQC for NaSCN, as well as procedures for deriving WQC of other chemicals for the protection of aquatic biota in China. 15. Ultrafast spectroscopic investigation of a fullerene poly(3-hexylthiophene) dyad Banerji, Natalie; Seifter, Jason; Wang, Mingfeng; Vauthey, Eric; Wudl, Fred; Heeger, Alan J. 2011-08-01 We present the femtosecond spectroscopic investigation of a covalently linked dyad, PCB-P3HT, formed by a segment of the conjugated polymer P3HT (regioregular poly(3-hexylthiophene)) that is end-capped with the fullerene derivative PCB ([6,6]-phenyl-C61-butyric acid ester), adapted from PCBM. The fluorescence of the P3HT segment in tetrahydrofuran (THF) solution is reduced by 64% in the dyad compared to a control compound without attached fullerene (P3HT-OH). Fluorescence up-conversion measurements reveal that the partial fluorescence quenching of PCB-P3HT in THF is multiphasic and occurs on an average time scale of 100 ps, in parallel with excited-state relaxation processes.
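The two quenching figures quoted for the dyad relate through the usual steady-state and time-resolved quenching expressions: efficiency E = 1 - I_DA/I_D and, for a single-exponential decay, transfer rate k_ET = E / ((1 - E) * tau_D). A sketch in which the unquenched donor lifetime tau_D is a hypothetical placeholder, not a value from the study:

```python
# Energy-transfer efficiency from the steady-state intensity ratio:
# 64% quenching means the dyad emits 36% of the control's intensity.
i_ratio = 0.36
efficiency = 1.0 - i_ratio          # ~0.64

# Rate and quenched lifetime under a single-exponential model.
# tau_d is an ASSUMED unquenched donor lifetime, not a measured value.
tau_d = 500e-12                                    # seconds (hypothetical)
k_et = efficiency / ((1.0 - efficiency) * tau_d)   # transfer rate, s^-1
tau_quenched = (1.0 - efficiency) * tau_d          # quenched donor lifetime, s
```

Since the abstract describes multiphasic quenching, a single rate constant is only a rough effective description; the reported ~100 ps average time scale corresponds to the quenched-lifetime side of this relation.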
Judging from ultrafast transient absorption experiments, the origin of the quenching is excitation energy transfer from the P3HT donor to the PCB acceptor. Due to the much higher solubility of P3HT compared to PCB in THF, the PCB-P3HT dyad molecules self-assemble into micelles. When pure C60 is added to the solution, it is incorporated into the fullerene-rich center of the micelles. This dramatically increases the solubility of C60 but does not lead to significant additional quenching of the P3HT fluorescence by the C60 contained in the micelles. In PCB-P3HT thin films drop-cast from THF, the micelle structure is conserved. In contrast to solution, quantitative and ultrafast (…) microscopy images. Ultrafast charge separation occurs also for the fibrous morphology, but the transient absorption experiments show fast loss of part of the charge carriers due to intensity-induced recombination and annihilation processes and monomolecular interfacial trap-mediated or geminate recombination. The yield of the long-lived charge carriers in the highly organized fibers is, however, comparable to that obtained with annealed P3HT:PCBM blends. PCB-P3HT can therefore be considered an active material for organic photovoltaic devices. 16. The impact of electrostatic interactions on ultrafast charge transfer at Ag29 nanocluster–fullerene and CdTe quantum dot–fullerene interfaces Ahmed, Ghada H. 2015-11-09 A profound understanding of charge transfer (CT) at semiconductor quantum dot (QD) and nanocluster (NC) interfaces is extremely important for optimizing the energy-conversion efficiency of QD- and NC-based solar cell devices. Here, we report on the ground- and excited-state interactions at the interface of two different bimolecular non-covalent donor-acceptor (D-A) systems using steady-state and femtosecond transient absorption (fs-TA) spectroscopy with broadband capabilities.
We systematically investigate the electrostatic interactions between the positively charged fullerene derivative C60-(N,N-dimethylpyrrolidinium iodide) (CF), employed as an efficient molecular acceptor, and two different donor molecules: Ag29 nanoclusters (NCs) and CdTe quantum dots (QDs). For comparison purposes, we also monitor the interaction of each donor molecule with the neutral fullerene derivative C60-(malonic acid)n, which has minimal electrostatic interactions. Our steady-state and time-resolved data demonstrate that both QDs and NCs exhibit strong interfacial electrostatic interactions and dramatic fluorescence quenching when the CF derivative is present. In other words, our results reveal that only CF can be in close molecular proximity to the QDs and NCs, allowing ultrafast photoinduced CT to occur. It turns out that the intermolecular distances, the electronic coupling and consequently the CT from the excited QDs or NCs to the fullerene derivatives can be controlled by the interfacial electrostatic interactions. Our findings highlight some of the key variables for optimizing CT at QD and NC interfaces, which can also be applied to other D-A systems that rely on interfacial CT. © The Royal Society of Chemistry 2016. 17. Modified denatured lysozyme effectively solubilizes fullerene C60 nanoparticles in water Siepi, Marialuisa; Politi, Jane; Dardano, Principia; Amoresano, Angela; De Stefano, Luca; Monti, Daria Maria; Notomista, Eugenio 2017-08-01 Fullerenes, allotropic forms of carbon, have very interesting pharmacological effects and engineering applications. However, their very low solubility in both organic solvents and water hinders their use. Fullerene C60, the most studied among fullerenes, can be dissolved in water only in the form of nanoparticles of variable dimensions and limited stability. Here, the effect of native and denatured hen egg white lysozyme, a highly basic protein, on the production of C60 nanoparticles has been systematically studied.
In order to obtain a denatured, yet soluble, lysozyme derivative, the four disulfides of the native protein were reduced and the exposed cysteines were alkylated with 3-bromopropylamine, thus introducing eight additional positive charges. The C60-solubilizing properties of the modified denatured lysozyme proved to be superior to those of the native protein, allowing the preparation of biocompatible, highly homogeneous and stable C60 nanoparticles using lower amounts of protein, as demonstrated by dynamic light scattering, transmission electron microscopy and atomic force microscopy studies. This lysozyme derivative could represent an effective tool for the solubilization of other carbon allotropes. 18. Formation and properties of electroactive fullerene based films with a covalently attached ferrocenyl redox probe Wysocka-Zolopa, Monika; Winkler, Krzysztof; Caballero, Ruben; Langa, Fernando 2011-01-01 Highlights: → Formation of redox-active films of ferrocene derivatives of C60 and palladium. → Fullerene moieties are covalently bonded to palladium atoms to form a polymeric network. → Electrochemical activity at both positive and negative potentials. → Charge-transfer processes are accompanied by transport of supporting electrolyte to and from the polymer layers. - Abstract: Redox-active films have been produced via electrochemical reduction in a solution containing palladium(II) acetate and ferrocene derivatives of C60 (Fc-C60 and bis-Fc-C60). In these films, fullerene moieties are covalently bonded to palladium atoms to form a polymeric network. Fc-C60/Pd and bis-Fc-C60/Pd films form uniform and relatively smooth layers on the electrode surface. These films are electrochemically active in both the positive and negative potential regions. At negative potentials, reduction of the fullerene moiety takes place, resulting in voltammetric behavior resembling that typical of conducting polymers.
In the positive potential range, oxidation of ferrocene is responsible for the formation of a sharp and symmetrical peak in the voltammograms. In this potential range, the studied films behave as typical redox polymers. The charge associated with the oxidation process depends on the number of ferrocene units attached to the C60 moiety. Oxidation and reduction of these redox-active films are accompanied by transport of supporting electrolyte to and from the polymer layer. The films also show a higher permeability to anions than to cations. 19. Electron transport in doped fullerene molecular junctions Kaur, Milanpreet; Sawhney, Ravinder Singh; Engles, Derick The effect of doping on the electron transport of molecular junctions is analyzed in this paper. The doped fullerene molecules are strung between two semi-infinite gold electrodes and analyzed at equilibrium and nonequilibrium conditions of these device configurations. The analysis is carried out using nonequilibrium Green's function (NEGF)-density functional theory (DFT) to evaluate the density of states (DOS), transmission coefficient, molecular orbitals, electron density, charge transfer, current, and conductance. We conclude from the results that Au-C16Li4-Au and Au-C16Ne4-Au devices behave as an ordinary p-n junction diode and a Zener diode, respectively. Moreover, these doped fullerene molecules do not lose their metallic nature when sandwiched between the pair of gold electrodes. 20. Preparation of fullerene/glass composites Mattes, Benjamin R.; McBranch, Duncan W.; Robinson, Jeanne M.; Koskelo, Aaron C.; Love, Steven P. 1995-01-01 Synthesis of fullerene/glass composites. A direct method for preparing solid solutions of C60 in silicon dioxide (SiO2) glass matrices by means of sol-gel chemistry is described.
In order to produce highly concentrated fullerene-sol-gel composites, it is necessary to increase the solubility of these "guests" in a delivery solvent which is compatible with the starter sol (receiving solvent). Sonication results in aggregate disruption by treatment with high-frequency sound waves, thereby accelerating the rate of hydrolysis of the alkoxide precursor and the solution process for the C60. Depending upon the preparative procedure, C60 dispersed within the glass matrix as microcrystalline domains, or dispersed as a true molecular solution of C60 in a solid glass matrix, is generated by the present method. 1. Fullerenes, nanotubes, onions and related carbon structures Rao, C N.R.; Seshadri, Ram; Govindaraj, A; Sen, Rahul [Solid State and Structural Chemistry Unit, CSIR Centre of Excellence in Chemistry and Materials Research Centre, Indian Institute of Science, Bangalore (India)] 1995-12-01 Fullerenes, containing five- and six-membered carbon rings, of which C60 and C70 are the prominent members, exhibit phase transitions associated with orientational ordering. When C60 is suitably doped with electrons, it shows novel superconducting and magnetic properties. We review these and other properties of fullerenes in bulk or in film form, along with the preparative and structural aspects. Carbon nanotubes and onions (hyperfullerenes) are the other forms of carbon whose material properties have aroused considerable interest. Besides discussing these new forms of carbon, we briefly introduce other possible forms, such as those involving five-, six- and seven-membered rings and hybrids between diamond and graphite. 2.
Lateral translation of covalently bound fullerenes Humphry, M J; Beton, P H; Keeling, D L; Fawcett, R H J; Moriarty, P; Butcher, M J; Birkett, P R; Walton, D R M; Taylor, R; Kroto, H W 2006-01-01 Lateral manipulation of fullerenes on clean silicon surfaces may be induced by either an attractive or a repulsive interaction between the adsorbed molecules and the tip of a scanning probe microscope, and can result in a complex response arising from molecular rolling. The model for rolling is supported by new results which show that manipulation is suppressed for adsorbed functionalized fullerenes due to the presence of phenyl side groups. The influence of varying the dwell time of the tip during manipulation is also reported. By reducing this time to a value which is less than the response time of the feedback control loop, it is possible to induce manipulation in a quasi-constant-height mode, which is accompanied by large increases/decreases in current. 3. Boron hydride analogues of the fullerenes Quong, A.A.; Pederson, M.R.; Broughton, J.Q. 1994-01-01 The BH moiety is isoelectronic with C. We have studied the stability of the (BH)60 analogue of the C60 fullerene as well as the dual-structure (BH)32 icosahedron, both of them putative structures, by performing local-density-functional electronic calculations. To aid in our analysis, we have also studied other homologues of these systems. We find that the latter, i.e., the dual structure, is the more stable, although the former is as stable as one of the latter's lower homologues. Boron hydrides, it seems, naturally form the dual structures used in algorithmic optimization of complex fullerene systems. Fully relaxed geometries are reported, as well as electron affinities and effective Hubbard U parameters. These systems form very stable anions, and we conclude that a search for BH analogues of the C60 alkali-metal superconductors might prove very fruitful. 4.
Photovoltaic properties of conjugated polymer/fullerene composites on large area flexible substrates Desta Gebeyehu 2000-06-01 In this paper we present measurements of the photovoltaic response of a bulk donor-acceptor heterojunction between the conjugated polymer poly(3-octylthiophene) (P3OT) as the donor (D) and a fullerene (methanofullerene) as the acceptor (A), deposited between indium tin oxide and aluminum electrodes. The innovation involves the substrate, which is a polymer foil instead of glass. These devices are based on ultrafast, reversible, metastable photoinduced electron transfer and charge separation. We also present efficiency and stability studies on large area (6 cm x 6 cm) flexible plastic solar cells with a monochromatic energy conversion efficiency of about 1.5% and a carrier collection efficiency of nearly 20%. Furthermore, we have investigated the surface network morphology of these film layers by atomic force microscopy (AFM). The development of solar cells based on composites of organic conjugated semiconducting polymers with fullerene derivatives can provide a new approach to the exploitation of solar energy. 5. Pentacene–fullerene bulk-heterojunction solar cell: A computational study Pramanik, Anup [Department of Chemistry, Visva-Bharati University, Santiniketan 731235 (India)]; Sarkar, Sunandan [Department of Chemistry, Visva-Bharati University, Santiniketan 731235 (India); Dept. of Physical Chemistry, Palacký University, Olomouc (Czech Republic)]; Pal, Sougata [Department of Chemistry, University of Gour Banga, Malda 732103 (India)]; Sarkar, Pranab, E-mail: pranab.sarkar@visva-bharati.ac.in [Department of Chemistry, Visva-Bharati University, Santiniketan 731235 (India)] 2015-06-12 We perform DFT/TDDFT calculations to study the optoelectronic properties of some pentacene-based organic molecules and their derivatives, which can serve as the donor moiety when blended with fullerene acceptors in the bulk-heterojunction solar cell model.
We are motivated by a recent experiment in which an unoptimized device was shown to have good photovoltaic performance, and we aim to further improve the efficiency of this device. We try to optimize the photovoltaic properties on the basis of quantum-mechanical calculations of the frontier energy levels and of the absorption properties of the individual molecules and of the molecule–fullerene composite. - Highlights: • Optoelectronic properties of pentacene–fullerene nanocomposites are presented. • Photovoltaic properties of the nanocomposites are predicted. • DFT/TDDFT results are in good agreement with available experimental results. • Calculated results give a direction for optimizing device performance. 6. Fullerene nanostructure design with cluster ion impacts Lavrentiev, Vasyl; Vacík, Jiří; Naramoto, H.; Narumi, K. 2009-01-01 Roč. 483, - (2009), s. 479-483 ISSN 0925-8388 R&D Projects: GA AV ČR IAA200480702; GA AV ČR IAA400100701; GA AV ČR(CZ) KAN400480701 Institutional research plan: CEZ:AV0Z10480505 Keywords: fullerene films * C60+ clusters * cluster ion implantation * patterning Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 2.135, year: 2009 7. PREFACE: Fullerene Nano Materials (Symposium of IUMRS-ICA2008) Miyazawa, Kun'ichi; Fujita, Daisuke; Wakahara, Takatsugu; Kizuka, Tokushi; Matsuishi, Kiyoto; Ochiai, Yuichi; Tachibana, Masaru; Ogata, Hironori; Mashino, Tadahiko; Kumashiro, Ryotaro; Oikawa, Hidetoshi 2009-07-01 This volume contains peer-reviewed invited and contributed papers that were presented in Symposium N 'Fullerene Nano Materials' at the IUMRS International Conference in Asia 2008 (IUMRS-ICA 2008), which was held on 9-13 December 2008 at the Nagoya Congress Center, Nagoya, Japan. Over twenty years have passed since the discovery of C60 in 1985. The discovery of superconductivity of C60 in 1991 suggested infinite possibilities for fullerenes.
On the other hand, a new field of nanocarbon has developed recently, based on the novel functions of low-dimensional fullerene nanomaterials, which include fullerene nanowhiskers, fullerene nanotubes, fullerene nanosheets, chemically modified fullerenes, endohedral fullerenes, thin films of fullerenes and so forth. Electrical, electrochemical, optical, thermal, mechanical and various other properties of fullerene nanomaterials have been investigated, and their novel and anomalous nature has been reported. Biological properties of fullerene nanomaterials have also been investigated, both in medical applications and in toxicity aspects. The recent research developments of fullerene nanomaterials cover a variety of categories owing to their functional diversity. This symposium aimed to review progress in the state-of-the-art technology based on fullerenes and to offer a forum for active interdisciplinary discussions. 24 oral papers, including 8 invited papers, and 22 poster papers were presented at the two-day symposium. Topics on the social acceptance of nanomaterials including fullerenes were presented on the first day of the symposium. Biological impacts of nanomaterials and the importance of standardization of nanomaterials characterization were also discussed. On the second day, the synthesis, properties, functions and applications of various fullerene nanomaterials were presented in both the oral and poster sessions. We are grateful to all invited speakers and the many participants for their valuable contributions and active discussions. 8. Redox potentials and binding enhancement of fullerene and fullerene-cyclodextrin systems in water and dimethylsulfoxide Pospíšil, Lubomír; Hromadová, Magdaléna; Gál, Miroslav; Kocábová, Jana; Sokolová, Romana; Filippone, S.; Yang, J.; Guan, Z.; Rassat, A.; Zhang, Y. 2010-01-01 Roč. 48, č. 1 (2010), s.
153-162 ISSN 0008-6223 R&D Projects: GA ČR GA203/09/0705; GA ČR GA203/08/1157; GA ČR GP203/09/P502; GA MŠk LC510; GA MŠk ME09114; GA MŠk OC 140 Institutional research plan: CEZ:AV0Z40400503 Keywords: electrochemistry * fullerenes * fullerene-cyclodextrin systems Subject RIV: CG - Electrochemistry Impact factor: 4.893, year: 2010 9. Effect of uridine protecting groups on the diastereoselectivity of uridine-derived aldehyde 5’-alkynylation Raja Ben Othman 2017-08-01 The 5’-alkynylation of uridine-derived aldehydes is described. The addition of alkynyl Grignard reagents to the carbonyl group is significantly influenced by the 2’,3’-di-O-protecting groups (R1): O-alkyl groups led to modest diastereoselectivities (65:35) in favor of the 5’R-isomer, whereas O-silyl groups promoted higher diastereoselectivities (up to 99:1) in favor of the 5’S-isomer. A study related to this protecting-group effect on the diastereoselectivity is reported. 10. Nuclear reactions and radionuclides in the study of fullerenes Nakahara, H.; Sueki, K.; Sato, W.; Akiyama, K. 2000-01-01 Radiochemical techniques have been applied in various ways to the study of fullerenes and metallofullerenes for the past several years, and they have provided invaluable information pertaining to the stability, structures, and formation of this novel carbon material. This paper reviews those experimental results that have fully shown the usefulness and uniqueness of radionuclides in the field of fullerene science. (author) 11. Nanoencapsulation of Fullerenes in Organic Structures with Nonpolar Cavities Murthy, C. N. 2005-01-01 The formation of supramolecular structures, assemblies, and arrays held together by weak intermolecular interactions and non-covalent binding, mimicking natural processes, has found use in applications anticipated in nanotechnology, biotechnology and the emerging field of nanomedicine.
Encapsulation of the C60 fullerene by cyclic molecules like cyclodextrins and calixarenes has potential for a number of applications. Similarly, biomolecules like lysozyme have also been shown to encapsulate the C60 fullerene. This poster article reports the recent trends and the results obtained in the nanoencapsulation of fullerenes by biomolecules containing nonpolar cavities. Lysozyme was chosen as the model biomolecule, and it was observed that no covalent bond is formed between the biomolecule and the C60 fullerene. This was confirmed by fluorescence energy transfer studies. UV-Vis studies further supported the observation that it is possible to selectively remove the C60 fullerene from the nonpolar cavity. This behavior has potential in biomedical applications. 12. Radiological protection effect of vanillin derivative VND3207 against radiation-induced cytogenetic damage in mouse bone marrow cells Wang Chuangao; Wang Li; Zhou Pingkun; Wang Zhongwen; Hu Yongzhe; Jin Haiming; Zhang Xueqing; Chen Ying 2010-01-01 Objective: To study the protective effect of the vanillin derivative VND3207 against cytogenetic damage of mouse bone marrow cells induced by ionizing radiation. Methods: BALB/c mice were randomly divided into five groups: a normal control group, a 2 Gy irradiation group, and three groups receiving 2 Gy irradiation with VND3207 protection at doses of 10, 50 and 100 mg/kg, respectively. VND3207 was given by intragastric administration once a day for five days. Two hours after the last drug administration, the mice were irradiated with 2 Gy γ-rays. The changes in polychromatophilic erythroblast micronuclei (MN), chromosome aberrations (CA) and the mitotic index (MI) of mouse bone marrow cells were observed at 24 and 48 h after irradiation.
Results: Under the protection of VND3207 at dosages of 10, 50 and 100 mg/kg, the yields of polychromatophilic erythroblast MN and of CA in bone marrow cells were significantly decreased (t=2.36-4.26, P<0.05), and the marrow cell MI remained at a much higher level compared with the irradiated mice without drug protection (t=2.58, 2.01, P<0.05). The radiological protection effect was drug-dose-dependent, and administration of VND3207 at the dosage of 100 mg/kg resulted in reductions of 50% and 65% in the yields of MN and CA, respectively. Conclusions: VND3207 had a good protective effect against γ-ray-induced cytogenetic damage of mouse bone marrow cells. (authors) 13. Sol-gel derived C-SiC composites and protective coatings for sustained durability in the space environment Haruvy, Yair; Liedtke, Volker 2003-09-01 Composites and coatings were produced via the fast sol-gel process from a mixture of alkoxysilane precursors. The composites were comprised of carbon fibers, fabrics, or their precursors as reinforcement, and sol-gel-derived silicon carbide as matrix, aiming at high-temperature-stable ceramics that can be utilized for re-entry structures. The protective coatings were comprised of fluorine-rich sol-gel-derived resins, which exhibit the high flexibility and coherence needed to provide the sustained ATOX protection necessary for LEO space-exposed elements. For producing the composites, the sol-gel-derived resin is cast onto the reinforcement fiber/fabric mat (carbon or its precursors) to produce a 'green' composite that is then cured. The 'green' composite is converted into a C-SiC composite via a gradual heat-pressure process under an inert atmosphere, during which the organic substituents on the silicon atoms undergo internal oxidative pyrolysis via the schematic reaction: (SiRO3/2)n -> SiC + CO2 + H2O. The composition of the resultant silicon oxycarbide is tailorable by modifying the composition of the sol-gel reactants.
The reinforcement, when made of carbon precursors, is converted into carbon during the heat-and-pressure processing as well. The C-SiC composites thus derived exhibit superior thermal stability and comparable thermal conductivity, combined with good mechanical strength and failure resistance, which render them highly suitable for re-entry shielding, heat-exchange pipes, and the like. Fluorine-rich sol-gel-derived coatings were developed as well, via an HF-rich sol-gel process. These coatings provide oxidation protection via the silica-formation process, together with a flexibility that allows 18,000 repeated foldings of the coating without cracking. 14. Single step fabrication method of fullerene/TiO2 composite photocatalyst for hydrogen production Kum, Jong Min; Cho, Sung Oh 2011-01-01 Hydrogen is one of the most promising alternative energy sources. Fossil fuel, the most widely used energy source, has two defects: CO2 emission, which causes global warming, and eventual exhaustion. Hydrogen, on the other hand, emits no CO2 and can be produced by splitting water, which is a renewable and easily obtainable source. However, about 95% of hydrogen is derived from fossil fuel, which limits the merits of hydrogen: hydrogen from fossil fuel is not renewable anymore. To maximize the merits of hydrogen, renewability and zero CO2 emission, unconventional hydrogen production methods that do not use fossil fuel are required. Photocatalytic water splitting is one such unconventional hydrogen production method. Photocatalytic water splitting, which uses the hole/electron pairs of a semiconductor, is a promising way to produce clean and renewable hydrogen from solar energy. TiO2 is the semiconductor material most widely used as a photocatalyst. TiO2 shows high photocatalytic reactivity and stability in water. However, its wide band gap allows absorption only of UV light, which is only about 5% of sunlight.
To enhance the visible-light response, composites with fullerene-based materials have been investigated [1-2]. Methano-fullerene carboxylic acid (FCA) is one of these fullerene-based materials. We tried to fabricate an FCA/TiO2 composite using a UV-assisted single-step method. The method not only simplified the fabrication procedure but also enhanced the hydrogen production rate. 15. Polymer Derived Rare Earth Silicate Nanocomposite Protective Coatings for Nuclear Thermal Propulsion Systems, Phase II National Aeronautics and Space Administration — Leveraging a rapidly evolving state-of-the-art technical base empowered by Phase I NASA SBIR funding, NanoSonic's polymer derived rare earth silicate EBCs will... 16. Polymer Derived Rare Earth Silicate Nanocomposite Protective Coatings for Nuclear Thermal Propulsion Systems, Phase I National Aeronautics and Space Administration — The objective of this Phase I SBIR program is to develop polymer derived rare earth silicate nanocomposite environmental barrier coatings (EBC) for providing... 17. Fullerenes vs fulleroids. Understanding their relative energies Warner, P.M. (Northeastern Univ., Boston, MA (United States)) 1994-11-30 Both force-field (MMPI) and AM1 (restricted and unrestricted HF) calculations are used herein to investigate the underlying reasons for the fullerene-fulleroid structural dichotomies observed in carbene, silylene, nitrene, and oxygen adducts of C60. Via the investigation of a series of model systems, it is demonstrated that curvature actually favors the open, fulleroid structure; this effect of curvature on the norcaradiene-cycloheptatriene equilibrium is general. Strategies for the creation of 6,6-bridged fulleroids are suggested. 29 refs., 6 tabs. 18. Thyroxin treatment protects against white matter injury in the immature brain via brain-derived neurotrophic factor.
Hung, Pi-Lien; Huang, Chao-Ching; Huang, Hsiu-Mei; Tu, Dom-Gene; Chang, Ying-Chao 2013-08-01 A low level of thyroid hormone is a strong independent risk factor for white matter (WM) injury, a major cause of cerebral palsy, in preterm infants. Thyroxin upregulates brain-derived neurotrophic factor during development. We hypothesized that thyroxin protects against preoligodendrocyte apoptosis and WM injury in the immature brain via upregulation of brain-derived neurotrophic factor. Postpartum (P) day-7 male rat pups were exposed to hypoxic ischemia (HI) and intraperitoneally injected with thyroxin (T4; 0.2 mg/kg or 1 mg/kg) or normal saline immediately after HI and at P9 and P11. WM damage was analyzed for myelin formation, axonal injury, astrogliosis, and preoligodendrocyte apoptosis. Neurotrophic factor expression was assessed by real-time polymerase chain reaction and immunohistochemistry. Neuromotor functions were measured using open-field locomotion (P11 and P21), inclined plane climbing (P11), and beam walking (P21). Intracerebroventricular injection of TrkB-Fc or systemic administration of 7,8-dihydroxyflavone was performed. On P11, the HI group had significantly lower blood T4 levels than the controls. The HI group showed ventriculomegaly and marked reduction of myelin basic protein immunoreactivity in the WM. T4 (1 mg/kg) treatment after HI markedly attenuated axonal injury, astrocytosis, and microgliosis, and increased preoligodendrocyte survival. In addition, T4 treatment significantly increased myelination, selectively upregulated brain-derived neurotrophic factor expression in the WM, and improved neuromotor deficits after HI. The protective effect of T4 on WM myelination and neuromotor performance after HI was significantly attenuated by TrkB-Fc. Systemic 7,8-dihydroxyflavone treatment ameliorated hypomyelination after HI injury.
T4 protects against WM injury at both pathological and functional levels via upregulation of brain-derived neurotrophic factor-TrkB signaling in the immature brain. 19. Process for the preparation of protected dihydroxypropyl trialkylammonium salts and derivatives thereof Hollingsworth, Rawle I. (Haslett, MI); Wang, Guijun (East Lansing, MI) 2000-01-01 A process for the preparation of protected dihydroxypropyl trialkylammonium salts, particularly in chiral form, is described. In particular, a process for the preparation of (2,2-dimethyl-1,3-dioxolan-4-ylmethyl)trialkylammonium salts, particularly in chiral form, is described. Furthermore, a process is described wherein the (2,2-dimethyl-1,3-dioxolan-4-ylmethyl)trialkylammonium salt is a 2,2-dimethyl-1,3-dioxolan-4-ylmethyl trimethylammonium salt, preferably in chiral form. The protected dihydroxypropyl trialkylammonium salts lead to L-carnitine (9) when in chiral form (5). 20. Process for the preparation of protected 3-amino-1,2-dihydroxypropane acetal and derivatives thereof Hollingsworth, R.I.; Wang, G. 2000-03-21 This application describes a process for producing protected 3-amino-1,2-dihydroxypropane acetal, particularly in chiral forms, for use as an intermediate in the preparation of various 3-carbon compounds which are chiral. In particular, the present invention relates to the process for preparation of 3-amino-1,2-dihydroxypropane isopropylidene acetal. The protected 3-amino-1,2-dihydroxypropane acetal is a key intermediate to the preparation of chiral 3-carbon compounds which in turn are intermediates to various pharmaceuticals. 1. Process for the preparation of protected dihydroxypropyl trialkylammonium salts and derivatives thereof Hollingsworth, R.I.; Wang, G. 2000-07-04 A process for the preparation of protected dihydroxypropyl trialkylammonium salts, particularly in chiral form, is described.
In particular, a process for the preparation of (2,2-dimethyl-1,3-dioxolan-4-ylmethyl)trialkylammonium salts, particularly in chiral form, is described. Furthermore, a process is described wherein the (2,2-dimethyl-1,3-dioxolan-4-ylmethyl)trialkylammonium salt is a 2,2-dimethyl-1,3-dioxolan-4-ylmethyl trimethylammonium salt, preferably in chiral form. The protected dihydroxypropyl trialkylammonium salts lead to L-carnitine when in chiral form. 2. Morphology control of polymer: Fullerene solar cells by nanoparticle self-assembly Zhang, Wenluan During the past two decades, research in the field of polymer-based solar cells has attracted great effort due to their simple processing, mechanical flexibility and potentially low cost. A standard polymer solar cell is based on the concept of a bulk heterojunction composed of a conducting polymer as the electron donor and a fullerene derivative as the electron acceptor. Since the exciton lifetime is limited, this places extra emphasis on control of the morphology to obtain improved device performance. In this thesis, detailed characterization and novel morphological design of polymer solar cells were studied; in addition, preliminary efforts to transfer laboratory-scale methods to industrialized device fabrication were made. Magnetic contrast neutron reflectivity was used to study the vertical concentration distribution of fullerene nanoparticles within poly(2,5-bis(3-tetradecylthiophen-2-yl)thieno[3,2-b]thiophene) (pBTTT) thin films. Due to the wide space between the side chains of the polymer, these fullerene nanoparticles intercalate between them, creating a stable co-crystal structure. Therefore, a high volume fraction of fullerene was needed to obtain optimal device performance, as phase-separated conductive pathways are required, and this resulted in a homogeneous fullerene concentration profile through the film.
Small-angle neutron scattering was used to show that amorphous fullerene is present even at lower concentrations, although it was previously believed that all the fullerene formed a co-crystal. These fullerene molecules evolve into approximately 15 nm agglomerates at higher concentrations, improving electron transport. Unfortunately, thermal annealing gives these agglomerates the mobility to form micrometer-sized crystals and reduce the device performance. In standard poly(3-hexylthiophene) (P3HT):[6,6]-phenyl-C61-butyric acid methyl ester (PCBM) solar cells, a higher concentration of PCBM at the cathode interface is desired due to the band alignment structure. This was... 3. High Efficiency Conjugated Polymer Donor and Fullerene Derivative Acceptor Photovoltaic Materials for Polymer Solar Cells Li Yongfang 2011-01-01 Polymer solar cells (PSCs) are composed of a blend film (active layer) of a conjugated polymer donor and a fullerene derivative acceptor sandwiched between a transparent ITO positive electrode and a low-workfunction metal negative electrode. PSCs have become a hot research field in recent years, due to their unique advantages of simple fabrication, low cost, light weight and the capability to be fabricated into flexible devices. The present research focus is to improve their photovoltaic power conversion efficiency (PCE), and the key to improving PCE is high-efficiency photovoltaic materials. In this paper, I mainly introduce the recent research progress of the Institute of Chemistry, Chinese Academy of Sciences (ICCAS) on new conjugated polymer donor and fullerene derivative photovoltaic materials, including the donor materials of two-dimensional conjugated polymers with conjugated side chains, conjugated polymers with electron-withdrawing substituents for lower HOMO energy levels, and D-A copolymers with broad absorption and lower HOMO energy levels, and the acceptor materials of indene-C60 bisadduct (ICBA) and indene-C70 bisadduct.
The highest PCE of the PSCs based on the conjugated polymer donor materials reached 7.59%, which is one of the highest efficiencies reported in the literature so far. The PSCs based on P3HT as donor and our ICBA as acceptor showed a PCE higher than 7%, which is the highest efficiency for PSCs based on P3HT. 4. Simple method for determining fullerene negative ion formation Felfli, Zineb; Msezane, Alfred Z. 2018-04-01 A robust potential wherein is embedded the crucial core-polarization interaction is used in the Regge-pole methodology to calculate the low-energy electron elastic scattering total cross section for the C60 fullerene in the electron impact energy range 0.02 ≤ E ≤ 10.0 eV. The energy position of the characteristic dramatically sharp resonance appearing at the second Ramsauer-Townsend minimum of the total cross section, representing stable C60− fullerene negative ion formation, agrees excellently with the measured electron affinity of C60 [Huang et al., J. Chem. Phys. 140, 224315 (2014)]. The benchmarked potential and the Regge-pole methodology are then used to calculate electron elastic scattering total cross sections for selected fullerenes, from C54 through C240. The total cross sections are found to be characterized generally by Ramsauer-Townsend minima, shape resonances and dramatically sharp resonances representing long-lived states of fullerene negative ion formation. For the total cross sections of C70, C76, C78, and C84 the agreement between the energy positions of the very sharp resonances and the measured electron affinities is outstanding.
Additionally, we compare the energy positions of the resultant fullerene anions, extracted from our calculated total cross sections of the C86, C90 and C92 fullerenes, with the electron affinities of ≥3.0 eV estimated by experiment [Boltalina et al., Rapid Commun. Mass Spectrom. 7, 1009 (1993)]. Resonance energy positions of other fullerenes, including C180 and C240, are also obtained. Most of the total cross sections presented in this paper are the first and only ones available; our novel approach is general and should be applicable to other fullerenes as well as to complex heavy atoms, such as the lanthanides. We conclude with a remark on the catalytic properties of the fullerenes through their negative ions. 5. Fullerenes: prospects of using in medicine, biology and ecology D. V. Schur 2012-02-01 Results of our own research and academic literature data on the properties of fullerenes and carbon nanotubes are analysed and summarized. Chemical stability of the structure and low toxicity of fullerenes determine their usage in medical chemistry, pharmacology and cosmetology. Due to their mechanical strength, the nanotubes have become the basis of clean construction and barrier materials. It is shown that a matrix based on fullerite C60 can be obtained. It can store up to 7.7 wt.% hydrogen, with formation of hydrofullerite C60H60. The usage of fullerenes for accumulation and storage of hydrogen enhances the prospects of clean hydrogen energy development. 6. Inorganic Fullerene-Like Nanoparticles and Inorganic Nanotubes Reshef Tenne 2014-11-01 Fullerene-like nanoparticles (inorganic fullerenes; IF) and nanotubes of inorganic layered compounds (inorganic nanotubes; INT) combine low dimensionality and nanosize, enhancing the performance of corresponding bulk counterparts in their already known applications, as well as opening new fields of their own [1].
This issue gathers articles from the diverse area of materials science and is devoted to fullerene-like nanoparticles and nanotubes of layered sulfides and boron nitride and collects the most current results obtained at the interface between fundamental research and engineering. [...] 7. Topological edge properties of C60+12n fullerenes A. Mottaghi 2013-06-01 A molecular graph M is a simple graph in which atoms and chemical bonds are the vertices and edges of M, respectively. The molecular graph M is called a fullerene graph, if M is the molecular graph of a fullerene molecule. It is well-known that such molecules exist for even integers n ≥ 24 or n = 20. The aim of this paper is to investigate the topological properties of a class of fullerene molecules containing 60 + 12n carbon atoms. 8. Continuum simulations of water flow past fullerene molecules Popadic, A.; Praprotnik, M.; Koumoutsakos, P. 2015-01-01 We present continuum simulations of water flow past fullerene molecules. The governing Navier-Stokes equations are complemented with the Navier slip boundary condition with a slip length that is extracted from related molecular dynamics simulations. We find that several quantities of interest... as computed by the present model are in good agreement with results from atomistic and atomistic-continuum simulations at a fraction of the cost. We simulate the flow past a single fullerene and an array of fullerenes and demonstrate that such nanoscale flows can be computed efficiently by continuum flow... 9. CyAN satellite-derived Cyanobacteria products in support of Public Health Protection The timely distribution of satellite-derived cyanoHAB data is necessary for adaptive water quality management decision-making and for targeted deployment of existing government and non-government water quality monitoring resources. The Cyanobacteria Assessment Network (CyAN) is a... 10.
The radioprotective effects of carboxy fullerene C3 on AHH-1 cells Shan, Husheng; Cai, Jianming; Huang, Yuecheng; Cui, Jianguo; Liu, Hanchen; Sun, Ding; Zhao, Fang; Dong, Junru; Li, Bailong 2008-01-01 Purpose: To investigate the radioprotective effects of carboxy fullerene C3 on AHH-1 cells and its prospects as a novel radioprotectant. Materials and Methods: Carboxy fullerene C3 was prepared by chemical synthesis, and a trypan blue exclusion test was performed to detect its cytotoxicity to AHH-1 cells. Different concentrations of C3 were then used to treat AHH-1 cells after irradiation with 60Co γ-rays. Annexin-V/PI staining and flow cytometry were applied to assess cell proliferation and apoptosis after irradiation. Results: C3 showed little toxicity to AHH-1 cells, with little change in the trypan blue exclusion rate over the drug concentration range 0-400 mg/L (P > 0.05). C3 had good radioprotective effects on AHH-1 cells irradiated with 1-8 Gy γ-rays. At a concentration of 10 mg/L, C3 showed protective effects on AHH-1 cells irradiated with 4 Gy γ-rays, which were enhanced with increasing C3 concentration. When the final concentration reached 200-400 mg/L, the cell survival rate after irradiation was similar to that of non-irradiated control cells (P > 0.05), and the irradiation-induced apoptosis and death rates were significantly lower than those of the radiation-only group (P < 0.05). The radioprotective effects of C3 were time-dependent, and the best protection was observed when C3 was administered before irradiation (0-24 h). Conclusion: Carboxy fullerene C3 has good, dose-dependent radioprotective effects on AHH-1 cells: the higher the C3 concentration, the better the protection. In the effective drug concentration range of this study, C3 did little harm to the survival rate of AHH-1 cells, suggesting that C3 deserves further investigation as a promising novel radioprotectant. (author) 11.
Synthesis and structure of the first fullerene complex of titanium Cp2Ti(η2-C60) Burlakov, V.V.; Usatov, A.V.; Lyssenko, K.A.; Antipin, M.Yu.; Novikov, Yu.N.; Shur, V.B. [Russian Academy of Sciences, Moscow (Russian Federation). A.N. Nesmeyanov Inst. of Organoelement Compounds 1999-11-01 The first fullerene complex of titanium, Cp2Ti(η2-C60), has been synthesized by reaction of the bis(trimethylsilyl)acetylene complex of titanocene Cp2Ti(η2-Me3SiC2SiMe3) with an equimolar amount of fullerene-60 in toluene at room temperature under argon. An X-ray diffraction study of the complex has shown that it has the structure of a titanacyclopropane derivative. (orig.) 12. Electron scattering on metal clusters and fullerenes Solov'yov, A.V. 2001-01-01 This paper gives a survey of physical phenomena manifesting themselves in electron scattering on atomic clusters. The main emphasis is on electron scattering on fullerenes and metal clusters, although some results are applicable to other types of clusters as well. This work addresses theoretical aspects of electron-cluster scattering, although some experimental results are also discussed. It is demonstrated that electron diffraction plays an important role in the formation of both elastic and inelastic electron scattering cross sections. The essential role of multipole surface and volume plasmon excitations in the formation of electron energy-loss spectra on clusters (differential and total, above and below the ionization potential), as well as the total inelastic scattering cross sections, is elucidated. Particular attention is paid to the role of the polarization interaction in low-energy electron-cluster collisions. This problem is considered for electron attachment to metallic clusters and plasmon-enhanced photon emission.
Finally, mechanisms governing the formation of electron excitation widths and the relaxation of electron excitations in metal clusters and fullerenes are discussed. (authors) 13. High Speed Ultraviolet Phototransistors Based on an Ambipolar Fullerene Derivative Huang, Wentao 2018-03-13 Combining high charge carrier mobility with ambipolar transport in light-absorbing organic semiconductors is highly desirable as it leads to enhanced charge photogeneration, and hence improved performance, in various optoelectronic devices including solar cells and photodetectors. Here we report the development of [6,6]-phenyl-C61-butyric acid methyl ester (PC61BM)-based ultraviolet (UV) phototransistors with balanced electron and hole transport characteristics. The latter is achieved by fine-tuning the source–drain electrode work function using a self-assembled monolayer. Opto/electrical characterization of as-prepared ambipolar PC61BM phototransistors reveals promising photoresponse, particularly in the UV-A region (315–400 nm), with a maximum photosensitivity and responsivity of 9 × 10³ and 3 × 10³ A/W, respectively. Finally, the temporal response of the PC61BM phototransistors is found to be high despite the long channel length (tens of μm), with typical switching times of <2 ms. 14. High Speed Ultraviolet Phototransistors Based on an Ambipolar Fullerene Derivative Huang, Wentao; Lin, Yen-Hung; Anthopoulos, Thomas D. 2018-01-01 Combining high charge carrier mobility with ambipolar transport in light-absorbing organic semiconductors is highly desirable as it leads to enhanced charge photogeneration, and hence improved performance, in various optoelectronic devices including solar cells and photodetectors. Here we report the development of [6,6]-phenyl-C61-butyric acid methyl ester (PC61BM)-based ultraviolet (UV) phototransistors with balanced electron and hole transport characteristics. The latter is achieved by fine-tuning the source–drain electrode work function using a self-assembled monolayer.
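The phototransistor figures of merit quoted in the abstracts above (photosensitivity and responsivity) follow from simple definitions. A minimal sketch, with all operating-point numbers invented for illustration (they are not taken from the paper):

```python
def responsivity(photocurrent_a, optical_power_w):
    """Responsivity R = I_ph / P_opt, in A/W."""
    return photocurrent_a / optical_power_w

def photosensitivity(light_current_a, dark_current_a):
    """Photosensitivity = (I_light - I_dark) / I_dark (dimensionless)."""
    return (light_current_a - dark_current_a) / dark_current_a

# Hypothetical operating point: 3 mA of photocurrent under 1 uW of UV-A light,
# and a light current of ~9 uA against a 1 nA dark current.
r = responsivity(3e-3, 1e-6)          # ~3000 A/W
p = photosensitivity(9.001e-6, 1e-9)  # ~9000
```

The two quantities are often confused; responsivity carries units (A/W) and depends on the incident optical power, whereas photosensitivity is a dimensionless current ratio.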
Opto/electrical characterization of as-prepared ambipolar PC61BM phototransistors reveals promising photoresponse, particularly in the UV-A region (315–400 nm), with a maximum photosensitivity and responsivity of 9 × 10³ and 3 × 10³ A/W, respectively. Finally, the temporal response of the PC61BM phototransistors is found to be high despite the long channel length (tens of μm), with typical switching times of <2 ms. 15. μSR studies of fullerenes and their derivatives Prassides, K. 1997-01-01 The dynamic properties of pristine C60 and C70 are reviewed, emphasizing the results of the ZF- and ALC-μ+SR techniques. In C60, the fcc → sc transition is accompanied by a change in the dynamics from isotropic reorientational to quasi-random jump motion between nearly-degenerate orientations. C70 is frozen on a timescale of 30 ns up to 170 K. At higher temperatures, the motion is found to be complex, consisting of a uniaxial rotation part together with a nutational or jumping motion of the unique axis. Anisotropy on the 30 ns timescale persists to 370 K, well into the fcc phase. The ZF-μ+SR technique has also been employed to study the magnetic properties of fullerides. In the organic salt (TDAE)C60, spontaneous magnetic order is directly observed below a Curie temperature of 16.1 K, higher than in any other organic material. In the quasi-one-dimensional conductor CsC60, static magnetic order of a random nature is observed to develop in the vicinity of the metal-insulator transition at 30 K, with no direct evidence of long-range order present. 16. Assessing Nature-Based Coastal Protection against Disasters Derived from Extreme Hydrometeorological Events in Mexico Octavio Pérez-Maqueo 2018-04-01 Natural ecosystems are expected to reduce the damaging effects of extreme hydrometeorological events.
We tested this prediction for Mexico by performing regression models, with two dependent variables: the occurrence of deaths and economic damages, at the state and municipality levels. For each location, the explanatory variables were the Mexican social vulnerability index (which includes socioeconomic aspects, local capacity to prevent and respond to an emergency, and the perception of risk) and land-use cover, considering different vegetation types. We used the hydrometeorological events that have affected Mexico from 1970 to 2011. Our findings reveal that: (a) hydrometeorological events affect both coastal and inland states, although damages are greater on the coast; (b) the protective role of natural ecosystems was only clear at the municipality level: the presence of mangroves, tropical dry forest and tropical rainforest was related to a significant reduction in the occurrence of casualties. Social vulnerability was positively correlated with the occurrence of deaths. Natural ecosystems, both typically coastal (mangroves) and terrestrial (tropical forests), which are located on the mountain ranges close to the coast, function as storm protection. Thus, their conservation and restoration are effective and sustainable strategies that will help protect and develop the increasingly urbanized coasts. 17. Oxic microshield and local pH enhancement protects Zostera muelleri from sediment derived hydrogen sulphide. Brodersen, Kasper Elgetti; Nielsen, Daniel Aagren; Ralph, Peter J; Kühl, Michael 2015-02-01 Seagrass is constantly challenged with transporting sufficient O₂ from above- to belowground tissue via aerenchyma in order to maintain aerobic metabolism and provide protection against phytotoxins. Electrochemical microsensors were used in combination with a custom-made experimental chamber to analyse the belowground biogeochemical microenvironment of Zostera muelleri under changing environmental conditions.
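The municipality-level regression described in the coastal-protection abstract above (deaths regressed on a vulnerability index and vegetation cover) can be sketched with ordinary least squares. All data below are synthetic, generated from an invented linear model purely to illustrate the setup; the coefficients bear no relation to the paper's actual estimates:

```python
import numpy as np

# Hypothetical municipality-level data: social vulnerability index (svi) and
# mangrove cover fraction as predictors of deaths per event (all values invented,
# constructed from deaths = 1 + 6*svi - 2*mangrove so OLS recovers it exactly).
svi      = np.array([0.2, 0.4, 0.5, 0.7, 0.9])
mangrove = np.array([0.6, 0.1, 0.5, 0.2, 0.3])
deaths   = np.array([1.0, 3.2, 3.0, 4.8, 5.8])

# Design matrix with an intercept column; ordinary least squares via lstsq.
x = np.column_stack([np.ones_like(svi), svi, mangrove])
beta, *_ = np.linalg.lstsq(x, deaths, rcond=None)

# A positive coefficient on vulnerability and a negative one on mangrove cover
# mirror the paper's qualitative findings (vulnerability raises, cover lowers risk).
print(beta)
```

The real study would of course use count-appropriate models and many more covariates; this only shows the design-matrix mechanics.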
Measurements revealed high radial O₂ release of up to 500 nmol O₂ cm⁻² h⁻¹ from the base of the leaf sheath, maintaining a c. 300-μm-wide plant-mediated oxic microzone and thus protecting the vital meristematic regions of the rhizome from reduced phytotoxic metabolites such as hydrogen sulphide (H₂S). H₂S intrusion was prevented through passive diffusion of O₂ to belowground tissue from leaf photosynthesis in light, as well as from the surrounding water column into the flow-exposed plant parts during darkness. Under water column hypoxia, high belowground H₂S concentrations at the tissue surface correlated with the inability to sustain the protecting oxic microshield around the meristematic regions of the rhizome. We also found increased pH levels in the immediate rhizosphere of Z. muelleri, which may contribute to further detoxification of H₂S through shifts in the chemical speciation of sulphide. Zostera muelleri can modify the geochemical conditions in its immediate rhizosphere, thereby reducing its exposure to H₂S. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust. 18. Evaluation of Chemical Warfare Agent Percutaneous Vapor Toxicity: Derivation of Toxicity Guidelines for Assessing Chemical Protective Ensembles. Watson, A.P. 2003-07-24 Percutaneous vapor toxicity guidelines are provided for assessment and selection of chemical protective ensembles (CPEs) to be used by civilian and military first responders operating in a chemical warfare agent vapor environment. The agents evaluated include the G-series and VX nerve agents, the vesicant sulfur mustard (agent HD) and, to a lesser extent, the vesicant Lewisite (agent L). The focus of this evaluation is percutaneous vapor permeation of CPEs and the resulting skin absorption, as inhalation and ocular exposures are assumed to be largely eliminated through use of SCBA and full-face protective masks.
Selection of appropriately protective CPE designs and materials incorporates a variety of test parameters to ensure operability, practicality, and adequacy. One aspect of adequacy assessment should be based on systems tests, which focus on effective protection of the most vulnerable body regions (e.g., the groin area), as identified in this analysis. The toxicity range of agent-specific cumulative exposures (Cts) derived in this analysis can be used as decision guidelines for CPE acceptance, in conjunction with weighting consideration towards more susceptible body regions. This toxicity range is bounded by the percutaneous vapor estimated minimal effect (EMEpv) Ct (as the lower end) and the 1% population threshold effect (ECt01) estimate. Assumptions of exposure duration used in CPE certification should consider that each agent-specific percutaneous vapor cumulative exposure Ct for a given endpoint is a constant for exposure durations between 30 min and 2 hours. 19. The Influence of Solvent Additive on Polymer Solar Cells Employing Fullerene and Non-Fullerene Acceptors Song, Xin 2017-11-27 Small-molecule-based non-fullerene acceptors (NFAs) are emerging as a new field in organic photovoltaics, due to their structural versatility, the tunability of their energy levels, and their ease of synthesis. High-efficiency polymer donors have been tested with these non-fullerene acceptors in order to further boost the efficiency of organic solar cells. Most of the polymer:fullerene systems are optimized with solvent additives for high efficiency, while little attention has been paid to NFA-based solar cells so far. In this report, the effect of the most common additive, 1,8-diiodooctane (DIO), on PTB7-Th:PC71BM solar cells is investigated and it is compared with non-fullerene acceptor 3,9-bis(2-methylene-(3-(1,1-dicyanomethylene)-indanone))-5,5,11,11-tetrakis(4-hexylphenyl)-dithieno[2,3-d:2′,3′-d′]-s-indaceno-[1,2-b:5,6b′]di-thiophene (ITIC) devices.
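The cumulative-exposure logic in the protective-ensemble abstract above (Ct constant for 30 min to 2 h exposures) is Haber's rule: concentration × time is treated as the dose metric. A minimal sketch of how an assessor might back out the allowable concentration for a given duration; the Ct value used is an illustrative placeholder, not an agent-specific guideline from the report:

```python
def allowable_concentration(ct_mg_min_m3, duration_min):
    """Under Haber's rule, Ct = C * t is constant, so C = Ct / t.

    The source only asserts Ct constancy for 30 min to 2 h exposures,
    so refuse durations outside that window.
    """
    if not (30 <= duration_min <= 120):
        raise ValueError("Ct constancy is only assumed for 30-120 min exposures")
    return ct_mg_min_m3 / duration_min

# Illustrative Ct of 300 mg-min/m^3: halving the duration doubles the allowable C.
c_1h = allowable_concentration(300, 60)   # 5.0 mg/m^3
c_2h = allowable_concentration(300, 120)  # 2.5 mg/m^3
```

The guard clause matters: extrapolating Haber's rule far outside its validated time window is a known failure mode in toxicity assessment.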
It is interesting that the high-boiling solvent additive has a negative impact on the power conversion efficiency when PTB7-Th is blended with the ITIC acceptor. The solar cell devices are studied in terms of their optical, photophysical, and morphological properties, revealing that PTB7-Th:ITIC devices processed with DIO exhibit coarser domains, reduced absorption strength, and slightly lower mobility, whereas DIO improves the absorption strength of the PTB7-Th:PC71BM blend film and increases the aggregation of PC71BM in the blend, resulting in a higher fill factor and Jsc. 20. The Influence of Solvent Additive on Polymer Solar Cells Employing Fullerene and Non-Fullerene Acceptors Song, Xin; Gasparini, Nicola; Baran, Derya 2017-01-01 Small-molecule-based non-fullerene acceptors (NFAs) are emerging as a new field in organic photovoltaics, due to their structural versatility, the tunability of their energy levels, and their ease of synthesis. High-efficiency polymer donors have been tested with these non-fullerene acceptors in order to further boost the efficiency of organic solar cells. Most of the polymer:fullerene systems are optimized with solvent additives for high efficiency, while little attention has been paid to NFA-based solar cells so far. In this report, the effect of the most common additive, 1,8-diiodooctane (DIO), on PTB7-Th:PC71BM solar cells is investigated and it is compared with non-fullerene acceptor 3,9-bis(2-methylene-(3-(1,1-dicyanomethylene)-indanone))-5,5,11,11-tetrakis(4-hexylphenyl)-dithieno[2,3-d:2′,3′-d′]-s-indaceno-[1,2-b:5,6b′]di-thiophene (ITIC) devices. It is interesting that the high-boiling solvent additive has a negative impact on the power conversion efficiency when PTB7-Th is blended with the ITIC acceptor.
The solar cell devices are studied in terms of their optical, photophysical, and morphological properties, revealing that PTB7-Th:ITIC devices processed with DIO exhibit coarser domains, reduced absorption strength, and slightly lower mobility, whereas DIO improves the absorption strength of the PTB7-Th:PC71BM blend film and increases the aggregation of PC71BM in the blend, resulting in a higher fill factor and Jsc. 1. Thermodynamics of TMPC/PSd/Fullerene Nanocomposites: SANS Study Chua, Yang-Choo; Chan, Alice; Wong, Him-Cheng; Higgins, Julia S.; Cabral, João T. 2010-01-01 Small-angle neutron scattering (SANS) analysis demonstrates that 1-2 mass % of C60 fullerenes destabilizes a highly interacting mixture of poly(tetramethyl bisphenol A polycarbonate) and deuterated polystyrene (TMPC/PSd). We unequivocally corroborate these findings with time-resolved temperature... 2. Electronic structure of single- and multiple-shell carbon fullerenes Lin, Y.; Nori, F. 1994-01-01 We study the electronic states of giant single-shell and the recently discovered nested multiple-shell carbon fullerenes within the tight-binding approximation. We use two different approaches, one based on iterations and the other on symmetry, to obtain the π-state energy spectra of large fullerene cages: C240, C540, C960, C1500, C2160, and C2940. Our iteration technique reduces the size of the problem by more than one order of magnitude (factors of ∼12 and 20), while the symmetry-based approach reduces it by a factor of 10. We also find formulas for the highest occupied and lowest unoccupied molecular orbital energies of C60n² fullerenes as a function of n, demonstrating a tendency towards a metallic regime for increasing n. For multiple-shell fullerenes, we analytically obtain the eigenvalues of the intershell interaction. 3. Electronic structure of C and Si fullerenes and fullerides Saito, S. 1996-01-01 Fullerenes, i.e., cage-structure clusters, are now studied intensively as building units for a new class of materials.
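The tight-binding π-electron problem described in the Lin and Nori abstract above amounts to diagonalizing the adjacency matrix of the fullerene cage: each adjacency eigenvalue λ gives an orbital energy E = α + βλ. A minimal sketch for the smallest closed cage, the 20-vertex dodecahedron, built here from its LCF notation as a toy stand-in for the C240-C2940 cages treated in the paper (the LCF sequence and the α, β parametrization are standard Hückel-theory conventions, not taken from the paper):

```python
import numpy as np

n = 20
# LCF notation for the dodecahedral (pentagon-only) cage: a Hamiltonian
# cycle over vertices 0..19 plus one chord per vertex.
lcf = [10, 7, 4, -4, -7, 10, -4, 7, -7, 4] * 2
a = np.zeros((n, n))
for i in range(n):
    a[i, (i + 1) % n] = a[(i + 1) % n, i] = 1   # cycle edge
    j = (i + lcf[i]) % n                        # LCF chord
    a[i, j] = a[j, i] = 1

# Huckel pi energies: E_i = alpha + beta * lambda_i; eigvalsh returns the
# lambda_i in ascending order, so the fully bonding orbital is last.
eigs = np.linalg.eigvalsh(a)
```

Since the cage is 3-regular and connected, the largest adjacency eigenvalue is exactly 3; the iteration and symmetry tricks in the paper exist precisely because brute-force diagonalization becomes expensive for the thousands-of-atom cages.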
The electronic structure of C60 and Si20 fullerenes and their fullerides, obtained in the framework of density-functional theory, is discussed with emphasis on the electronic as well as the geometrical hierarchy in superconducting fullerides. In both C60 and Si20 fullerides, charge transfer from alkali atoms to fullerenes and hybridization between alkaline-earth states and fullerene states are observed. Also, A3C60 and (Ba3Si3Na@Si20)2 superconductors are found to have a high Fermi-level density of states, although the mechanism giving rise to it is different in the two materials. Interesting materials to be produced in the future are also discussed. (orig.) 4. High-Efficiency Fullerene Solar Cells Enabled by a Spontaneously Formed Mesostructured CuSCN-Nanowire Heterointerface Sit, Wai-Yu 2018-02-02 Fullerenes and their derivatives are widely used as electron acceptors in bulk-heterojunction organic solar cells as they combine high electron mobility with good solubility and miscibility with relevant semiconducting polymers. However, studies on the use of fullerenes as the sole photogeneration and charge-carrier material are scarce. Here, a new type of solution-processed small-molecule solar cell based on the two most commonly used methanofullerenes, namely [6,6]-phenyl-C61-butyric acid methyl ester (PC60BM) and [6,6]-phenyl-C71-butyric acid methyl ester (PC70BM), as the light-absorbing materials, is reported. First, it is shown that both fullerene derivatives exhibit excellent ambipolar charge transport with balanced hole and electron mobilities. When the two derivatives are spin-coated over the wide-bandgap p-type semiconductor copper (I) thiocyanate (CuSCN), cells with a power conversion efficiency (PCE) of ≈1% are obtained. Blending the CuSCN with PC70BM is shown to increase the performance further, yielding cells with an open-circuit voltage of ≈0.93 V and a PCE of 5.4%.
Microstructural analysis reveals that the key to this success is the spontaneous formation of a unique mesostructured p–n-like heterointerface between CuSCN and PC70BM. The findings pave the way to an exciting new class of solar cells based on a single photoactive material. 5. Fluidized bed combustion of refuse-derived fuel in presence of protective coal ash Ferrer, Eduardo [CIRCE, Universidad de Zaragoza, Maria de Luna, 3, Zaragoza (Spain); Aho, Martti [VTT Processes, P.O. Box 1603, 40101 Jyvaeskylae (Finland); Silvennoinen, Jaani; Nurminen, Riku-Ville [Kvaerner Power, P.O.Box 109, FIN-33101 Tampere (Finland) 2005-12-15 Combustion of refuse-derived fuel (RDF) alone or together with other biomass leads to superheater fouling and corrosion in efficient power plants (with high steam values) due to vaporization and condensation of alkali chlorides. In this study, means were found to raise the portion of RDF to 40% (on an energy basis) without risk to boilers. This was done by co-firing RDF with coal and optimizing coal quality. Free aluminum silicate in coal captured alkalies from vaporized alkali chlorides, preventing Cl condensation on superheaters. Strong fouling and corrosion were simultaneously averted. Results from 100 kW and 4 MW CFB reactors are reported. (author) 6. Identification of fullerenes in iron-carbon alloys structure. KUZEEV Iskander Rustemovich 2017-11-01 Steels of various purposes are used in the construction industry, for example, as the reinforcement material in reinforced concrete structures. In the oil and gas industry, steel structures are used for storage and transportation of explosive toxic media. In such service, catastrophic failures can occur, which points to insufficiently deep knowledge of the processes running in structural materials under load. Recent studies show that many properties of steel are set at the nanoscale level during crystallization from the molten metal and thermal treatment.
To detect and identify fullerenes С60 and С70, which are independent nanoscale objects in steel structure, by various methods requires studying of how these objects influence on formation of steel properties. Iron atoms can serve as a catalyst and, interacting with large aromatic structures or fragments of the graphite planes, they form voluminous fullerene-type structures. The inverse phenomenon, i.e. influence of the formed nanoscale objects on structuring of the iron atoms, is also possible, as fullerene size is comparable with the size of the stable nucleus of the iron crystalline phase. The article discusses the issue of mechanisms of fullerenes formation in steels and cast irons. The most complicated issue in the study is the fullerenes identification by spectral methods as the quantity of released molecules is small. In order to increase the sensitivity of the fullerenes IR-spectrometry method, potassium bromide has been proposed to use. Dried and reduced sediment obtained as a result of dissolving iron matrix in steels is mixed with potassium bromide, the mixture becomes bright-orange. This fact points to presence of bromic fullerenes and to presence of fullerenes in the studied specimens. It is shown that the offered specimen preparation algorithm significantly increases sensitivity of the method. 7. Fullerenes: prospects of using in medicine, biology and ecology D. V. Schur; Z. Z. Matysina; S. Y. Zaginaichenko; N. P. Botsva; О. V. Elina 2012-01-01 Results of our own research and academic literature data on the properties of fullerenes and carbon nanotubes are analysed and summarized. Chemical stability of the structure and low toxicity of fullerenes determine their usage in medical chemistry, pharmacology and cosmetology. Due to its mechanical strength the nanotubes have become the basis of clean construction and barrier materials. It is shown that a matrix based on fullerit C60 can be obtained. It allows to store up to 7.7 wt. % hydro... 8. 
Experimental and computational studies of Si-doped fullerenes Billas, I.M.L.; Tast, F.; Branz, W.; Malinowski, N.; Heinebrodt, M.; Martin, T.P.; Boero, M.; Massobrio, C.; Parrinello, M. [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany) 1999-12-01 Silicon in-cage doped fullerenes result from laser-induced photofragmentation of mixed clusters of composition C{sub 60}Si{sub x}. These parent clusters are produced in a low pressure condensation cell, through the mixing of silicon vapor with a vapor containing the preformed C{sub 60} molecules. The geometric and the electronic structures of fullerenes substitutionally doped with one and two silicon atoms are studied by ab-initio calculations within density functional theory. (orig.) 9. Electronic Structure of Single- and Multiple-shell Carbon Fullerenes Lin, Yeong-Lieh; Nori, Franco 1993-01-01 We study the electronic states of giant single-shell and the recently discovered nested multi-shell carbon fullerenes within the tight-binding approximation. We use two different approaches, one based on iterations and the other on symmetry, to obtain the\\pi$-state energy spectra of large fullerene cages:$C_{240}$,$C_{540}$,$C_{960}$,$C_{1500}$,$C_{2160}$and$C_{2940}$. Our iteration technique reduces the dimensionality of the problem by more than one order of magnitude (factors of$\\...
10. Optimizing Conditions for Ultrasound Extraction of Fullerenes from Coal Matrices
Vítek, P.; Jehlička, J.; Frank, Otakar; Hamplová, Věra; Pokorná, Zdeňka; Juha, Libor; Boháček, J.
2009-01-01
Roč. 17, č. 2 (2009), s. 109-122 ISSN 1536-383X R&D Projects: GA ČR GA205/07/0772; GA ČR GA205/03/1468 Institutional research plan: CEZ:AV0Z40400503; CEZ:AV0Z10100520 Keywords: fullerene C60 * Ultrasound-assisted extraction * Extraction yield * Fullerene decomposition Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 0.710, year: 2009
11. Iridium-catalyzed direct synthesis of tryptamine derivatives from indoles: exploiting N-protected β-amino alcohols as alkylating agents.
Bartolucci, Silvia; Mari, Michele; Bedini, Annalida; Piersanti, Giovanni; Spadoni, Gilberto
2015-03-20
The selective C3-alkylation of indoles with N-protected ethanolamines involving the "borrowing hydrogen" strategy is described. This method provides convenient and sustainable access to several tryptamine derivatives.
12. PROCEDURES FOR THE DERIVATION OF EQUILIBRIUM PARTITIONING SEDIMENT BENCHMARKS (ESBS) FOR THE PROTECTION OF BENTHIC ORGANISMS: COMPENDIUM OF TIER 2 VALUES FOR NONIONIC ORGANICS
This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations for 32 nonionic organic chemicals in sediment which are protective of the presence of freshwater and marine benthic organisms. The equilibrium partitioning (EqP) approach...
13. Deriving freshwater quality criteria for 2,4-dichlorophenol for protection of aquatic life in China
Yin Daqiang; Jin Hongjun; Yu Lingwei; Hu Shuangqing
2003-01-01
Criteria were established for an organic pollutant in freshwaters of China. - Freshwater quality criteria for 2,4-dichlorophenol (2,4-DCP) were developed with particular reference to the aquatic biota in China, and based on USEPA's guidelines. Acute toxicity tests were performed on nine different domestic species indigenous to China to determine 48-h LC50 and 96-h LC50 values for 2,4-DCP. In addition, 21-day survival-reproduction tests with Daphnia magna, 30-day embryo-larval tests with Carassius auratus, 60-day fry-juvenile tests with Ctenopharyngodon idellus, 30-day early-life-stage tests with Bufo bufo gargarizans and 96-h growth inhibition tests with Scenedesmus obliquus were conducted to estimate lower chronic limit (LCL) and upper chronic limit (UCL) values. The final acute value (FAV) was 2.49 mg/l 2,4-DCP. Acute-to-chronic ratios (ACR) ranged from 3.74 to 22.5. The final chronic value (FCV) and the final plant value (FPV) of 2,4-DCP were 0.212 mg/l and 7.07 mg/l, respectively. Based on FAV, FCV, and FPV, a criteria maximum concentration (CMC) of 1.25 mg/l and a criterion continuous concentration (CCC) of 0.212 mg/l were derived. The results of this study provide useful data for deriving national or local water quality criteria for 2,4-DCP based on aquatic biota in China.
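The figures in this abstract follow the standard USEPA derivation arithmetic, in which the criteria maximum concentration is half the final acute value. A minimal sketch reproducing that step with the reported numbers (the acute-chronic-ratio back-calculation is illustrative, not taken from the paper):

```python
# Values reported in the abstract (mg/l)
FAV = 2.49   # final acute value
FCV = 0.212  # final chronic value (equal to the CCC here)

# USEPA guidelines set the criteria maximum concentration at FAV / 2
CMC = FAV / 2
print(f"CMC = {CMC:.3f} mg/l")  # 1.245 mg/l, reported as 1.25 mg/l

# Acute-chronic ratio implied by FAV and FCV: about 11.7,
# which falls inside the reported ACR range of 3.74-22.5
implied_acr = FAV / FCV
assert 3.74 <= implied_acr <= 22.5
```

This is only a consistency check on the reported values; the full guideline procedure also ranks genus mean acute values to obtain the FAV itself.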
14. Bone Marrow-Derived, Neural-Like Cells Have the Characteristics of Neurons to Protect the Peripheral Nerve in Microenvironment
Shi-lei Guo
2015-01-01
Full Text Available Effective repair of peripheral nerve defects is difficult because of the slow rate of new axonal growth. We propose that “neural-like cells” may be useful for the protection of damaged peripheral nerves. Such cells should prolong the time before disintegration of spinal nerves, reduce lesions, and improve recovery. But the mechanism of neural-like cells in the peripheral nerve is still unclear. In this study, bone marrow-derived neural-like cells were used as seed cells. The cells were injected into the distal end of severed rabbit peripheral nerves that were no longer integrated with the central nervous system. Electromyography (EMG), immunohistochemistry, and transmission electron microscopy (TEM) were employed to analyze the development of the cells in the peripheral nerve environment. The CMAP amplitude appeared during the 5th week following surgery, at which time morphological characteristics of myelinated nerve fiber formation were observed. Bone marrow-derived neural-like cells could protect the injured peripheral nerve against disintegration and destruction.
15. Oscillation of nested fullerenes (carbon onions) in carbon nanotubes
Thamwattana, Ngamta; Hill, James M.
2008-01-01
Nested spherical fullerenes, sometimes referred to as carbon onions, of Ih symmetry, which have N(n) = 60n² carbon atoms in the nth shell, are studied in this paper. The continuum approximation together with the Lennard-Jones potential is utilized to determine the resultant potential energy. High-frequency nanoscale oscillators, or gigahertz oscillators, created from fullerenes and both single- and multi-walled carbon nanotubes have attracted much attention for a number of proposed applications, such as ultra-fast optical filters and ultra-sensitive nano-antennae that might impact on the development of computing and signalling nano-devices. Further, it is only at the nanoscale that such gigahertz frequencies can be achieved. This paper focuses on the interaction of nested fullerenes and the mechanics of such molecules oscillating in carbon nanotubes. Here we investigate such issues as the acceptance condition for nested fullerenes into carbon nanotubes, the total force and energy of the nested fullerenes, and the velocity and gigahertz frequency of the oscillating molecule. In particular, optimum nanotube radii are determined for which nested fullerenes oscillate at maximum velocity and frequency, which will be of considerable benefit for the design of future nano-oscillating devices.
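The shell-count relation quoted in this abstract is simple enough to check directly: with N(n) = 60n², the icosahedral shells reproduce the giant-fullerene sizes (C60, C240, C540, ...) listed in record 9 above. A minimal sketch (function names are illustrative, not from the paper):

```python
def shell_atoms(n: int) -> int:
    """Carbon atoms in the nth shell of an Ih carbon onion: N(n) = 60 * n**2."""
    return 60 * n * n

def onion_atoms(shells: int) -> int:
    """Total atoms in a carbon onion with the given number of nested shells."""
    return sum(shell_atoms(n) for n in range(1, shells + 1))

# The first seven shells match the giant fullerene cages C240 ... C2940
print([shell_atoms(n) for n in range(1, 8)])  # [60, 240, 540, 960, 1500, 2160, 2940]
print(onion_atoms(3))  # a three-shell onion C60@C240@C540 contains 840 atoms
```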
16. Fullerenes and endohedrals as “big atoms”
Amusia, M.Ya., E-mail: amusia@vms.huji.ac.il
2013-03-12
Highlights: ► Response of multi-electron atoms to radiation is determined by correlation effects. ► The response of fullerenes and endohedrals is characterized by strong resonances. ► Most important are confinement and Giant endohedral resonances. ► The fullerene is described as a zero-thickness polarizable shell. ► Electron exchange can play a very important role in inner-shell ionization. - Abstract: We present the main features of the electronic structure of heavy atoms that are best seen in photoionization. We acknowledge how important the investigation of the interaction between atoms and high-intensity low- and high-frequency lasers was and still is. We discuss fullerenes and endohedrals as big atoms, concentrating upon their most prominent features revealed in photoionization. Namely, we discuss reflection of the photoelectron wave by the static potential that mimics the fullerene's electron shell, and modification of the incoming photon beam under the action of the polarizable fullerene shell. Both effects are clearly reflected in the photoionization cross-section. We discuss the possible features of the interaction of high-intensity laser fields of both low and high frequency with fullerenes and endohedrals. We envisage prominent effects of multi-electron ionization and photon emission, including high-energy photons. We emphasize the important role that can be played by electron exchange in these processes.
17. Fullerenes and endohedrals as “big atoms”
Amusia, M.Ya.
2013-01-01
Highlights: ► Response of multi-electron atoms to radiation is determined by correlation effects. ► The response of fullerenes and endohedrals is characterized by strong resonances. ► Most important are confinement and Giant endohedral resonances. ► The fullerene is described as a zero-thickness polarizable shell. ► Electron exchange can play a very important role in inner-shell ionization. - Abstract: We present the main features of the electronic structure of heavy atoms that are best seen in photoionization. We acknowledge how important the investigation of the interaction between atoms and high-intensity low- and high-frequency lasers was and still is. We discuss fullerenes and endohedrals as big atoms, concentrating upon their most prominent features revealed in photoionization. Namely, we discuss reflection of the photoelectron wave by the static potential that mimics the fullerene's electron shell, and modification of the incoming photon beam under the action of the polarizable fullerene shell. Both effects are clearly reflected in the photoionization cross-section. We discuss the possible features of the interaction of high-intensity laser fields of both low and high frequency with fullerenes and endohedrals. We envisage prominent effects of multi-electron ionization and photon emission, including high-energy photons. We emphasize the important role that can be played by electron exchange in these processes.
18. Pulvinic acid synthesis and evaluation of pulvinic derivatives as protective agents against ionizing radiation
Heurtaux, Benoit
2006-01-01
This work is devoted to the preparation of derivatives of mushroom pigments, the pulvinic acids, and then to the evaluation of their antioxidant properties, with the aim of finding new compounds suitable for use as protective drugs against ionizing radiation. An efficient synthesis of symmetric pulvinic acids has been developed. It is based on a double condensation of silylated ketene acetals with oxalyl chloride. Treatment of the resulting products with DBU leads to esters of the corresponding pulvinic acids, which are then saponified. The antioxidant properties have been studied. Then, the interactions between the pulvinic derivatives and DNA are studied. Finally, the evaluation of the radioprotective properties of the different synthesized compounds on different models (bacteria, eukaryotic cells and animals) is presented. (N.C.)
19. A Chlamydomonas-derived Human Papillomavirus 16 E7 vaccine induces specific tumor protection.
Olivia C Demurtas
Full Text Available The E7 protein of Human Papillomavirus (HPV) type 16, being involved in malignant cellular transformation, represents a key antigen for developing therapeutic vaccines against HPV-related lesions and cancers. Recombinant production of this vaccine antigen in an active form and in compliance with good manufacturing practices (GMP) plays a crucial role in developing effective vaccines. E7-based therapeutic vaccines produced in plants have been shown to be active in tumor regression and protection in pre-clinical models. However, some drawbacks of whole-plant vaccine production encouraged us to explore the production of the E7-based therapeutic vaccine in Chlamydomonas reinhardtii, an organism easy to grow and transform and fully amenable to GMP guidelines. An expression cassette encoding E7GGG, a mutated, attenuated form of the E7 oncoprotein, alone or as a fusion with affinity tags (His6 or FLAG), under the control of the C. reinhardtii chloroplast psbD 5' UTR and the psbA 3' UTR, was introduced into the C. reinhardtii chloroplast genome by homologous recombination. The protein was mostly soluble and reached 0.12% of total soluble proteins. Affinity purification was optimized and performed for both tagged forms. Induction of specific anti-E7 IgGs and E7-specific T-cell proliferation were detected in C57BL/6 mice vaccinated with total Chlamydomonas extract and with affinity-purified protein. High levels of tumor protection were achieved after challenge with a tumor cell line expressing the E7 protein. The C. reinhardtii chloroplast is a suitable expression system for the production of the E7GGG protein in a soluble, immunogenic form. Production in contained and sterile conditions highlights the potential of microalgae as alternative platforms for the production of vaccines for human use.
20. Roll-coating fabrication of flexible organic solar cells: comparison of fullerene and fullerene-free systems
Liu, Kuan; Larsen-Olsen, Thue Trofod; Lin, Yuze
2016-01-01
Flexible organic solar cells (OSCs) based on a blend of the low-bandgap polymer donor PTB7-TH and the nonfullerene small-molecule acceptor IEIC were fabricated via a roll-coating process under ambient atmosphere. Both an indium tin oxide (ITO)-free substrate and a flexible ITO substrate were employed in these inverted OSCs. OSCs with flexible ITO and ITO-free substrates exhibited power conversion efficiencies (PCEs) up to 2.26% and 1.79%, respectively, which were comparable to those of the reference devices based on fullerene acceptors under the same conditions. This is the first example of an all-roll-coating fabrication procedure for flexible OSCs based on non-fullerene acceptors with a PCE exceeding 2%. The fullerene-free OSCs exhibited better dark-storage stability than the fullerene-based control devices.
1. Efficient Regular Perovskite Solar Cells Based on Pristine [70]Fullerene as Electron-Selective Contact.
Collavini, Silvia; Kosta, Ivet; Völker, Sebastian F; Cabanero, German; Grande, Hans J; Tena-Zaera, Ramón; Delgado, Juan Luis
2016-06-08
[70]Fullerene is presented as an efficient alternative electron-selective contact (ESC) for regular-architecture perovskite solar cells (PSCs). A smart and simple, well-described solution processing protocol for the preparation of [70]- and [60]fullerene-based solar cells, namely the fullerene saturation approach (FSA), allowed us to obtain similar power conversion efficiencies for both fullerene materials (i.e., 10.4 and 11.4 % for [70]- and [60]fullerene-based devices, respectively). Importantly, despite the low electron mobility and significant visible-light absorption of [70]fullerene, the presented protocol allows the employment of [70]fullerene as an efficient ESC. The [70]fullerene film thickness and its solubility in the perovskite processing solutions are crucial parameters, which can be controlled by the use of this simple solution processing protocol. The damage to the [70]fullerene film through dissolution during the perovskite deposition is avoided through the saturation of the perovskite processing solution with [70]fullerene. Additionally, this fullerene-saturation strategy improves the performance of the perovskite film significantly and enhances the power conversion efficiency of solar cells based on different ESCs (i.e., [60]fullerene, [70]fullerene, and TiO2 ). Therefore, this universal solution processing protocol widens the opportunities for the further development of PSCs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2. Status seminar on the application potential of fullerenes. Status seminar and panel discussion; Statusseminar Anwendungspotential der Fullerene. Vortraege und Podiumsdiskussion
Hoffschulz, H. [comp.]
1997-12-31
The application potential of fullerenes extends to the following areas: Owing to their similarity to active carbon, the use of fullerenes as well as of the soot arising during their production in catalytic applications appears an interesting possibility. Structural modifications will permit influencing the catalytic properties of the employed substances. Addition of functional groups has led to a wide range of fullerene variants whose chemical properties and application potentials are still being studied. Polymers can be altered in their structure and properties by the integration of fullerenes. The possibility of increasing the photoconductivity of polymers in this way could be applied to photodetectors and solar cells, for example. Exposure to light causes fullerenes to polymerise and drastically reduces their solubility in commercial solvents. This may render them useful as a masking material in microstructuring. Diamond layers from fullerene vapour are very durable and can be manufactured in large sheets at comparatively low cost. In spite of their low density, nanotubes are of incredible stiffness and as such an ideal component for composite materials. In monitors, nanotubes can function as electron sources and replace the traditional cathode ray tube. A prerequisite for studying the properties of endohedral fullerenes is their availability in macroscopic amounts. In order to assess their potential it will first be necessary to develop suitable production methods. (orig./SR)
3. Deriving freshwater quality criteria for 2,4-dichlorophenol for protection of aquatic life in China
Yin Daqiang; Jin Hongjun; Yu Lingwei; Hu Shuangqing
2003-04-01
Criteria were established for an organic pollutant in freshwaters of China. - Freshwater quality criteria for 2,4-dichlorophenol (2,4-DCP) were developed with particular reference to the aquatic biota in China, and based on USEPA's guidelines. Acute toxicity tests were performed on nine different domestic species indigenous to China to determine 48-h LC50 and 96-h LC50 values for 2,4-DCP. In addition, 21-day survival-reproduction tests with Daphnia magna, 30-day embryo-larval tests with Carassius auratus, 60-day fry-juvenile tests with Ctenopharyngodon idellus, 30-day early-life-stage tests with Bufo bufo gargarizans and 96-h growth inhibition tests with Scenedesmus obliquus were conducted to estimate lower chronic limit (LCL) and upper chronic limit (UCL) values. The final acute value (FAV) was 2.49 mg/l 2,4-DCP. Acute-to-chronic ratios (ACR) ranged from 3.74 to 22.5. The final chronic value (FCV) and the final plant value (FPV) of 2,4-DCP were 0.212 mg/l and 7.07 mg/l, respectively. Based on FAV, FCV, and FPV, a criteria maximum concentration (CMC) of 1.25 mg/l and a criterion continuous concentration (CCC) of 0.212 mg/l were derived. The results of this study provide useful data for deriving national or local water quality criteria for 2,4-DCP based on aquatic biota in China.
4. In yeast redistribution of Sod1 to the mitochondrial intermembrane space provides protection against respiration derived oxidative stress.
Klöppel, Christine; Michels, Christine; Zimmer, Julia; Herrmann, Johannes M; Riemer, Jan
2010-12-03
The antioxidative enzyme copper-zinc superoxide dismutase (Sod1) is an important cellular defence system against reactive oxygen species (ROS). While the majority of this enzyme is localized to the cytosol, about 1% of the cellular Sod1 is present in the intermembrane space (IMS) of mitochondria. These amounts of mitochondrial Sod1 are increased for certain Sod1 mutants that are linked to the neurodegenerative disease amyotrophic lateral sclerosis (ALS). To date, only little is known about the physiological function of mitochondrial Sod1. Here, we use the model system Saccharomyces cerevisiae to generate cells in which Sod1 is exclusively localized to the IMS. We find that IMS-localized Sod1 can functionally substitute wild type Sod1 and that it even exceeds the protective capacity of wild type Sod1 under conditions of mitochondrial ROS stress. Moreover, we demonstrate that upon expression in yeast cells the common ALS-linked mutant Sod1(G93A) becomes enriched in the mitochondrial fraction and provides an increased protection of cells from mitochondrial oxidative stress. Such an effect cannot be observed for the catalytically inactive mutant Sod1(G85R). Our observations suggest that the targeting of Sod1 to the mitochondrial IMS provides an increased protection against respiration-derived ROS. Copyright © 2010 Elsevier Inc. All rights reserved.
5. Specific features of fullerene-bearing thin film growth using ion beam vacuum sputtering of fullerene mixtures with B, Fe, Se, Gd and Na
Semenov, A.P.; Semenova, I.A.; Bulina, N.V.; Lopatin, V.A.; Karmanov, N.S.; Churilov, G.N.
2005-01-01
A new approach to the growth of films containing fullerenes and doping elements is described. It is suggested that a cluster mechanism of target sputtering by accelerated ions makes possible the deposition of fullerenes on a substrate, with a certain probability of dopant atoms being introduced into the cavities of fullerene molecules and a higher probability of the doping element being introduced between fullerene molecules. The proposed method has been experimentally implemented by using an Ar ion beam to sputter C60/C70 fullerene mixtures, synthesized in a plasmachemical reactor at a pressure of 10^5 Pa and containing a doping element, i.e. Fe, Na, B, Gd or Se. Micron-thick films containing C60 and C70 fullerenes and the corresponding dopant element, i.e. Fe, Na, B, Gd or Se, were grown from dopant-containing fullerene mixtures by ion beam sputtering in a vacuum of ∼10^-2 Pa. (In Russian)
6. Towards a fullerene-based quantum computer
Benjamin, Simon C; Ardavan, Arzhang; Briggs, G Andrew D; Britz, David A; Gunlycke, Daniel; Jefferson, John; Jones, Mark A G; Leigh, David F; Lovett, Brendon W; Khlobystov, Andrei N; Lyon, S A; Morton, John J L; Porfyrakis, Kyriakos; Sambrook, Mark R; Tyryshkin, Alexei M
2006-01-01
Molecular structures appear to be natural candidates for a quantum technology: individual atoms can support quantum superpositions for long periods, and such atoms can in principle be embedded in a permanent molecular scaffolding to form an array. This would be true nanotechnology, with dimensions of the order of a nanometre. However, the challenges of realizing such a vision are immense. One must identify a suitable elementary unit and demonstrate its merits for qubit storage and manipulation, including input/output. These units must then be formed into large arrays corresponding to a functional quantum architecture, including a mechanism for gate operations. Here we report our efforts, both experimental and theoretical, to create such a technology based on endohedral fullerenes or 'buckyballs'. We describe our successes with respect to these criteria, along with the obstacles we are currently facing and the questions that remain to be addressed.
7. Transplantation of human dental pulp-derived stem cells protects against heatstroke in mice.
Tseng, Ling-Shu; Chen, Sheng-Hsien; Lin, Mao-Tsun; Lin, Ying-Chu
2015-01-01
Stem cells from human exfoliated deciduous tooth pulp (SHED) are a promising approach for the treatment of stroke and spinal cord injury. In this study, we investigated the therapeutic effects of SHED for the treatment of multiple-organ (including brain, particularly hypothalamus) injury in heatstroke mice. ICR male mice were exposed to whole-body heating (WBH; 41.2°C, relative humidity 50-55%, for 1 h) and then returned to normal room temperature (26°C). We observed that intravenous administration of SHED immediately post-WBH exhibited the following therapeutic benefits for recovery after heatstroke: (a) inhibition of WBH-induced neurologic and thermoregulatory deficits; (b) reduction of WBH-induced ischemia, hypoxia, and oxidative damage to the brain (particularly the hypothalamus); (c) attenuation of WBH-induced increases in plasma levels of systemic inflammatory response molecules, such as tumor necrosis factor-α and intercellular adhesion molecule-1; (d) improvement of WBH-induced hypothalamo-pituitary-adrenocortical (HPA) axis activity (as reflected by enhanced plasma levels of both adrenocorticotrophic hormone and corticosterone); and (e) attenuation of WBH-induced multiple-organ apoptosis as well as lethality. In conclusion, post-WBH treatment with SHED reduced induction of proinflammatory cytokines and oxidative radicals, enhanced plasma induction of both adrenocorticotrophic hormone and corticosterone, and reduced lethality in mouse heatstroke. The protective effect of SHED may be related to a decreased inflammatory response, decreased oxidative stress, and increased HPA axis activity following the WBH injury.
8. Immunization with a Neural-Derived Peptide Protects the Spinal Cord from Apoptosis after Traumatic Injury
Roxana Rodríguez-Barrera
2013-01-01
Full Text Available Apoptosis is one of the most destructive mechanisms that develop after spinal cord (SC) injury. Immunization with neural-derived peptides (INDPs), such as A91, has been shown to reduce the deleterious proinflammatory response and the amount of harmful compounds produced after SC injury. With the notion that the aforementioned elements are apoptotic inducers, we hypothesized that INDPs would reduce apoptosis after SC injury. In order to test this assumption, adult rats were subjected to SC contusion and immunized either with A91 or phosphate-buffered saline (PBS; control group). Seven days after injury, animals were euthanized to evaluate the number of apoptotic cells at the injury site. Apoptosis was evaluated using DAPI and TUNEL techniques; caspase-3 activity was also evaluated. To further elucidate the mechanisms through which A91 exerts its antiapoptotic effects, we quantified tumor necrosis factor-alpha (TNF-α). To demonstrate that the decrease in apoptotic cells correlated with a functional improvement, locomotor recovery was also evaluated. Immunization with A91 significantly reduced the number of apoptotic cells and decreased caspase-3 activity and TNF-α concentration. Immunization with A91 also improved the functional recovery of injured rats. The present study shows the beneficial effect of INDPs on preventing apoptosis and provides more evidence on the neuroprotective mechanisms exerted by this strategy.
9. Changes in Agglomeration of Fullerenes During Ingestion and Excretion in Thamnocephalus Platyurus
The crustacean Thamnocephalus platyurus was exposed to aqueous suspensions of fullerenes C60 and C70. Aqueous fullerene suspensions were formed by stirring C60 and C70 as received from a commercial vendor in deionized water (term...
10. Protective effects of 4-phenylbutyrate derivatives on the neuronal cell death and endoplasmic reticulum stress.
Mimori, Seisuke; Okuma, Yasunobu; Kaneko, Masayuki; Kawada, Koichi; Hosoi, Toru; Ozawa, Koichiro; Nomura, Yasuyuki; Hamana, Hiroshi
2012-01-01
Endoplasmic reticulum (ER) stress responses play an important role in neurodegenerative diseases. Sodium 4-phenylbutyrate (4-PBA) is a terminal aromatic substituted fatty acid that has been used for the treatment of urea cycle disorders. 4-PBA possesses in vitro chemical chaperone activity and reduces the accumulation of Parkin-associated endothelin receptor-like receptor (Pael-R), which is involved in autosomal recessive juvenile parkinsonism (AR-JP). In this study, we show that terminal aromatic substituted fatty acids, including 3-phenylpropionate (3-PPA), 4-PBA, 5-phenylvaleric acid, and 6-phenylhexanoic acid, prevented the aggregation of lactalbumin and bovine serum albumin. Aggregation inhibition increased relative to the number of carbons in the fatty acids. Moreover, these compounds protected cells against ER stress-induced neuronal cell death. The cytoprotective effect correlated with the in vitro chemical chaperone activity. Similarly, cell viability decreased on treatment with tunicamycin, an ER stress inducer, and was dependent on the number of carbons in the fatty acids. Moreover, the expression of glucose-regulated proteins 94 and 78 (GRP94, 78) decreased according to the number of carbons in the fatty acids. Furthermore, we investigated the effects of these compounds on the accumulation of Pael-R in neuroblastoma cells. 3-PPA and 4-PBA significantly suppressed neuronal cell death caused by ER stress induced by the overexpression of Pael-R. Overexpressed Pael-R accumulated in the ER of cells. With 3-PPA and 4-PBA treatment, the localization of the overexpressed Pael-R shifted away from the ER to the cytoplasmic membrane. These results suggest that terminal aromatic substituted fatty acids are potential candidates for the treatment of neurodegenerative diseases.
11. 4-Hydroxy hexenal derived from docosahexaenoic acid protects endothelial cells via Nrf2 activation.
Full Text Available Recent studies have proposed that n-3 polyunsaturated fatty acids (n-3 PUFAs) have direct antioxidant and anti-inflammatory effects in vascular tissue, explaining their cardioprotective effects. However, the molecular mechanisms are not yet fully understood. We tested whether n-3 PUFAs showed antioxidant activity through the activation of nuclear factor erythroid 2-related factor 2 (Nrf2), a master transcription factor for antioxidant genes. C57BL/6 or Nrf2(-/-) mice were fed a fish-oil diet for 3 weeks. The fish-oil diet significantly increased the expression of heme oxygenase-1 (HO-1) and endothelium-dependent vasodilation in the aorta of C57BL/6 mice, but not in the Nrf2(-/-) mice. Furthermore, we observed that 4-hydroxy hexenal (4-HHE), an end-product of n-3 PUFA peroxidation, was significantly increased in the aorta of C57BL/6 mice, accompanied by a predominant intra-aortic increase in docosahexaenoic acid (DHA) rather than in eicosapentaenoic acid (EPA). Human umbilical vein endothelial cells were incubated with DHA or EPA. We found that DHA, but not EPA, markedly increased intracellular 4-HHE and the nuclear expression and DNA binding of Nrf2. Both DHA and 4-HHE also increased the expression of Nrf2 target genes including HO-1, and siRNA against Nrf2 abolished these effects. Furthermore, DHA prevented oxidant-induced cellular damage and reactive oxygen species production, and these effects disappeared with an HO-1 inhibitor or siRNA against Nrf2. Thus, we found protective effects of DHA through Nrf2 activation in vascular tissue, accompanied by intra-vascular increases in 4-HHE, which may explain the mechanism of the cardioprotective effects of DHA.
12. Modulation of gene expression of adenosine and metabotropic glutamate receptors in rat's neuronal cells exposed to L-glutamate and [60]fullerene.
Giust, Davide; Da Ros, Tatiana; Martín, Mairena; Albasanz, José Luis
2014-08-01
L-Glutamate (L-Glu) has often been associated not only with fundamental physiological roles, such as learning and memory, but also with neuronal cell death and the genesis and development of important neurodegenerative diseases. Herein we studied the variation in adenosine and metabotropic glutamate receptor expression induced by L-Glu treatment in rat cortical neurons. The possibility of structural alteration of the cells induced by L-Glu (100 nM, 1 and 10 microM) was addressed by studying the modulation of microtubule-associated protein-2 (MAP-2) and neurofilament heavy polypeptide (NEFH), proteins natively associated with maintenance of dendritic shape. Results showed that the proposed treatments did not destabilize the cells, so these L-Glu concentrations were acceptable for investigating fluctuations in receptor expression, which were studied by RT-PCR. Interestingly, the C60 fullerene derivative t3ss elicited a protective effect against glutamate toxicity, as demonstrated by MTT assay. In addition, the t3ss compound exerted different effects on the adenosine and metabotropic glutamate receptors analyzed. A(2A) and mGlu1 mRNAs were significantly decreased under conditions where t3ss protected cortical neurons from L-Glu toxicity. In summary, t3ss protects neurons from glutamate toxicity in a process that appears to be associated with modulation of the gene expression of adenosine and metabotropic glutamate receptors.
13. Effects of alkyl chain length and substituent pattern of fullerene bis-adducts on film structures and photovoltaic properties of bulk heterojunction solar cells.
Tao, Ran; Umeyama, Tomokazu; Kurotobi, Kei; Imahori, Hiroshi
2014-10-08
A series of alkoxycarbonyl-substituted dihydronaphthyl-based [60]fullerene bis-adduct derivatives (denoted as C2BA, C4BA, and C6BA with the alkyl chain of ethyl, n-butyl, and n-hexyl, respectively) have been synthesized to investigate the effects of alkyl chain length and substituent pattern of fullerene bis-adducts on the film structures and photovoltaic properties of bulk heterojunction polymer solar cells. The shorter alkyl chain length caused lower solubility of the fullerene bis-adducts (C6BA > C4BA > C2BA), thereby resulting in the increased separation difficulty of respective bis-adduct isomers. The device performance based on poly(3-hexylthiophene) (P3HT) and the fullerene bis-adduct regioisomer mixtures was enhanced by shortening the alkyl chain length. When using the regioisomerically separated fullerene bis-adducts, the devices based on trans-2 and a mixture of trans-4 and e of C4BA exhibited the highest power conversion efficiencies of ca. 2.4%, which are considerably higher than those of the C6BA counterparts (ca. 1.4%) and the C4BA regioisomer mixture (1.10%). The film morphologies as well as electron mobilities of the P3HT:bis-adduct blend films were found to affect the photovoltaic properties considerably. These results reveal that the alkyl chain length and substituent pattern of fullerene bis-adducts significantly influence the photovoltaic properties as well as the film structures of bulk heterojunction solar cells.
14. The study of dielectric properties of the endohedral fullerenes
Bhusal, Shusil
The dielectric response of the metal nitride fullerenes is studied using density functional theory at the all-electron level with the generalized gradient approximation. The dielectric response is studied by computing the static dipole polarizabilities using the finite-field method, i.e., by numerically differentiating the dipole moments with respect to the electric field. The endohedral fullerenes studied in this work are Sc3N@C68(6140), Sc3N@C68(6146), Sc3N@C70(7854), Sc3N@C70(7960), Sc3N@C76(17490), Sc3N@C78(22010), Sc3N@C80(31923), Sc3N@C80(31924), Sc3N@C82(39663), Sc3N@C90(43), Sc3N@C90(44), Sc3N@C92(85), Sc3N@C94(121), Sc3N@C96(186), and Sc3N@C98(166). Using the Voronoi and Hirshfeld approaches as implemented in our NRLMOL code, we determine the atomic contributions to the total polarizability. The site-specific contributions to the polarizability of the endohedral fullerenes allowed us to determine the polarizability of two subsystems: the fullerene shell and the encapsulated Sc3N unit. Our results showed that the contributions to the total polarizability from the encapsulated Sc3N units are vanishingly small. Thus, the total polarizability of the endohedral fullerene is almost entirely due to the outer fullerene shell. These fullerenes are excellent molecular models of a Faraday cage.
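The finite-field method described in the abstract amounts to a numerical derivative of the dipole moment with respect to the applied field, alpha_ij = d mu_i / d F_j. A minimal sketch of the central-difference extraction follows; the `dipole` function here is a toy linear stand-in for the electronic-structure calculation that would return the dipole at each field, and the `ALPHA_TRUE` values are illustrative, not taken from the paper.

```python
import numpy as np

# Finite-field polarizability: alpha_ij = d mu_i / d F_j, estimated by
# central differences. In a real calculation, dipole(field) would run a
# self-consistent DFT calculation at the applied field; here it is a toy
# linear-response stand-in so the extraction can be verified exactly.

ALPHA_TRUE = np.array([[600.0,   5.0,   0.0],
                       [  5.0, 600.0,   0.0],
                       [  0.0,   0.0, 610.0]])  # a.u., illustrative only

def dipole(field):
    """Toy linear response: mu = alpha . F."""
    return ALPHA_TRUE @ field

def polarizability(dipole_fn, h=1e-3):
    """Central-difference polarizability tensor from a dipole function."""
    alpha = np.zeros((3, 3))
    for j in range(3):
        fp = np.zeros(3); fp[j] = +h
        fm = np.zeros(3); fm[j] = -h
        alpha[:, j] = (dipole_fn(fp) - dipole_fn(fm)) / (2.0 * h)
    return alpha

alpha = polarizability(dipole)
mean_alpha = np.trace(alpha) / 3.0  # isotropic (mean) polarizability
```

For a strictly linear dipole, central differences recover the tensor exactly; for a real molecule the field step `h` must be small enough to suppress hyperpolarizability contributions but large enough to stay above numerical noise.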
15. Preparation and characterization of stable aqueous higher-order fullerenes
Aich, Nirupam; Flora, Joseph R V; Saleh, Navid B
2012-01-01
Stable aqueous suspensions of nC60 and individual higher fullerenes, i.e. C70, C76 and C84, are prepared by a calorimetric modification of a commonly used liquid–liquid extraction technique. The energy requirement for synthesis of higher fullerenes has been guided by molecular-scale interaction energy calculations. Solubilized fullerenes show crystalline behavior by exhibiting lattice fringes in high resolution transmission electron microscopy images. The fullerene colloidal suspensions thus prepared are stable with a narrow distribution of cluster radii (42.7 ± 0.8 nm, 46.0 ± 14.0 nm, 60.0 ± 3.2 nm and 56.3 ± 1.1 nm for nC60, nC70, nC76 and nC84, respectively) as measured by time-resolved dynamic light scattering. The ζ-potential values for all fullerene samples showed negative surface potentials with similar magnitude (−38.6 ± 5.8 mV, −39.1 ± 4.2 mV, −38.9 ± 5.8 mV and −41.7 ± 5.1 mV for nC60, nC70, nC76 and nC84, respectively), which provide electrostatic stability to the colloidal clusters. This energy-based modified solubilization technique to produce stable aqueous fullerenes will likely aid in future studies focusing on better applicability, determination of colloidal properties, and understanding of environmental fate, transport and toxicity of higher-order fullerenes.
16. Chemically-induced photoreceptor degeneration and protection in mouse iPSC-derived three-dimensional retinal organoids
Shin-ichiro Ito
2017-10-01
Induced pluripotent stem cells (iPSCs), which can be differentiated into various tissues and cell types, have been used for clinical research and disease modeling. Self-organizing three-dimensional (3D) tissue engineering has been established within the past decade and enables researchers to obtain tissues and cells that closely mimic in vivo development. However, there are no reports of practical experimental procedures that reproduce photoreceptor degeneration. In this study, we induced photoreceptor cell death in mouse iPSC-derived 3D retinal organoids (3D-retinas) with 4-hydroxytamoxifen (4-OHT), which induces photoreceptor degeneration in mouse retinal explants, and then established a live-cell imaging system to measure degeneration-related properties. Furthermore, we quantified the protective effects of representative ophthalmic supplements against the photoreceptor degeneration. This drug evaluation system enables us to monitor drug effects in photoreceptor cells and could be useful for drug screening.
17. New calculation of derived limits for the 1960 radiation protection guides reflecting updated models for dosimetry and biological transport
Eckerman, K.F.; Watson, S.B.; Nelson, C.B.; Nelson, D.R.; Richardson, A.C.B.; Sullivan, R.E.
1984-12-01
This report presents revised values for the radioactivity concentration guides (RCGs), based on the 1960 primary radiation protection guides (RPGs) for occupational exposure (FRC 1960) and for underground uranium miners (EPA 1971a), using the updated dosimetric models developed to prepare ICRP Publication 30. Unlike the derived quantities presented in Publication 30, which are based on limitation of the weighted sum of doses to all irradiated tissues, these RCGs are based on the "critical organ" approach of the 1960 guidance, which set a single limit for the most critically irradiated organ or tissue. This report provides revised guides for the 1960 Federal guidance that are consistent with current dosimetric relationships.
18. Squamosamide derivative FLZ protects dopaminergic neurons against inflammation-mediated neurodegeneration through the inhibition of NADPH oxidase activity
Wilson Belinda
2008-05-01
Background: Inflammation plays an important role in the pathogenesis of Parkinson's disease (PD) through over-activation of microglia, which causes excessive production of proinflammatory and neurotoxic factors that impact surrounding neurons and eventually induce neurodegeneration. Hence, prevention of microglial over-activation has been shown to be a prime target for the development of therapeutic agents for inflammation-mediated neurodegenerative diseases. Methods: For in vitro studies, mesencephalic neuron-glia cultures and reconstituted cultures were used to investigate the molecular mechanism by which FLZ, a squamosamide derivative, mediates anti-inflammatory and neuroprotective effects in both lipopolysaccharide- (LPS-) and 1-methyl-4-phenylpyridinium- (MPP+-) mediated models of PD. For in vivo studies, a 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine- (MPTP-) induced PD mouse model was used. Results: FLZ showed potent efficacy in protecting dopaminergic (DA) neurons against LPS-induced neurotoxicity, as shown in rat and mouse primary mesencephalic neuron-glia cultures by DA uptake and tyrosine hydroxylase (TH) immunohistochemistry. The neuroprotective effect of FLZ was attributed to a reduction in LPS-induced microglial production of proinflammatory factors such as superoxide, tumor necrosis factor-α (TNF-α), nitric oxide (NO) and prostaglandin E2 (PGE2). Mechanistic studies revealed that the anti-inflammatory properties of FLZ were mediated through inhibition of NADPH oxidase (PHOX), the key microglial superoxide-producing enzyme. A critical role for PHOX in FLZ-elicited neuroprotection was further supported by the findings that (1) FLZ's protective effect was reduced in cultures from PHOX-/- mice, and (2) FLZ inhibited LPS-induced translocation of the cytosolic PHOX subunit p47PHOX to the membrane and thus inhibited the activation of PHOX. The neuroprotective effect of FLZ demonstrated in primary neuronal
19. Derivation of Ecological Protective Concentration using the Probabilistic Ecological Risk Assessment applicable for Korean Water Environment: (I) Cadmium.
Nam, Sun-Hwa; Lee, Woo-Mi; An, Youn-Joo
2012-06-01
Probabilistic ecological risk assessment (PERA) for deriving an ecological protective concentration (EPC) has previously been proposed in the USA, Australia, New Zealand, Canada, and the Netherlands. This study derived the EPC of cadmium (Cd) based on a PERA suited to the Korean aquatic ecosystem. First, we collected reliable ecotoxicity data, including both reliable data without restriction and reliable data with restrictions. Next, we sorted the ecotoxicity data by site-specific location, exposure duration, and water hardness. To correct toxicity for water hardness, the EU's hardness-correction algorithm was used with a slope factor of 0.89 and a benchmark water hardness of 100. EPC was calculated according to the statistical extrapolation method (SEM), the statistical extrapolation method with acute-to-chronic ratio (SEM-ACR), and the assessment factor method (AFM). As a result, aquatic toxicity data for Cd were collected from 43 acute toxicity records (4 Actinopterygii, 29 Branchiopoda, 1 Polychaeta, 2 Bryozoa, 6 Chlorophyceae, 1 Cyanophyceae) and 40 chronic toxicity records (2 Actinopterygii, 23 Branchiopoda, 9 Chlorophyceae, 6 macrophytes). Because the toxicity data for Cd cover 4 taxonomic classes, acute and chronic EPCs (11.07 μg/l and 0.034 μg/l, respectively) were calculated according to the SEM technique. These values fall within the range of international EPCs. This study should be useful for establishing an ecological standard for the protection of the aquatic ecosystem in Korea.
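The hardness correction described above (slope factor 0.89, benchmark hardness 100) can be sketched as a power-law rescaling of each observed effect concentration to the benchmark hardness. This is an assumed form chosen for illustration; the exact algorithm and the example numbers below are not taken from the paper.

```python
# Hardness normalization of a metal toxicity value to a benchmark hardness.
# Assumed power-law form (the EU algorithm cited in the abstract may differ
# in detail):  EC_norm = EC_obs * (H_benchmark / H_test) ** slope

SLOPE = 0.89          # hardness slope factor quoted in the abstract
H_BENCHMARK = 100.0   # benchmark water hardness, mg/L as CaCO3

def normalize_to_benchmark(ec_obs, hardness, slope=SLOPE, benchmark=H_BENCHMARK):
    """Rescale an observed effect concentration to the benchmark hardness."""
    return ec_obs * (benchmark / hardness) ** slope

# Illustrative values only: a hypothetical Cd EC50 of 20 ug/L measured in
# hard water (250 mg/L as CaCO3) expressed at the softer benchmark hardness
# becomes lower, reflecting greater Cd toxicity in soft water.
ec_norm = normalize_to_benchmark(20.0, 250.0)
```

Normalizing all records to one benchmark hardness in this way is what allows toxicity values measured under different water chemistries to be pooled into a single species-sensitivity distribution.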
20. Adipose Tissue-Derived Stem Cell Secreted IGF-1 Protects Myoblasts from the Negative Effect of Myostatin
Sebastian Gehmert
2014-01-01
Myostatin, a TGF-β family member, is associated with inhibition of muscle growth and differentiation and might interact with the IGF-1 signaling pathway. Since IGF-1 is secreted at a bioactive level by adipose tissue-derived mesenchymal stem cells (ASCs), these cells provide a therapeutic option for Duchenne muscular dystrophy (DMD). But the protective effect of stem cell-secreted IGF-1 on myoblasts under high levels of myostatin remains unclear. In the present study murine myoblasts were exposed to myostatin in the presence of ASC-conditioned medium and investigated for proliferation and apoptosis. The protective effect of IGF-1 was further examined using IGF-1-neutralizing and receptor antibodies as well as gene-silencing RNAi technology. MyoD expression was measured to identify the impact of IGF-1 on myoblast differentiation under myostatin exposure. IGF-1 accounted for 43.6% of the antiapoptotic and 48.8% of the proliferative effect of ASC-conditioned medium. Furthermore, IGF-1 restored MyoD mRNA and protein expression in myoblasts at risk. Besides fusion and transdifferentiation, the beneficial effect of ASCs is mediated by paracrine secreted cytokines, particularly IGF-1. The present study underlines the potential of ASCs as a therapeutic option for Duchenne muscular dystrophy and other dystrophic muscle diseases.
2. Protection by an oral disubstituted hydroxylamine derivative against loss of retinal ganglion cell differentiation following optic nerve crush.
Lindsey, James D; Duong-Polk, Karen X; Dai, Yi; Nguyen, Duy H; Leung, Christopher K; Weinreb, Robert N
2013-01-01
Thy-1 is a cell surface protein that is expressed during the differentiation of retinal ganglion cells (RGCs). Optic nerve injury induces progressive loss in the number of RGCs expressing Thy-1. The rate of this loss is fastest during the first week after optic nerve injury and slower in subsequent weeks. This study was undertaken to determine whether oral treatment with a water-soluble N-hydroxy-2,2,6,6-tetramethylpiperidine derivative (OT-440) protects against loss of Thy-1 promoter activation following optic nerve crush and whether this effect targets the earlier quick phase or the later slow phase. The retina of mice expressing cyan fluorescent protein under control of the Thy-1 promoter (Thy1-CFP mice) was imaged using a blue-light confocal scanning laser ophthalmoscope (bCSLO). These mice then received oral OT-440 prepared in cream cheese or dissolved in water, or plain vehicle, for two weeks and were imaged again prior to unilateral optic nerve crush. Treatments and weekly imaging continued for four more weeks. Fluorescent neurons were counted in the same defined retinal areas imaged at each time point in a masked fashion. When the counts at each time point were directly compared, the numbers of fluorescent cells at each time point were greater in the animals that received OT-440 in cream cheese by 8%, 27%, 52% and 60% than in corresponding control animals at 1, 2, 3 and 4 weeks after optic nerve crush. Similar results were obtained when the vehicle was water. Rate analysis indicated the protective effect of OT-440 was greatest during the first two weeks and was maintained in the second two weeks after crush for both the cream cheese vehicle study and water vehicle study. Because most of the fluorescent cells detected by bCSLO are RGCs, these findings suggest that oral OT-440 can either protect against or delay early degenerative responses occurring in RGCs following optic nerve injury.
3. Continuum modeling investigation of gigahertz oscillators based on a C60 fullerene inside cyclic peptide nanotubes
Sadeghi, F.; Ansari, R.; Darvizeh, M.
2016-02-01
Research concerning the fabrication of nano-oscillators with operating frequencies in the gigahertz (GHz) range has become a focal point in recent years. In this paper, a new type of GHz oscillator is introduced, based on a C60 fullerene inside a cyclic peptide nanotube (CPN). To study the dynamic behavior of such nano-oscillators, using the continuum approximation in conjunction with the 6-12 Lennard-Jones (LJ) potential function, analytical expressions are derived to determine the van der Waals (vdW) potential energy and interaction force between the two interacting molecules. Employing Newton's second law, the equation of motion is solved numerically to arrive at the telescopic oscillatory motion of a C60 fullerene inside CPNs. It is shown that the fullerene molecule exhibits different kinds of oscillation inside peptide nanotubes, which are sensitive to the system parameters. Furthermore, for the precise evaluation of the oscillation frequency, a novel semi-analytical expression is proposed based on the principle of conservation of mechanical energy. Numerical results are presented to comprehensively study the effects of the number of peptide units and the initial conditions (initial separation distance and velocity) on the oscillatory behavior of C60-CPN oscillators. It is found that for peptide nanotubes comprised of one unit, the maximum achievable frequency is obtained when the inner core oscillates with respect to its preferred positions located outside the tube, while for other numbers of peptide units, such frequency is obtained when the inner core oscillates with respect to the preferred positions situated in the space between the first two or the last two units. It is further found that four peptide units are sufficient to obtain the optimal frequency.
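The numerical procedure described (a 6-12 LJ interaction and Newton's second law integrated in time) can be illustrated with a minimal 1D model: a point mass in a Lennard-Jones well integrated with velocity Verlet. This is a toy sketch, not the paper's continuum-integrated C60-CPN potential; the epsilon, sigma, and mass values are arbitrary reduced units.

```python
# Velocity-Verlet integration of a point mass in a 6-12 Lennard-Jones well.
# A 1D toy analogue of the telescopic fullerene oscillation; all parameters
# are in arbitrary reduced units, not fitted to the C60-CPN system.

EPS, SIGMA, MASS = 1.0, 1.0, 1.0

def lj_force(x):
    """F = -dV/dx for V(x) = 4*eps*((sigma/x)**12 - (sigma/x)**6)."""
    s6 = (SIGMA / x) ** 6
    return 24.0 * EPS * (2.0 * s6 * s6 - s6) / x

def lj_energy(x, v):
    """Total mechanical energy: LJ potential plus kinetic energy."""
    s6 = (SIGMA / x) ** 6
    return 4.0 * EPS * (s6 * s6 - s6) + 0.5 * MASS * v * v

def integrate(x0, v0, dt=1e-3, steps=20000):
    """Velocity-Verlet time stepping; returns final state and trajectory."""
    x, v = x0, v0
    f = lj_force(x)
    traj = []
    for _ in range(steps):
        x += v * dt + 0.5 * (f / MASS) * dt * dt
        f_new = lj_force(x)
        v += 0.5 * (f + f_new) / MASS * dt
        f = f_new
        traj.append(x)
    return x, v, traj

# Release the particle slightly inside the well minimum at x = 2**(1/6)*sigma;
# it then oscillates about the minimum, conserving mechanical energy.
x_min = 2.0 ** (1.0 / 6.0) * SIGMA
xf, vf, traj = integrate(x_min * 0.98, 0.0)
e0 = lj_energy(x_min * 0.98, 0.0)
ef = lj_energy(xf, vf)
```

Energy conservation over the run is the same check the paper uses (in semi-analytical form) to extract the oscillation frequency from the potential profile alone.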
4. Characterization of nanophotonic soft contact lenses based on poly (2-hydroxyethyl methacrylate and fullerene
Debeljković Aleksandra D.
2013-01-01
This work presents comparative research on the characteristics of a basic and a new nanophotonic material, the latter obtained by incorporating fullerene, C60, into the base material for soft contact lenses. The basic (SL38) and nanophotonic (SL38-A) materials for soft contact lenses were obtained by radical polymerization of 2-hydroxyethyl methacrylate, without and with fullerene respectively, in the production laboratory of the company Soleko (Milan, Italy). The materials were used for production of soft contact lenses by the company Optix (Belgrade, Serbia) for the purposes of this research. Fullerene was used due to its absorption-transmission characteristics in the ultraviolet, visible and near-infrared spectrum. For material characterization with a view to application as soft contact lenses, network parameters were calculated and SEM analysis of the materials was performed, while the optical properties of the soft contact lenses were measured with a Rotlex device. Values of the diffusion exponent, n, close to 0.5 indicated Fickian diffusion kinetics. The investigated hydrogels can be classified as nonporous hydrogels. The Rotlex measurements yielded the optical power and a map of defects, showing that the optical power of the synthesized nanophotonic soft contact lens is identical to the nominal value, which was not the case for the basic lens. The quality of the nanophotonic soft contact lens is also better than that of the basic soft contact lens. Hence, it is possible to synthesize new nanophotonic soft contact lenses of desired optical characteristics, implying possibilities for their application in this field.
5. Induction of Protective Immunity against Eimeria tenella, Eimeria maxima, and Eimeria acervulina Infections Using Dendritic Cell-Derived Exosomes
Gallego, Margarita; Lee, Sung Hyen; Lillehoj, Hyun Soon; Quilez, Joaquin; Lillehoj, Erik P.; Sánchez-Acedo, Caridad
2012-01-01
This study describes a novel immunization strategy against avian coccidiosis using exosomes derived from Eimeria parasite antigen (Ag)-loaded dendritic cells (DCs). Chicken intestinal DCs were isolated and pulsed in vitro with a mixture of sporozoite-extracted Ags from Eimeria tenella, E. maxima, and E. acervulina, and the cell-derived exosomes were isolated. Chickens were nonimmunized or immunized intramuscularly with exosomes and subsequently noninfected or coinfected with E. tenella, E. maxima, and E. acervulina oocysts. Immune parameters compared among the nonimmunized/noninfected, nonimmunized/infected, and immunized/infected groups were the numbers of cells secreting Th1 cytokines, Th2 cytokines, interleukin-16 (IL-16), and Ag-reactive antibodies in vitro and in vivo readouts of protective immunity against Eimeria infection. Cecal tonsils, Peyer's patches, and spleens of immunized and infected chickens had increased numbers of cells secreting the IL-16 and the Th1 cytokines IL-2 and gamma interferon, greater Ag-stimulated proliferative responses, and higher numbers of Ag-reactive IgG- and IgA-producing cells following in vitro stimulation with the sporozoite Ags compared with the nonimmunized/noninfected and nonimmunized/infected controls. In contrast, the numbers of cells secreting the Th2 cytokines IL-4 and IL-10 were diminished in immunized and infected chickens compared with the nonimmunized/noninfected and the nonimmunized/infected controls. Chickens immunized with Ag-loaded exosomes and infected in vivo with Eimeria oocysts had increased body weight gains, reduced feed conversion ratios, diminished fecal oocyst shedding, lessened intestinal lesion scores, and reduced mortality compared with the nonimmunized/infected controls. These results suggest that successful field vaccination against avian coccidiosis using exosomes derived from DCs incubated with Ags isolated from Eimeria species may be possible. PMID:22354026
6. Development of protective autoimmunity by immunization with a neural-derived peptide is ineffective in severe spinal cord injury.
Susana Martiñón
Protective autoimmunity (PA) is a physiological response to central nervous system trauma that has been demonstrated to promote neuroprotection after spinal cord injury (SCI). To achieve its beneficial effect, PA should be boosted by immunizing with neural constituents or neural-derived peptides such as A91. Immunizing with A91 has been shown to promote neuroprotection after SCI, and its use has proven feasible in a clinical setting. The broad applications of neural-derived peptides make it important to determine the main features of this anti-A91 response. For this purpose, adult Sprague-Dawley rats were subjected to a spinal cord contusion (SCC; moderate or severe) or a spinal cord transection (SCT; complete or incomplete). Immediately after injury, animals were immunized with PBS or A91. Motor recovery, the T cell-specific response against A91 and the levels of IL-4, IFN-γ and brain-derived neurotrophic factor (BDNF) released by A91-specific T (TA91) cells were evaluated. Rats with moderate SCC presented better motor recovery after A91 immunization. Animals with moderate SCC or incomplete SCT showed significant T cell proliferation against A91, characterized chiefly by the predominant production of IL-4 and the release of BDNF. In contrast, immunization with A91 did not promote better motor recovery in animals with severe SCC or complete SCT. In fact, T cell proliferation against A91 was diminished in these animals. The present results suggest that the effective development of PA and, consequently, the beneficial effects of immunizing with A91 significantly depend on the severity of SCI. This could mainly be attributed to the lack of TA91 cells, which predominantly showed a Th2 phenotype capable of producing BDNF, further promoting neuroprotection.
7. Fullerene hydride - A potential hydrogen storage material
Nai Xing Wang; Jun Ping Zhang; An Guang Yu; Yun Xu Yang; Wu Wei Wang; Rui long Sheng; Jia Zhao
2005-01-01
Hydrogen, as a clean, convenient, versatile fuel, is considered to be an ideal energy carrier for the foreseeable future. Hydrogen storage must be solved before hydrogen energy can be used. To date, much effort has been put into hydrogen storage, including physical storage via compression or liquefaction, chemical storage in hydrogen carriers, metal hydrides and gas-on-solid adsorption. But none satisfies all of the efficiency, size, weight, cost and safety requirements for transportation or utility use. C60H36, first synthesized by the method of the Birch reduction, was loaded with 4.8 wt% hydrogen, indicating that [60]fullerene might serve as a potential hydrogen storage material. If 100% conversion of C60H36 is achieved, 18 moles of H2 gas would be liberated from each mole of fullerene hydride. Pure C60H36 is very stable below 500 °C under a nitrogen atmosphere and releases hydrogen, accompanied by other hydrocarbons, at high temperature. But C60H36 can be decomposed to generate H2 with an effective catalyst. We have reported that hydrogen can be produced catalytically from C60H36 by Vaska's compound (IrCl(CO)(PPh3)2) under mild conditions. (RhCl(CO)(PPh3)2), having a structure similar to (IrCl(CO)(PPh3)2), was also examined for thermal dehydrogenation of C60H36, but it showed low catalytic activity. In the search for better catalysts, palladium on carbon (Pd/C) and platinum on carbon (Pt/C), which are known for catalytic hydrogenation of aromatic compounds, were tried and good results were obtained. A very large hydrogen peak appeared at δ=5.2 ppm in the 1H NMR spectrum, based on Evans' work (Fig. 1), at 100 °C over a Pd/C catalyst for 16 hours. This shows that hydrogen can be produced from C60H36 using a catalytic amount of Pd/C. Compared with Pd/C, the Pt/C catalyst showed lower activity. The high cost and limited availability of Vaska's compound, Pd and Pt make it advantageous to develop less expensive catalysts for our process based on
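The quoted figures are easy to check from the stoichiometry of C60H36: 36 hydrogen atoms give 18 H2 per fullerene cage, and the hydrogen mass fraction follows from standard atomic masses. A quick arithmetic sketch:

```python
# Stoichiometric check of the quoted hydrogen content of C60H36:
# full dehydrogenation releases 36/2 = 18 mol H2 per mol of hydride,
# and the hydrogen mass fraction should land near the cited 4.8 wt%.

M_C, M_H = 12.011, 1.008  # standard atomic masses, g/mol

m_hydride = 60 * M_C + 36 * M_H        # molar mass of C60H36
wt_percent_H = 100.0 * (36 * M_H) / m_hydride
mol_H2_per_mol = 36 / 2                # H2 released at full conversion
```

The computed fraction (about 4.79 wt%) matches the 4.8 wt% figure in the abstract, confirming the loading is the full stoichiometric hydrogen content of the hydride rather than a measured release yield.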
8. Table of periodic properties of fullerenes based on structural parameters.
Torrens, Francisco
2004-01-01
The periodic table (PT) of the elements suggests that hydrogen could be the origin of everything else. The construction principle is an evolutionary process that is formally similar to those of Darwin and Oparin. The Kekulé structure count and the permanent of the adjacency matrix of fullerenes are related to structural parameters involving the presence of contiguous pentagons: p, q and r. Let p be the number of edges common to two pentagons, q the number of vertices common to three pentagons, and r the number of pairs of nonadjacent pentagon edges shared between two other pentagons. Principal component analysis (PCA) of the structural parameters and cluster analysis (CA) of the fullerenes permit classifying them, and the two methods agree. A PT of the fullerenes is built based on the structural parameters, PCA and CA. The periodic law does not have the rank of the laws of physics. (1) The properties of the fullerenes are not repeated; only, and perhaps, their chemical character. (2) The order relationships are repeated, although with exceptions. The proposed statement is the following: the relationships that any fullerene p has with its neighbor p + 1 are approximately repeated for each period.
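PCA on a small table of per-fullerene (p, q, r) counts reduces to an eigendecomposition of the 3x3 covariance matrix. The sketch below uses made-up illustrative counts, not the fullerene data from the paper; it only demonstrates the mechanics of extracting principal components from such structural parameters.

```python
import numpy as np

# PCA on fullerene structural parameters (p, q, r) via eigendecomposition
# of the sample covariance matrix. Rows are hypothetical (p, q, r) counts
# for six fullerenes, invented for illustration only.

X = np.array([
    [ 0, 0, 0],   # an IPR-satisfying fullerene: no adjacent pentagons
    [ 2, 0, 1],
    [ 4, 1, 2],
    [ 6, 2, 3],
    [ 8, 3, 5],
    [10, 4, 6],
], dtype=float)

Xc = X - X.mean(axis=0)                # center each parameter
cov = Xc.T @ Xc / (len(X) - 1)         # sample covariance matrix
evals, evecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
order = np.argsort(evals)[::-1]        # reorder to descending variance
evals, evecs = evals[order], evecs[:, order]
scores = Xc @ evecs                    # principal component scores
explained = evals / evals.sum()        # variance fraction per component
```

Because p, q and r all grow together with the degree of pentagon adjacency, the first component carries nearly all the variance in such data, which is why a one-dimensional ordering (and hence a "periodic table" layout) of the fullerenes is plausible.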
9. Carboxylated Fullerene at the Oil/Water Interface.
Li, Rongqiang; Chai, Yu; Jiang, Yufeng; Ashby, Paul D; Toor, Anju; Russell, Thomas P
2017-10-04
The self-assembly of carboxylated fullerene with poly(styrene-b-2-vinylpyridine) (PS-b-P2VP) with different molecular weights, poly-2-vinylpyridine, and amine-terminated polystyrene, at the interface between toluene and water was investigated. For all values of the pH, the functionalized fullerene interacted with the polymers at the water/toluene interface, forming a nanoparticle network, reducing the interfacial tension. At pH values of 4.84 and 7.8, robust, elastic films were formed at the interface, such that hollow tubules could be formed in situ when an aqueous solution of the functionalized fullerene was jetted into a toluene solution of PS-b-P2VP at a pH of 4.84. With variation of the pH, the mechanical properties of the fullerene/polymer assemblies can be varied by tuning the strength of the interactions between the functionalized fullerenes and the PS-b-P2VP.
10. Fullerene nanostructures, monolayers and thin films
Cotier, B.N.
2000-10-01
The interaction of submonolayer, monolayer and multilayer coverages of C60 with the Ag/Si(111)-(√3x√3)R30 deg. (√3Ag/Si) and Si(111)-7x7 surfaces has been investigated using atomic force microscopy (AFM), photoelectron spectroscopy (PES) and ultra-high-vacuum scanning tunneling microscopy (UHV-STM). It is shown that it is possible to preserve the √3Ag/Si surface, normally corrupted by exposure to air, in ambient conditions when it is immersed beneath a few layers of C60 molecules. Upon removal of the fullerene layers in the UHV-STM some corruption is observed, which is linked to the morphology of the fullerene film (defined by the nature of the interaction of C60 with √3Ag/Si). This technique opens up the possibility of performing experiments on the clean √3Ag/Si surface outside of UHV conditions. With the discovery of techniques whereby structures may be formed that are composed of only a few atoms/molecules, there is a need to perform electrical measurements in order to probe the fascinating properties of these 'nano-scale' devices. Using AFM, PES and STM, evaporated metals and ion implantation have been investigated as materials for use in forming sub-micron scale contacts to nanostructures. Ion implantation is found to be the more promising approach after studying the response of treated surfaces to annealing. Electrical measurements between open/short-circuited contacts and through Ag films clearly demonstrate the validity of the method, further confirmed by a PES study which probes the chemical nature of the near-surface region of ion-implanted samples. Attempts have been made to form nanostructure templates between sub-micron scale contacts as a possible precursor to forming nanostructures. The bonding state of C60 molecules on the Si(111)-7x7 surface has been in dispute for many years. To properly understand the system a comprehensive AFM, PES and STM study has been performed. PES results indicate covalent bond formation, with the number of bonds
11. Fullerene-rare gas mixed plasmas in an electron cyclotron resonance ion source
Asaji, T., E-mail: asaji@oshima-k.ac.jp; Ohba, T. [Oshima National College of Maritime Technology, 1091-1 Komatsu, Suo-oshima, Oshima, Yamaguchi 742-2193 (Japan); Uchida, T.; Yoshida, Y. [Bio-Nano Electronics Research Centre, Toyo University, 2100 Kujirai, Kawagoe, Saitama 350-8585 (Japan); Minezaki, H.; Ishihara, S. [Graduate School of Engineering, Toyo University, 2100 Kujirai, Kawagoe, Saitama 350-8585 (Japan); Racz, R.; Biri, S. [Institute of Nuclear Research (ATOMKI), H-4026 Debrecen, Bem Tér 18/c (Hungary); Muramatsu, M.; Kitagawa, A. [National Institute of Radiological Sciences (NIRS), 4-9-1 Anagawa, Inage-ku, Chiba 263-8555 (Japan); Kato, Y. [Graduate School of Engineering, Osaka University, 2-1 Yamada-oka, Suita, Osaka 565-0871 (Japan)
2014-02-15
A synthesis technology for endohedral fullerenes such as Fe@C60 has been developed with an electron cyclotron resonance (ECR) ion source. The production of N@C60 was reported; however, the yield was quite low, since most fullerene molecules were broken in the ECR plasma. We have adopted gas-mixing techniques in order to cool the plasma and thus reduce fullerene dissociation. Mass spectra of ion beams extracted from fullerene-He, Ar or Xe mixed plasmas were observed with a Faraday cup. The results show that the He gas-mixing technique is effective against fullerene destruction.
12. Intercalated vs Nonintercalated Morphologies in Donor-Acceptor Bulk Heterojunction Solar Cells: PBTTT:Fullerene Charge Generation and Recombination Revisited.
Collado-Fregoso, Elisa; Hood, Samantha N; Shoaee, Safa; Schroeder, Bob C; McCulloch, Iain; Kassal, Ivan; Neher, Dieter; Durrant, James R
2017-09-07
In this Letter, we study the role of the donor:acceptor interface nanostructure upon charge separation and recombination in organic photovoltaic devices and blend films, using mixtures of PBTTT and two different fullerene derivatives (PC70BM and ICTA) as models for intercalated and nonintercalated morphologies, respectively. Thermodynamic simulations show that while the completely intercalated system exhibits a large free-energy barrier for charge separation, this barrier is significantly lower in the nonintercalated system and almost vanishes when energetic disorder is included in the model. Despite these differences, both femtosecond-resolved transient absorption spectroscopy (TAS) and time-delayed collection field (TDCF) exhibit extensive first-order losses in both systems, suggesting that geminate pairs are the primary product of photoexcitation. In contrast, the system that comprises a combination of fully intercalated polymer:fullerene areas and fullerene-aggregated domains (1:4 PBTTT:PC70BM) is the only one that shows slow, second-order recombination of free charges, resulting in devices with an overall higher short-circuit current and fill factor. This study therefore provides a novel consideration of the role of the interfacial nanostructure and the nature of bound charges and their impact upon charge generation and recombination.
13. Fullerene C60 and graphene photosensitizers for photodynamic virus inactivation
Belousova, I.; Hvorostovsky, A.; Kiselev, V.; Zarubaev, V.; Kiselev, O.; Piotrovsky, L.; Anfimov, P.; Krisko, T.; Muraviova, T.; Rylkov, V.; Starodubzev, A.; Sirotkin, A.; Grishkanich, A.; Kudashev, I.; Kancer, A.; Kustikova, M.; Bykovskaya, E.; Mayurova, A.; Stupnikov, A.; Ruzankina, J.; Afanasyev, M.; Lukyanov, N.; Redka, D.; Paklinov, N.
2018-02-01
A solid-phase photosensitizer based on aggregated C60 fullerene and graphene oxide for photodynamic inactivation of pathogens in biological fluids was studied. The most promising inactivation technologies include the photodynamic effect, in which infectious agents are inactivated by reactive oxygen species (including singlet oxygen) formed when the photosensitizer introduced into the plasma is activated by light. The research examines features of solid-phase systems based on graphene oxide and fullerene C60, which combine effective inactivation of pathogens (for example, influenza viruses) by the reactive oxygen species formed upon irradiation of the photosensitizer in aqueous and biological fluids, high photostability of the fullerene coatings, and the possibility of fully recovering the photosensitizer from the biological environment after the photodynamic action.
14. Porphyrin and fullerene-based artificial photosynthetic materials for photovoltaics
Imahori, Hiroshi; Kashiwagi, Yukiyasu; Hasobe, Taku; Kimura, Makoto; Hanada, Takeshi; Nishimura, Yoshinobu; Yamazaki, Iwao; Araki, Yasuyuki; Ito, Osamu; Fukuzumi, Shunichi
2004-01-01
We have developed artificial photosynthetic systems in which porphyrins and fullerenes are self-assembled as building blocks into nanostructured molecular light-harvesting materials and photovoltaic devices. Multistep electron transfer strategy has been combined with our finding that porphyrin and fullerene systems have small reorganization energies, which are suitable for the construction of light energy conversion systems as well as artificial photosynthetic models. Highly efficient photosynthetic electron transfer reactions have been realized at ITO electrodes modified with self-assembled monolayers of porphyrin oligomers as well as porphyrin-fullerene linked systems. Porphyrin-modified gold nanoclusters have been found to have potential as artificial photosynthetic materials. These results provide basic information for the development of nanostructured artificial photosynthetic systems
15. Polymer solar cells with novel fullerene-based acceptor
Riedel, I.; Martin, N.; Giacalone, F.; Segura, J.L.; Chirvase, D.; Parisi, J.; Dyakonov, V.
2004-01-01
Alternative acceptor materials are possible candidates to improve the optical absorption and/or the open circuit voltage of polymer-fullerene solar cells. We studied a novel fullerene-type acceptor, DPM-12, for application in polymer-fullerene bulk heterojunction photovoltaic devices. Though DPM-12 has redox potentials identical to those of the methanofullerene PCBM, surprisingly high open circuit voltages in the range V_OC = 0.95 V were measured for OC1C10-PPV:DPM-12-based samples. The potential for photovoltaic application was studied by means of photovoltaic characterization of solar cells, including current-voltage measurements and external quantum yield spectroscopy. Further studies were carried out by profiling the solar cell parameters vs. temperature and white light intensity.
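As a back-of-the-envelope companion to the reported V_OC ≈ 0.95 V, the open-circuit voltage of an ideal single-diode solar cell follows V_OC = (nkT/q)·ln(J_ph/J_0 + 1). The sketch below is purely illustrative; the photocurrent and saturation-current values are assumptions, not taken from the record above.

```python
import math

def open_circuit_voltage(j_ph, j_0, n=1.0, temp_k=300.0):
    """Ideal single-diode open-circuit voltage:
    V_OC = n*k*T/q * ln(J_ph/J_0 + 1)."""
    k_b = 1.380649e-23    # Boltzmann constant, J/K
    q = 1.602176634e-19   # elementary charge, C
    return n * k_b * temp_k / q * math.log(j_ph / j_0 + 1.0)

# Assumed illustrative values: photocurrent 100 A/m^2, saturation current 1e-14 A/m^2
v_oc = open_circuit_voltage(j_ph=100.0, j_0=1e-14)
print(round(v_oc, 2))  # ≈ 0.95 V, comparable in magnitude to the reported V_OC
```

The logarithmic dependence explains why V_OC is only weakly sensitive to illumination intensity but strongly sensitive to the acceptor's energetics through J_0.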
16. Simulating fullerene ball bearings of ultra-low friction
Li Xiaoyan; Yang Wei
2007-01-01
We report direct molecular dynamics simulations of molecular ball bearings composed of fullerene molecules (C60 and C20) and multi-walled carbon nanotubes. The comparison of friction levels indicates that fullerene ball bearings have extremely low friction (with minimal frictional forces of 5.283×10⁻⁷ and 6.768×10⁻⁷ nN/atom for C60 and C20 bearings) and energy dissipation (lowest dissipation per cycle of 0.013 and 0.016 meV/atom for C60 and C20 bearings). A single fullerene inside the ball bearings exhibits various motion statuses of mixed translation and rotation. The influences of the shaft's distortion on the long-ranged potential energy and normal force are discussed. The phononic dissipation mechanism leads to a non-monotonic function between the friction and the load rate for the molecular bearings.
17. Preparation of Polyaniline-Doped Fullerene Whiskers
Bingzhe Wang
2013-01-01
Fullerene C60 whiskers (FWs) doped with polyaniline emeraldine base (PANI-EB) were synthesized by mixing PANI-EB/N-methyl pyrrolidone (NMP) colloid and FWs suspension, based on the electron-acceptor nature of C60 and the electron-donor nature of PANI-EB. Scanning electron microscopy (SEM), Fourier transform infrared (FT-IR), and ultraviolet-visible (UV-Vis) spectra characterized the morphology and molecular structure of the FWs doped with PANI-EB. SEM observation showed that the smooth surface of FWs changed to a worm-like surface morphology after doping with PANI-EB. The UV-Vis spectra suggested that a charge-transfer (CT) complex of C60 and PANI-EB was formed as PANI-EBδ+-C60δ-. PANI-EB-doped FWs might be useful as a new type of antibacterial and self-cleaning agent as well as a multifunctional material to improve human health and the living environment.
18. Toxicity of polyhydroxylated fullerene to mitochondria
Yang, Li-Yun [State Key Laboratory of Virology & Key Laboratory of Analytical Chemistry for Biology and Medicine (MOE), College of Chemistry and Molecular Sciences, Wuhan University, Wuhan 430072 (China); Gao, Jia-Ling [Department of Chemistry, College of Chemistry and Environmental Engineering, Yangtze University, Jingzhou 434023 (China); Gao, Tian; Dong, Ping; Ma, Long; Jiang, Feng-Lei [State Key Laboratory of Virology & Key Laboratory of Analytical Chemistry for Biology and Medicine (MOE), College of Chemistry and Molecular Sciences, Wuhan University, Wuhan 430072 (China); Liu, Yi, E-mail: yiliuchem@whu.edu.cn [State Key Laboratory of Virology & Key Laboratory of Analytical Chemistry for Biology and Medicine (MOE), College of Chemistry and Molecular Sciences, Wuhan University, Wuhan 430072 (China)
2016-01-15
Highlights: • Fullerenol-induced mitochondrial dysfunction was investigated at mitochondrial level. • Fullerenol disturbed mitochondrial inner membrane in polar protein regions. • Fullerenol affected the inner membrane and respiration chain of mitochondria. - Abstract: Mitochondrial dysfunction is considered as a crucial mechanism of nanomaterial toxicity. Herein, we investigated the effects of polyhydroxylated fullerene (C60(OH)44, fullerenol), a model carbon-based nanomaterial with high water solubility, on isolated mitochondria. Our study demonstrated that fullerenol enhanced the permeabilization of mitochondrial inner membrane to H+ and K+ and induced mitochondrial permeability transition (MPT). The fullerenol-induced swelling was dose-dependent and could be effectively inhibited by MPT inhibitors such as cyclosporin A (CsA), adenosine diphosphate (ADP), ruthenium red (RR) and ethylenediaminetetraacetic acid (EDTA). After treating the mitochondria with fullerenol, the mitochondrial membrane potential (MMP) was found collapsed in a concentration-independent manner. The fluorescence anisotropy of hematoporphyrin (HP) changed significantly with the addition of fullerenol, while that of 1,6-diphenyl-hexatriene (DPH) changed slightly. Moreover, a decrease of respiration state 3 and increase of respiration state 4 were observed when mitochondria were energized with complex II substrate succinate. The results of transmission electron microscopy (TEM) provided direct evidence that fullerenol damaged the mitochondrial ultrastructure. The investigations can provide comprehensive information to elucidate the possible toxic mechanism of fullerenols at subcellular level.
19. Bulk Heterojunction Solar Cells: Impact of Minor Structural Modifications to the Polymer Backbone on the Polymer-Fullerene Mixing and Packing and on the Fullerene-Fullerene Connecting Network
Wang, Tonghui; Chen, Xiankai; Ashokan, Ajith; Zheng, Zilong; Ravva, Mahesh Kumar; Brédas, Jean-Luc
2018-01-25
The morphology of the active layer of a bulk heterojunction solar cell, made of a blend of an electron-donating polymer and an electron-accepting fullerene derivative, is known to play a determining role in device performance. Here, a combination of molecular dynamics simulations and long-range corrected density functional theory calculations is used to elucidate the molecular-scale effects that even minor structural changes to the polymer backbone can have on the “local” morphology; this study focuses on the extent of polymer–fullerene mixing, on their packing, and on the characteristics of the fullerene–fullerene connecting network in the mixed regions, aspects that are difficult to access experimentally. Three representative polymer donors are investigated: (i) poly[(5,6-difluoro-2,1,3-benzothiadiazol-4,7-diyl)-alt-(3,3′″-di(2-octyldodecyl)-2,2′;5′,2″;5″,2′″-quaterthiophen-5,5′″-diyl)] (PffBT4T-2OD); (ii) poly[(2,1,3-benzothiadiazol-4,7-diyl)-alt-(3,3′″-di(2-octyldodecyl)-2,2′;5′,2″;5″,2′″-quaterthiophen-5,5′″-diyl)] (PBT4T-2OD), where the fluorine atoms in the benzothiadiazole moieties of PffBT4T-2OD are replaced with hydrogen atoms; and (iii) poly[(2,2′-bithiophene)-alt-(4,7-bis((2-decyltetradecyl)thiophen-2-yl)-5,6-difluoro-2-propyl-2H-benzo[d][1,2,3]triazole)] (PT2-FTAZ), where the sulfur atoms in the benzothiadiazole moieties of PffBT4T-2OD are replaced with nitrogen atoms carrying a linear C3H7 side-chain; these polymers are mixed with the phenyl-C71-butyric acid methyl ester (PC71BM) acceptor. This study also discusses the nature of the charge-transfer electronic states appearing at the donor–acceptor interfaces, the electronic couplings relevant for the charge-recombination process, and the electron-transfer features between neighboring PC71BM molecules.
1. Intratracheal administration of fullerene nanoparticles activates splenic CD11b{sup +} cells
Ding, Ning [Department of Immunology and Parasitology, School of Medicine, University of Occupational and Environmental Health, Japan, 1-1 Iseigaoka, Yahata-nishi-ku, Kitakyushu 807-8555 (Japan); Kunugita, Naoki [Department of Environmental Health, National Institute of Public Health, 2-3-6, Minami, Wako 351-0197 (Japan); Ichinose, Takamichi [Department of Health Sciences, Oita University of Nursing and Health Sciences, Oita 870-1201 (Japan); Song, Yuan [Department of Immunology and Parasitology, School of Medicine, University of Occupational and Environmental Health, Japan, 1-1 Iseigaoka, Yahata-nishi-ku, Kitakyushu 807-8555 (Japan); Yokoyama, Mitsuru [Bio-information Research Center, University of Occupational and Environmental Health, Japan, 1-1 Iseigaoka, Yahata-nishi-ku, Kitakyushu 807-8555 (Japan); Arashidani, Keiichi [School of Health Sciences, University of Occupational and Environmental Health, Japan, 1-1 Iseigaoka, Yahata-nishi-ku, Kitakyushu 807-8555 (Japan); Yoshida, Yasuhiro, E-mail: freude@med.uoeh-u.ac.jp [Department of Immunology and Parasitology, School of Medicine, University of Occupational and Environmental Health, Japan, 1-1 Iseigaoka, Yahata-nishi-ku, Kitakyushu 807-8555 (Japan)
2011-10-30
Highlights: → Fullerene administration triggered splenic responses. → Splenic responses occurred at different time-points than in the lung tissue. → CD11b+ cells were demonstrated to function as responder cells to fullerene. - Abstract: Fullerene nanoparticles ('Fullerenes'), which are now widely used materials in daily life, have been demonstrated to induce elevated pulmonary inflammation in several animal models; however, the effects of fullerenes on the immune system are not fully understood. In the present study, mice received fullerenes intratracheally and were sacrificed at days 1, 6 and 42. Mice that received fullerenes exhibited increased proliferation of splenocytes and increased splenic production of IL-2 and TNF-α. Changes in the spleen in response to fullerene treatment occurred at different time-points than in the lung tissue. Furthermore, fullerenes induced CDK2 expression and activated NF-κB and NFAT in splenocytes at 6 days post-administration. Finally, CD11b+ cells were demonstrated to function as responder cells to fullerene administration in the splenic inflammatory process. Taken together, in addition to the effects on pulmonary responses, fullerenes also modulate the immune system.
3. Self-Cleaning Photocatalytic Polyurethane Coatings Containing Modified C60 Fullerene Additives
Jeffrey G. Lundin
2014-08-01
4. Adrenergic Stress Protection of Human iPS Cell-Derived Cardiomyocytes by Fast Kv7.1 Recycling
Ilaria Piccini
2017-09-01
The fight-or-flight response (FFR), a physiological acute stress reaction, involves positive chronotropic and inotropic effects on heart muscle cells mediated through β-adrenoceptor activation. Increased systolic calcium is required to enable stronger heart contractions, whereas elevated potassium currents limit the duration of the action potentials and prevent arrhythmia. The latter effect is accomplished by an increased functional activity of the Kv7.1 channel encoded by KCNQ1. Current knowledge, however, does not sufficiently explain the full extent of rapid Kv7.1 activation and may hence be incomplete. Using inducible genetic KCNQ1 complementation in KCNQ1-deficient human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs), we here reinvestigate the functional role of Kv7.1 in adapting human CMs to adrenergic stress. Under baseline conditions, Kv7.1 was barely detectable at the plasma membrane of hiPSC-CMs, yet it fully protected these from adrenergic stress-induced beat-to-beat variability of repolarization and torsade de pointes-like arrhythmia. Furthermore, isoprenaline treatment increased field potential durations specifically in KCNQ1-deficient CMs to cause these adverse macroscopic effects. Mechanistically, we find that the protective action by Kv7.1 resides in a rapid translocation of channel proteins from intracellular stores to the plasma membrane, induced by adrenergic signaling. Gene silencing experiments targeting RAB GTPases, mediators of intracellular vesicle trafficking, showed that fast Kv7.1 recycling under acute stress conditions is RAB4A-dependent. Our data reveal a key mechanism underlying the rapid adaptation of human cardiomyocytes to adrenergic stress. These findings moreover aid the understanding of disease pathology in long QT syndrome and bear important implications for safety pharmacological screening.
5. Gene therapy with brain-derived neurotrophic factor as a protection: retinal ganglion cells in a rat glaucoma model.
Martin, Keith R G; Quigley, Harry A; Zack, Donald J; Levkovitch-Verbin, Hana; Kielczewski, Jennifer; Valenta, Danielle; Baumrind, Lisa; Pease, Mary Ellen; Klein, Ronald L; Hauswirth, William W
2003-10-01
To develop a modified adenoassociated viral (AAV) vector capable of efficient transfection of retinal ganglion cells (RGCs) and to test the hypothesis that use of this vector to express brain-derived neurotrophic factor (BDNF) could be protective in experimental glaucoma. Ninety-three rats received one unilateral, intravitreal injection of either normal saline (n = 30), AAV-BDNF-woodchuck hepatitis posttranscriptional regulatory element (WPRE; n = 30), or AAV-green fluorescent protein (GFP)-WPRE (n = 33). Two weeks later, experimental glaucoma was induced in the injected eye by laser application to the trabecular meshwork. Survival of RGCs was estimated by counting axons in optic nerve cross sections after 4 weeks of glaucoma. Transgene expression was assessed by immunohistochemistry, Western blot analysis, and direct visualization of GFP. The density of GFP-positive cells in retinal wholemounts was 1,828 +/- 299 cells/mm(2) (72,273 +/- 11,814 cells/retina). Exposure to elevated intraocular pressure was similar in all groups. Four weeks after initial laser treatment, axon loss was 52.3% +/- 27.1% in the saline-treated group (n = 25) and 52.3% +/- 24.2% in the AAV-GFP-WPRE group (n = 30), but only 32.3% +/- 23.0% in the AAV-BDNF-WPRE group (n = 27). Survival in AAV-BDNF-WPRE animals increased markedly and the difference was significant compared with those receiving either AAV-GFP-WPRE (P = 0.002, t-test) or saline (P = 0.006, t-test). Overexpression of the BDNF gene protects RGC as estimated by axon counts in a rat glaucoma model, further supporting the potential feasibility of neurotrophic therapy as a complement to the lowering of IOP in the treatment of glaucoma.
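The significance reported for the BDNF effect (axon loss 52.3% ± 27.1%, n = 25, saline vs. 32.3% ± 23.0%, n = 27, AAV-BDNF-WPRE; P = 0.006) can be checked approximately from the summary statistics alone using Welch's t statistic. This is an illustrative recomputation, not the authors' analysis, and it stops at the t value rather than recomputing the exact p:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from summary statistics (mean, SD, n):
    t = (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2)."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / se

# Axon loss (%) from the abstract: saline group vs AAV-BDNF-WPRE group
t = welch_t(52.3, 27.1, 25, 32.3, 23.0, 27)
print(round(t, 2))  # ≈ 2.86, consistent with the reported p ≈ 0.006 at ~48 df
```

A t of about 2.86 with roughly 48 degrees of freedom indeed corresponds to a two-sided p near 0.006, matching the abstract.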
6. Identification of a Novel CD8 T Cell Epitope Derived from Plasmodium berghei Protective Liver-Stage Antigen
Alexander Pichugin
2018-01-01
We recently identified novel Plasmodium berghei (Pb) liver stage (LS) genes that, as DNA vaccines, significantly reduce Pb LS parasite burden (LPB) in C57Bl/6 (B6) mice through a mechanism mediated, in part, by CD8 T cells. In this study, we sought to determine the fine antigen (Ag) specificities of CD8 T cells that target LS malaria parasites. Guided by algorithms for predicting MHC class I-restricted epitopes, we ranked sequences of 32 Pb LS Ags and selected ~400 peptides restricted by mouse H-2Kb and H-2Db alleles for analysis with the high-throughput method of caged MHC class I-tetramer technology. We identified a 9-mer H-2Kb restricted CD8 T cell epitope, Kb-17, which specifically recognized and activated CD8 T cell responses in B6 mice immunized with Pb radiation-attenuated sporozoites (RAS) and challenged with infectious sporozoites (spz). The Kb-17 peptide is derived from the recently described novel protective Pb LS Ag, PBANKA_1031000 (MIF4G-like protein). Notably, immunization with the Kb-17 epitope delivered in the form of a minigene in the adenovirus serotype 5 vector reduced LPB in mice infected with spz. On the basis of our results, the Kb-17 peptide was available for CD8 T cell activation and recall following immunization with Pb RAS and challenge with infectious spz. The identification of a novel MHC class I-restricted epitope from the protective Pb LS Ag, MIF4G-like protein, is crucial for advancing our understanding of immune responses to Plasmodium and, by extension, toward vaccine development against malaria.
7. Human amnion-derived mesenchymal stem cells protect against UVA irradiation-induced human dermal fibroblast senescence, in vitro
Zhang, Chunli; Yuchi, Haishen; Sun, Lu; Zhou, Xiaoli; Lin, Jinde
2017-01-01
The aim of the present study was to determine if human amnion-derived mesenchymal stem cells (HAMSCs) exert a protective effect on ultraviolet A (UVA) irradiation-induced human dermal fibroblast (HDF) senescence. A senescence model was constructed as follows: HDFs (10⁴–10⁶ cells/well) were cultured in a six-well plate in vitro and then exposed to UVA irradiation at 9 J/cm² for 30 min. Following the irradiation period, HDFs were co-cultured with HAMSCs, which were seeded on transwells. A total of 72 h following the co-culturing, senescence-associated β-galactosidase staining was performed and reactive oxygen species (ROS) content and mitochondrial membrane potential (Δψm) were detected in the HDFs via flow cytometric analysis. The results demonstrated that the percentage of HDFs detected via staining with X-gal was markedly decreased when co-cultured with HAMSCs, compared with the group that was not co-cultured. The ROS content was decreased and the mitochondrial membrane potential (Δψm) recovered in cells treated with UVA and HAMSCs, compared with that of cells treated with UVA alone. Reverse transcription-quantitative polymerase chain reaction revealed the significant effects of HAMSCs on the HDF senescence marker genes p53 and matrix metalloproteinase-1 mRNA expression. In addition, western blot analysis verified the effects of HAMSCs on UVA-induced senescence, providing a foundation for novel regenerative therapeutic methods. Furthermore, the results suggested that activation of the extracellular-signal regulated kinase 1/2 mitogen activated protein kinase signal transduction pathway is essential for the HAMSC-mediated UVA protective effects. The decrease in ROS content additionally indicated that HAMSCs may exhibit the potential to treat oxidative stress-mediated UVA skin senescence in the future. PMID:28627622
8. Affine Fullerene C60 in a GS-Quasigroup
2014-01-01
It will be shown that the affine fullerene C60, which is defined as an affine image of buckminsterfullerene C60, can be obtained only by means of the golden section. The concept of the affine fullerene C60 will be constructed in a general GS-quasigroup using statements about the relationships between affine regular pentagons and affine regular hexagons. The geometrical interpretation of all discovered relations in a general GS-quasigroup will be given in the GS-quasigroup C(1/2(1+√5)).
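For readers unfamiliar with GS-quasigroups, a minimal numerical sketch may help. Assuming the standard linear model C(q) on the complex plane, with binary operation a∘b = q·a + (1−q)·b and q the golden section (the positive root of q² = q + 1 referenced in the record above), idempotency and mediality of the operation can be verified directly; this toy model is an assumption for illustration, not the paper's construction:

```python
PHI = (1 + 5 ** 0.5) / 2  # golden section: the positive root of q**2 = q + 1

def gs(a, b, q=PHI):
    """Binary operation of the assumed linear model C(q): a∘b = q*a + (1-q)*b."""
    return q * a + (1 - q) * b

# Defining golden-section relation: q^2 = q + 1
assert abs(PHI ** 2 - (PHI + 1)) < 1e-12
# Idempotency: a∘a = a
a, b, c, d = 1 + 2j, -3j, 0.5 + 0j, 2 - 1j
assert abs(gs(a, a) - a) < 1e-12
# Mediality: (a∘b)∘(c∘d) = (a∘c)∘(b∘d), automatic for linear operations
assert abs(gs(gs(a, b), gs(c, d)) - gs(gs(a, c), gs(b, d))) < 1e-12
print("ok")
```

Because q ≠ 0, 1, the operation is a quasigroup (both a∘x = c and x∘b = c are uniquely solvable), and the golden-section value of q is what ties the algebra to affine regular pentagons.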
9. Stereodivergent-at-metal synthesis of [60]fullerene hybrids
2017-02-13
Chiral fullerene-metal hybrids with complete control over the four stereogenic centers, including the absolute configuration of the metal atom, have been synthesized for the first time. The stereochemistry of the four chiral centers formed during [60]fullerene functionalization is the result of both the chiral catalysts employed and the diastereoselective addition of the metal complexes used (iridium, rhodium, or ruthenium). DFT calculations underpin the observed configurational stability at the metal center, which does not undergo an epimerization process. (copyright 2017 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)
10. Properties of Natural Rubber-Based Composites Containing Fullerene
Omar A. Al-Hartomy
2012-01-01
In this study the influence of fullerenes in concentrations from 0.5 to 1.5 phr on the vulcanization characteristics of the compounds, on the physicomechanical, dynamic, and dielectric properties, and on the thermal aging resistance of nanocomposites based on natural rubber has been investigated. The effect of the filler dispersion in the elastomeric matrix has also been investigated. Neat fullerene and the composites comprising it have been studied and characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM).
11. Multiscale simulation of water flow past a C540 fullerene
Walther, Jens Honore; Praprotnik, Matej; Kotsalis, Evangelos M.
2012-01-01
We present a novel, three-dimensional, multiscale algorithm for simulations of water flow past a fullerene. We employ the Schwarz alternating overlapping domain method to couple molecular dynamics (MD) of liquid water around the C540 buckyball with a Lattice–Boltzmann (LB) description of the Navier–Stokes flow.
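The Schwarz alternating overlapping-domain coupling named above can be illustrated on a toy problem: two overlapping subdomains of a 1D Laplace equation, each solved with an interface boundary condition taken from the other's latest solution, iterated until the overlap agrees. In the sketch below the MD and LB solvers of the paper are replaced by exact 1D subdomain solves; everything here is a hypothetical illustration of the coupling scheme, not the authors' code.

```python
def solve_laplace_1d(x_left, x_right, u_left, u_right, x):
    """Exact 1D Laplace solve on a subdomain: linear interpolation of the BCs."""
    return u_left + (u_right - u_left) * (x - x_left) / (x_right - x_left)

def schwarz_alternating(n_iter=50):
    # Global problem: u'' = 0 on [0, 1], u(0) = 0, u(1) = 1 (exact solution u(x) = x).
    # Overlapping subdomains [0, 0.6] and [0.4, 1]; each solve supplies the other's
    # interface boundary condition, as the MD and LB regions do in the coupling.
    u_at_06 = 0.0  # initial guess for the left domain's right boundary value
    for _ in range(n_iter):
        # Left solve on [0, 0.6] with BCs u(0) = 0, u(0.6) = u_at_06
        u_at_04 = solve_laplace_1d(0.0, 0.6, 0.0, u_at_06, 0.4)
        # Right solve on [0.4, 1] with BCs u(0.4) = u_at_04, u(1) = 1
        u_at_06 = solve_laplace_1d(0.4, 1.0, u_at_04, 1.0, 0.6)
    return u_at_04, u_at_06

u04, u06 = schwarz_alternating()
print(round(u04, 4), round(u06, 4))  # → 0.4 0.6, matching the exact solution u(x) = x
```

The iteration contracts geometrically (here by a factor 4/9 per sweep), which is why the overlap width matters: a larger overlap speeds convergence of the exchanged boundary data.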
12. Carboxylated fullerene at the oil/water interface
Li, R; Chai, Y; Jiang, Y; Ashby, PD; Toor, A; Russell, TP
2017-01-01
© 2017 American Chemical Society. The self-assembly of carboxylated fullerene with poly(styrene-b-2-vinylpyridine) (PS-b-P2VP) with different molecular weights, poly-2-vinylpyridine, and amine-terminated polystyrene, at the interface between toluene and water was investigated. For all values of the pH, the functionalized fullerene interacted with the polymers at the water/toluene interface, forming a nanoparticle network, reducing the interfacial tension. At pH values of 4.84 and 7.8, robust,...
13. Free Carrier Generation in Fullerene Acceptors and Its Effect on Polymer Photovoltaics
Burkhard, George F.; Hoke, Eric T.; Beiley, Zach M.; McGehee, Michael D.
2012-12-20
Early research on C60 led to the discovery that the absorption of photons with energy greater than 2.35 eV by bulk C60 produces free charge carriers at room temperature. We find that not only is this also true for many of the soluble fullerene derivatives commonly used in organic photovoltaics, but also that the presence of these free carriers has significant implications for the modeling, characterization, and performance of devices made with these materials. We demonstrate that the discrepancy between absorption and quantum efficiency spectra in P3HT:PCBM is due to recombination of such free carriers in large PCBM domains before they can be separated at a donor/acceptor interface. Since most theories assume that all free charges result from the separation of excitons at a donor/acceptor interface, the presence of free carrier generation in fullerenes can have a significant impact on the interpretation of data generated by numerous field-dependent techniques. © 2012 American Chemical Society.
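External quantum efficiency, as used in the spectra discussed above, is the ratio of extracted electron flux to incident photon flux, EQE(λ) = (J/q) / (P·λ/hc). A sketch with assumed illustrative numbers (not measurements from the record):

```python
def external_quantum_efficiency(j_photo, p_opt, wavelength_m):
    """EQE = electrons extracted per incident photon:
    (J/q) / (P * lambda / (h*c))."""
    q = 1.602176634e-19   # elementary charge, C
    h = 6.62607015e-34    # Planck constant, J*s
    c = 2.99792458e8      # speed of light, m/s
    electron_flux = j_photo / q           # electrons per m^2 per s
    photon_flux = p_opt * wavelength_m / (h * c)  # photons per m^2 per s
    return electron_flux / photon_flux

# Assumed illustrative values: 10 A/m^2 photocurrent under 50 W/m^2 of 550 nm light
eqe = external_quantum_efficiency(10.0, 50.0, 550e-9)
print(round(eqe, 3))
```

A gap between the absorptance and EQE at a given wavelength, as described for P3HT:PCBM, then quantifies carriers lost to recombination before extraction.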
14. Boomerang-type substitution reaction: reactivity of fullerene epoxides and a halofullerenol.
Jia, Zhenshan; Zhang, Xiang; Zhang, Gaihong; Huang, Shaohua; Fang, Hao; Hu, Xiangqing; Li, Yuliang; Gan, Liangbing; Zhang, Shiwei; Zhu, Daoben
2007-02-05
The Cs-symmetric fullerene chlorohydrin C60(Cl)(OH)(OOtBu)4 reacts with 4-dimethylaminopyridine (DMAP) and 1,4-diazabicyclo[2.2.2]octane (DABCO) to yield two isomers with the formula C60(O)(OOtBu)4 in good yields. These isomers differ with respect to the location of the epoxy functionality. The one from DMAP is Cs symmetric, whereas that from DABCO is C1 symmetric, with the epoxy group on the central pentagon. Two different mechanisms are proposed to explain the chemoselectivity of these reactions. The reaction with DMAP involves single-electron transfer as the key step; DMAP acts as the electron donor. A combination of an oxygen-atom shift and SN2′ processes (boomerang substitution) is responsible for the formation of the isomer with DABCO. Various related reactions support the proposed mechanisms. The structures of the new fullerene derivatives were determined by spectroscopy, single-crystal X-ray analysis, and chemical correlation experiments.
15. Fullerene C60 hydroxylated with peracetic acid and its radioprotective effects tested in vivo
Zemanova, Eva; Klouda, Karel
2011-01-01
A water-soluble C60 derivative (DF) was obtained by reacting C60 fullerene with peracetic acid followed by hydrolysis. The highest DF concentration achieved at room temperature and neutral pH was 443.2 mg/L. TEM and SEM observations and FTIR spectra were interpreted. The possibility of applying DF as a substance improving resistance to ionizing radiation (6X linear accelerator, 10-70 Gy) was investigated in vivo using juvenile (2.5 months) Danio rerio without sex selection. A prolonged toxicity test gave evidence that an aqueous DF solution of 147 mg/L is not toxic to this fish species in the long run. A radioprotective effect was demonstrated for a five-day exposure to this solution prior to irradiation. The survival times after irradiation with 10 to 70 Gy doses were up to 70% longer. The LD50 values for various times of survival roughly doubled. The effect is preventive rather than curative and is associated with the capability of fullerenes to eliminate free radicals and oxidants formed by radiolysis of water. (orig.)
16. Free Carrier Generation in Fullerene Acceptors and Its Effect on Polymer Photovoltaics
Burkhard, George F.; Hoke, Eric T.; Beiley, Zach M.; McGehee, Michael D.
2012-01-01
Early research on C60 led to the discovery that the absorption of photons with energy greater than 2.35 eV by bulk C60 produces free charge carriers at room temperature. We find that not only is this also true for many of the soluble fullerene derivatives commonly used in organic photovoltaics, but also that the presence of these free carriers has significant implications for the modeling, characterization, and performance of devices made with these materials. We demonstrate that the discrepancy between absorption and quantum efficiency spectra in P3HT:PCBM is due to recombination of such free carriers in large PCBM domains before they can be separated at a donor/acceptor interface. Since most theories assume that all free charges result from the separation of excitons at a donor/acceptor interface, the presence of free carrier generation in fullerenes can have a significant impact on the interpretation of data generated by numerous field-dependent techniques. © 2012 American Chemical Society.
17. Protective
Wessam M. Abdel-Wahab
2013-10-01
Full Text Available Many active ingredients extracted from herbal and medicinal plants are extensively studied for their beneficial effects. Antioxidant activity and free radical scavenging properties of thymoquinone (TQ) have been reported. The present study evaluated the possible protective effects of TQ against the toxicity and oxidative stress of sodium fluoride (NaF) in the liver of rats. Rats were divided into four groups: the first group served as the control group and was administered distilled water, whereas the NaF group received NaF orally at a dose of 10 mg/kg for 4 weeks; the TQ group was administered TQ orally at a dose of 10 mg/kg for 5 weeks; and the NaF-TQ group was first given TQ for 1 week and then administered 10 mg/kg/day NaF together with 10 mg/kg TQ for 4 weeks. Rats intoxicated with NaF showed a significant increase in lipid peroxidation, whereas the level of reduced glutathione (GSH) and the activities of superoxide dismutase (SOD), catalase (CAT), glutathione S-transferase (GST) and glutathione peroxidase (GPx) were reduced in hepatic tissues. The proper functioning of the liver was also disrupted, as indicated by alterations in the measured liver function indices and biochemical parameters. TQ supplementation counteracted the NaF-induced hepatotoxicity, probably due to its strong antioxidant activity. In conclusion, the results obtained clearly indicated the role of oxidative stress in the induction of NaF toxicity and suggested hepatoprotective effects of TQ against the toxicity of fluoride compounds.
18. Endogenous protection derived from activin A/Smads transduction loop stimulated via ischemic injury in PC12 cells.
Mang, Jing; Mei, Chun-Li; Wang, Jiao-Qi; Li, Zong-Shu; Chu, Ting-Ting; He, Jin-Ting; Xu, Zhong-Xin
2013-10-17
Activin A (ActA), a member of the transforming growth factor-beta (TGF-b) superfamily, affects many cellular processes, including ischemic stroke. Though the neuroprotective effects of exogenous ActA on oxygen-glucose deprivation (OGD) injury have already been reported by us, the endogenous role of ActA remains poorly understood. To further define the role and mechanism of endogenous ActA and its signaling in response to acute ischemic damage, we used an OGD model in PC12 cells to simulate ischemic injury on neurons in vitro. Cells were pre-treated with a monoclonal antibody against activin receptor type IIA (ActRII-Ab). We found that ActRII-Ab augments ischemic injury in PC12 cells. Further, the extracellular secretion of ActA as well as phosphorylation of smad3 in PC12 cells was also up-regulated by OGD but suppressed by ActRII-Ab. Taken together, our results show that ActRII-Ab may augment ischemic injury by blocking transmembrane signal transduction of ActA, which confirms the existence of endogenous neuroprotective effects derived from the ActA/Smads pathway. ActRIIA plays an important role in transferring neuronal protective signals into the cell. It is highly possible that ActA transmembrane signaling is part of the positive feedback loop for extracellular ActA secretion.
19. Endogenous Protection Derived from Activin A/Smads Transduction Loop Stimulated via Ischemic Injury in PC12 Cells
Zhong-Xin Xu
2013-10-01
Full Text Available Activin A (ActA), a member of the transforming growth factor-beta (TGF-b) superfamily, affects many cellular processes, including ischemic stroke. Though the neuroprotective effects of exogenous ActA on oxygen-glucose deprivation (OGD) injury have already been reported by us, the endogenous role of ActA remains poorly understood. To further define the role and mechanism of endogenous ActA and its signaling in response to acute ischemic damage, we used an OGD model in PC12 cells to simulate ischemic injury on neurons in vitro. Cells were pre-treated with a monoclonal antibody against activin receptor type IIA (ActRII-Ab). We found that ActRII-Ab augments ischemic injury in PC12 cells. Further, the extracellular secretion of ActA as well as phosphorylation of smad3 in PC12 cells was also up-regulated by OGD but suppressed by ActRII-Ab. Taken together, our results show that ActRII-Ab may augment ischemic injury by blocking transmembrane signal transduction of ActA, which confirms the existence of endogenous neuroprotective effects derived from the ActA/Smads pathway. ActRIIA plays an important role in transferring neuronal protective signals into the cell. It is highly possible that ActA transmembrane signaling is part of the positive feedback loop for extracellular ActA secretion.
20. Employing Escherichia coli-derived outer membrane vesicles as an antigen delivery platform elicits protective immunity against Acinetobacter baumannii infection
Huang, Weiwei; Wang, Shijie; Yao, Yufeng; Xia, Ye; Yang, Xu; Li, Kui; Sun, Pengyan; Liu, Cunbao; Sun, Wenjia; Bai, Hongmei; Chu, Xiaojie; Li, Yang; Ma, Yanbing
2016-11-01
Outer membrane vesicles (OMVs) have proven to be highly immunogenic and induced an immune response against bacterial infection in human clinics and animal models. We sought to investigate whether engineered OMVs can be a feasible antigen-delivery platform for efficiently inducing specific antibody responses. In this study, Omp22 (an outer membrane protein of A. baumannii) was displayed on E. coli DH5α-derived OMVs (Omp22-OMVs) using recombinant gene technology. The morphological features of Omp22-OMVs were similar to those of wild-type OMVs (wtOMVs). Immunization with Omp22-OMVs induced high titers of Omp22-specific antibodies. In a murine sepsis model, Omp22-OMV immunization significantly protected mice from lethal challenge with a clinically isolated A. baumannii strain, which was evidenced by the increased survival rate of the mice, the reduced bacterial burdens in the lung, spleen, liver, kidney, and blood, and the suppressed serum levels of inflammatory cytokines. In vitro opsonophagocytosis assays showed that antiserum collected from Omp22-OMV-immunized mice had bactericidal activity against clinical isolates, which was partly specific antibody-dependent. These results strongly indicated that engineered OMVs could display a whole heterologous protein (~22 kDa) on the surface and effectively induce specific antibody responses, and thus OMVs have the potential to be a feasible vaccine platform.
1. Thioredoxin-1 Protects Bone Marrow-Derived Mesenchymal Stromal Cells from Hyperoxia-Induced Injury In Vitro
Zhang, Lei; Wang, Jin; Zeng, Lingkong; Li, Qiong; Liu, Yalan
2018-01-01
Background The poor survival rate of mesenchymal stromal cells (MSC) transplanted into recipient lungs greatly limits their therapeutic efficacy for diseases like bronchopulmonary dysplasia (BPD). The aim of this study is to evaluate the effect of thioredoxin-1 (Trx-1) overexpression on improving the potential for bone marrow-derived mesenchymal stromal cells (BMSCs) to confer resistance against hyperoxia-induced cell injury. Methods 80% O2 was used to imitate the microenvironment surrounding transplanted cells in hyperoxia-induced lung injury in vitro. BMSC proliferation and apoptotic rates and the levels of reactive oxygen species (ROS) were measured. The effects of Trx-1 overexpression on the levels of antioxidants and growth factors were investigated. We also investigated the activation of apoptosis signal-regulating kinase 1 (ASK1) and p38 mitogen-activated protein kinase (MAPK). Results Trx-1 overexpression significantly reduced hyperoxia-induced BMSC apoptosis and increased cell proliferation. We demonstrated that Trx-1 overexpression upregulated the levels of superoxide dismutase and glutathione peroxidase as well as downregulated the production of ROS. Furthermore, we illustrated that Trx-1 protected BMSCs against hyperoxic injury by decreasing the ASK1/p38 MAPK activation rate. Conclusion These results demonstrate that Trx-1 overexpression improved the ability of BMSCs to counteract hyperoxia-induced injury, thus increasing their potential to treat hyperoxia-induced lung diseases such as BPD. PMID:29599892
2. Immunotoxicity of nanoparticles: a computational study suggests that CNTs and C60 fullerenes might be recognized as pathogens by Toll-like receptors
Turabekova, M.; Rasulev, B.; Theodore, M.; Jackman, J.; Leszczynska, D.; Leszczynski, J.
2014-03-01
Over the last decade, a great deal of attention has been devoted to studying the inflammatory response upon exposure to multi/single-walled carbon nanotubes (CNTs) and different fullerene derivatives. In particular, carbon nanoparticles are reported to provoke substantial inflammation in alveolar and bronchial epithelial cells, epidermal keratinocytes, cultured monocyte-macrophage cells, etc. We suggest a hypothetical model providing a potential mechanistic explanation for the immune and inflammatory responses observed upon exposure to carbon nanoparticles. Specifically, we performed a theoretical study to analyze CNT and C60 fullerene interactions with the available X-ray structures of Toll-like receptor (TLR) homo- and hetero-dimer extracellular domains. This assumption was based on the fact that, like the known TLR ligands, both CNTs and fullerenes induce in cells the secretion of certain inflammatory protein mediators, such as interleukins and chemokines. These proteins are observed within inflammation downstream processes resulting from the ligand-molecule-dependent inhibition or activation of TLR-induced signal transduction. Our computational studies have shown that the internal hydrophobic pockets of some TLRs might be capable of binding small-sized carbon nanostructures (5,5 armchair SWCNTs containing 11 carbon atom layers and C60 fullerene). High binding scores and minor structural alterations induced in TLR ectodomains upon binding C60 and CNTs further supported our hypothesis. Additionally, the proposed hypothesis is strengthened by indirect experimental findings indicating that CNTs and fullerenes induce an excessive expression of specific cytokines and chemokines (i.e. IL-8 and MCP1).
3. Fullerene-Based Symmetry in Hibiscus rosa-sinensis Pollen
Andrade, Kleber; Guerra, Sara; Debut, Alexis
2014-01-01
The fullerene molecule belongs to the so-called super materials. The compound is interesting due to its spherical configuration, in which atoms occupy positions forming a mechanically stable structure. We first demonstrate that pollen of Hibiscus rosa-sinensis has a strong symmetry in the distribution of its spines over the spherical grain. These spines form spherical hexagons and pentagons. The distance between atoms in fullerene is explained by applying principles of flat, spherical, and spatial geometry, based on Euclid's "Elements", as well as logic algorithms. Measurements of the pollen grain take into account that the true spine lengths, and consequently the real distances between them, are measured to the periphery of each grain. Algorithms are developed to recover the spatial effects lost in 2D photos. There is a clear correspondence between the position of atoms in the fullerene molecule and the position of spines in the pollen grain. In the fullerene the separation gives the idea of equal bond lengths, which implies perfectly distributed electron clouds, while in the pollen grain we suggest that the spines, being equally spaced, carry an electrical charge originating in forces involved in the pollination process. PMID:25003375
4. Ultimate performance of polymer: Fullerene bulk heterojunction tandem solar cells
Kotlarski, J.D.; Blom, P.W.M.
2011-01-01
We present model calculations to explore the potential of polymer:fullerene tandem solar cells. As an approach we use a combined optical and electrical device model, where the absorption profiles are used as the starting point for the numerical current-voltage calculations. With this model a maximum...
5. Bipolar polaron pair recombination in polymer/fullerene solar cells
Kupijai, Alexander J.; Behringer, Konstantin M.; Schaeble, Florian G.
2015-01-01
We present a study of the rate-limiting spin-dependent charge-transfer processes in different polymer/fullerene bulk-heterojunction solar cells at 10 K. Observing central spin-locking signals in pulsed electrically detected magnetic resonance and an inversion of Rabi oscillations in multifrequency...
6. Fullerene-based Anchoring Groups for Molecular Electronics
Martin, Christian A.; Ding, Dapeng; Sørensen, Jakob Kryger
2008-01-01
We present results on a new fullerene-based anchoring group for molecular electronics. Using lithographic mechanically controllable break junctions in vacuum we have determined the conductance and stability of single-molecule junctions of 1,4-bis(fullero[c]pyrrolidin-1-yl)benzene. The compound can...
7. Local magnetism in rare-earth metals encapsulated in fullerenes
De Nadaï, C.; Mirone, A.; Dhesi, S. S.; Bencok, P.; Brookes, N. B.; Marenne, I.; Rudolf, P.; Tagmatarchis, N.; Shinohara, H.; Dennis, T. J. S.
Local magnetic properties of rare-earth (RE) atoms encapsulated in fullerenes have been characterized using x-ray magnetic circular dichroism and x-ray absorption spectroscopy (XAS). The orbital and spin contributions of the magnetic moment have been determined through sum rules and theoretical
8. APPLICATION FULLERENE FOR IDENTIFICATION OF MEAT PRODUCTS CONTAINING KLENBUTEROL
G. V. Popov
2014-01-01
Full Text Available Summary. Under modern conditions, many developing livestock complexes apply various chemical additives in cattle feeding. One such preparation is clenbuterol. Clenbuterol is a β2-adrenostimulator belonging to the group of β-agonists, which stimulate the growth of muscle mass and regulate the ratio of fatty to muscular tissue in farmed animals and poultry. Studies published in Russia recommend applying clenbuterol as a growth factor in cattle raising; however, the risk that residual amounts of the preparation in animal products pose to consumer health was not assessed. We conducted research on the properties of fullerene and clenbuterol and their possible interactions with each other. To identify clenbuterol in raw meat, a Prato synthesis was carried out, functionalizing C60 and C70 fullerenes via 1,3-dipolar cycloaddition of an azomethine ylide across the C=C bonds of the fullerene core. The reaction proceeded with the formation of a dark-colored precipitate, whose analysis proved it to be a product of the interaction of the substances investigated. This experiment makes it possible to identify the clenbuterol-fullerene adduct.
9. Fullerene nanoparticles in soil: Analysis, occurrence and fate
Carboni, A.
2016-01-01
Fullerenes are carbon-based nanomaterials that can occur in the environment due to both natural events and human production. Recently, the increasing use in novel nanotechnologies raised concern for the possible adverse effects on humans and the environment. However, the assessment is complicated by
10. Raman spectroelectrochemistry of ordered C-60 fullerene layers
Krause, M.; Deutsch, D.; Dunsch, L.; Janda, Pavel; Kavan, Ladislav
2005-01-01
Roč. 13, - (2005), s. 159-166 ISSN 1536-383X R&D Projects: GA AV ČR IAA4040306 Institutional research plan: CEZ:AV0Z40400503 Keywords : fullerenes * thin films * nanostructuring * Raman spectroscopy Subject RIV: CG - Electrochemistry Impact factor: 0.776, year: 2005
11. Photoconducting properties of fullerene derivatized with a biphenil moiety
Corvis, Y.; Trzcinska, K.; Rink, R.; Bílková, Petra; Gorecka, E.; Bilewicz, R.; Rogalska, E.
2006-01-01
Roč. 80, č. 3 (2006), s. 1899-1907 ISSN 0137-5083 Grant - others:Research Training Network(XE) HPRN-CT-2002-00171 Institutional research plan: CEZ:AV0Z10100520 Keywords : fullerene * photoconductivity Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.491, year: 2006
12. Protection against Schistosoma mansoni infection using a Fasciola hepatica-derived fatty acid binding protein from different delivery systems.
Vicente, Belén; López-Abán, Julio; Rojas-Caraballo, Jose; del Olmo, Esther; Fernández-Soto, Pedro; Muro, Antonio
2016-04-18
Protection is obtained by using a Fasciola hepatica-derived FABP protein against Schistosoma mansoni infection. Native FABP is more effective than both recombinant systems. This could be due to post-translational modifications, the FABP isoform, or changes in the recombinant proteins.
13. Protective effect of human umbilical cord-derived mesenchymal stem cells against severe acute pancreatitis in rats
Dong-ye WU
2017-06-01
Full Text Available Objective To study the protective effects of human umbilical cord-derived mesenchymal stem cells (ucMSCs) against severe acute pancreatitis (SAP) in rats. Methods A total of 135 Sprague-Dawley male rats were randomly divided into a Sham group, an SAP group and an SAP+ucMSCs group (45 each). SAP+ucMSCs group: severe acute pancreatitis was induced by injecting 5% sodium taurocholate (0.1 ml/100 g) into the common biliopancreatic duct, and then CM-DiI-labeled ucMSCs at 1×10^7 cells/kg were injected via the tail vein. All the rats were sacrificed 12, 24 and 72 hours after SAP. The 72 h death rate was counted. Pathological changes in the pancreas were detected by HE staining and a pathological score was graded. ucMSC colonization was observed by fluorescence microscopy. The serum levels of amylase, lipase, TNF-α, IL-1β, IL-4 and IL-10 were determined by ELISA. Results ucMSCs colonized the injured area of pancreatic tissue, the 72 h death rate was reduced, and the serum amylase and lipase were also reduced significantly. Moreover, ucMSCs significantly reduced the pathological score of the pancreas and the levels of proinflammatory cytokines (TNF-α and IL-1β), while the levels of anti-inflammatory cytokines (IL-4 and IL-10) were increased. Conclusion Transplantation of ucMSCs can reduce the severity of pancreatic injury and inflammation in SAP rats. DOI: 10.11855/j.issn.0577-7402.2017.05.03
14. Protecting Neural Structures and Cognitive Function During Prolonged Space Flight by Targeting the Brain Derived Neurotrophic Factor Molecular Network
Schmidt, M. A.; Goodwin, T. J.
2014-01-01
Brain derived neurotrophic factor (BDNF) is the main activity-dependent neurotrophin in the human nervous system. BDNF is implicated in production of new neurons from dentate gyrus stem cells (hippocampal neurogenesis), synapse formation, sprouting of new axons, growth of new axons, sprouting of new dendrites, and neuron survival. Alterations in the amount or activity of BDNF can produce significant detrimental changes to cortical function and synaptic transmission in the human brain. This can result in glial and neuronal dysfunction, which may contribute to a range of clinical conditions, spanning a number of learning, behavioral, and neurological disorders. There is an extensive body of work surrounding the BDNF molecular network, including BDNF gene polymorphisms, methylated BDNF gene promoters, multiple gene transcripts, varied BDNF functional proteins, and different BDNF receptors (whose activation differentially drive the neuron to neurogenesis or apoptosis). BDNF is also closely linked to mitochondrial biogenesis through PGC-1alpha, which can influence brain and muscle metabolic efficiency. BDNF AS A HUMAN SPACE FLIGHT COUNTERMEASURE TARGET Earth-based studies reveal that BDNF is negatively impacted by many of the conditions encountered in the space environment, including oxidative stress, radiation, psychological stressors, sleep deprivation, and many others. A growing body of work suggests that the BDNF network is responsive to a range of diet, nutrition, exercise, drug, and other types of influences. This section explores the BDNF network in the context of 1) protecting the brain and nervous system in the space environment, 2) optimizing neurobehavioral performance in space, and 3) reducing the residual effects of space flight on the nervous system on return to Earth
15. Protective effect of an egg yolk-derived immunoglobulin (IgY) against Prevotella intermedia-mediated gingivitis.
Hou, Y-Y; Zhen, Y-H; Wang, D; Zhu, J; Sun, D-X; Liu, X-T; Wang, H-X; Liu, Y; Long, Y-Y; Shu, X-H
2014-04-01
To investigate the effects of an egg yolk-derived immunoglobulin (IgY) specific to Prevotella intermedia in vitro and in vivo. An IgY specific to P. intermedia was produced by immunizing hens with formaldehyde-inactivated P. intermedia and showed high titres when subjected to an ELISA. The obtained IgY inhibited the growth of P. intermedia in a dose-dependent manner at concentrations from 1 to 20 mg ml(-1) in Center for Disease Control and Prevention liquid medium. Forty rats were challenged with P. intermedia on the gingivae and then randomly divided into four groups, which were syringed respectively with phosphate-buffered saline, 1 mg ml(-1) of tinidazole, 20 mg ml(-1) of nonspecific IgY and 20 mg ml(-1) of the IgY specific to P. intermedia at a dosage of 300 μl per day. Gingival index (GI), plaque index (PI), bleeding on probing (BOP), white blood cell (WBC) counts and histopathological slides of the gums were assessed after treatment for 15 days. The gingivitis rats treated with the IgY specific to P. intermedia showed significantly decreased GI, PI, BOP and WBC (P < 0.05) in P. intermedia-mediated gingivitis. A new immunoglobulin specific to P. intermedia was developed from egg yolk. This specific IgY can dose-dependently inhibit the growth of P. intermedia and protect rats from gingivitis induced by P. intermedia. The new IgY has potential for the treatment of P. intermedia-mediated gingivitis. © 2013 The Society for Applied Microbiology.
16. [60]Fullerene Displacement from (Dihapto-Buckminster-Fullerene) Pentacarbonyl Tungsten(0): An Experiment for the Inorganic Chemistry Laboratory, Part II
Cortes-Figueroa, Jose E.; Moore-Russo, Deborah A.
2006-01-01
The kinetics experiments on the ligand-C60 exchange reactions on (dihapto-[60]fullerene)pentacarbonyl tungsten(0), (η2-C60)W(CO)5, form an educational activity for the inorganic chemistry laboratory that promotes graphical thinking as well as the understanding of kinetics, mechanisms, and the…
17. Two-chamber configuration of Bio-Nano electron cyclotron resonance ion source for fullerene modification
Uchida, T., E-mail: uchida-t@toyo.jp [Bio-Nano Electronics Research Centre, Toyo University, Kawagoe 350-8585 (Japan); Graduate School of Interdisciplinary New Science, Toyo University, Kawagoe 350-8585 (Japan); Rácz, R.; Biri, S. [Institute for Nuclear Research (Atomki), Hungarian Academy of Sciences, Bem tér 18/C, H-4026 Debrecen (Hungary); Muramatsu, M.; Kitagawa, A. [National Institute of Radiological Sciences (NIRS), Chiba 263-8555 (Japan); Kato, Y. [Graduate School of Engineering, Osaka University, Suita 565-0871 (Japan); Yoshida, Y. [Bio-Nano Electronics Research Centre, Toyo University, Kawagoe 350-8585 (Japan); Faculty of Science and Engineering, Toyo University, Kawagoe 350-8585 (Japan)
2016-02-15
We report on the modification of fullerenes with iron and chlorine using two individually controllable plasmas in the Bio-Nano electron cyclotron resonance ion source (ECRIS). One of the plasmas is composed of fullerene and the other one is composed of iron and chlorine. The online ion beam analysis allows one to investigate the rate of the vapor-phase collisional modification process in the ECRIS, while the offline analyses (e.g., liquid chromatography-mass spectrometry) of the materials deposited on the plasma chamber can give information on the surface-type process. Both analytical methods show the presence of modified fullerenes such as fullerene-chlorine, fullerene-iron, and fullerene-chlorine-iron.
18. Conjugation-promoted reaction of open-cage fullerene: a density functional theory study.
Guo, Yong; Yan, Jingjing; Khashab, Niveen M
2012-02-01
Density functional theory calculations are performed to study the addition mechanism of electron-rich moieties such as triethyl phosphite to a carbonyl group on the rim of a fullerene orifice. Three possible reaction channels have been investigated. The results show that the reaction of a carbonyl group on a fullerene orifice with triethyl phosphite most likely proceeds along the classical Abramov reaction; however, the classical product is not stable and is converted into the experimental product. An attack on a fullerene carbonyl carbon will trigger a rearrangement of the phosphate group to the carbonyl oxygen, as the conversion transition state is stabilized by fullerene conjugation. This work provides new insight into the reactivity of open-cage fullerenes, which may prove helpful in designing new switchable fullerene systems. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
19. Elemental and Microscopic Analysis of Naturally Occurring C-O-Si Hetero-Fullerene-Like Structures.
2015-03-01
Carbon exhibits an ability to form a wide range of structures in nature. Under favorable conditions, carbon condenses to form hollow, spheroidal fullerenes in an inert atmosphere. Using high-resolution FESEM, we have revealed the existence of giant hetero-fullerene-like structures in natural form. Clear, distinct features of connected hexagons and pentagons were observed. An energy-dispersive X-ray analysis depth profile of the natural fullerene structures indicates that Russian-doll-like configurations composed of C, O, and Si rings exist in nature. The analysis is based on an outstanding molecular feature found in the size fraction of aerosols having diameters of 150 nm to 1.0 µm. The fullerene-like structures, which are ~150 nm in diameter, are observed in large numbers. To the best of our knowledge, this is the first direct detailed observation of natural fullerene-like structures. This article reports the inadvertent observation of naturally occurring hetero-fullerene-like structures in the Arctic.
20. Charge transfer complex states in diketopyrrolopyrrole polymers and fullerene blends: Implications for organic solar cell efficiency
Moghe, D.; Yu, P.; Kanimozhi, C.; Patil, S.; Guha, S.
2011-12-01
The spectral photocurrent characteristics of two donor-acceptor diketopyrrolopyrrole (DPP)-based copolymers (PDPP-BBT and TDPP-BBT) blended with a fullerene derivative [6,6]-phenyl C61-butyric acid methyl ester (PCBM) were studied using Fourier-transform photocurrent spectroscopy (FTPS) and monochromatic photocurrent (PC) method. PDPP-BBT:PCBM shows the onset of the lowest charge transfer complex (CTC) state at 1.42 eV, whereas TDPP-BBT:PCBM shows no evidence of the formation of a midgap CTC state. The FTPS and PC spectra of P3HT:PCBM are also compared. The larger singlet state energy difference of TDPP-BBT and PCBM compared to PDPP-BBT/P3HT and PCBM obliterates the formation of a midgap CTC state resulting in an enhanced photovoltaic efficiency over PDPP-BBT:PCBM.
1. C60H- an intermediate in the photochemical reduction of C60 fullerene with triethylamine
Stasko, A.; Brezova, V.; Neudeck, A.; Bartl, A.; Dunsch, L.
1999-01-01
Systematic investigations of the photoreduction of C60 fullerene and its derivatives using triethylamine and TiO2 donors, together with other techniques, verified the formation of C60•−. Upon formation of the mono-anion, a narrow EPR line A (peak-to-peak linewidth 0.1 mT) was observed. During continuous irradiation, line A is replaced by line B, having gB = 2.0006 and a peak-to-peak linewidth of 0.04 mT, which also vanished under prolonged irradiation; lines A and B reappeared after stopping the irradiation. This unusual behaviour was re-investigated in analogous EPR-NIR experiments using a rapid NIR spectrometer. The band at 996 nm was assigned to C60H−. The mechanism of C60H− formation is discussed.
2. Computing the Reverse Eccentric Connectivity Index for Certain Family of Nanocone and Fullerene Structures
Wei Gao
2016-01-01
Full Text Available A large number of previous works reveal that there exist strong connections between the chemical characteristics of chemical compounds and drugs (e.g., melting point and boiling point) and their topological structures. Chemical indices introduced on these molecular topological structures can help chemists and material and medical scientists to better grasp chemical reactivity, biological activity, and physical features. Hence, the study of topological indices of material structures can make up for the shortcomings of experiments and provide theoretical evidence in material engineering. In this paper, we determine the reverse eccentric connectivity index of one family of pentagonal carbon nanocones PCN5[n] and three infinite families of fullerenes C12n+2, C12n+4, and C18n+10 based on graph analysis and computational derivation, and these results can offer a theoretical basis for material properties.
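As an illustration of how eccentricity-based indices like the one above are evaluated, here is a small Python sketch. It assumes one definition of the reverse eccentric connectivity index found in the literature, ξre(G) = Σv S(v)/ε(v), where S(v) is the sum of the degrees of the neighbours of v and ε(v) is its eccentricity; the abstract does not reproduce the paper's exact formula, so treat this definition as an assumption. The 6-cycle used below is only an illustrative stand-in for the nanocone and fullerene graphs.

```python
from collections import deque

def eccentricity(adj, v):
    """Eccentricity of vertex v: the maximum BFS distance from v
    to any other vertex of the (connected) graph."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())

def reverse_eccentric_connectivity(adj):
    """Assumed definition: sum over vertices of S(v) / ecc(v),
    where S(v) is the sum of the degrees of v's neighbours."""
    total = 0.0
    for v in adj:
        s_v = sum(len(adj[w]) for w in adj[v])
        total += s_v / eccentricity(adj, v)
    return total

# Cycle C6: every vertex has S(v) = 4 and eccentricity 3,
# so the index is 6 * 4 / 3 = 8.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(round(reverse_eccentric_connectivity(c6), 6))
```

For the fullerene families C12n+2, C12n+4 and C18n+10 one would substitute the corresponding adjacency lists; the BFS-based eccentricity computation costs O(V·(V+E)) overall.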
3. Plasmon-plasmon coupling in nested fullerenes: photoexcitation of interlayer plasmonic cross modes
McCune, Mathew A; De, Ruma; Chakraborty, Himadri S; Madjet, Mohamed E; Manson, Steven T
2011-01-01
Considering the photoionization of a two-layer fullerene-onion system, C60–C240, strong plasmonic couplings between the nested fullerenes are demonstrated. The resulting hybridization produces four cross-over plasmons generated from the bonding and antibonding mixing of excited charge clouds of the individual fullerenes. This suggests the possibility of designing buckyonions exhibiting plasmon resonances with specified properties and may motivate future research to modify the resonances with encaged atoms, molecules or clusters. (fast track communication)
4. Organic–Inorganic Nanostructure Architecture via Directly Capping Fullerenes onto Quantum Dots
Kim Jonggi
2011-01-01
Full Text Available Abstract A new form of fullerene-capped CdSe nanoparticles (PCBA-capped CdSe NPs, using carboxylate ligands with [60]fullerene capping groups that provides an effective synthetic methodology to attach fullerenes noncovalently to CdSe, is presented for usage in nanotechnology and photoelectric fields. Interestingly, either the internal charge transfer or the energy transfer in the hybrid material contributes to photoluminescence (PL quenching of the CdSe moieties.
5. On the Evaporation Kinetics of [60] Fullerene in Aromatic Organic Solvents
Amer, Maher S.; Wang, Wenhu; Kollins, Kaitlin N; Altalebi, Hasanain; Schwingenschlö gl, Udo
2018-01-01
We investigate the effect of C60 fullerene nanospheres on the evaporation kinetics of a number of aromatic solvents with different levels of molecular association, namely, benzene, toluene, and chlorobenzene. The dependence of the evaporation rate on the fullerene concentration is not monotonic but rather exhibits maxima and minima. The results strongly support the notion of molecular structuring within the liquid solvent controlled by the nature of fullerene/solvent interaction and the level of molecular association within the solvent itself.
6. Centrosymmetric Graphs And A Lower Bound For Graph Energy Of Fullerenes
Katona Gyula Y.
2014-11-01
Full Text Available The energy of a molecular graph G is defined as the summation of the absolute values of the eigenvalues of adjacency matrix of a graph G. In this paper, an infinite class of fullerene graphs with 10n vertices, n ≥ 2, is considered. By proving centrosymmetricity of the adjacency matrix of these fullerene graphs, a lower bound for its energy is given. Our method is general and can be extended to other class of fullerene graphs.
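As a quick illustration of the energy definition (not the paper's fullerene computation): for an n-cycle the adjacency eigenvalues are known in closed form, 2cos(2πk/n), so the graph energy can be checked by hand with the standard library alone:

```python
import math

def cycle_energy(n):
    """Graph energy of the n-cycle: sum of |lambda_i| over the known
    adjacency eigenvalues 2*cos(2*pi*k/n), k = 0..n-1."""
    return sum(abs(2 * math.cos(2 * math.pi * k / n)) for k in range(n))

# C6 eigenvalues are 2, 1, -1, -2, -1, 1, so the energy is 8
print(round(cycle_energy(6), 6))
```

For a general fullerene graph no such closed form exists, which is why structural properties like the centrosymmetricity exploited in the paper are used to bound the energy instead.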
7. Organic-Inorganic Nanostructure Architecture via Directly Capping Fullerenes onto Quantum Dots.
Lee, Jae Kwan; Kim, Jonggi; Yang, Changduk
2011-12-01
A new form of fullerene-capped CdSe nanoparticles (PCBA-capped CdSe NPs), using carboxylate ligands with [60]fullerene capping groups that provides an effective synthetic methodology to attach fullerenes noncovalently to CdSe, is presented for usage in nanotechnology and photoelectric fields. Interestingly, either the internal charge transfer or the energy transfer in the hybrid material contributes to photoluminescence (PL) quenching of the CdSe moieties.
8. On the Evaporation Kinetics of [60] Fullerene in Aromatic Organic Solvents
Amer, Maher S.
2018-04-03
We investigate the effect of C60 fullerene nanospheres on the evaporation kinetics of a number of aromatic solvents with different levels of molecular association, namely, benzene, toluene, and chlorobenzene. The dependence of the evaporation rate on the fullerene concentration is not monotonic but rather exhibits maxima and minima. The results strongly support the notion of molecular structuring within the liquid solvent controlled by the nature of fullerene/solvent interaction and the level of molecular association within the solvent itself.
9. Ciliary derived neurotrophic factor protects oligodendrocytes against radiation induced damage in vitro by a mechanism independent of a proliferative effect
Evans, Andrew J.; Mabie, Peter C.; Kessler, Jack A.; Vikram, Bhadrasain
1997-01-01
10. Adverse effects of fullerenes (nC{sub 60}) spiked to sediments on Lumbriculus variegatus (Oligochaeta)
Pakarinen, K., E-mail: kukka.tervonen@uef.fi [Department of Biology, University of Eastern Finland, 80101 Joensuu (Finland); Petersen, E.J. [Material Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD (United States); Leppaenen, M.T.; Akkanen, J.; Kukkonen, J.V.K. [Department of Biology, University of Eastern Finland, 80101 Joensuu (Finland)
2011-12-15
Effects of fullerene-spiked sediment on a benthic organism, Lumbriculus variegatus (Oligochaeta), were investigated. Survival, growth, reproduction, and feeding rates were measured to assess possible adverse effects of fullerene agglomerates produced by water stirring and then spiked to a natural sediment. L. variegatus were exposed to 10 and 50 mg fullerenes/kg sediment dry mass for 28 d. These concentrations did not impact worm survival or reproduction compared to the control. Feeding activities were slightly decreased for both concentrations indicating fullerenes' disruptive effect on feeding. Depuration efficiency decreased in the high concentration only. Electron and light microscopy and extraction of the worm fecal pellets revealed fullerene agglomerates in the gut tract but not absorption into gut epithelial cells. Micrographs also indicated that 16% of the epidermal cuticle fibers of the worms were not present in the 50 mg/kg exposures, which may make worms susceptible to other contaminants. - Highlights: > Effects of fullerene-spiked sediment on black worms were investigated. > Survival, growth, reproduction, and feeding rates were measured. > Exposure did not impact worm survival or reproduction. > Feeding rates and depuration efficiency were decreased. > Worms transferred fullerenes from the sediment to the sediment surface. - Exposure to fullerene-spiked sediment decreased black worms' feeding and depuration efficiency, but fullerenes did not appear to be absorbed into the microvilli.
11. Electronic transport properties aspects and structure of polymer-fullerene based organic semiconductors for photovoltaic devices
2006-01-01
A series of polystyrene (PS) and fullerene (C 60 ) based thin films containing from 23 to 60 wt.% fullerene were investigated. Initially, the films were characterised by Fourier Transform Infrared (FTIR) spectroscopy, where the characteristic absorption bands of both the fullerene and the polystyrene were revealed, as well as additional absorption bands due to the fullerene grafted to polystyrene. The relative peak intensities provided qualitative information on the films' stoichiometry in terms of the amount of fullerene grafted to polystyrene. The optical properties of the films were investigated by spectroscopic ellipsometry (SE). It was found that increasing the amount of fullerene grafted to polystyrene results in an increase of the absorption coefficient α, refractive index n, extinction coefficient k, as well as the dielectric constant ε ∞ , which lies between 2.4 and 2.8 for the lower and higher fullerene contents, respectively. The films' J-V characteristics, of space-charge-limited-current (SCLC) type, showed increased currents with increasing fullerene content. The electron mobility was extracted and found to increase with increasing fullerene amount, from 4 x 10 -9 cm 2 /V s to 2 x 10 -7 cm 2 /V s.
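Mobility extraction from SCLC J-V data is typically done by inverting the Mott-Gurney law, J = (9/8) ε0 εr μ V² / L³. A minimal sketch; the film thickness, relative permittivity and current density below are illustrative values, not numbers taken from the paper:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sclc_mobility(j, v, thickness, eps_r):
    """Invert the Mott-Gurney law J = (9/8)*eps0*eps_r*mu*V^2/L^3 for mu.

    j in A/m^2, v in V, thickness in m; returns mobility in m^2/(V*s).
    """
    return 8.0 * j * thickness**3 / (9.0 * EPS0 * eps_r * v**2)

# Illustrative inputs (hypothetical): 100 nm film, eps_r = 2.6 (mid of
# the 2.4-2.8 range above), J = 1 A/m^2 at 2 V
mu = sclc_mobility(j=1.0, v=2.0, thickness=100e-9, eps_r=2.6)
print(mu * 1e4)  # converted to cm^2/(V*s)
```

For these inputs the result is of order 10⁻⁷ cm²/(V s), i.e. the same order of magnitude as the upper end of the mobilities reported in the abstract.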
12. Detection of fullerenes (C60 and C70) in commercial cosmetics
Benn, Troy M.; Westerhoff, Paul; Herckes, Pierre
2011-01-01
Detection methods are necessary to quantify fullerenes in commercial applications to provide potential exposure levels for future risk assessments of fullerene technologies. The fullerene concentrations of five cosmetic products were evaluated using liquid chromatography with mass spectrometry to separate and specifically detect C 60 and C 70 from interfering cosmetic substances (e.g., castor oil). A cosmetic formulation was characterized with transmission electron microscopy, which confirmed that polyvinylpyrrolidone encapsulated C 60 . Liquid-liquid extraction of fullerenes from control samples approached 100% while solid-phase and sonication in toluene extractions yielded recoveries of 27-42%. C 60 was detected in four commercial cosmetics ranging from 0.04 to 1.1 μg/g, and C 70 was qualitatively detected in two samples. A single-use quantity of cosmetic (0.5 g) may contain up to 0.6 μg of C 60 , demonstrating a pathway for human exposure. Steady-state modeling of fullerene adsorption to biosolids is used to discuss potential environmental releases from wastewater treatment systems. - Highlights: → Fullerenes were detected in cosmetics up to 1.1 μg/g. → Liquid-liquid extraction efficiently recovers fullerenes in cosmetic matrices. → Solid-phase extraction reduces LC-MS detection interferences for C60. → Cosmetics can increase human and environmental fullerene exposures. - Fullerenes were detected in cosmetics with liquid chromatography-mass spectrometry up to 1.1 μg/g, demonstrating a source for human/environmental exposure.
13. Structural and phase changes in copper-fullerene films by ion implantation and annealing
Shpilevsky, E.M.; Baran, L.V.; Okatova, G.P.; Jakimovich, A.V.
2001-01-01
The structural and phase changes and the electrical properties of copper-fullerene (Cu-C 60 ) films under ion implantation (B + , E = 80 keV, D = 5·10 21 m -2 ) and thermal annealing are described. We found that a supersaturated copper-fullerene solid solution forms during deposition of the two-component films. Thermal annealing results in phase segregation of the fullerene. It has been established that ion implantation leads to partial fragmentation of the fullerene, to destruction of the C 60 molecules, and to formation of the CuB 24 , B 25 C and B 4 C phases
14. Electronic transport properties aspects and structure of polymer-fullerene based organic semiconductors for photovoltaic devices
Adamopoulos, G. [Laboratoire d' Ingenierie des Polymeres pour les Hautes Technologies (L.I.P.H.T.), Ecole Europeenne Chimie Polymeres Materiaux (E.C.P.M.), 25 Rue Becquerel, 67087 Strasbourg Cedex 02 (France)]. E-mail: geo_adamo@yahoo.fr; Heiser, T. [Institut d' Electronique du Solide et des Systemes (IN.E.S.S.), CNRS/ULP, 23 Rue du Loess, BP 20, 67037 Strasbourg Cedex 02 (France); Giovanella, U. [Laboratoire d' Ingenierie des Polymeres pour les Hautes Technologies (L.I.P.H.T.), Ecole Europeenne Chimie Polymeres Materiaux (E.C.P.M.), 25 Rue Becquerel, 67087 Strasbourg Cedex 02 (France); Ould-Saad, S. [Laboratoire d' Ingenierie des Polymeres pour les Hautes Technologies (L.I.P.H.T.), Ecole Europeenne Chimie Polymeres Materiaux (E.C.P.M.), 25 Rue Becquerel, 67087 Strasbourg Cedex 02 (France); Wetering, K.I. van de [Laboratoire d' Ingenierie des Polymeres pour les Hautes Technologies (L.I.P.H.T.), Ecole Europeenne Chimie Polymeres Materiaux (E.C.P.M.), 25 Rue Becquerel, 67087 Strasbourg Cedex 02 (France); Brochon, C. [Laboratoire d' Ingenierie des Polymeres pour les Hautes Technologies (L.I.P.H.T.), Ecole Europeenne Chimie Polymeres Materiaux (E.C.P.M.), 25 Rue Becquerel, 67087 Strasbourg Cedex 02 (France); Zorba, T. [Physics Department, Solid State Physics Section, Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Paraskevopoulos, K.M. [Physics Department, Solid State Physics Section, Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Hadziioannou, G. [Laboratoire d' Ingenierie des Polymeres pour les Hautes Technologies (L.I.P.H.T.), Ecole Europeenne Chimie Polymeres Materiaux (E.C.P.M.), 25 Rue Becquerel, 67087 Strasbourg Cedex 02 (France)
2006-07-26
A series of polystyrene (PS) and fullerene (C 60 ) based thin films containing from 23 to 60 wt.% fullerene were investigated. Initially, the films were characterised by Fourier Transform Infrared (FTIR) spectroscopy, where the characteristic absorption bands of both the fullerene and the polystyrene were revealed, as well as additional absorption bands due to the fullerene grafted to polystyrene. The relative peak intensities provided qualitative information on the films' stoichiometry in terms of the amount of fullerene grafted to polystyrene. The optical properties of the films were investigated by spectroscopic ellipsometry (SE). It was found that increasing the amount of fullerene grafted to polystyrene results in an increase of the absorption coefficient α, refractive index n, extinction coefficient k, as well as the dielectric constant ε ∞ , which lies between 2.4 and 2.8 for the lower and higher fullerene contents, respectively. The films' J-V characteristics, of space-charge-limited-current (SCLC) type, showed increased currents with increasing fullerene content. The electron mobility was extracted and found to increase with increasing fullerene amount, from 4 x 10 -9 cm 2 /V s to 2 x 10 -7 cm 2 /V s.
15. Development of a framework based on an ecosystem services approach for deriving specific protection goals for environmental risk assessment of pesticides
Nienstedt, Karin M.; Brock, Theo C.M.; Wensem, Joke van; Montforts, Mark; Hart, Andy; Aagaard, Alf; Alix, Anne; Boesten, Jos; Bopp, Stephanie K.; Brown, Colin; Capri, Ettore; Forbes, Valery; Köpp, Herbert; Liess, Matthias; Luttik, Robert; Maltby, Lorraine
2012-01-01
General protection goals for the environmental risk assessment (ERA) of plant protection products are stated in European legislation but specific protection goals (SPGs) are often not precisely defined. These are however crucial for designing appropriate risk assessment schemes. The process followed by the Panel on Plant Protection Products and their Residues (PPR) of the European Food Safety Authority (EFSA) as well as examples of resulting SPGs obtained so far for environmental risk assessment (ERA) of pesticides is presented. The ecosystem services approach was used as an overarching concept for the development of SPGs, which will likely facilitate communication with stakeholders in general and risk managers in particular. It is proposed to develop SPG options for 7 key drivers for ecosystem services (microbes, algae, non target plants (aquatic and terrestrial), aquatic invertebrates, terrestrial non target arthropods including honeybees, terrestrial non-arthropod invertebrates, and vertebrates), covering the ecosystem services that could potentially be affected by the use of pesticides. These SPGs need to be defined in 6 dimensions: biological entity, attribute, magnitude, temporal and geographical scale of the effect, and the degree of certainty that the specified level of effect will not be exceeded. In general, to ensure ecosystem services, taxa representative for the key drivers identified need to be protected at the population level. However, for some vertebrates and species that have a protection status in legislation, protection may be at the individual level. To protect the provisioning and supporting services provided by microbes it may be sufficient to protect them at the functional group level. To protect biodiversity impacts need to be assessed at least at the scale of the watershed/landscape. - Research highlights: ► How to define specific protection goals (SPGs) for environmental risk assessment? ► The process uses the ecosystem services (ES
16. Proper Timing of Foot-and-Mouth Disease Vaccination of Piglets with Maternally Derived Antibodies Will Maximize Expected Protection Levels
Dekker, A.; Chénard, G.; Stockhofe, N.; Eble, P.L.
2016-01-01
We investigated to what extent maternally derived antibodies interfere with foot-and-mouth disease (FMD) vaccination in order to determine the factors that influence the correct vaccination for piglets. Groups of piglets with maternally derived antibodies were vaccinated at different time points
17. Interface engineering for efficient fullerene-free organic solar cells
Shivanna, Ravichandran; Narayan, K. S., E-mail: rajaram@jncasr.ac.in, E-mail: narayan@jncasr.ac.in [Chemistry and Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore 560064 (India); Rajaram, Sridhar, E-mail: rajaram@jncasr.ac.in, E-mail: narayan@jncasr.ac.in [International Centre for Materials Science, Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore 560064 (India)
2015-03-23
We demonstrate the role of zinc oxide (ZnO) morphology and addition of an acceptor interlayer to achieve high efficiency fullerene-free bulk heterojunction inverted organic solar cells. Nanopatterning of the ZnO buffer layer enhances the effective light absorption in the active layer, and the insertion of a twisted perylene acceptor layer planarizes and decreases the electron extraction barrier. Along with an increase in current homogeneity, the reduced work function difference and selective transport of electrons prevent the accumulation of charges and decrease the electron-hole recombination at the interface. These factors enable an overall increase of efficiency to 4.6%, which is significant for a fullerene-free solution-processed organic solar cell.
18. New insights in low-energy electron-fullerene interactions
Msezane, Alfred Z.; Felfli, Zineb
2018-03-01
The robust Regge-pole methodology has been used to probe for long-lived metastable anionic formation in Cn (n = 20, 24, 26, 28, 44, 70, 92 and 112) through the calculated electron elastic scattering total cross sections (TCSs). All the TCSs are found to be characterized by Ramsauer-Townsend minima, shape resonances and dramatically sharp resonances manifesting metastable anionic formation during the collisions. The energy positions of the anionic ground states resonances are found to match the measured electron affinities (EAs). We also investigated the size-effect through the correlation and polarization induced metastable resonances as the fullerene size varied from C20 through C112. The C20 TCSs exhibit atomic behavior while the C112 TCSs demonstrate strong departure from atomic behavior attributed to the size effect. Surprisingly C24 is found to have the largest EA among the investigated fullerenes making it suitable for use in organic solar cells and nanocatalysis.
19. Interaction energy for a fullerene encapsulated in a carbon nanotorus
Sarapat, Pakhapoom; Baowan, Duangkamon; Hill, James M.
2018-06-01
The interaction energy of a fullerene symmetrically situated inside a carbon nanotorus is studied. For these non-bonded molecules, the main interaction originates from the van der Waals energy which is modelled by the 6-12 Lennard-Jones potential. Upon utilising the continuum approximation which assumes that there are infinitely many atoms that are uniformly distributed over the surfaces of the molecules, the total interaction energy between the two structures is obtained as a surface integral over the spherical and the toroidal surfaces. This analytical energy is employed to determine the most stable configuration of the torus encapsulating the fullerene. The results show that a torus with major radius around 20-22 Å and minor radius greater than 6.31 Å gives rise to the most stable arrangement. This study will pave the way for future developments in biomolecules design and drug delivery system.
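The 6-12 Lennard-Jones pair potential named above is simple to state in code. A minimal sketch; the ε and σ values below are generic illustrative parameters, not the carbon-carbon constants or the continuum surface integrals used in the paper:

```python
def lj_potential(r, epsilon, sigma):
    """6-12 Lennard-Jones pair potential:
    V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The well minimum sits at r = 2**(1/6)*sigma with depth -epsilon,
# a handy sanity check (illustrative sigma = 3.4, epsilon = 0.296)
r_min = 2 ** (1 / 6) * 3.4
print(lj_potential(r_min, epsilon=0.296, sigma=3.4))  # ≈ -0.296
```

In the continuum approximation of the paper, this pairwise form is integrated over the spherical and toroidal surfaces rather than summed atom by atom.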
20. Thermodynamics of TMPC/PSd/Fullerene Nanocomposites: SANS Study
Chua, Yang-Choo
2010-11-23
We report a small angle neutron scattering study of the thermodynamics of a polymer mixture in the presence of nanoparticles, both in equilibrium and during phase separation. Neutron cloud point measurements and random phase approximation (RPA) analysis demonstrate that 1-2 mass % of C60 fullerenes destabilizes a highly interacting mixture of poly(tetramethyl bisphenol A polycarbonate) and deuterated polystyrene (TMPC/PSd). We unequivocally corroborate these findings with time-resolved temperature jump experiments that, in identical conditions, result in phase separation for the nanocomposite and stability for the neat polymer mixture. At lower C 60 loadings (viz. 0.2-0.5 mass %), stabilization of the mixture is observed. The nonmonotonic variation of the spinodal temperature with fullerene addition suggests a competitive interplay of asymmetric component interactions and nanoparticle dispersion. The stability line shift depends critically on particle dispersion and vanishes upon nanoparticle agglomeration. © 2010 American Chemical Society.
1. Investigation of fullerene ions in crossed-beams experiments
Hathiramani, D.; Scheier, P.; Braeuning, H.; Trassl, R.; Salzborn, E.; Presnyakov, L.P.; Narits, A.A.; Uskov, D.B.
2003-01-01
Employing the crossed-beams technique, we have studied the interaction of fullerene ions both with electrons and He 2+ -ions. Electron-impact ionization cross sections for C 60 q+ (q=1,2,3) have been measured at electron energies up to 1000 eV. Unusual features in shape and charge state dependence have been found, which are not observed for atomic ions. The evaporative loss of neutral C 2 fragments in collisions with electrons indicates the presence of two different mechanisms. In a first-ever ion-ion crossed-beams experiment involving fullerene ions a cross section of (1.05 ± 0.06) x 10 -15 cm 2 for charge transfer in the collision C 60 + + He 2+ at 117.2 keV center-of-mass energy has been obtained
2. Classical molecular dynamics simulations of fusion and fragmentation in fullerene-fullerene collisions
Verkhovtsev, A.; Korol, A.V.; Solovyov, A.V.
2017-01-01
We present the results of classical molecular dynamics simulations of collision-induced fusion and fragmentation of C 60 fullerenes, performed by means of the MBN Explorer software package. The simulations provide information on structural differences of the fused compound depending on kinematics of the collision process. The analysis of fragmentation dynamics at different initial conditions shows that the size distributions of produced molecular fragments are peaked for dimers, which is in agreement with a well-established mechanism of C 60 fragmentation via preferential C 2 emission. Atomic trajectories of the colliding particles are analyzed and different fragmentation patterns are observed and discussed. On the basis of the performed simulations, characteristic time of C 2 emission is estimated as a function of collision energy. The results are compared with experimental time-of-flight distributions of molecular fragments and with earlier theoretical studies. Considering the widely explored case study of C 60 -C 60 collisions, we demonstrate broad capabilities of the MBN Explorer software, which can be utilized for studying collisions of a broad variety of nano-scale and bio-molecular systems by means of classical molecular dynamics. (authors)
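Classical MD of the kind described here rests on a symplectic integrator. A minimal velocity-Verlet step is sketched below on a harmonic oscillator with unit constants rather than a carbon force field; this is the standard scheme behind such simulations, not a reproduction of MBN Explorer's internals:

```python
def velocity_verlet(x, v, force, dt, mass, steps):
    """Minimal 1D velocity-Verlet integrator: position update from the
    current force, then velocity update from the average of old and
    new forces."""
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * f / mass * dt * dt
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return x, v

# Sanity check on a unit harmonic oscillator: total energy should stay
# very close to the initial value of 0.5
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, mass=1.0, steps=1000)
print(0.5 * v * v + 0.5 * x * x)  # ≈ 0.5
```

The same update rule, applied per atom with a many-body carbon potential, is what propagates the colliding fullerenes and resolves C 2 emission events in time.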
3. Growth of Fullerene Fragments Using the Diels-Alder Cycloaddition Reaction: First Step towards a C60 Synthesis by Dimerization
Julio A. Alonso
2013-02-01
Full Text Available Density Functional Theory has been used to model the Diels-Alder reactions of the fullerene fragments triindenetriphenilene and pentacyclopentacorannulene with ethylene and 1,3-butadiene. The purpose is to prove the feasibility of using Diels-Alder cycloaddition reactions to grow fullerene fragments step by step, and to dimerize fullerene fragments, as a way to obtain C60. The dienophile character of the fullerene fragments is dominant, and the reaction of butadiene with pentacyclopentacorannulene is favored.
4. Single or functionalized fullerenes interacting with heme group
Costa, Wallison Chaves; Diniz, Eduardo Moraes, E-mail: eduardo.diniz@ufma.br [Departamento de Física, Universidade Federal do Maranhão, Avenida dos Portugueses, 1966, CEP 65080-805, São Luís - MA (Brazil)
2014-09-15
The heme group is responsible for iron transportation through the bloodstream, where iron participates in redox reactions, electron transfer, gas detection, etc. The efficiency of such processes can be reduced if the whole heme molecule, or even the iron, is somehow altered from its original oxidation state, which can be caused by interactions with nanoparticles such as fullerenes. To verify how such particles alter the geometry and electronic structure of the heme molecule, here we report first-principles calculations, based on density functional theory, of the heme group interacting with a single C{sub 60} fullerene or with C{sub 60} functionalized with small functional groups (−CH{sub 3}, −COOH, −NH{sub 2}, −OH). The calculations showed that the heme + nanoparticle system has a different spin state in comparison with the heme group alone if the fullerene is functionalized. A functional group can also provide a stronger binding between nanoparticle and heme molecule, or inhibit the chemical bonding, in comparison with the single-fullerene results. In addition, the heme molecule loses electrons to the nanoparticles, and some systems exhibited a geometry distortion in the heme group, depending on the binding energy. Furthermore, one finds that such nanoparticles induce the formation of spin-up states in the heme group, and there are modifications in the density of states near the Fermi energy. Despite these changes in the heme electronic structure and geometry, the iron atom remains in the heme group with the same oxidation state, so that processes that involve the iron might not be affected, only those that depend on the whole heme molecule.
5. Thermal Effect on Structure Organizations in Cobalt-Fullerene Nanocomposition
Lavrentiev, Vasyl; Vacík, Jiří; Naramoto, H.; Sakai, S.
2010-01-01
Vol. 10, No. 4 (2010), pp. 2624-2629 ISSN 1533-4880 R&D Projects: GA AV ČR(CZ) KAN400480701; GA AV ČR IAA200480702; GA MŠk(CZ) LC06041 Institutional research plan: CEZ:AV0Z10480505 Keywords: cobalt * fullerene * simultaneous deposition Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.351, year: 2010
6. Is the Use of Fullerene in Photodynamic Therapy Effective for Atherosclerosis?
Nitta, Norihisa; Seko, Ayumi; Sonoda, Akinaga; Ohta, Shinichi; Tanaka, Toyohiko; Takahashi, Masashi; Murata, Kiyoshi; Takemura, Shizuki; Sakamoto, Tsutomu; Tabata, Yasuhiko
2008-01-01
The purpose of this study was to evaluate Fullerene as a therapeutic photosensitizer in the treatment of atherosclerosis. An atherosclerotic experimental rabbit model was prepared by causing intimal injury to bilateral external iliac arteries using balloon expansion. In four atherosclerotic rabbits and one normal rabbit, polyethylene glycol-modified Fullerene (Fullerene-PEG) was infused into the left external iliac artery and illuminated by light emitting diode (LED), while the right external iliac artery was only illuminated by LED. Two weeks later, the histological findings for each iliac artery were evaluated quantitatively and comparisons were made among atherosclerotic Fullerene+LED artery (n = 4), atherosclerotic light artery (n = 4), normal Fullerene+LED artery (n = 1), and normal light artery (n = 1). An additional two atherosclerotic rabbits were studied by fluorescence microscopy, after Fullerene-PEG-Cy5 complex infusion into the left external iliac artery, for evaluation of Fullerene-PEG incorporated within the atherosclerotic lesions. The degree of atherosclerosis in the atherosclerotic Fullerene+LED artery was significantly (p < 0.05) more severe than that in the atherosclerotic LED artery. No pathological change was observed in normal Fullerene+LED and LED arteries. In addition, strong accumulation of Fullerene-PEG-Cy5 complex within the plaque of the left iliac artery of the two rabbits was demonstrated, in contrast to no accumulation in the right iliac artery. We conclude that infusion of a high concentration of Fullerene-PEG followed by photo-illumination resulted not in a suppression of atherosclerosis but in a progression of atherosclerosis in experimental rabbit models. However, this intervention showed no adverse effects on the normal iliac artery
7. Characterizing Cavities in Model Inclusion Fullerenes: A Comparative Study
Francisco Torrens
2001-06-01
Full Text Available Abstract: The fullerene-82 cavity is selected as a model system in order to test several methods for characterizing inclusion molecules. The methods are based on different technical foundations such as a square and triangular tessellation of the molecular surface, spherical tessellation of the molecular surface, numerical integration of the atomic volumes and surfaces, triangular tessellation of the molecular surface, and cubic lattice approach to the molecular volume. Accurate measures of the molecular volume and surface area have been performed with the pseudorandom Monte Carlo (MCVS and uniform Monte Carlo (UMCVS methods. These calculations serve as a reference for the rest of the methods. The SURMO2 method does not recognize the cavity and may not be convenient for intercalation compounds. The programs that detect the cavities never exceed 1% deviation relative to the reference value for molecular volume and 5% for surface area. The GEPOL algorithm, alone or combined with TOPO, shows results in good agreement with those of the UMCVS reference. The uniform random number generator provides the fastest convergence for UMCVS and a correct estimate of the standard deviations. The effect of the internal cavity on the solvent-accessible surfaces has been calculated. Fullerene-82 is compared with fullerene-60 and -70.
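The uniform Monte Carlo volume estimate described above can be sketched in a few lines: sample points uniformly in a bounding box and scale the hit fraction by the box volume. The function names and the unit-sphere test body below are illustrative, not the paper's UMCVS implementation:

```python
import random

def mc_volume(inside, box, n, seed=0):
    """Uniform Monte Carlo volume: fraction of random points in the
    axis-aligned box that fall inside the body, times the box volume."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        p = tuple(rng.uniform(a, b) for a, b in box)
        hits += inside(p)  # bool counts as 0/1
    vol_box = 1.0
    for a, b in box:
        vol_box *= (b - a)
    return vol_box * hits / n

# Unit sphere in [-1, 1]^3; the estimate should land close to
# the exact volume 4*pi/3 ≈ 4.18879
sphere = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2 <= 1.0
print(mc_volume(sphere, ((-1, 1), (-1, 1), (-1, 1)), 200_000))
```

The standard deviation of such an estimate shrinks as 1/√n, which is why the reference calculations in the paper need large sample counts to serve as a benchmark for the cheaper tessellation methods.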
8. Photophysical properties of fullerenes prepared in an atmosphere of pyrrole
Glenis, S.; Cooke, S.; Chen, X.; Labes, M.M. (Temple Univ., Philadelphia, PA (United States))
1994-10-01
Samples of C[sub 60] and C[sub 70] containing a variety of nitrogen-doped species were prepared by arc vaporization of graphite in the presence of pyrrole. Cage-doped fractions were isolated by column chromatography and characterized by mass spectroscopy, optical absorption, and fluorescence measurements. Mass spectra were consistent with the substitution of an even number of carbon atoms of the C[sub 60] and C[sub 70] cages by nitrogen atoms. Carbonaceous clusters including fragmented fullerenes containing hydrogen atoms were also formed. UV-visible spectral analysis indicated that there is an influence of the molecular weight on the fundamental [pi]-[pi]* electronic transition. Fluorescence spectra showed a broad band containing vibrational fine structure that is attributed to photoseparated charges in the fragmented fullerenes and a shoulder on the low-energy side that is related to intrinsic excitation in the nitrogen-doped species. Fluorescence results imply a bandgap of 2.36 eV for the N doped fullerenes and the existence of intermediate excitonic transitions below the optical bandgap. Although it has not yet been possible to isolate a pure cage-doped material, the photophysical studies add credence to their existence and the importance of further attempts at their isolation. 17 refs., 4 figs., 1 tab.
9. THERMOOXIDATIVE STABILITY OF JET FUEL WITH FULLERENES AS AN ADDITIVE
С.В. Іванов
2012-10-01
Full Text Available Heating of fuels in the presence of oxygen reduces their thermal-oxidative stability and leads to a solid phase in the form of sludge and tar which, deposited on parts of the fuel system, changes its characteristics and causes contamination of fuel filters and injectors, sticking of spool controls, and reduced efficiency of heat exchangers. Nanomaterials, whose performance is considerably superior to that of natural materials, are a basis for the progress of humanity. Therefore, with the development of technologies, it has become necessary to carry out research on modified additives - fullerenes - to improve the oxidative stability of fuels. We have carried out an investigation of the thermal-oxidative stability of fuel RT as a function of additive C60 concentration. The results have shown that even a 0.043 g/l fullerene addition as an antioxidant reduces the amount of sediment in the fuel almost by half. The usage of fullerenes for the improvement of the performance properties of petroleum products is a promising area of research.
10. Melting of Pb clusters encapsulated in large fullerenes
Delogu, Francesco
2011-01-01
Graphical abstract: Encapsulation significantly increases the melting point of nanometer-sized Pb particles with respect to the corresponding unsupported ones. Highlights: → Nanometer-sized Pb particles are encapsulated in fullerene cages. → Their thermal behavior is studied by molecular dynamics simulations. → Encapsulated particles undergo a pressure rise as temperature increases. → Encapsulated particles melt at temperatures higher than unsupported ones. - Abstract: Molecular dynamics simulations have been employed to explore the melting behavior of nanometer-sized Pb particles encapsulated in spherical and polyhedral fullerene cages of suitable size. The encapsulated particles, as well as the corresponding unsupported ones for comparison, were submitted to a gradual temperature rise. Encapsulation is shown to severely affect the thermodynamic behavior of Pb particles due to the different thermal expansion coefficients of particles and cages. This determines a volume constraint that induces a rise of pressure inside the fullerene cages, which operate for particles as rigid confinement systems. The result is that surface pre-melting and melting processes occur in encapsulated particles at temperatures higher than in unsupported ones.
11. Non-fullerene acceptors for organic solar cells
Yan, Cenqi; Barlow, Stephen; Wang, Zhaohui; Yan, He; Jen, Alex K.-Y.; Marder, Seth R.; Zhan, Xiaowei
2018-03-01
Non-fullerene acceptors (NFAs) are currently a major focus of research in the development of bulk-heterojunction organic solar cells (OSCs). In contrast to the widely used fullerene acceptors (FAs), the optical properties and electronic energy levels of NFAs can be readily tuned. NFA-based OSCs can also achieve greater thermal stability and photochemical stability, as well as longer device lifetimes, than their FA-based counterparts. Historically, the performance of NFA OSCs has lagged behind that of fullerene devices. However, recent developments have led to a rapid increase in power conversion efficiencies for NFA OSCs, with values now exceeding 13%, demonstrating the viability of using NFAs to replace FAs in next-generation high-performance OSCs. This Review discusses the important work that has led to this remarkable progress, focusing on the two most promising NFA classes to date: rylene diimide-based materials and materials based on fused aromatic cores with strong electron-accepting end groups. The key structure-property relationships, donor-acceptor matching criteria and aspects of device physics are discussed. Finally, we consider the remaining challenges and promising future directions for the NFA OSCs field.
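The efficiency figure quoted above ("exceeding 13%") is the standard power conversion efficiency, PCE = Voc·Jsc·FF/Pin; a minimal sketch with illustrative device parameters (not values from the Review):

```python
def pce(voc, jsc_mA_cm2, ff, pin_mW_cm2=100.0):
    """Power conversion efficiency (%) of a solar cell:
    PCE = Voc * Jsc * FF / Pin, with Pin = 100 mW/cm^2 (AM1.5G standard)."""
    return voc * jsc_mA_cm2 * ff / pin_mW_cm2 * 100.0

# Hypothetical NFA device: Voc = 0.90 V, Jsc = 21 mA/cm^2, FF = 0.70
print(f"{pce(0.90, 21.0, 0.70):.1f}%")  # -> 13.2%
```

The tunable energy levels of NFAs matter precisely because they let Voc be raised without sacrificing Jsc, which is how the three factors above are pushed up together.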
12. Features of interaction of fullerenes with microwave radiation
Venger, E.F.; Konakova, R.V.; Kolyadina, E.Yu.; Matveeva, L.A.; Nelyuba, P.L.; Shinkarenko, V.V.
2015-01-01
Heterosystems with C60 fullerenes were obtained by thermal sublimation of microcrystalline C60 powder from an effusion tantalum cell in vacuum at a pressure of 10^-4 Pa onto unheated silicon substrates. The composition, structural perfection, electronic properties, and internal mechanical stresses in the films and the substrate at the interface, as well as the influence of electromagnetic radiation (frequency 2.45 GHz, power 1.5 W/cm^2) on them, were studied. Investigations were carried out by atomic force microscopy, Raman spectroscopy, electroreflectance modulation spectroscopy and heterosystem profilography to determine the sign and magnitude of the mechanical stresses. It proved possible to obtain heterostructures with fullerenes free of mechanical stress and without decomposition of the C60 molecules in the film. Improvement of the electronic properties of the films and the substrate was determined from the shift and value of the transition energy Eg: the phenomenological broadening parameter Γ decreases, while the energy relaxation time of the charge carriers τ and their mobility μ increase. For the first time, the dependence of the fullerene band gap on the internal mechanical stress in the film was determined: -2.8×10^-10 eV/Pa and -4.2×10^-10 eV/Pa for the E0 and E0' transitions, respectively. (authors)
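Using the pressure coefficients reported in the abstract, the band-gap shift for a given internal stress follows from a simple linear relation; the stress value in this sketch is illustrative, not taken from the paper.

```python
# Band-gap shift of a C60 film under internal mechanical stress,
# using the dE/dP coefficients quoted in the abstract above.

DEDP_E0      = -2.8e-10  # eV/Pa, E0 transition (from the abstract)
DEDP_E0PRIME = -4.2e-10  # eV/Pa, E0' transition (from the abstract)

def gap_shift(stress_pa, coeff):
    """Linear band-gap shift: dE = (dE/dP) * stress, in eV."""
    return coeff * stress_pa

# Illustrative internal stress of 100 MPa:
sigma = 1.0e8
print(f"E0 shift:  {gap_shift(sigma, DEDP_E0) * 1e3:.1f} meV")
print(f"E0' shift: {gap_shift(sigma, DEDP_E0PRIME) * 1e3:.1f} meV")
```

At 100 MPa the shifts are tens of meV, comparable to the broadening parameter changes such spectroscopies resolve, which is why the stress state of the film matters for the measured transitions.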
13. Neuron-derived IgG protects dopaminergic neurons from insult by 6-OHDA and activates microglia through the FcγR I and TLR4 pathways.
Zhang, Jie; Niu, Na; Wang, Mingyu; McNutt, Michael A; Zhang, Donghong; Zhang, Baogang; Lu, Shijun; Liu, Yuqing; Liu, Zhihui
2013-08-01
Oxidative and immune attacks from the environment or microglia have been implicated in the loss of dopaminergic neurons in Parkinson's disease. The role of IgG, an important immunologic molecule, in the course of Parkinson's disease has been unclear. Evidence suggests that IgG can be produced by neurons in addition to its traditionally recognized source, B lymphocytes, but its function in neurons is poorly understood. In this study, extensive expression of neuron-derived IgG was demonstrated in dopaminergic neurons of human and rat mesencephalon. With an in vitro Parkinson's disease model, we found that neuron-derived IgG can improve the survival and reduce the apoptosis of dopaminergic neurons subjected to 6-hydroxydopamine toxicity, and can also depress the release of NO from microglia triggered by 6-hydroxydopamine. Expression of TNF-α and IL-10 in microglia was elevated to protective levels by neuron-derived IgG at a physiologic level via the FcγR I and TLR4 pathways, and microglial activation could be attenuated by IgG blocking. All these data suggest that neuron-derived IgG may exert a self-protective function by activating microglia appropriately, and that IgG may be involved in maintaining immune homeostasis in the central nervous system and serve as an active factor under pathological conditions such as Parkinson's disease. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
14. In situ synthesis, electrochemical and quantum chemical analysis of an amino acid-derived ionic liquid inhibitor for corrosion protection of mild steel in 1M HCl solution
Kowsari, E.; Arman, S.Y.; Shahini, M.H.; Zandi, H.; Ehsani, A.; Naderi, R.; PourghasemiHanza, A.; Mehdipour, M.
2016-01-01
Highlights: • Electrochemical analysis of the effectiveness of an amino acid-derived ionic liquid inhibitor. • Quantum chemical analysis of the effectiveness of an amino acid-derived ionic liquid inhibitor. • Correlation between the electrochemical and quantum chemical analyses. - Abstract: In this study, an amino acid-derived ionic liquid inhibitor, namely tetra-n-butyl ammonium methioninate, was synthesized, and the role of this inhibitor in the corrosion protection of mild steel exposed to 1.0 M HCl was investigated using electrochemical, quantum chemical and surface analysis. Potentiodynamic polarization showed the inhibitory action of tetra-n-butyl ammonium methioninate to be mainly mixed-type with dominant anodic inhibition. The effectiveness of the inhibitor was also demonstrated by electrochemical impedance spectroscopy (EIS). Moreover, to provide further insight into the mechanism of inhibition, electrochemical noise (EN) measurements and quantum chemical calculations of the inhibitor were performed.
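One conventional way to quantify inhibitor effectiveness from the EIS measurements mentioned above is through the charge-transfer resistances of the blank and inhibited electrodes; a sketch with hypothetical resistance values (the paper's actual data are not reproduced here):

```python
def inhibition_efficiency(rct_blank, rct_inhibited):
    """Percent inhibition efficiency from EIS charge-transfer resistances:
    IE% = (1 - Rct_blank / Rct_inhibited) * 100.
    A larger Rct with the inhibitor present means a slower corrosion
    reaction, hence a higher efficiency."""
    return (1.0 - rct_blank / rct_inhibited) * 100.0

# Hypothetical values: 25 ohm*cm^2 without inhibitor, 250 with it
print(f"{inhibition_efficiency(25.0, 250.0):.0f}%")  # -> 90%
```

The same ratio-based definition is commonly applied to polarization data using corrosion current densities in place of resistances.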
15. Aggregation behavior of fullerenes in aqueous solutions: a capillary electrophoresis and asymmetric flow field-flow fractionation study
Astefanei, A.; Núñez, O.; Galceran, M.T.; Kok, W.Th.; Schoenmakers, P.J.
2015-01-01
In this work, the electrophoretic behavior of hydrophobic fullerenes [buckminsterfullerene (C60), C70, and N-methyl-fulleropyrrolidine (C60-pyrr)] and water-soluble fullerenes [fullerol (C60(OH)24); polyhydroxy small-gap fullerene, hydrated (C120(OH)30); C60 pyrrolidine tris acid
16. Protective immune responses against Schistosoma mansoni infection by immunization with functionally active gut-derived cysteine peptidases alone and in combination with glyceraldehyde 3-phosphate dehydrogenase.
Hatem Tallima
2017-03-01
Full Text Available Schistosomiasis, a severe disease caused by parasites of the genus Schistosoma, is prevalent in 74 countries, affecting more than 250 million people, particularly children. We have previously shown that the Schistosoma mansoni gut-derived cysteine peptidase, cathepsin B1 (SmCB1), administered without adjuvant, elicits protection (>60%) against challenge infection of S. mansoni or S. haematobium in outbred CD-1 mice. Here we compare the immunogenicity and protective potential of another gut-derived cysteine peptidase, S. mansoni cathepsin L3 (SmCL3), alone and in combination with SmCB1. We also examined whether protective responses could be boosted by including a third, non-peptidase schistosome secreted molecule, glyceraldehyde 3-phosphate dehydrogenase (SG3PDH), with the two peptidases. While adjuvant-free SmCB1 and SmCL3 induced type 2 polarized responses in CD-1 outbred mice, those elicited by SmCL3 were far weaker than those induced by SmCB1. Nevertheless, both cysteine peptidases evoked highly significant (P < 0.005) reductions in challenge worm burden (54-65%) as well as in worm egg counts and viability. A combination of SmCL3 and SmCB1 did not induce significantly stronger immune responses or higher protection than either peptidase alone. However, when the two peptidases were combined with SG3PDH, the level of protection against challenge S. mansoni infection reached 70-76% and was accompanied by highly significant (P < 0.005) decreases in worm egg counts and viability. Similarly high levels of protection were achieved in hamsters immunized with the cysteine peptidase/SG3PDH-based vaccine. Gut-derived cysteine peptidases are highly protective against schistosome challenge infection when administered subcutaneously without adjuvant to outbred CD-1 mice and hamsters, and can also enhance the efficacy of other schistosome antigens, such as SG3PDH. This cysteine peptidase-based vaccine should now be advanced to experiments in
17. Fullerene alloy formation and the benefits for efficient printing of ternary blend organic solar cells
Angmo, Dechan; Bjerring, Morten; Nielsen, Niels Chr.
2015-01-01
behaving as pseudo-binary mixtures due to alloying of the fullerene components. This finding has vast implications for the understanding of polymer–fullerene mixtures and quite certainly also their application in organic solar cells where performance hinges critically on the blend behaviour which is also...
19. First prediction of the direct effect of a confined atom on photoionization of the confining fullerene
McCune, Matthew A; De, Ruma; Chakraborty, Himadri S [Center for Innovation and Entrepreneurship, Department of Chemistry and Physics, Northwest Missouri State University, Maryville, MO 64468 (United States); Madjet, Mohamed E, E-mail: himadri@nwmissouri.ed [Institute of Chemistry and Biochemistry, Free University, Fabeckstrasse 36a, D-14195 Berlin (Germany)
2010-09-28
We predict that the confined atom can qualitatively modify the energetic photoionization of some cage levels, even though these levels are of very dominant fullerene character. The effect imposes strong new oscillations in the cross sections which are forbidden in the ionization of empty fullerenes. Results are presented for the Ar@C{sub 60} endofullerene compound. (fast track communication)
20. Conjugation-promoted reaction of open-cage fullerene: A density functional theory study
Guo, Yong
2012-01-20
Density functional theory calculations are performed to study the addition mechanism of e-rich moieties such as triethyl phosphite to a carbonyl group on the rim of a fullerene orifice. Three possible reaction channels have been investigated. The obtained results show that the reaction of a carbonyl group on a fullerene orifice with triethyl phosphite most likely proceeds along the classical Abramov reaction; however, the classical product is not stable and is converted into the experimental product. An attack on a fullerene carbonyl carbon will trigger a rearrangement of the phosphate group to the carbonyl oxygen as the conversion transition state is stabilized by fullerene conjugation. This work provides a new insight on the reactivity of open-cage fullerenes, which may prove helpful in designing new switchable fullerene systems. Not that classical: The reaction of a carbonyl group on the fullerene orifice with triethyl phosphite most likely proceeds following the Abramov reaction to firstly form a classical product. However, this product is not stable and turns into an experimental product as the conversion transition state is stabilized by fullerene conjugation (see picture). Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
1. Evidence for the existence of sulfur-doped fullerenes from elucidation of their photophysical properties
Glenis, S.; Cooke, S.; Chen, X.; Labes, M.M. [Temple Univ., Philadelphia, PA (United States)
1996-01-01
Cage carbon atoms of fullerenes were substituted by sulfur in sulfur-doped fullerenes synthesized by the authors. The synthesis method was based on the arc evaporation of graphite in the presence of thiophene or 3-methylthiophene. Structural characterization was accomplished through mass spectrometry and fluorescence spectroscopy and crude purification regimens using column chromatography were established. 24 refs., 4 figs., 1 tab.
2. On the possibility of considering the fullerene shell C{sub 60} as a conducting sphere
Amusia, M.Ya. [Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Ioffe Physical-Technical Institute, St. Petersburg 194021 (Russian Federation); Baltenkov, A.S. [Arifov Institute of Electronics, Tashkent 700125 (Uzbekistan)]. E-mail: arkbalt@mail.ru
2006-12-25
The dynamical and static dipole polarizabilities of the C{sub 60} molecule have been calculated on the basis of the experimental data on the cross section of the fullerene photoabsorption. It has been shown that the fullerene shell in the static electric field behaves most likely as a set of separate carbon atoms rather than as a conducting sphere.
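The "conducting sphere" model the authors test has a simple classical prediction, α = R³ (polarizability in volume units), which can be contrasted with a naive sum over 60 independent carbon atoms; the numbers below are rough textbook values, not the paper's, and serve only to show that the two pictures differ substantially.

```python
# Static dipole polarizability: classical conducting sphere vs. a naive
# sum of 60 independent carbon atoms. Values are illustrative textbook
# numbers, not those computed in the paper.

R_C60 = 3.54          # mean cage radius of C60, angstrom
ALPHA_C_ATOM = 1.76   # static polarizability of atomic carbon, angstrom^3

alpha_sphere = R_C60 ** 3            # conducting-sphere model, A^3
alpha_atoms = 60 * ALPHA_C_ATOM      # 60 non-interacting C atoms, A^3

print(f"conducting sphere: {alpha_sphere:.1f} A^3")
print(f"60 free C atoms:   {alpha_atoms:.1f} A^3")
```

That the two estimates disagree by roughly a factor of two is why the question in the title is experimentally decidable: a measured static polarizability discriminates between the sphere picture and the separate-atoms picture.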
3. Changes in Agglomeration of Fullerenes During Ingestion and Excretion in Thamnocephalus Platyurus
The crustacean Thamnocephalus platyurus was exposed to aqueous suspensions of fullerenes C60 and C70. Aqueous fullerene suspensions were formed by stirring C60 and C70 as received from a commercial vendor in deionized water (termed aqu/C60 and aqu/C70) for approximately 100 d. Th...
4. Ultra-low friction and excellent elastic recovery of fullerene-like ...
Multilayer fullerene-like hydrogenated carbon (FL-C:H) films were synthesized by using the chemical vapour deposition technique with different flow rates of methane. The typical fullerene-like structure of the as-prepared films was investigated by using transmission electron microscopy and Raman spectra. The prepared ...
5. Interaction between fullerene halves C{sub n} (n ≤ 40) and single wall carbon nanotube
Sharma, Amrish, E-mail: amrish99@gmail.com; Kaur, Sandeep, E-mail: sipusukhn@gmail.com [Department of Physics, Punjabi University, Patiala (India); Mudahar, Isha, E-mail: isha@pbi.ac.in [Department of Basic and Applied Sciences, Punjabi University, Patiala (India)
2016-05-06
We have investigated the structural and electronic properties of a carbon nanotube with small fullerene halves C{sub n} (n ≤ 40) covalently bonded to the side wall of an armchair single wall carbon nanotube (SWCNT), using a first-principles method based on density functional theory. The fullerene size results in weak bonding between the fullerene halves and the carbon nanotube (CNT). Further, the C-C bond that attaches the fullerene half to the CNT was found to be of the order of 1.60 Å. The calculated binding energies indicate the stability of the complexes formed. The HOMO-LUMO gaps and electron density-of-states plots point towards the metallicity of the complexes. Our calculations on charge transfer reveal that a very small amount of charge is transferred from the CNT to the fullerene halves.
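The binding energies mentioned above follow the usual DFT convention, Eb = E(parts) − E(complex); a minimal sketch with hypothetical total energies (none of the numbers are from the paper):

```python
def binding_energy(e_complex, e_cnt, e_fragment):
    """DFT binding energy: Eb = E(CNT) + E(fragment) - E(complex).
    Positive Eb means the bound complex is energetically favorable."""
    return e_cnt + e_fragment - e_complex

# Hypothetical total energies in eV (not the paper's numbers):
eb = binding_energy(-1502.3, -1100.0, -400.0)
print(f"Eb = {eb:.1f} eV")  # -> Eb = 2.3 eV
```

With this sign convention, a stable covalently bonded complex like the ones reported would show a clearly positive Eb, while weak bonding corresponds to a small positive value.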
6. A Sensitive Gold Nanoplasmonic SERS Quantitative Analysis Method for Sulfate in Serum Using Fullerene as Catalyst
Chongning Li
2018-04-01
Full Text Available Fullerene exhibited strong catalysis of the redox reaction between HAuCl4 and trisodium citrate to form gold nanoplasmonic particles with a strong surface-enhanced Raman scattering (SERS) effect at 1615 cm−1 in the presence of the Victoria blue B molecular probe. As the fullerene concentration increased, the SERS peak intensity increased linearly owing to the formation of more AuNPs as substrate. Upon addition of Ba2+, the Ba2+ ions adsorb on the fullerene surface and inhibit its catalysis, causing the SERS peak to decrease. The analyte SO42− combines with Ba2+ to form a stable BaSO4 precipitate, releasing free fullerene so that the catalysis recovers and the SERS intensity again increases linearly. Thus, a new quantitative SERS method was established for the detection of sulfate in serum samples, with a linear range of 0.03–3.4 μM.
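A quantitative SERS method of this kind rests on a linear calibration of peak intensity against analyte concentration, which is then inverted to read back an unknown sample; a pure-Python least-squares sketch on synthetic, invented data points (not the paper's):

```python
# Linear SERS calibration: fit intensity vs. sulfate concentration by
# ordinary least squares, then invert the line to quantify an unknown.
# The data points below are synthetic, for illustration only.

conc = [0.1, 0.5, 1.0, 2.0, 3.0]                  # uM, inside 0.03-3.4 uM
intensity = [120.0, 200.0, 300.0, 500.0, 700.0]   # arbitrary SERS counts

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(intensity) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, intensity)) \
        / sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x

def concentration(i):
    """Invert the calibration line to get concentration from intensity."""
    return (i - intercept) / slope

print(f"slope = {slope:.1f} counts/uM, intercept = {intercept:.1f} counts")
print(f"unknown at I = 400 -> {concentration(400.0):.2f} uM")
```

In practice the serum matrix and the BaSO4 precipitation step add scatter, so real calibrations report a correlation coefficient and a detection limit alongside the fitted line.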
7. The interactions of high-energy, highly-charged ions with fullerenes
Ali, R.; Berry, H.G.; Cheng, S.
1996-01-01
In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered a new form of carbon, the fullerene C60, which consists of 60 carbon atoms in a closed cage resembling a soccer ball. In 1990, Krätschmer et al. were able to make macroscopic quantities of fullerenes. This has generated intense activity to study the properties of fullerenes. One area of research involves collisions between fullerenes and atoms, ions or electrons. In this paper we describe experiments involving interactions between fullerenes and highly charged ions in which the center-of-mass energies exceed those used in other work by several orders of magnitude. The high values of projectile velocity and charge state result in excitation and decay processes differing significantly from those seen in studies at lower energies. Our results are discussed in terms of theoretical models analogous to those used in nuclear physics, and this provides an interesting demonstration of the unity of physics
9. Bone marrow-derived multidrug resistance protein ABCB4 protects against atherosclerotic lesion development in LDL receptor knockout mice
Pennings, Marieke; Hildebrand, Reeni B.; Ye, Dan; Kunne, Cindy; van Berkel, Theo J. C.; Groen, Albert K.; van Eck, Miranda
2007-01-01
OBJECTIVE: Several members of the ATP binding cassette (ABC)-transporter super family expressed in macrophages protect against atherosclerosis by promoting macrophage cholesterol and phospholipid efflux. Systemic disruption of ABCB4 in mice results in a virtual absence of phospholipids in bile and a
10. The Flaxseed-Derived Lignan Phenolic Secoisolariciresinol Diglucoside (SDG) Protects Non-Malignant Lung Cells from Radiation Damage.
Velalopoulou, Anastasia; Tyagi, Sonia; Pietrofesa, Ralph A; Arguiri, Evguenia; Christofidou-Solomidou, Melpo
2015-12-22
Plant phenolic compounds are common dietary antioxidants that possess antioxidant and anti-inflammatory properties. Flaxseed (FS) has been reported to be radioprotective in murine models of oxidative lung damage. Flaxseed's protective properties are attributed to its main biphenolic lignan, secoisolariciresinol diglucoside (SDG). SDG is a free radical scavenger, shown in cell free systems to protect DNA from radiation-induced damage. The objective of this study was to investigate the in vitro radioprotective efficacy of SDG in murine lung cells. Protection against irradiation (IR)-induced DNA double and single strand breaks was assessed by γ-H2AX labeling and alkaline comet assay, respectively. The role of SDG in modulating the levels of cytoprotective enzymes was evaluated by qPCR and confirmed by Western blotting. Additionally, effects of SDG on clonogenic survival of irradiated cells were evaluated. SDG protected cells from IR-induced death and ameliorated DNA damage by reducing mean comet tail length and percentage of γ-H2AX positive cells. Importantly, SDG significantly increased gene and protein levels of antioxidant HO-1, GSTM1 and NQO1. Our results identify the potent radioprotective properties of the synthetic biphenolic SDG, preventing DNA damage and enhancing the antioxidant capacity of normal lung cells; thus, rendering SDG a potential radioprotector against radiation exposure.
12. Intranasal H5N1 vaccines, adjuvanted with chitosan derivatives, protect ferrets against highly pathogenic influenza intranasal and intratracheal challenge.
Alex J Mann
Full Text Available We investigated the protective efficacy of two intranasal chitosan (CSN and TM-CSN) adjuvanted H5N1 influenza vaccines against highly pathogenic avian influenza (HPAI) intratracheal and intranasal challenge in a ferret model. Six groups of 6 ferrets were intranasally vaccinated twice, 21 days apart, with either placebo, antigen alone, CSN-adjuvanted antigen, or TM-CSN-adjuvanted antigen. Homologous and intra-subtypic antibody cross-reacting responses were assessed. Ferrets were inoculated intratracheally (all treatments) or intranasally (CSN-adjuvanted and placebo treatments only) with clade 1 HPAI A/Vietnam/1194/2004 (H5N1) virus 28 days after the second vaccination and subsequently monitored for morbidity and mortality outcomes. Clinical signs were assessed, and nasal as well as throat swabs were taken daily for virology. Samples of lung tissue, nasal turbinates, brain, and olfactory bulb were analysed for the presence of virus and examined for histopathological findings. In contrast to animals vaccinated with antigen alone, the CSN- and TM-CSN-adjuvanted vaccines induced high levels of antibodies, protected ferrets from death, reduced viral replication and abrogated disease after intratracheal challenge, and in the case of CSN after intranasal challenge. In particular, the TM-CSN-adjuvanted vaccine was highly effective at eliciting protective immunity against intratracheal challenge; serologically, protective titres were demonstrable after one vaccination. The 2-dose schedule with the TM-CSN vaccine also induced cross-reactive antibodies to clade 2.1 and 2.2 H5N1 viruses. Furthermore, ferrets immunised with TM-CSN had no detectable virus in the respiratory tract or brain, whereas there were signs of virus in the throat and lungs, albeit at significantly reduced levels, in CSN-vaccinated animals. This study demonstrated for the first time that CSN- and in particular TM-CSN-adjuvanted intranasal vaccines have the potential to protect against significant
13. Spleen-dependent immune protection elicited by CpG adjuvanted reticulocyte-derived exosomes from malaria infection is associated with T cells population changes.
Lorena Martin-Jaular
2016-11-01
Full Text Available Reticulocyte-derived exosomes (rex) are 30-100 nm membrane vesicles of endocytic origin released during the maturation of reticulocytes to erythrocytes upon fusion of multivesicular bodies with the plasma membrane. Combination of CpG-ODN with rex obtained from BALB/c mice infected with the reticulocyte-prone non-lethal P. yoelii 17X malaria strain (rexPy) had been shown to induce survival and long-lasting protection. Here, we show that splenectomized mice are not protected upon rexPy+CpG immunizations and that protection is restored upon passive transfer of splenocytes obtained from animals immunized with rexPy+CpG. Notably, rexPy immunization of mice induced PD1- memory T cell expansion with an effector phenotype. Proteomic analysis of rexPy confirmed their reticulocyte origin and demonstrated the presence of parasite antigens. Our studies thus prove, for what we believe is the first time, that rex from reticulocyte-prone malarial infections are able to induce splenic long-lasting memory responses. To try to extrapolate these data to human infections, in vitro experiments with spleen cells from human transplantation donors were performed. Plasma-derived exosomes from vivax malaria patients (exPv) were actively taken up by human splenocytes and stimulated spleen cells, leading to expansion of T cells.
14. Impact of C60 fullerene on the dynamics of force-speed changes in soleus muscle of rat at ischemia-reperfusion injury.
Nozdrenko, D M; Bogutska, K I; Prylutskyy, Yu I; Korolovych, V F; Evstigneev, M P; Ritter, U; Scharff, P
2015-01-01
The effect of C60 fullerene nanoparticles (30-90 nm) on the dynamics of the force response of stimulated rat soleus muscle with ischemic pathology was investigated by the tensometric method during the first 5 hours and the first 5 days after 2 hours of ischemia and subsequent reperfusion. It was found that intravenous and intramuscular administration of C60 fullerene at a single dose of 1 mg/kg exert different therapeutic effects depending on the investigated macroparameters of muscle contraction. Intravenous administration was shown to be optimal for correcting the velocity macroparameters of contraction impaired by ischemic damage to the muscle tissue. In contrast, intramuscular administration displays a protective action with respect to motions associated with the generation of a maximal force response or with continuous contractions that elevate muscle fatigue. Hence C60 fullerene, being a strong antioxidant, may be considered a promising agent for effective therapy of pathological states of the muscle system caused by free radical processes.
15. Adsorption of alanine with heteroatom substituted fullerene for solar cell application: A DFT study.
Dheivamalar, S; Sugi, L; Ravichandran, K; Sriram, S
2018-05-14
C20 is the most important fullerene cage, and alanine is the simplest representation of a backbone unit of a protein. The adsorption feasibility of the alanine molecule on Si-doped C20 and B-doped C20 fullerenes has been studied on the basis of electronic properties calculated using density functional theory (DFT). In this work, we explore the ability of Si-doped and B-doped C20 fullerene to interact with alanine at the DFT-B3LYP/6-31G and RHF levels of theory. We find that a noticeable structural change takes place in C20 when one of its carbons is substituted with Si or B. The molecular geometry, electronic properties and vibrational analysis have also been examined for the title compounds. The NMR study reveals the aromaticity of the pure and doped fullerene compounds. The stability of the doped fullerene-alanine compound arises from hyperconjugative interactions, which lead to charge transfer and charge delocalization, a major contributor to bioactivity; these properties have been analyzed using Natural Bond Orbital (NBO) analysis. The energy gaps of the doped fullerenes decrease significantly, making them more reactive than C20 fullerene. Theoretical studies of the electronic spectra using the time-dependent density functional theory (TD-DFT) method were helpful in interpreting the observed electronic transitions. We aim to optimize the performance of solar cells by altering the frontier-orbital energy gaps. Considering all studied properties, it may be inferred that C20 fullerene is applicable as a non-linear optical (NLO) material, that its NLO property would increase on doping the fullerene with a Si or B atom, and that C19Si would be the better candidate among them. Copyright © 2018. Published by Elsevier B.V.
16. Thrombin contributes to protective immunity in pneumonia-derived sepsis via fibrin polymerization and platelet-neutrophil interactions
Claushuis, T. A. M.; de Stoppelaar, S. F.; Stroo, I.; Roelofs, J. J. T. H.; Ottenhoff, R.; van der Poll, T.; van't Veer, C.
2017-01-01
Essentials: Immunity and coagulation are linked during sepsis, but the role of thrombin is not fully elucidated. We investigated the effect of thrombin inhibition on murine Klebsiella pneumosepsis outcome. Thrombin is crucial for survival and for limiting bacterial growth in pneumonia-derived sepsis.
17. The effect of phase morphology on the nature of long-lived charges in semiconductor polymer:fullerene systems
Dou, Fei; Domingo, Ester; Sakowicz, Maciej; Rezasoltani, Elham; McCarthy-Ward, Thomas; Heeney, Martin; Zhang, Xinping; Stingelin, Natalie; Silva, Carlos
2015-01-01
In this work, we investigate the effect of phase morphology on the nature of charges in poly(2,5-bis(3-tetradecyl-thiophen-2-yl)thieno[3,2-b]thiophene) (pBTTT-C16) and phenyl-C61-butyric acid methyl ester (PC61BM) blends over timescales greater than hundreds of microseconds by quasi-steady-state photoinduced absorption spectroscopy. Specifically, we compare an essentially fully intermixed, one-phase system based on a 1 : 1 (by weight) pBTTT-C16 : PC61BM blend, known to form a co-crystal structure, with a two-phase morphology composed of relatively material-pure domains of the neat polymer and neat fullerene. The co-crystal occurs at a composition of up to 50 wt% PC61BM, because pBTTT-C16 is capable of hosting fullerene derivatives such as PC61BM in the cavities between its side chains. In contrast, the predominantly two-phase system can be obtained by manipulating a 1 : 1 polymer : fullerene blend with the assistance of a fatty acid methyl ester (dodecanoic acid methyl ester, Me12) as additive, which hinders co-crystal formation. We find that triplet excitons and polarons are generated in both phase morphologies. However, polarons are generated in the predominantly two-phase system at higher photon energy than for the structure based on the co-crystal phase. By means of a quasi-steady-state solution of a mesoscopic rate model, we demonstrate that the steady-state polaron generation efficiency and recombination rates are higher in the finely intermixed, one-phase system compared to the predominantly phase-pure, two-phase morphology. We suggest that the polarons generated in highly intermixed structures, such as the co-crystal investigated here, are localised polarons while those generated in the phase-separated polymer and fullerene systems are delocalised polarons. We expect this picture to apply generally to other organic-based heterojunctions of complex phase morphologies including donor:acceptor systems that form, for instance, molecularly mixed amorphous solid
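The quasi-steady-state analysis mentioned above can be illustrated with the simplest mesoscopic rate equation, dn/dt = G − k·n² (photogeneration balanced by bimolecular recombination), whose steady state is n = √(G/k); the parameters in this sketch are invented and only show how the recombination coefficient sets the steady-state polaron density.

```python
import math

def steady_state_density(G, k):
    """Steady state of dn/dt = G - k * n**2 (generation vs. bimolecular
    recombination): setting dn/dt = 0 gives n = sqrt(G / k)."""
    return math.sqrt(G / k)

# Invented parameters: generation rate G in cm^-3 s^-1 and recombination
# coefficient k in cm^3 s^-1. A larger k (faster recombination, as in the
# finely intermixed morphology) lowers the steady-state carrier density.
n_slow = steady_state_density(1e21, 1e-12)   # slow-recombination blend
n_fast = steady_state_density(1e21, 1e-10)   # fast-recombination blend
print(f"{n_slow:.1e} vs {n_fast:.1e} cm^-3")
```

Quasi-steady-state photoinduced absorption probes exactly this balance: the measured signal tracks n at fixed excitation, so comparing morphologies at the same G reveals differences in generation efficiency and recombination rate.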
19. Intercalated vs Non-Intercalated Morphologies in Donor-Acceptor Bulk Heterojunction Solar Cells: PBTTT:Fullerene Charge Generation and Recombination Revisited
Collado Fregoso, Elisa; Hood, Samantha N.; Shoaee, Safa; Schroeder, Bob C.; McCulloch, Iain; Kassal, Ivan; Neher, Dieter; Durrant, James R.
2017-01-01
In this contribution, we study the role of the donor:acceptor interface nanostructure upon charge separation and recombination in organic photovoltaic devices and blend films, using mixtures of PBTTT and two different fullerene derivatives (PC70BM and ICTA) as models for intercalated and non-intercalated morphologies, respectively. Thermodynamic simulations show that while the completely intercalated system exhibits a large free-energy barrier for charge separation, this barrier is significantly lower in the non-intercalated system, and almost vanishes when energetic disorder is included in the model. Despite these differences, both fs-resolved transient absorption spectroscopy (TAS) and TDCF exhibit extensive first-order losses in that system, suggesting that geminate pairs are the primary product of photoexcitation. In contrast, the system that comprises a combination of fully intercalated polymer:fullerene areas and fullerene aggregated domains (1:4 PBTTT:PC70BM), is the only one that shows slow, second-order recombination of free charges, resulting in devices with an overall higher short circuit current and fill factor. This study therefore provides a novel consideration of the role of the interfacial nanostructure and the nature of bound charges, and their impact upon charge generation and recombination.
20. Effects of carbon nanomaterials fullerene C60 and fullerol C60(OH)18–22 on gills of fish Cyprinus carpio (Cyprinidae) exposed to ultraviolet radiation
Socoowski Britto, Roberta; Longaray Garcia, Márcia; Martins da Rocha, Alessandra; Artigas Flores, Juliana; Pinheiro, Maurício V. Brant; Monserrat, José María; Ribas Ferreira, Josencler L.
2012-01-01
In consequence of their growing use and demand, the inevitable environmental presence of nanomaterials (NMs) has raised concerns about their potential deleterious effects on aquatic environments. The carbon NM fullerene (C60), which forms colloidal aggregates in water, and its water-soluble derivative fullerol (C60(OH)18–22), which possesses antioxidant properties, are known to be photo-excited by ultraviolet (UV) or visible light. To investigate their potential hazards to aquatic organisms upon exposure to UV sunlight, this study analyzed (a) the in vitro behavior of fullerene and fullerol against peroxyl radicals (ROO·) under UV-A radiation and (b) the effects of these photo-excited NMs on oxidative stress parameters in functional gills extracted from the fish Cyprinus carpio (Cyprinidae). The variables measured were the total antioxidant capacity, lipid peroxidation (TBARS), the activities of the antioxidant enzymes glutathione reductase (GR) and glutamate cysteine ligase (GCL), and the levels of the non-enzymatic antioxidant glutathione (GSH). The obtained results revealed the following: (1) both NMs behaved in vitro as antioxidants against ROO· in the dark and as pro-oxidants in the presence of UV-A, the latter effect being reversed by the addition of sodium azide, a singlet oxygen (1O2) quencher; (2) fullerene induced significant toxicity with or without UV-A incidence, an effect attributed to 1O2 generation; and (3) fullerol also decreased GCL activity and GSH formation, effects likewise linked to 1O2 formation.
1. Antioxidative Peptides Derived from Enzyme Hydrolysis of Bone Collagen after Microwave Assisted Acid Pre-Treatment and Nitrogen Protection
Jin Sun
2010-11-01
This study focused on the preparation of antioxidant peptides by enzymatic hydrolysis of bone collagen after microwave-assisted acid pre-treatment under nitrogen protection. Phosphoric acid showed the highest hydrolysis ability among the four acids tested (the others being hydrochloric, sulfuric and citric acid). The highest degree of hydrolysis (DH) was 9.5%, obtained with 4 mol/L phosphoric acid at a ratio of 1:6 under a microwave intensity of 510 W for 240 s. Neutral proteinase gave the highest DH among the four proteases tested (acid protease, neutral protease, Alcalase and papain), with optimum conditions of: (1) enzyme-to-substrate ratio, 4760 U/g; (2) substrate concentration, 4%; (3) reaction temperature, 55 °C; and (4) pH 7.0. At 4 h, DH increased significantly (P < 0.01) under nitrogen protection compared with normal microwave-assisted acid pre-treatment hydrolysis conditions. The antioxidant ability of the hydrolysate increased and reached its maximum at 3 h, whereas DH decreased dramatically after 3 h. Microwave-assisted acid pre-treatment with nitrogen protection could be a quick preparatory method for hydrolyzing bone collagen.
2. Issues regarding the U.S. F.D.A. Protective Action Guidelines and derived response levels for human food and animal feed
Denney, Bruce
1989-01-01
A review of the Food and Drug Administration's (FDA) rationale and methods for determining protective action guidelines (PAGs) and derived response levels (DRLs) (FDAa82, FDAb82) for human food and animal feed reveals the presence of ambiguous and contradictory information that should be clarified in order to improve the usefulness of the guidance. The differences in the criteria used to determine the Preventative and Emergency PAGs and DRLs, for example, are striking. The Preventative PAGs (and DRLs) are based on accepted health physics principles, e.g. risk factors, avoidance of fetal health effects, agricultural models, etc. The Emergency PAGs (and DRLs), however, are based solely on a traditional safety factor of ten. This difference in rationale becomes more conspicuous when the protective actions for these PAGs are compared: preventative protective actions involve low impact actions, e.g. removal of cattle from pasture, storage to allow for radioactive decay, etc., while emergency protective actions involve high impact actions e.g. isolating and condemning food products. These differences result in a contradiction: high impact actions, which may cause considerable problems and loss of income for farmers and food processors, are based on non-technical premises ('tradition'), while the low impact actions, which may only result in minor inconveniences to farmers and food processors, are based on solid scientific principles. Justifying or explaining these differences to farmers or to the media may be very difficult. Clearly there exists a need to review the basis and rationale upon which the Emergency PAGs and DRLs were derived in order to provide a more scientific explanation for their choice and use. In the FDA guidance (FDAa82), references are also made to ALARA and to the use of low-impact actions at doses lower than the PAGs. Although the FDA accepts and endorses the concept of keeping doses as low as reasonably achievable, the FDA does not
3. The porphyrin-fullerene nanoparticles to promote the ATP overproduction in myocardium: 25Mg2+-magnetic isotope effect.
Rezayat, S M; Boushehri, S V S; Salmanian, B; Omidvari, A H; Tarighat, S; Esmaeili, S; Sarkar, S; Amirshahi, N; Alyautdin, R N; Orlova, M A; Trushkov, I V; Buchachenko, A L; Liu, K C; Kuznetsov, D A
2009-04-01
This is the first case ever reported of fullerene-based low-toxicity nanocationite particles (porphyrin adducts of cyclohexyl fullerene-C60) designed for targeted delivery of the paramagnetic magnesium stable isotope to the heart muscle, providing a sharp clinical effect close to 80% recovery of the tissue hypoxia symptoms in less than 24 h after a single injection (0.03-0.1 LD50). The whole principle of this therapy is novel: the 25Mg2+ magnetic isotope effect selectively stimulates ATP overproduction in oxygen-depleted cells due to 25Mg2+ released by the nanoparticles. Being membranotropic cationites, these "smart nanoparticles" release the overactivating paramagnetic cations only in response to a metabolic acidic shift. The resulting positive changes in heart cell energy metabolism may help to prevent and/or treat local myocardial hypoxic disorders and, hence, protect the heart muscle from serious damage in a vast variety of hypoxia-caused clinical situations, including both doxorubicin and 1-methylnicotineamide cardiotoxic side effects. Both the pharmacokinetics and pharmacodynamics of the proposed drug make it suitable for safe and efficient administration in either single- or multi-injection (acute or chronic) therapeutic schemes.
4. Glial cell line-derived neurotrophic factor protects against high-fat diet-induced hepatic steatosis by suppressing hepatic PPAR-γ expression.
Mwangi, Simon Musyoka; Peng, Sophia; Nezami, Behtash Ghazi; Thorn, Natalie; Farris, Alton B; Jain, Sanjay; Laroui, Hamed; Merlin, Didier; Anania, Frank; Srinivasan, Shanthi
2016-01-15
Glial cell line-derived neurotrophic factor (GDNF) protects against high-fat diet (HFD)-induced hepatic steatosis in mice, however, the mechanisms involved are not known. In this study we investigated the effects of GDNF overexpression and nanoparticle delivery of GDNF in mice on hepatic steatosis and fibrosis and the expression of genes involved in the regulation of hepatic lipid uptake and de novo lipogenesis. Transgenic overexpression of GDNF in liver and other metabolically active tissues was protective against HFD-induced hepatic steatosis. Mice overexpressing GDNF had significantly reduced P62/sequestosome 1 protein levels suggestive of accelerated autophagic clearance. They also had significantly reduced peroxisome proliferator-activated receptor-γ (PPAR-γ) and CD36 gene expression and protein levels, and lower expression of mRNA coding for enzymes involved in de novo lipogenesis. GDNF-loaded nanoparticles were protective against short-term HFD-induced hepatic steatosis and attenuated liver fibrosis in mice with long-standing HFD-induced hepatic steatosis. They also suppressed the liver expression of steatosis-associated genes. In vitro, GDNF suppressed triglyceride accumulation in Hep G2 cells through enhanced p38 mitogen-activated protein kinase-dependent signaling and inhibition of PPAR-γ gene promoter activity. These results show that GDNF acts directly in the liver to protect against HFD-induced cellular stress and that GDNF may have a role in the treatment of nonalcoholic fatty liver disease.
5. Plant-derived human butyrylcholinesterase, but not an organophosphorous-compound hydrolyzing variant thereof, protects rodents against nerve agents
Geyer, Brian C.; Kannan, Latha; Garnaud, Pierre-Emmanuel; Broomfield, Clarence A.; Cadieux, C. Linn; Cherni, Irene; Hodgins, Sean M.; Kasten, Shane A.; Kelley, Karli; Kilbourne, Jacquelyn; Oliver, Zeke P.; Otto, Tamara C.; Puffenberger, Ian; Reeves, Tony E.; Robbins, Neil
2010-01-01
The concept of using cholinesterase bioscavengers for prophylaxis against organophosphorous nerve agents and pesticides has progressed from the bench to clinical trial. However, the supply of the native human proteins is either limited (e.g., plasma-derived butyrylcholinesterase and erythrocytic acetylcholinesterase) or nonexistent (synaptic acetylcholinesterase). Here we identify a unique form of recombinant human butyrylcholinesterase that mimics the native enzyme assembly into tetramers; t...
6. Recombinant monovalent llama-derived antibody fragments (VHH) to rotavirus VP6 protect neonatal gnotobiotic piglets against human rotavirus-induced diarrhea.
Vega, Celina G; Bok, Marina; Vlasova, Anastasia N; Chattha, Kuldeep S; Gómez-Sebastián, Silvia; Nuñez, Carmen; Alvarado, Carmen; Lasa, Rodrigo; Escribano, José M; Garaicoechea, Lorena L; Fernandez, Fernando; Bok, Karin; Wigdorovitz, Andrés; Saif, Linda J; Parreño, Viviana
2013-01-01
Group A Rotavirus (RVA) is the leading cause of severe diarrhea in children. The aims of the present study were to determine the neutralizing activity of VP6-specific llama-derived single domain nanoantibodies (VHH nanoAbs) against different RVA strains in vitro and to evaluate the ability of G6P[1] VP6-specific llama-derived single domain nanoantibodies (VHH) to protect against human rotavirus in gnotobiotic (Gn) piglets experimentally inoculated with virulent Wa G1P[8] rotavirus. Supplementation of the daily milk diet with 3B2 VHH clone produced using a baculovirus vector expression system (final ELISA antibody -Ab- titer of 4096; virus neutralization -VN- titer of 256) for 9 days conferred full protection against rotavirus associated diarrhea and significantly reduced virus shedding. The administration of comparable levels of porcine IgG Abs only protected 4 out of 6 of the animals from human RVA diarrhea but significantly reduced virus shedding. In contrast, G6P[1]-VP6 rotavirus-specific IgY Abs purified from eggs of hyperimmunized hens failed to protect piglets against human RVA-induced diarrhea or virus shedding when administering similar quantities of Abs. The oral administration of VHH nanoAb neither interfered with the host's isotype profiles of the Ab secreting cell responses to rotavirus, nor induced detectable host Ab responses to the treatment in serum or intestinal contents. This study shows that the oral administration of rotavirus VP6-VHH nanoAb is a broadly reactive and effective treatment against rotavirus-induced diarrhea in neonatal pigs. Our findings highlight the potential value of a broad neutralizing VP6-specific VHH nanoAb as a treatment that can complement or be used as an alternative to the current strain-specific RVA vaccines. Nanobodies could also be scaled-up to develop pediatric medication or functional food like infant milk formulas that might help treat RVA diarrhea.
8. Application of numerical methods, derivatives theory and Monte Carlo simulation in evaluating BM&F BOVESPA's POP (Protected and Participative Investment)
Giuliano Carrozza Uzêda Iorio de Souza
2011-08-01
This article presents a practical case in which two of the most efficient numerical procedures developed for derivative analysis are applied to evaluate the POP (Investment Protection with Participation), a structured operation created by the São Paulo Stock Exchange (BM&FBOVESPA). The first procedure solves the differential equation by the implicit finite-differences method; its characteristics make it possible to run sensitivity analyses as well as price estimation. In the second, the problem is solved by Monte Carlo simulation, which facilitates the identification of the probability related to the exercise of the embedded options.
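As a hedged illustration of the second procedure (the simplified payoff, all parameter values and the function name are hypothetical, not BM&FBOVESPA's actual POP contract terms), a Monte Carlo valuation of a capital-protected, participating payoff under risk-neutral geometric Brownian motion might look like:

```python
# Illustrative Monte Carlo valuation of a simplified protected-participation
# payoff (hypothetical terms, not the actual POP specification):
#   payoff = protection * S0 + participation * max(S_T - S0, 0)
import math
import random

def mc_price(S0, r, sigma, T, protection, participation,
             n_paths=100_000, seed=1):
    rng = random.Random(seed)
    disc = math.exp(-r * T)  # risk-free discount factor
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Terminal price under risk-neutral geometric Brownian motion
        ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += protection * S0 + participation * max(ST - S0, 0.0)
    return disc * total / n_paths

# Illustrative numbers only:
price = mc_price(S0=100.0, r=0.1, sigma=0.3, T=1.0,
                 protection=1.0, participation=0.5)
```

With participation set to zero the payoff is a pure zero-coupon protection leg, so the simulated price collapses to the discounted protected amount, which is a convenient sanity check on the simulator.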
9. Minocycline and doxycycline, but not other tetracycline-derived compounds, protect liver cells from chemical hypoxia and ischemia/reperfusion injury by inhibition of the mitochondrial calcium uniporter
Schwartz, Justin; Holmuhamedov, Ekhson; Zhang, Xun; Lovelace, Gregory L.; Smith, Charles D.; Lemasters, John J.
2013-01-01
Minocycline, a tetracycline-derived compound, mitigates damage caused by ischemia/reperfusion (I/R) injury. Here, 19 tetracycline-derived compounds were screened in comparison to minocycline for their ability to protect hepatocytes against damage from chemical hypoxia and I/R injury. Cultured rat hepatocytes were incubated with 50 μM of each tetracycline-derived compound 20 min prior to exposure to 500 μM iodoacetic acid plus 1 mM KCN (chemical hypoxia). In other experiments, hepatocytes were incubated in anoxic Krebs–Ringer–HEPES buffer at pH 6.2 for 4 h prior to reoxygenation at pH 7.4 (simulated I/R). Tetracycline-derived compounds were added 20 min prior to reperfusion. Ca2+ uptake was measured in isolated rat liver mitochondria incubated with Fluo-5N. Cell killing after 120 min of chemical hypoxia measured by propidium iodide (PI) fluorometry was 87%, which decreased to 28% and 42% with minocycline and doxycycline, respectively. After I/R, cell killing at 120 min decreased from 79% with vehicle to 43% and 49% with minocycline and doxycycline. No other tested compound decreased killing. Minocycline and doxycycline also inhibited mitochondrial Ca2+ uptake and suppressed the Ca2+-induced mitochondrial permeability transition (MPT), the penultimate cause of cell death in reperfusion injury. Ru360, a specific inhibitor of the mitochondrial calcium uniporter (MCU), also decreased cell killing after hypoxia and I/R and blocked mitochondrial Ca2+ uptake and the MPT. Other proposed mechanisms, including mitochondrial depolarization and matrix metalloprotease inhibition, could not account for cytoprotection. Taken together, these results indicate that minocycline and doxycycline are cytoprotective by way of inhibition of the MCU. - Highlights: • Minocycline and doxycycline are the only cytoprotective tetracyclines of those tested. • Cytoprotective tetracyclines inhibit the MPT and mitochondrial calcium and iron uptake. • Cytoprotective tetracyclines protect
11. Deriving site-specific soil clean-up values for metals and metalloids: rationale for including protection of soil microbial processes.
Kuperman, Roman G; Siciliano, Steven D; Römbke, Jörg; Oorts, Koen
2014-07-01
Although it is widely recognized that microorganisms are essential for sustaining soil fertility, structure, nutrient cycling, groundwater purification, and other soil functions, soil microbial toxicity data were excluded from the derivation of Ecological Soil Screening Levels (Eco-SSL) in the United States. Among the reasons for such exclusion were claims that microbial toxicity tests were too difficult to interpret because of the high variability of microbial responses, uncertainty regarding the relevance of the various endpoints, and functional redundancy. Since the release of the first draft of the Eco-SSL Guidance document by the US Environmental Protection Agency in 2003, soil microbial toxicity testing and its use in ecological risk assessments have substantially improved. A wide range of standardized and nonstandardized methods became available for testing chemical toxicity to microbial functions in soil. Regulatory frameworks in the European Union and Australia have successfully incorporated microbial toxicity data into the derivation of soil threshold concentrations for ecological risk assessments. This article provides the 3-part rationale for including soil microbial processes in the development of soil clean-up values (SCVs): 1) presenting a brief overview of relevant test methods for assessing microbial functions in soil, 2) examining data sets for Cu, Ni, Zn, and Mo that incorporated soil microbial toxicity data into regulatory frameworks, and 3) offering recommendations on how to integrate the best available science into the method development for deriving site-specific SCVs that account for bioavailability of metals and metalloids in soil. Although the primary focus of this article is on the development of the approach for deriving SCVs for metals and metalloids in the United States, the recommendations provided in this article may also be applicable in other jurisdictions that aim at developing ecological soil threshold values for protection of
12. Electronic structure of multi-walled carbon fullerenes
Doore, Keith; Cook, Matthew; Clausen, Eric; Lukashev, Pavel V; Kidd, Tim E; Stollenwerk, Andrew J
2017-01-01
Despite an enormous amount of research on carbon based nanostructures, relatively little is known about the electronic structure of multi-walled carbon fullerenes, also known as carbon onions. In part, this is due to the very high computational expense involved in estimating the electronic structure of large molecules. At the same time, experimentally, the exact crystal structure of the carbon onion is usually unknown, and therefore one relies on qualitative arguments only. In this work we present the results of a computational study on a series of multi-walled fullerenes and compare their electronic structures to experimental data. Experimentally, the carbon onions were fabricated using ultrasonic agitation of isopropanol alcohol and deposited onto the surface of highly ordered pyrolytic graphite using a drop cast method. Scanning tunneling microscopy images indicate that the carbon onions produced using this technique are ellipsoidal with dimensions on the order of 10 nm. The majority of differential tunneling spectra acquired on individual carbon onions are similar to that of graphite with the addition of molecular-like peaks, indicating that these particles span the transition between molecules and bulk crystals. A smaller, yet sizable number exhibited a semiconducting gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) levels. These results are compared with the electronic structure of different carbon onion configurations calculated from first principles. Similar to the experimental results, the majority of these configurations are metallic with a minority behaving as semiconductors. Analysis of the configurations investigated here reveals that each carbon onion exhibiting an energy band gap consisted only of non-metallic fullerene layers, indicating that the interlayer interaction is not significant enough to affect the total density of states in these structures.
13. In vitro antioxidant potential and deoxyribonucleic acid protecting activity of CNB-001, a novel pyrazole derivative of curcumin
Richard L Jayaraj
2014-01-01
14. Benefit of magnesium-25 carrying porphyrin-fullerene nanoparticles in experimental diabetic neuropathy
2010-01-01
Diabetic neuropathy (DN) is a debilitating disorder occurring in most diabetic patients, without a viable treatment yet. The present work examined the protective effect of 25Mg-PMC16 nanoparticles (porphyrin adducts of cyclohexyl fullerene-C60) in a rat model of streptozotocin (STZ)-induced DN. 25Mg-PMC16 (0.5 of the median lethal dose, LD50) was administered intravenously on two consecutive days before intraperitoneal injection of STZ (45 mg/kg). 24Mg-PMC16 and MgCl2 were used as controls. Blood 2,3-diphosphoglycerate (2,3-DPG), oxidative stress biomarkers, and the adenosine triphosphate (ATP) level in dorsal root ganglion (DRG) neurons were determined as biomarkers of DN. Results indicated that 2,3-DPG and ATP decreased whereas oxidative stress increased upon induction of DN, all of which were improved in 25Mg-PMC16-treated animals. No significant changes were observed with administration of 24Mg-PMC16 or MgCl2 in DN rats. It is concluded that in DN, oxidative stress initiates injuries to DRG neurons that finally result in death of neurons, whereas administration of 25Mg-PMC16, by releasing Mg and increasing ATP, acts protectively. PMID:20957114
15. Peculiarities of fullerenes condensation from molecular beam in vacuum
Neluba P. L.
2011-12-01
The condensation of C60 fullerenes in vacuum on unheated Si, GaAs and mica (isinglass) substrates was investigated. Atomic force microscopy, Raman scattering and measurements of mechanical stresses in the films were used. It is established that the C60 molecule can decay on the substrates, forming other carbon structures in the condensate, without any supplementary physical action on the sublimated beam in the evaporator–substrate space. A possibility was found to increase the grain size and reduce the mechanical stresses in the condensate.
16. Incomplete Exciton Harvesting from Fullerenes in Bulk Heterojunction Solar Cells
Burkhard, George F.
2009-12-09
We investigate the internal quantum efficiencies (IQEs) of high efficiency poly-3-hexylthiophene:[6,6]-phenyl-C61-butyric acid methyl ester (P3HT:PCBM) solar cells and find them to be lower at wavelengths where the PCBM absorbs. Because the exciton diffusion length in PCBM is too small, excitons generated in PCBM decay before reaching the donor-acceptor interface. This result has implications for most state of the art organic solar cells, since all of the most efficient devices use fullerenes as electron acceptors. © 2009 American Chemical Society.
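As a toy illustration of why a short exciton diffusion length limits harvesting (the simple exp(−d/L_D) survival factor and all numerical values below are illustrative assumptions, not results from the paper):

```python
# Toy estimate: fraction of excitons generated a distance d from a
# donor-acceptor interface that reach it, using the common 1-D
# survival heuristic exp(-d / L_D), where L_D is the diffusion length.
import math

def harvest_fraction(d_nm, L_D_nm):
    """Survival probability for an exciton created d_nm from the interface."""
    return math.exp(-d_nm / L_D_nm)

# Hypothetical numbers: a ~5 nm diffusion length (short, as argued for
# PCBM) versus a ~20 nm one, for excitons generated 20 nm away.
short = harvest_fraction(20.0, 5.0)   # short diffusion length
long_ = harvest_fraction(20.0, 20.0)  # longer diffusion length
```

Even in this crude picture, shrinking L_D from 20 nm to 5 nm cuts the harvested fraction by more than an order of magnitude, consistent with the qualitative argument that excitons generated deep in PCBM decay before reaching the interface.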
17. Synthesis of endohedral iron-fullerenes by ion implantation
Minezaki, H.; Ishihara, S.; Uchida, T.; Muramatsu, M.; Kitagawa, A.; Rácz, R.; Biri, S.; Asaji, T.; Kato, Y.; Yoshida, Y.
2014-01-01
In this paper, we discuss the results of our study of the synthesis of endohedral iron-fullerenes. A low-energy Fe+ ion beam was irradiated onto a C60 thin film using a deceleration system. The Fe+-irradiated C60 thin film was analyzed by high-performance liquid chromatography and laser desorption/ionization time-of-flight mass spectrometry. We investigated the performance of the deceleration system for using a low-energy Fe+ beam. In addition, we attempted to isolate the synthesized material from the Fe+-irradiated C60 thin film by high-performance liquid chromatography.
19. Adsorption characteristics of heat-treated fullerene nano-whiskers
Wang, Z-M [Energy Storage Materials Group, Energy Technology Research Institute, National Institute of Advanced Industrial Science and Technology, 16-1 Onogawa, Tsukuba, Ibaraki 305-8569 (Japan); Kato, R; Hotta, K; Miyazawa, K [Fullerene Engineering Group, Advanced Nano Materials Laboratory, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044 (Japan)], E-mail: zm-wang@aist.go.jp
2009-04-01
Fullerene nanowhiskers (FNWs) were synthesized by the liquid-liquid interfacial precipitation method, and the adsorption properties of their heat-treated samples were characterized. It was found that FNWs vacuum-annealed at high temperature are microporous materials in which ultramicropores, in particular, are highly developed. Porosity even remains in samples after heat treatment at temperatures higher than 2273 K. The presence of ultramicroporosity is indicative of the molecular-sieving properties of the vacuum-annealed FNW materials, suggesting their possible application as new materials for gas separation and gas storage.
20. On the Stability of Fullerene C60 in Aqueous Medium
Gál, Miroslav; Kolivoška, Viliam; Kavan, Ladislav; Kocábová, Jana; Pospíšil, Lubomír; Hromadová, Magdaléna; Zukalová, Markéta; Sokolová, Romana; Kielar, F.
2012-01-01
Vol. 20, No. 8 (2012), pp. 737-742, ISSN 1536-383X. R&D Projects: GA ČR GP203/09/P502; GA ČR GA203/09/1607; GA ČR GA203/08/1157; GA ČR GA203/09/0705; GA AV ČR IAA400400804; GA AV ČR KAN200100801. Institutional support: RVO:61388955. Keywords: fullerenes; AFM; dispersion. Subject RIV: CG - Electrochemistry. Impact factor: 0.764, year: 2012
1. Disorder effect on carrier mobility in Fullerene organic semiconductor
Mendil, N; Daoudi, M; Berkai, Z; Belghachi, A
2015-01-01
The critical factor limiting the efficiency of organic electronic devices is the low charge-carrier mobility, which is attributed to disorder in organic films. In this context, we have studied the effect of disorder on the electron mobility in an organic Schottky diode based on the fullerene C60. Our results show that the mobility is a sensitive probe of structural phase transitions and of the underlying order-disorder behaviour of C60, and that disorder is one reason behind the low mobility, which takes the value 1.4×10⁻² cm²/V·s above the critical temperature Tc = 289 K.
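As a quick, illustrative sanity check on the scale of this mobility (not part of the paper), one can estimate the carrier transit time across a thin C60 film. The field and film thickness below are assumed, typical-device values; only the mobility comes from the abstract.

```python
# Illustrative back-of-envelope check on the mobility quoted above.
# mu is the value from the abstract; the field E and film thickness d
# are assumed typical-device numbers, NOT taken from the paper.

mu = 1.4e-2   # electron mobility of C60 above Tc, cm^2/(V*s)
E  = 1.0e5    # assumed electric field, V/cm  (about 10 V across 1 um)
d  = 1.0e-5   # assumed film thickness, cm    (100 nm)

v_drift   = mu * E        # drift velocity, cm/s
t_transit = d / v_drift   # time to cross the film, s

print(f"drift velocity : {v_drift:.3g} cm/s")
print(f"transit time   : {t_transit:.3g} s")
```

At this mobility an electron needs on the order of nanoseconds to cross a 100 nm film, several orders of magnitude slower than in crystalline inorganic semiconductors.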
2. Incomplete Exciton Harvesting from Fullerenes in Bulk Heterojunction Solar Cells
Burkhard, George F.; Hoke, Eric T.; Scully, Shawn R.; McGehee, Michael D.
2009-01-01
We investigate the internal quantum efficiencies (IQEs) of high efficiency poly-3-hexylthiophene:[6,6]-phenyl-C61-butyric acid methyl ester (P3HT:PCBM) solar cells and find them to be lower at wavelengths where the PCBM absorbs. Because the exciton diffusion length in PCBM is too small, excitons generated in PCBM decay before reaching the donor-acceptor interface. This result has implications for most state of the art organic solar cells, since all of the most efficient devices use fullerenes as electron acceptors. © 2009 American Chemical Society.
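The IQE analysis described above rests on the standard relation IQE(λ) = EQE(λ) / A(λ), where A(λ) is the fraction of incident light absorbed in the active layer. A minimal sketch of this bookkeeping follows; the spectra below are invented illustrative numbers, not data from the paper.

```python
# Minimal sketch of an internal-quantum-efficiency calculation:
# IQE(wavelength) = EQE(wavelength) / absorptance(wavelength).
# The EQE and absorptance values below are invented for illustration;
# they are NOT measured data from the paper.

def iqe(eqe, absorptance):
    """Pointwise IQE from external QE and active-layer absorptance."""
    return [q / a for q, a in zip(eqe, absorptance)]

wavelengths = [450, 500, 550, 600]          # nm
eqe         = [0.45, 0.60, 0.62, 0.50]      # external quantum efficiency
absorptance = [0.75, 0.80, 0.78, 0.70]      # fraction absorbed in active layer

for wl, q in zip(wavelengths, iqe(eqe, absorptance)):
    print(f"{wl} nm: IQE = {q:.2f}")
```

A dip of IQE at wavelengths where one component (here PCBM) dominates the absorptance is exactly the signature the authors use to infer incomplete exciton harvesting from that component.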
3. All-Fullerene-Based Cells for Nonaqueous Redox Flow Batteries.
Friedl, Jochen; Lebedeva, Maria A; Porfyrakis, Kyriakos; Stimming, Ulrich; Chamberlain, Thomas W
2018-01-10
Redox flow batteries have the potential to revolutionize our use of intermittent sustainable energy sources such as solar and wind power by storing the energy in liquid electrolytes. Our concept study utilizes a novel electrolyte system, exploiting derivatized fullerenes as both anolyte and catholyte species in a series of battery cells, including a symmetric, single species system which alleviates the common problem of membrane crossover. The prototype multielectron system, utilizing molecular based charge carriers, made from inexpensive, abundant, and sustainable materials, principally, C and Fe, demonstrates remarkable current and energy densities and promising long-term cycling stability.
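For context on the current- and energy-density figures of merit mentioned above, the volumetric charge capacity of a flow-battery electrolyte follows from Faraday's law, Q = n·F·c. A small sketch; the electron count and concentration are assumed illustration values, not numbers from the paper.

```python
# Volumetric charge capacity of a flow-battery electrolyte from
# Faraday's law, Q = n * F * c.  The electron count and concentration
# below are assumed illustrative values, NOT figures from the paper.

F = 96485.0  # Faraday constant, C/mol

def capacity_ah_per_l(n_electrons, conc_mol_per_l):
    """Charge capacity in Ah per litre of electrolyte."""
    coulombs_per_litre = n_electrons * F * conc_mol_per_l
    return coulombs_per_litre / 3600.0  # coulombs -> ampere-hours

# e.g. a hypothetical two-electron fullerene derivative at 0.1 mol/L:
print(f"capacity: {capacity_ah_per_l(2, 0.1):.2f} Ah/L")
```

The linear dependence on n is why multielectron charge carriers such as fullerene derivatives are attractive: each extra accessible redox state multiplies the capacity at fixed concentration.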
4. Biocompatibility and Corrosion Protection Behaviour of Hydroxyapatite Sol-Gel-Derived Coatings on Ti6Al4V Alloy
El Hadad, Amir A.; Peón, Eduardo; García-Galván, Federico R.; Barranco, Violeta; Parra, Juan; Jiménez-Morales, Antonia; Galván, Juan Carlos
2017-01-01
The aim of this work was to prepare hydroxyapatite coatings (HAp) by a sol-gel method on Ti6Al4V alloy and to study the bioactivity, biocompatibility and corrosion protection behaviour of these coatings in the presence of simulated body fluids (SBFs). Thermogravimetric/Differential Thermal Analyses (TG/DTA) and X-ray Diffraction (XRD) have been applied to obtain information about the phase transformations, mass loss, identification of the phases developed, crystallite size and degree of crystallinity of the obtained HAp powders. Fourier Transform Infrared Spectroscopy (FTIR) has been utilized for studying the functional groups of the prepared structures. The surface morphology of the resulting HAp coatings was studied by Scanning Electron Microscopy (SEM). The bioactivity was evaluated by soaking the HAp-coatings/Ti6Al4V system in Kokubo’s Simulated Body Fluid (SBF) applying Inductively Coupled Plasma (ICP) spectrometry. 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) and Alamar blue cell viability assays were used to study the biocompatibility. Finally, the corrosion behaviour of the HAp-coatings/Ti6Al4V system was researched by means of Electrochemical Impedance Spectroscopy (EIS). The obtained results showed that the prepared powders were nanocrystalline HAp with only small deviations from that present in human bone. All the prepared HAp coatings deposited on Ti6Al4V showed well-behaved biocompatibility, good bioactivity and corrosion protection properties. PMID:28772455
6. IN0523 (Urs-12-ene-3α,24β-diol) a plant based derivative of boswellic acid protect Cisplatin induced urogenital toxicity
Singh, Amarinder; Arvinda, S; Singh, Surjeet; Suri, Jyotsna; Koul, Surinder; Mondhe, Dilip M.; Singh, Gurdarshan; Vishwakarma, Ram
2017-01-01
The limiting factor for the use of Cisplatin in the treatment of different types of cancer is its toxicity, and more specifically its urogenital toxicity. Oxidative stress is a well-known phenomenon associated with Cisplatin toxicity. In the Cisplatin-treated group, abnormal animal behavior, decreased body weight, cellular and sub-cellular changes in the kidney, and sperm abnormalities were observed. Our investigation revealed that Cisplatin, when administered in combination with a natural product derivative (Urs-12-ene-3α,24β-diol, labeled IN0523), resulted in significant restoration of body weight and protection against the pathological alterations caused by Cisplatin to the kidney and testis. Sperm count and motility were significantly restored to near normal. Cisplatin caused depletion of the defense enzymes glutathione peroxidase, catalase and superoxide dismutase, which were restored close to normal by treatment with IN0523. A reduction in the excessive lipid peroxidation induced by Cisplatin was also found on treatment with IN0523. The results suggest that IN0523 is a potential candidate for ameliorating Cisplatin-induced toxicity in the kidney and testes at a dose of 100 mg/kg p.o., via inhibiting the oxidative stress/redox imbalance and possibly improving the efflux mechanism. - Highlights: • Synthesis of a novel boswellic acid derivative (IN0523) • Counters oxidative stress induced by Cisplatin • Protects against urogenital toxicity induced by Cisplatin
8. Mechanism of plasma-arc formation of fullerenes from coal and related materials
Pang, L S.K.; Wilson, M A; Quezada, R A [CSIRO Petroleum, North Ryde (Australia)]; and others
1996-12-31
When an arc is struck across graphite or coal electrodes in a helium atmosphere, several products are formed, including soot containing fullerenes. The mechanism by which fullerenes and nanotubes are formed is not understood. At arc temperatures exceeding 3000°C, highly ordered fullerenes might be expected to be less stable than graphite, and hence fullerene production is believed to proceed in cooler regions at the edge of the arc. There is irrefutable evidence that [C60]-fullerene grows in a plasma from atomic carbon vapour or equivalent. When ¹³C-labelled carbon powder is packed into the anode, the fullerenes produced contain a statistical distribution of ¹³C atoms. This implies that graphite has split into small units, predominantly C1 or C2, in the plasma, and that these units are involved in fullerene formation. When coal or other organic materials are used in the anode, weaker bonds are present, which may break preferentially. As a result, fragments larger than C1 and C2 units can exist in the plasma. This paper demonstrates the existence of such larger fragments when various coals are used, which implies that fullerenes can be formed from units larger than C1 and C2. The distribution of polycyclic hydrocarbons formed depends very much on the structure of the coal used for the arcing experiments. The distribution of the natural-abundance ¹³C/¹²C ratios in the fullerene products further supports this evidence.
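The isotope-scrambling argument above can be made concrete: if a C60 cage assembles from fully scrambled C1/C2 units, the number of ¹³C atoms per cage should follow a binomial distribution B(60, p) in the ¹³C fraction p of the feedstock. A small sketch, assuming the natural-abundance value p ≈ 0.0107:

```python
# If C60 assembles from fully scrambled atomic/diatomic carbon, the number
# of 13C atoms per cage follows a binomial distribution B(60, p), where p
# is the 13C fraction of the feedstock (natural abundance assumed here).

from math import comb

def p_k_heavy_atoms(k, n=60, p=0.0107):
    """Probability that a Cn cage contains exactly k 13C atoms."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

for k in range(4):
    print(f"P({k} x 13C in C60) = {p_k_heavy_atoms(k):.3f}")
```

A measured mass-spectral isotope pattern that matches this distribution supports full scrambling; deviations toward intact multi-atom fragments would skew the pattern away from the binomial prediction.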
9. Thrombospondin-1 Partly Mediates the Cartilage Protective Effect of Adipose-Derived Mesenchymal Stem Cells in Osteoarthritis
Marie Maumus
2017-11-01
Objective: Assuming that mesenchymal stem cells (MSCs) respond to the osteoarthritic joint environment to exert a chondroprotective effect, we aimed at investigating the molecular response set up by MSCs after priming by osteoarthritic chondrocytes in cocultures. Methods: We used primary human osteoarthritic chondrocytes and adipose stem cells (ASCs) in mono- and cocultures and performed a high-throughput secretome analysis. Among secreted proteins differentially induced in cocultures, we identified thrombospondin-1 (THBS1) as a potential candidate that could be involved in the chondroprotective effect of ASCs. Results: Secretome analysis revealed significant induction of THBS1 in ASC/chondrocyte cocultures at the mRNA and protein levels. We showed that THBS1 was upregulated at late stages of MSC differentiation toward chondrocytes and that recombinant THBS1 (rTHBS1) exerted a prochondrogenic effect on MSCs, indicating a role of THBS1 during chondrogenesis. However, compared to control ASCs, siTHBS1-transfected ASCs did not decrease the expression of hypertrophic and inflammatory markers in osteoarthritic chondrocytes, suggesting that THBS1 was not involved in the reversion of the osteoarthritic phenotype. Nevertheless, downregulation of THBS1 in ASCs reduced their immunosuppressive activity, which was consistent with the anti-inflammatory role of rTHBS1 on T lymphocytes. THBS1 function was then evaluated in the collagenase-induced OA model by comparing siTHBS1-transfected and control ASCs. The protective effect of ASCs, evaluated by histological and histomorphological analysis of cartilage and bone, was not seen with siTHBS1-transfected ASCs. Conclusion: Our data suggest that THBS1 did not exert a direct protective effect on chondrocytes but might reduce inflammation, subsequently explaining the therapeutic effect of ASCs in OA.
10. Human-derived physiological heat shock protein 27 complex protects brain after focal cerebral ischemia in mice.
Shinichiro Teramoto
Although challenging, neuroprotective therapies for ischemic stroke remain an interesting strategy for countering ischemic injury and suppressing brain tissue damage. Among potential neuroprotective molecules, heat shock protein 27 (HSP27) is a strong cell death suppressor. To assess the neuroprotective effects of HSP27 in a mouse model of transient middle cerebral artery occlusion, we purified a "physiological" HSP27 (hHSP27) from normal human lymphocytes. hHSP27 differed from recombinant HSP27 in that it formed dimeric, tetrameric, and multimeric complexes, was phosphorylated, and contained small amounts of αβ-crystallin and HSP20. Mice received intravenous injections of hHSP27 following focal cerebral ischemia. Infarct volume, neurological deficit scores, physiological parameters, and immunohistochemical analyses were evaluated 24 h after reperfusion. Intravenous injections of hHSP27 1 h after reperfusion significantly reduced infarct size and improved neurological deficits. Injected hHSP27 was localized in neurons on the ischemic side of the brain. hHSP27 suppressed neuronal cell death resulting from cytochrome c-mediated caspase activation, oxidative stress, and inflammatory responses. Recombinant HSP27 (rHSP27), which was artificially expressed and purified from Escherichia coli, and dephosphorylated hHSP27 did not have brain protective effects, suggesting that the phosphorylation of hHSP27 may be important for neuroprotection after ischemic insults. The present study suggests that hHSP27 with posttranslational modifications provided neuroprotection against ischemia/reperfusion injury and that the protection was mediated through the inhibition of apoptosis, oxidative stress, and inflammation. Intravenously injected human HSP27 should be explored for the treatment of acute ischemic strokes.
11. Engrafted human induced pluripotent stem cell-derived anterior specified neural progenitors protect the rat crushed optic nerve.
Leila Satarian
BACKGROUND: Degeneration of retinal ganglion cells (RGCs) is a common occurrence in several eye diseases. This study examined the functional improvement and protection of host RGCs, in addition to the survival, integration and neuronal differentiation capabilities of anterior specified neural progenitors (NPs), following intravitreal transplantation. METHODOLOGY/PRINCIPAL FINDINGS: NPs were produced under defined conditions from human induced pluripotent stem cells (hiPSCs) and transplanted into rats whose optic nerves had been crushed (ONC). hiPSCs were induced to differentiate into anterior specified NPs by the use of Noggin and retinoic acid. The hiPSC-NPs were labeled by green fluorescent protein or the fluorescent tracer 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DiI) and injected two days after induction of ONC in hooded rats. Functional analysis according to visual evoked potential recordings showed significant amplitude recovery in animals transplanted with hiPSC-NPs. Retrograde labeling by an intra-collicular DiI injection showed significantly higher numbers of RGCs and spared axons in ONC rats treated with hiPSC-NPs or their conditioned medium (CM). The analysis of the CM of hiPSC-NPs showed the secretion of ciliary neurotrophic factor, basic fibroblast growth factor, and insulin-like growth factor. Optic nerves of cell-transplanted groups also had increased GAP43 immunoreactivity and myelin staining by FluoroMyelin™, which imply protection of axons and myelin. At 60 days post-transplantation hiPSC-NPs were integrated into the ganglion cell layer of the retina and expressed neuronal markers. CONCLUSIONS/SIGNIFICANCE: The transplantation of anterior specified NPs may improve optic nerve injury through neuroprotection and differentiation into neuronal lineages. These NPs possibly provide a promising new therapeutic approach for traumatic optic nerve injuries and loss of RGCs caused by other diseases.
Continuum Navier-Stokes modelling of water flow past fullerene molecules
Walther, J. H.; Popadic, A.; Koumoutsakos, P.
We present continuum simulations of water flow past fullerene molecules. The governing Navier-Stokes equations are complemented with the Navier slip boundary condition with a slip length that is extracted from related molecular dynamics simulations. We find that several quantities of interest … as computed by the present model are in good agreement with results from atomistic and atomistic-continuum simulations at a fraction of the computational cost. We simulate the flow past a single fullerene and an array of fullerenes and demonstrate that such nanoscale flows can be computed efficiently …
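The Navier slip boundary condition referred to above has the standard textbook form below (not quoted from the paper): the tangential velocity at the wall is proportional to the wall-normal gradient of the tangential velocity, with the proportionality constant being the slip length extracted from molecular dynamics.

```latex
% Navier slip boundary condition at the fullerene surface:
u_t \big|_{\mathrm{wall}} = L_s \left. \frac{\partial u_t}{\partial n} \right|_{\mathrm{wall}}
% u_t : tangential fluid velocity,  n : wall-normal coordinate,
% L_s : slip length.  L_s = 0 recovers the classical no-slip condition.
```

Setting the slip length from a molecular-level calculation is what lets the continuum model reproduce atomistic results at much lower cost.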
13. C60 fullerene prevents genotoxic effects of doxorubicin in human lymphocytes in vitro
K. S. Afanasieva
2015-02-01
The self-ordering of C60 fullerene, doxorubicin and their mixture precipitated from aqueous solutions was investigated using atomic-force microscopy. The results suggest complexation between the two compounds. The genotoxicity of doxorubicin in complex with C60 fullerene (C60+Dox) was evaluated in vitro with the comet assay using human lymphocytes. The obtained results show that C60 fullerene prevents the toxic effect of Dox in normal cells and, thus, the C60+Dox complex might be proposed for biomedical application.
15. Effect of fullerene C60 on ATPase activity and superprecipitation of skeletal muscle actomyosin
K. S. Andreichenko
2013-04-01
The creation of new biocompatible nanomaterials that can exhibit specific biological effects is an important complex problem that requires the use of the latest accomplishments of biotechnology. The effect of pristine water-soluble fullerene C60 on the ATPase activity and superprecipitation reaction of rabbit skeletal muscle natural actomyosin has been revealed; namely, an increase of actomyosin superprecipitation and of Mg2+-, Ca2+- and K+-ATPase activity by fullerene was observed. We conclude that this finding offers a real possibility for the regulation of contraction-relaxation of skeletal muscle with fullerene C60.
16. Fullerene-doped conducting polymers: effects of enhanced photoconductivity and quenched photoluminescence
Yoshino, K.; Yin, X.H.; Muro, K.; Kiyomatsu, S.; Morita, S.; Zakhidov, A.A.; Noguchi, T.; Ohnishi, T.
1993-01-01
It is found that fullerenes (C60, C70), due to their strong electron-accepting abilities, can be hole generators in conducting polymers, sensitizing photoinduced charge transfer. Here we report that the photoconductivity of poly(2,5-dialkoxy-p-phenylene-vinylene) (OO-PPV) is remarkably enhanced, by several orders of magnitude, upon introduction of several mol% of C60. Positive polarons (P+), photogenerated with increased efficiency due to autoionization of excitons and/or photopumping from the fullerene, are considered to be responsible for the enhanced photoconductivity. The photoluminescence of the polymer is strongly quenched upon C60 doping due to dissociation of excitons accompanied by electron transfer to the fullerene.
17. Procedure of identification of fullerenes isolated from iron-carbon alloys
Zakirnichnaya, M.M.
2001-01-01
A method for isolating fullerenes from the structure of iron-carbon alloys and identifying them using physical methods that allow determination of the different parameters of nanoobjects is developed. Qualitative (mass spectrometry of positive and negative ions, small-angle X-ray scattering) and quantitative (IR spectrometry, liquid chromatography) evaluation of fullerenes in the samples obtained from iron-carbon alloys, and their visual observation using scanning tunneling microscopy, are performed. It is found that the method provides isolation and identification of the fullerenes present in the structure of steels and irons.
18. On the influence of toxicity of O-alkyl serotonin derivatives on the implantation of their protective potency
Vasin, M.V.; Suvorov, N.N.; Abramov, M.M.; Gordeev, E.N.
1987-01-01
In experiments with mongrel mice, a study was made of the pharmacological activity of serotonin and its O-alkyl derivatives. It was estimated by two indices, namely the radioprotective properties and the influence on the local blood channel in the spleen, the modifying effect of the agents' toxicity being estimated as well. As the O-alkyl group of the 5-alkoxytryptamines was elongated from one to three carbon atoms and the toxicity of the substances increased, their radioprotective effect decreased more readily than their effect on the local blood channel. The shortening of the range of the therapeutic effect of the agents under study, with regard to the two pharmacological indices mentioned above, as the alkyl group was lengthened, followed a logarithmic function which was more pronounced for the radioprotective index (cosα₁/cosα₂ = 1.58)
19. An Evaluation of Root Phytochemicals Derived from Althea officinalis (Marshmallow) and Astragalus membranaceus as Potential Natural Components of UV Protecting Dermatological Formulations.
Curnow, Alison; Owen, Sara J
2016-01-01
As lifetime exposure to ultraviolet (UV) radiation has risen, the deleterious effects have also become more apparent. Numerous sunscreen and skincare products have therefore been developed to help reduce the occurrence of sunburn, photoageing, and skin carcinogenesis. This has stimulated research into identifying new natural sources of effective skin protecting compounds. Alkaline single-cell gel electrophoresis (comet assay) was employed to assess aqueous extracts derived from soil or hydroponically glasshouse-grown roots of Althea officinalis (Marshmallow) and Astragalus membranaceus, compared with commercial, field-grown roots. Hydroponically grown root extracts from both plant species were found to significantly reduce UVA-induced DNA damage in cultured human lung and skin fibroblasts, although initial Astragalus experimentation detected some genotoxic effects, indicating that Althea root extracts may be better suited as potential constituents of dermatological formulations. Glasshouse-grown soil and hydroponic Althea root extracts afforded lung fibroblasts with statistically significant protection against UVA irradiation for a greater period of time than the commercial field-grown roots. No significant reduction in DNA damage was observed when total ultraviolet irradiation (including UVB) was employed (data not shown), indicating that the extracted phytochemicals predominantly protected against indirect UVA-induced oxidative stress. Althea phytochemical root extracts may therefore be useful components in dermatological formulations.
1. An endohedral fullerene-based nuclear spin quantum computer
Ju Chenyong; Suter, Dieter; Du Jiangfeng
2011-01-01
We propose a new scalable quantum computer architecture based on endohedral fullerene molecules. Qubits are encoded in the nuclear spins of the endohedral atoms, which possess even longer coherence times than the electron spins which are used as the qubits in previous proposals. To address the individual qubits, we use the hyperfine interaction, which distinguishes two modes (active and passive) of the nuclear spin. Two-qubit quantum gates are effectively implemented by employing the electronic dipolar interaction between adjacent molecules. The electron spins also assist in the qubit initialization and readout. Our architecture should be significantly easier to implement than earlier proposals for spin-based quantum computers, such as the concept of Kane [B.E. Kane, Nature 393 (1998) 133]. - Research highlights: → We propose an endohedral fullerene-based scalable quantum computer architecture. → Qubits are encoded on nuclear spins, while electron spins serve as auxiliaries. → Nuclear spins are individually addressed using the hyperfine interaction. → Two-qubit gates are implemented through the medium of electron spins.
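The hyperfine interaction invoked above for addressing individual nuclear-spin qubits has, in its simplest isotropic form, the textbook expression below (not taken from the paper); the electron-spin state shifts the nuclear transition frequency by of order A, which is plausibly what separates the "active" and "passive" nuclear-spin modes.

```latex
% Isotropic hyperfine coupling between the nuclear spin I of the
% endohedral atom and the electron spin S on the fullerene cage:
H_{\mathrm{hf}} = A \, \mathbf{I} \cdot \mathbf{S}
% The nuclear transition frequency depends on the electron-spin
% projection m_S, so the electron spin conditions nuclear addressing.
```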
2. Prediction of the electron redundant SinNn fullerenes
Yang, Huihui; Song, Yan; Zhang, Yan; Chen, Hongshan
2018-05-01
The stabilities and electronic structures of SimAln-mNn and SinNn (n = 16, 20, m = 12 and n = 24, m = 16) fullerene-like cages have been investigated using the density functional method B3LYP and the second-order perturbation theory MP2. The results show that the SimAln-mNn and SinNn fullerenes are more stable than the AlN counterparts. Compared with the corresponding AlnNn cages, one silicon atom in each Si2N2 square protrudes, and the excess electrons reside as lone pairs at the outside of the protrudent Si atoms. Analyses of the electronic structures suggest that the Si–N bonds are covalent bonds with strong polarity. The ELF (electron localization function) shows large electron-pair probability between Si and N atoms. The orbital interactions between Si and N are stronger than those between Al and N atoms; the overlap integral is 0.40 per Si–N bond in SinNn and 0.34 per Al–N bond in AlnNn. The AIM (atoms in molecule) charges on the Al atoms in AlnNn and SimAln-mNn are 2.37 and 2.40. The charges on the in-plane and protrudent Si atoms are about 2.88 and 1.50, respectively. Considering the large local dipole moments around the protrudent Si atoms, the electrostatic interactions are also favorable to the SiN cages.
3. Negative differential resistance observation in complex convoluted fullerene junctions
Kaur, Milanpreet; Sawhney, Ravinder Singh; Engles, Derick
2018-04-01
In this work, we simulated the smallest fullerene molecule, C20 in a two-probe device model with gold electrodes. The gold electrodes comprised of (011) miller planes were carved to construct the novel geometry based four unique shapes, which were strung to fullerene molecules through mechanically controlled break junction techniques. The organized devices were later scrutinized using non-equilibrium Green's function based on the density functional theory to calculate their molecular orbitals, energy levels, charge transfers, and electrical parameters. After intense scrutiny, we concluded that five-edged and six-edged devices have the lowest and highest current-conductance values, which result from their electrode-dominating and electrode-subsidiary effects, respectively. However, an interesting observation was that the three-edged and four-edged electrodes functioned as semi-metallic in nature, allowing the C20 molecule to demonstrate its performance with the complementary effect of these electrodes in the electron conduction process of a two-probe device.
4. Making and exploiting fullerenes, graphene, and carbon nanotubes
Marcaccio, Massimo; Paolucci, Francesco (eds.) [Bologna Univ. (Italy). Dept. of Chemistry G. Ciamician]
2014-11-01
This volume contains nine chapters which present critical reviews of the present and future trends in modern chemistry research. The chapter ''Solubilization of Fullerenes, Carbon Nanotubes and Graphene'' by Alain Penicaud describes the various ingenious approaches to solve the solubility issue and describes in particular how graphite, and modern nanocarbons, can be made soluble by reductive dissolution. A large part of the present volume concerns the merging of nanocarbons with nanotechnology and their impact on technical development in many areas. Fullerenes, carbon nanotubes, nanodiamond and graphene find, for instance, various applications in the development of solar cells, including dye sensitized solar cells. The chapter ''Incorporation of Balls, Tubes and Bowls in Nanotechnology'' by James Mack describes the recent development of the area of fullerene fragments, and corannulene in particular, and their direct applications to organic light emitting diode (OLED) technology, while, in the chapter ''Exploiting Nanocarbons in Dye-Sensitized Solar Cells'' by Ladislav Kavan, the exploitation of nanocarbons in the development of novel dye sensitized solar cells with improved efficiency, durability and costs is thoroughly reviewed. The functionalization of CNSs has the invaluable advantage of combining their unique properties with those of other classes of materials. Supramolecular chemistry represents an elegant alternative approach for the construction of functional systems by means of noncovalent bonding interactions. In the chapter ''Supramolecular Chemistry of Carbon Nanotubes'' by Gildas Gavrel et al., the incredibly varied world of supramolecular, non-covalent functionalization of carbon nanotubes and their applications is examined and reviewed, and the synthetic strategies devised for fabricating mechanically-linked molecular architectures are described in the chapter ''Fullerene-Stoppered Bistable Rotaxanes'' by Aurelio Mateo-Alonso.
5. Making and exploiting fullerenes, graphene, and carbon nanotubes
Marcaccio, Massimo; Paolucci, Francesco
2014-01-01
This volume contains nine chapters which present critical reviews of the present and future trends in modern chemistry research. The chapter ''Solubilization of Fullerenes, Carbon Nanotubes and Graphene'' by Alain Penicaud describes the various ingenious approaches to solve the solubility issue and describes in particular how graphite, and modern nanocarbons, can be made soluble by reductive dissolution. A large part of the present volume concerns the merging of nanocarbons with nanotechnology and their impact on technical development in many areas. Fullerenes, carbon nanotubes, nanodiamond and graphene find, for instance, various applications in the development of solar cells, including dye sensitized solar cells. The chapter ''Incorporation of Balls, Tubes and Bowls in Nanotechnology'' by James Mack describes the recent development of the area of fullerene fragments, and corannulene in particular, and their direct applications to organic light emitting diode (OLED) technology, while, in the chapter ''Exploiting Nanocarbons in Dye-Sensitized Solar Cells'' by Ladislav Kavan, the exploitation of nanocarbons in the development of novel dye sensitized solar cells with improved efficiency, durability and costs is thoroughly reviewed. The functionalization of CNSs has the invaluable advantage of combining their unique properties with those of other classes of materials. Supramolecular chemistry represents an elegant alternative approach for the construction of functional systems by means of noncovalent bonding interactions. In the chapter ''Supramolecular Chemistry of Carbon Nanotubes'' by Gildas Gavrel et al., the incredibly varied world of supramolecular, non-covalent functionalization of carbon nanotubes and their applications is examined and reviewed, and the synthetic strategies devised for fabricating mechanically-linked molecular architectures are described in the chapter ''Fullerene-Stoppered Bistable Rotaxanes'' by Aurelio Mateo-Alonso.
6. Ashwagandha leaf derived withanone protects normal human cells against the toxicity of methoxyacetic acid, a major industrial metabolite.
Priyandoko, Didik; Ishii, Tetsuro; Kaul, Sunil C; Wadhwa, Renu
2011-05-04
The present-day lifestyle heavily depends on industrial chemicals in the form of agriculture, cosmetics, textiles and medical products. Since the toxicity of industrial chemicals has been a concern to human health, the need for alternative non-toxic natural products or adjuvants that serve as antidotes is in high demand. We have investigated the effects of the Ayurvedic herb Ashwagandha (Withania somnifera) leaf extract on methoxyacetic acid (MAA) induced toxicity. MAA is a major metabolite of ester phthalates that are commonly used in industry as gelling, viscosity and stabilizer reagents. We report that MAA causes premature senescence of normal human cells by mechanisms that involve ROS generation and DNA and mitochondrial damage. Withanone protects cells from MAA-induced toxicity by suppressing ROS levels, DNA and mitochondrial damage, and by inducing cell defense signaling pathways including Nrf2 and proteasomal degradation. These findings warrant further basic and clinical studies that may promote the use of withanone as a health adjuvant in a variety of consumer products where toxicity has been a concern because of the use of ester phthalates.
7. Redox signaling via the molecular chaperone BiP protects cells against endoplasmic reticulum-derived oxidative stress
Wang, Jie; Pareja, Kristeen A; Kaiser, Chris A; Sevier, Carolyn S
2014-01-01
Oxidative protein folding in the endoplasmic reticulum (ER) has emerged as a potentially significant source of cellular reactive oxygen species (ROS). Recent studies suggest that levels of ROS generated as a byproduct of oxidative folding rival those produced by mitochondrial respiration. Mechanisms that protect cells against oxidant accumulation within the ER have begun to be elucidated yet many questions still remain regarding how cells prevent oxidant-induced damage from ER folding events. Here we report a new role for a central well-characterized player in ER homeostasis as a direct sensor of ER redox imbalance. Specifically we show that a conserved cysteine in the lumenal chaperone BiP is susceptible to oxidation by peroxide, and we demonstrate that oxidation of this conserved cysteine disrupts BiP's ATPase cycle. We propose that alteration of BiP activity upon oxidation helps cells cope with disruption to oxidative folding within the ER during oxidative stress. DOI: http://dx.doi.org/10.7554/eLife.03496.001 PMID:25053742
8. Protective mechanisms of melatonin against hydrogen-peroxide-induced toxicity in human bone-marrow-derived mesenchymal stem cells.
Mehrzadi, Saeed; Safa, Majid; Kamrava, Seyed Kamran; Darabi, Radbod; Hayat, Parisa; Motevalian, Manijeh
2017-07-01
Many obstacles compromise the efficacy of bone marrow mesenchymal stem cells (BM-MSCs) by inducing apoptosis in the grafted BM-MSCs. The current study investigates the effect of melatonin on important mediators involved in survival of BM-MSCs in a hydrogen peroxide (H2O2) apoptosis model. In brief, BM-MSCs were isolated, treated with melatonin, and then exposed to H2O2. Their viability was assessed by MTT assay and apoptotic fractions were evaluated through Annexin V, Hoechst staining, and ADP/ATP ratio. Oxidative stress biomarkers including ROS, total antioxidant power (TAP), superoxide dismutase (SOD) and catalase (CAT) activity, glutathione (GSH), thiol molecules, and lipid peroxidation (LPO) levels were determined. Secretion of inflammatory cytokines (TNF-α and IL-6) were measured by ELISA assay. The protein expression of caspase-3, Bax, and Bcl-2, was also evaluated by Western blotting. Melatonin pretreatment significantly increased viability and decreased apoptotic fraction of H2O2-exposed BM-MSCs. Melatonin also decreased ROS generation, as well as increasing the activity of SOD and CAT enzymes and GSH content. Secretion of inflammatory cytokines in H2O2-exposed cells was also reduced by melatonin. Expression of caspase-3 and Bax proteins in H2O2-exposed cells was diminished by melatonin pretreatment. The findings suggest that melatonin may be an effective protective agent against H2O2-induced oxidative stress and apoptosis in MSCs.
9. Ashwagandha leaf derived withanone protects normal human cells against the toxicity of methoxyacetic acid, a major industrial metabolite.
Didik Priyandoko
The present-day lifestyle heavily depends on industrial chemicals in the form of agriculture, cosmetics, textiles and medical products. Since the toxicity of industrial chemicals has been a concern to human health, the need for alternative non-toxic natural products or adjuvants that serve as antidotes is in high demand. We have investigated the effects of the Ayurvedic herb Ashwagandha (Withania somnifera) leaf extract on methoxyacetic acid (MAA) induced toxicity. MAA is a major metabolite of ester phthalates that are commonly used in industry as gelling, viscosity and stabilizer reagents. We report that MAA causes premature senescence of normal human cells by mechanisms that involve ROS generation and DNA and mitochondrial damage. Withanone protects cells from MAA-induced toxicity by suppressing ROS levels, DNA and mitochondrial damage, and by inducing cell defense signaling pathways including Nrf2 and proteasomal degradation. These findings warrant further basic and clinical studies that may promote the use of withanone as a health adjuvant in a variety of consumer products where toxicity has been a concern because of the use of ester phthalates.
10. Improved physicochemical properties and hepatic protection of Maillard reaction products derived from fish protein hydrolysates and ribose.
Yang, Sung-Yong; Lee, Sanghoon; Pyo, Min Cheol; Jeon, Hyeonjin; Kim, Yoonsook; Lee, Kwang-Won
2017-04-15
High amounts of waste products generated from fish-processing need to be disposed of despite their potential nutritional value. A variety of methods, such as enzymatic hydrolysis, have been developed for these byproducts. In the current study, we investigated the physicochemical, biological and antioxidative properties of fish protein hydrolysates (FPH) conjugated with ribose through the Maillard reaction. These glycated conjugates of FPH (GFPH) had more viscous rheological properties than FPH and exhibited higher heat, emulsification and foaming stability. They also protected liver HepG2 cells against t-BHP-induced oxidative stress with enhanced glutathione synthesis in vitro. Furthermore, it was shown that GFPH induced upregulation of phase II enzyme expression, such as that of HO-1 and γ-GCL, via nuclear translocation of Nrf2 and phosphorylation of ERK. Taken together, these results demonstrate the potential of GFPH for use as a functional food ingredient with improved rheological and antioxidative properties. Copyright © 2016 Elsevier Ltd. All rights reserved.
11. Examining the protective role of ErbB2 modulation in human-induced pluripotent stem cell-derived cardiomyocytes.
Eldridge, Sandy; Guo, Liang; Mussio, Jodie; Furniss, Mike; Hamre, John; Davis, Myrtle
2014-10-01
Human-induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) are being used as an in vitro model system in cardiac biology and in drug discovery (e.g., cardiotoxicity testing). Qualification of these cells for use in mechanistic investigations will require detailed evaluations of cardiomyocyte signaling pathways and cellular responses. ErbB signaling and the ligand neuregulin play critical roles in survival and functional integrity of cardiac myocytes. As such, we sought to characterize the expression and activity of the ErbB family of receptors. Antibody microarray analysis performed on cell lysates derived from maturing hiPSC-CMs detected expression of ∼570 signaling proteins. EGFR/ErbB1, HER2/ErbB2, and ErbB4, but not ErbB3 receptors, of the epidermal growth factor receptor family were confirmed by Western blot. Activation of ErbB signaling by neuregulin-1β (NRG, a natural ligand for ErbB4) and its modulation by trastuzumab (a monoclonal anti-ErbB2 antibody) and lapatinib (a small molecule ErbB2 tyrosine kinase inhibitor) were evaluated through assessing phosphorylation of AKT and Erk1/2, two major downstream kinases of ErbB signaling, using nanofluidic proteomic immunoassay. Downregulation of ErbB2 expression by siRNA silencing attenuated NRG-induced AKT and Erk1/2 phosphorylation. Activation of ErbB signaling with NRG, or inhibition with trastuzumab, alleviated or aggravated doxorubicin-induced cardiomyocyte damage, respectively, as assessed by a real-time cellular impedance analysis and ATP measurement. Collectively, these results support the expanded use of hiPSC-CMs to examine mechanisms of cardiotoxicity and support the value of using these cells in early assessments of cardiotoxicity or efficacy. Published by Oxford University Press on behalf of Toxicological Sciences 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
12. Protection of vanillin derivative VND3207 on genome damage and apoptosis of human lymphoblastoid cells induced by γ-ray irradiation
Huang Rui; Huang Bo; He Xingpeng; Xu Qinzhi; Wang Yu; Zhou Pingkun
2009-01-01
To determine the protective effect of the vanillin derivative VND3207 on genome damage and apoptosis of human lymphoblastoid AHH-1 cells induced by γ-ray irradiation, single-cell gel electrophoresis, the micronucleus test, Annexin V-FACS assay, and double-fluorescein staining with fluorescence microscopy were used. Neutral single-cell gel electrophoresis showed that the initial DNA double-strand breaks caused by 2 Gy 60Co γ-rays were significantly decreased by VND3207 in the range of 5-40 μmol/L: the comet tail moment was significantly shortened and the DNA content in the comet tail was reduced when the cells were protected with VND3207, and the radio-protective effect increased with increasing drug concentration. Similarly, the yield of micronuclei was reduced by 5-40 μmol/L of VND3207 in a concentration-dependent manner in AHH-1 cells irradiated with 0.5 Gy, 1.0 Gy and 2.0 Gy 60Co γ-rays; 40 μmol/L VND3207 resulted in a 40% reduction in the yield of micronuclei induced by 2.0 Gy. The occurrence of apoptosis increased with time from 8 h to 48 h after 4 Gy irradiation, and 40 μmol/L of VND3207 significantly decreased the induction of apoptosis. This work further demonstrates good protection by VND3207 against γ-ray-induced genome damage and apoptosis. (authors)
13. A novel vitamin E derivative (TMG) protects against gastric mucosal damage induced by ischemia and reperfusion in rats.
Ichikawa, Hiroshi; Yoshida, Norimasa; Takano, Hiroshisa; Ishikawa, Takeshi; Handa, Osamu; Takagi, Tomohisa; Naito, Yuji; Murase, Hironobu; Yoshikawa, Toshikazu
2003-01-01
The aim of the present study was to investigate the antioxidative effects of water-soluble vitamin E derivative, 2-(alpha-D-glucopyranosyl)methyl-2,5,7,8-tetramethylchroman-6-ol (TMG), on ischemia-reperfusion (I/R) -induced gastric mucosal injury in rats. Gastric ischemia was induced by applying a small clamp to the celiac artery and reoxygenation was produced by removal of the clamp. The area of gastric mucosal erosion, the concentration of thiobarbituric acid-reactive substances, and the myeloperoxidase activity in gastric mucosa significantly increased in I/R groups compared with those of sham-operated groups. These increases were significantly inhibited by pretreatment with TMG. The contents of both mucosal TNF-alpha and CINC-2beta in I/R groups were also increased compared with the levels of those in sham-operated groups. These increases of the inflammatory cytokines were significantly inhibited by the treatment with TMG. It is concluded that TMG inhibited lipid peroxidation and reduced development of the gastric mucosal inflammation induced by I/R in rats.
14. Derivation of soil screening thresholds to protect chisel-toothed kangaroo rat from uranium mine waste in northern Arizona
Hinck, Jo E.; Linder, Greg L.; Otton, James K.; Finger, Susan E.; Little, Edward E.; Tillitt, Donald E.
2013-01-01
Chemical data from soil and weathered waste material samples collected from five uranium mines north of the Grand Canyon (three reclaimed, one mined but not reclaimed, and one never mined) were used in a screening-level risk analysis for the Arizona chisel-toothed kangaroo rat (Dipodomys microps leucotis); risks from radiation exposure were not evaluated. Dietary toxicity reference values were used to estimate soil-screening thresholds presenting risk to kangaroo rats. Sensitivity analyses indicated that body weight critically affected outcomes of exposed-dose calculations; juvenile kangaroo rats were more sensitive to the inorganic constituent toxicities than adult kangaroo rats. Species-specific soil-screening thresholds were derived for arsenic (137 mg/kg), cadmium (16 mg/kg), copper (1,461 mg/kg), lead (1,143 mg/kg), nickel (771 mg/kg), thallium (1.3 mg/kg), uranium (1,513 mg/kg), and zinc (731 mg/kg) using toxicity reference values that incorporate expected chronic field exposures. Inorganic contaminants in soils within and near the mine areas generally posed minimal risk to kangaroo rats. Most exceedances of soil thresholds were for arsenic and thallium and were associated with weathered mine wastes.
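Screening thresholds like those above are conventionally derived from a hazard-quotient calculation, in which the estimated daily dose scales inversely with body weight; that inverse scaling is why the sensitivity analysis flags juveniles as the more sensitive receptor. A minimal sketch follows, using hypothetical intake and TRV parameters (not the study's actual values; only the 137 mg/kg arsenic threshold is taken from the abstract):

```python
def daily_dose(soil_conc_mg_kg, food_intake_kg_d, soil_fraction, body_wt_kg):
    """Estimated daily dose (mg/kg bw/day) from incidental soil ingestion."""
    return soil_conc_mg_kg * food_intake_kg_d * soil_fraction / body_wt_kg

def hazard_quotient(dose, trv):
    """HQ >= 1 flags potential risk against a toxicity reference value."""
    return dose / trv

# Hypothetical illustrative numbers (not from the study)
conc = 137.0       # arsenic soil threshold, mg/kg (from the abstract)
intake = 0.005     # kg dry food ingested per day
soil_frac = 0.02   # fraction of diet that is soil
trv = 0.00274      # mg/kg bw/day, hypothetical chronic TRV

adult = hazard_quotient(daily_dose(conc, intake, soil_frac, 0.05), trv)
juv = hazard_quotient(daily_dose(conc, intake, soil_frac, 0.025), trv)
# Halving body weight doubles the dose, hence the higher juvenile sensitivity
print(adult, juv)
```

Back-calculating the soil concentration at which HQ = 1 for the most sensitive receptor yields a screening threshold of the kind tabulated in the abstract.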
15. Fatigue is a brain-derived emotion that regulates the exercise behavior to ensure the protection of whole body homeostasis
Timothy David Noakes
2012-04-01
An influential book written by A. Mosso in the late 19th century proposed that fatigue, which at first sight might appear an imperfection of our body, is on the contrary one of its most marvellous perfections: fatigue increasing more rapidly than the amount of work done saves us from the injury which lesser sensibility would involve for the organism, so that muscular fatigue is at bottom an exhaustion of the nervous system. It has taken more than a century to confirm Mosso's idea that both the brain and the muscles alter their function during exercise and that fatigue is predominantly an emotion, part of a complex regulation, the goal of which is to protect the body from harm. Mosso's ideas were supplanted in the English literature by those of A.V. Hill, who believed that fatigue was the result of biochemical changes in the exercising limb muscles - peripheral fatigue - to which the central nervous system makes no contribution. The past decade has witnessed the growing realization that this brainless model cannot explain exercise performance. This article traces the evolution of our modern understanding of how the CNS regulates exercise specifically to ensure that each exercise bout terminates whilst homeostasis is retained in all bodily systems. The brain uses the symptoms of fatigue as key regulators to ensure that the exercise is completed before harm develops. These sensations of fatigue are unique to each individual and are illusory, since their generation is largely independent of the real biological state of the athlete at the time they develop. The model predicts that attempts to understand fatigue and to explain superior human athletic performance purely on the basis of the body's known physiological and metabolic responses to exercise must fail, since subconscious and conscious mental decisions made by winners and losers, in both training and competition, are the ultimate determinants of both fatigue and athletic performance.
16. Xyloketal-derived small molecules show protective effect by decreasing mutant Huntingtin protein aggregates in Caenorhabditis elegans model of Huntington’s disease
Zeng YX
2016-04-01
Yixuan Zeng,1,2,* Wenyuan Guo,1,* Guangqing Xu,3 Qinmei Wang,4 Luyang Feng,1,2 Simei Long,1 Fengyin Liang,1 Yi Huang,1 Xilin Lu,1 Shichang Li,5 Jiebin Zhou,5 Jean-Marc Burgunder,6 Jiyan Pang,5 Zhong Pei1,2 1Department of Neurology, National Key Clinical Department and Key Discipline of Neurology, Guangdong Key Laboratory for Diagnosis and Treatment of Major Neurological Disease, The First Affiliated Hospital, Sun Yat-sen University, 2Guangzhou Center, Chinese Huntington’s Disease Network, 3Department of Rehabilitation, The First Affiliated Hospital, 4Key laboratory on Assisted Circulation, Ministry of Health, Department of Cardiovascular Medicine of the First Affiliated Hospital, 5School of Chemistry and Chemical Engineering, Sun Yat-sen University, Guangzhou, Guangdong, People’s Republic of China; 6Swiss Huntington’s Disease Center, Department of Neurology, University of Bern, Bern, Switzerland *These authors contributed equally to this work Abstract: Huntington’s disease is an autosomal-dominant neurodegenerative disorder, with chorea as the most prominent manifestation. The disease is caused by abnormal expansion of CAG codon repeats in the IT15 gene, which leads to the expression of a glutamine-rich protein named mutant Huntingtin (Htt). Because of its devastating disease burden and lack of valid treatment, development of more effective therapeutics for Huntington’s disease is urgently required. Xyloketal B, a natural product from mangrove fungus, has shown protective effects against toxicity in other neurodegenerative disease models such as Parkinson’s and Alzheimer’s diseases. To identify potential neuroprotective molecules for Huntington’s disease, six derivatives of xyloketal B were screened in a Caenorhabditis elegans Huntington’s disease model; all six compounds showed a protective effect. Molecular docking studies indicated that compound 1 could bind to residues GLN369 and GLN393 of the mutant Htt protein, forming a
17. Host-derived, pore-forming toxin-like protein and trefoil factor complex protects the host against microbial infection.
Xiang, Yang; Yan, Chao; Guo, Xiaolong; Zhou, Kaifeng; Li, Sheng'an; Gao, Qian; Wang, Xuan; Zhao, Feng; Liu, Jie; Lee, Wen-Hui; Zhang, Yun
2014-05-06
Aerolysins are virulence factors belonging to the bacterial β-pore-forming toxin superfamily. Surprisingly, numerous aerolysin-like proteins exist in vertebrates, but their biological functions are unknown. βγ-CAT, a complex of an aerolysin-like protein subunit (two βγ-crystallin domains followed by an aerolysin pore-forming domain) and two trefoil factor subunits, has been identified in frog (Bombina maxima) skin secretions. Here, we report the rich expression of this protein in frog blood and immune-related tissues, and the induction of its presence in peritoneal lavage by bacterial challenge. This phenomenon raises the possibility of its involvement in defense against microbial infection. When βγ-CAT was administered in a peritoneal infection model, it greatly accelerated bacterial clearance and increased the survival rate of both frogs and mice. Meanwhile, accelerated interleukin-1β release and enhanced local leukocyte recruitment were determined, which may partially explain the robust and effective antimicrobial responses observed. The release of interleukin-1β was potently triggered by βγ-CAT from frog peritoneal cells and murine macrophages in vitro. βγ-CAT was rapidly endocytosed and translocated to lysosomes, where it formed high-molecular-mass SDS-stable oligomers (>170 kDa). Lysosomal destabilization and cathepsin B release were detected, which may explain the activation of the caspase-1 inflammasome and subsequent interleukin-1β maturation and release. To our knowledge, these results provide the first functional evidence of the ability of a host-derived aerolysin-like protein to counter microbial infection by eliciting rapid and effective host innate immune responses. The findings will also largely help to elucidate the possible involvement and action mechanisms of the aerolysin-like proteins and/or trefoil factors widely existing in vertebrates in host defense against pathogens.
18. The C60-Fullerene Porphyrin Adducts for Prevention of the Doxorubicin-Induced Acute Cardiotoxicity in Rat Myocardial Cells
Seyed Vahid Shetab Boushehri
2010-10-01
This is a fullerene-based, low-toxicity nanocationite designed for targeted delivery of the paramagnetic stable isotope of magnesium to doxorubicin (DXR)-induced damaged heart muscle, providing a prominent effect close to about 80% recovery of the tissue hypoxia symptoms in less than 24 h after a single injection (0.03-0.1 LD50). The magnesium magnetic isotope effect selectively stimulates ATP formation in oxygen-depleted cells due to a creatine kinase (CK) and mitochondrial respiratory chain-focused "attack" of 25Mg2+ released by the nanoparticles. These "smart nanoparticles" with membranotropic properties release the overactivating cations only in response to intracellular acidosis. The resulting positive changes in the energy metabolism of heart cells may help to prevent local myocardial hypoxic (ischemic) disorders and, hence, to protect the heart muscle from serious damage in a vast variety of hypoxia-induced clinical situations, including DXR side effects.
19. Conjugation-promoted reaction of open-cage fullerene: A density functional theory study
Guo, Yong; Yan, Jingjing; Khashab, Niveen M.
2012-01-01
Density functional theory calculations are performed to study the addition mechanism of electron-rich moieties such as triethyl phosphite to a carbonyl group on the rim of a fullerene orifice. Three possible reaction channels have been investigated.
20. Current Analysis and Modeling of Fullerene Single-Electron Transistor at Room Temperature
2017-07-01
Single-electron transistors (SETs) are interesting electronic devices that have become key elements in modern nanoelectronic systems. SETs operate quickly because they use individual electrons, with the number transferred playing a key role in their switching behavior. However, rapid transmission of electrons can cause their accumulation at the island, affecting the I-V characteristic. Selection of fullerene as a nanoscale zero-dimensional material with high stability and controllable size in the fabrication process can overcome this charge accumulation issue and improve the reliability of SETs. Herein, the current in a fullerene SET is modeled and compared with experimental data for a silicon SET. Furthermore, a weaker Coulomb staircase and improved reliability are reported. Moreover, the applied gate voltage and fullerene diameter are found to be directly associated with the I-V curve, enabling the desired current to be achieved by controlling the fullerene diameter.
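The diameter dependence of the I-V curve noted above can be rationalized through the island's charging energy. The back-of-envelope sketch below assumes the self-capacitance of an isolated conducting sphere, C = 4πε₀r (a deliberate simplification of the real junction capacitances), to show why a fullerene-sized island keeps single-electron effects visible at room temperature:

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C
KB = 1.381e-23        # Boltzmann constant, J/K

def charging_energy_eV(diameter_nm):
    """Charging energy e^2/(2C) of an isolated sphere, in eV."""
    c = 4 * math.pi * EPS0 * (diameter_nm * 1e-9 / 2)  # self-capacitance
    return E_CHARGE**2 / (2 * c) / E_CHARGE            # convert J -> eV

ec = charging_energy_eV(0.7)       # C60-like diameter, ~0.7 nm
kT_room = KB * 300 / E_CHARGE      # thermal energy at 300 K, ~0.026 eV
# Charging energy (order of eV) far exceeds kT, so Coulomb blockade survives
print(ec, kT_room, ec / kT_room)
```

Because the charging energy scales as 1/diameter, shrinking or growing the fullerene island directly shifts the blockade window, consistent with the abstract's diameter-current association.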
1. Understanding triplet formation pathways in bulk heterojunction polymer : fullerene photovoltaic devices
Tedla, B.; Zhu, F.; Cox, M.; Drijkoningen, J.; Manca, J.V.; Koopmans, B.; Goovaerts, E.
2015-01-01
Triplet exciton (TE) formation pathways are systematically investigated in prototype bulk heterojunction (BHJ) "super yellow" poly(p-phenylene vinylene) (SY-PPV) solar cell devices with varying fullerene compositions using complementary optoelectrical and electrically detected magnetic resonance
2. Effect of Peierls transition in armchair carbon nanotube on dynamical behaviour of encapsulated fullerene
Hieu Nguyen
2011-01-01
The changes of the dynamical behaviour of a single fullerene molecule inside an armchair carbon nanotube caused by the structural Peierls transition in the nanotube are considered. The structures of the smallest C20 and Fe@C20 fullerenes are computed using spin-polarized density functional theory. Significant changes of the barriers for motion along the nanotube axis and rotation of these fullerenes inside the (8,8) nanotube are found at the Peierls transition. It is shown that the coefficients of translational and rotational diffusion of these fullerenes inside the nanotube change by several orders of magnitude. The possibility of inverse orientational melting, i.e. with a decrease of temperature, for the systems under consideration is predicted.
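The reported orders-of-magnitude change in the diffusion coefficients follows naturally from Arrhenius-type activated hopping, D ∝ exp(-E_b/kT): a modest shift in the barrier rescales D exponentially. The barrier values and temperature in the sketch below are chosen purely for illustration and are not taken from the paper:

```python
import math

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_ratio(barrier_before_eV, barrier_after_eV, temp_K):
    """Ratio D_after/D_before for activated diffusion with equal prefactors."""
    return math.exp(-(barrier_after_eV - barrier_before_eV) / (KB_EV * temp_K))

# Hypothetical barriers: 0.05 eV before the Peierls transition, 0.25 eV after
ratio = diffusion_ratio(0.05, 0.25, 100.0)
# A 0.2 eV barrier increase at 100 K suppresses D by roughly ten orders of magnitude
print(ratio)
```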
3. Synthesis of Polythiophene–Fullerene Hybrid Additives as Potential Compatibilizers of BHJ Active Layers
Sofia Kakogianni
2016-12-01
Perfluorophenyl functionalities have been introduced as side-chain substituents onto regioregular poly(3-hexylthiophene) (rr-P3HT) at various percentages. These functional groups were then converted to azides, which were used to create polymeric hybrid materials with fullerene species, either C60 or C70. The P3HT-fullerene hybrids thus formed were thereafter evaluated as potential compatibilizers of BHJ active layers comprising P3HT and fullerene-based acceptors. Therefore, a systematic investigation of the optical and morphological properties of the purified polymer-fullerene hybrid materials was performed via different complementary techniques. Additionally, P3HT:PC70BM blends containing various percentages of the herein-synthesized hybrid material comprising rr-P3HT and C70 were investigated via transmission electron microscopy (TEM) in an effort to understand the effect of the hybrids as additives on the morphology and nanophase separation of this typically used active-layer blend for OPVs.
4. A bench arc-furnace facility for fullerene and single-wall nanotubes synthesis
Huber John G
2001-01-01
A metallic-sample arc furnace was modified to synthesize fullerenes and nanotubes. The (reversible) changes and the process for producing single-wall nanotubes (SWNTs) are described.
5. Realization of large area flexible fullerene - conjugated polymer photocells: a route to plastic solar cells
Brabec, C.J.; Padinger, F.; Hummelen, J.C.; Janssen, R.A.J.; Sariciftci, N.S.
1999-01-01
Bulk donor-acceptor heterojunctions between conjugated polymers and fullerenes have been utilized for photovoltaic devices with quantum efficiencies of around 1%. These devices are based on the photoinduced, ultrafast electron transfer between non-degenerate ground-state conjugated polymers and
6. Non-Fullerene Electron Acceptors for Use in Organic Solar Cells
Nielsen, Christian B.; Holliday, Sarah; Chen, Hung-Yang; Cryer, Samuel J.; McCulloch, Iain
2015-01-01
The active layer in a solution processed organic photovoltaic device comprises a light absorbing electron donor semiconductor, typically a polymer, and an electron accepting fullerene acceptor. Although there has been huge effort targeted
7. The protective effect of autophagy on mouse spermatocyte derived cells exposure to 1800MHz radiofrequency electromagnetic radiation.
Liu, Kaijun; Zhang, Guowei; Wang, Zhi; Liu, Yong; Dong, Jianyun; Dong, Xiaomei; Liu, Jinyi; Cao, Jia; Ao, Lin; Zhang, Shaoxiang
2014-08-04
The increasing exposure to radiofrequency (RF) radiation emitted from mobile phone use has raised public concern regarding the biological effects of RF exposure on the male reproductive system. Autophagy contributes to maintaining intracellular homeostasis under environmental stress. To clarify whether RF exposure could induce autophagy in the spermatocyte, mouse spermatocyte-derived cells (GC-2) were exposed to 1800 MHz Global System for Mobile Communication (GSM) signals in GSM-Talk mode at specific absorption rate (SAR) values of 1 W/kg, 2 W/kg or 4 W/kg for 24 h, respectively. The results indicated that the expression of LC3-II increased in a dose- and time-dependent manner with RF exposure, and showed a significant change at the SAR value of 4 W/kg. The autophagosome formation and the occurrence of autophagy were further confirmed by GFP-LC3 transient transfection assay and transmission electron microscopy (TEM) analysis. Furthermore, the conversion of LC3-I to LC3-II was enhanced by co-treatment with chloroquine (CQ), indicating that autophagic flux could be enhanced by RF exposure. Intracellular ROS levels significantly increased in a dose- and time-dependent manner after cells were exposed to RF. Pretreatment with the antioxidant NAC markedly decreased the conversion of LC3-I to LC3-II and attenuated the degradation of p62 induced by RF exposure. Meanwhile, phosphorylated extracellular signal-regulated kinase (ERK) significantly increased after RF exposure at SAR values of 2 W/kg and 4 W/kg. Moreover, we observed that RF exposure did not increase the percentage of apoptotic cells, but inhibition of autophagy could increase the percentage of apoptotic cells. These findings suggested that autophagic flux can be enhanced by 1800 MHz GSM exposure (4 W/kg), mediated by ROS generation. Autophagy may play an important role in preventing cells from apoptotic cell death under RF exposure stress. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
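The exposure levels in this abstract are specific absorption rates (SAR). As a minimal illustrative sketch (the tissue parameters below are assumed, order-of-magnitude values, not the study's dosimetry), the local SAR follows from the standard definition SAR = σ|E|²/ρ:

```python
# Sketch: local SAR = sigma * |E|^2 / rho, with tissue conductivity sigma
# (S/m), RMS electric field E (V/m) and tissue mass density rho (kg/m^3).
# All numeric values below are assumptions for illustration only.

def sar(sigma_s_per_m, e_field_v_per_m, rho_kg_per_m3):
    """Local specific absorption rate in W/kg."""
    return sigma_s_per_m * e_field_v_per_m**2 / rho_kg_per_m3

# Assumed tissue-like parameters at 1800 MHz:
print(sar(1.2, 50.0, 1000.0))  # 3.0 W/kg, between the 2 and 4 W/kg doses
```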
8. Preparation of fluorescent mesoporous hollow silica-fullerene nanoparticles via selective etching for combined chemotherapy and photodynamic therapy
Yang, Yannan; Yu, Meihua; Song, Hao; Wang, Yue; Yu, Chengzhong
2015-07-01
Well-dispersed mesoporous hollow silica-fullerene nanoparticles with particle sizes of ~50 nm have been successfully prepared by incorporating fullerene molecules into the silica framework followed by a selective etching method. The fabricated fluorescent silica-fullerene composite with high porosity demonstrates excellent performance in combined chemo/photodynamic therapy. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr02769a
9. Static and Dynamic Energetic Disorders in the C60, PC61BM, C70, and PC71BM Fullerenes
Tummala, Naga Rajesh
2015-09-17
We use a combination of molecular dynamics simulations and density functional theory calculations to investigate the energetic disorder in fullerene systems. We show that the energetic disorder evaluated from an ensemble average contains contributions of both static origin (time-independent, due to loose packing) and dynamic origin (time-dependent, due to electron-vibration interactions). In order to differentiate between these two contributions, we compare the results obtained from an ensemble average approach with those derived from a time average approach. It is found that in both amorphous C60 and C70 bulk systems, the degrees of static and dynamic disorder are comparable, while in the amorphous PC61BM and PC71BM systems, static disorder is about twice as large as dynamic disorder. © 2015 American Chemical Society.
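The ensemble-vs-time-average comparison described above has a simple variance-decomposition reading. A minimal sketch with synthetic site energies (assumed Gaussian data, not the paper's simulation output) of how static and dynamic contributions can be separated:

```python
import numpy as np

# Sketch (synthetic data, NOT the paper's MD/DFT results): split the spread
# of site energies E[site, frame] into a static part (spread of per-site time
# averages, i.e. frozen packing disorder) and a dynamic part (average temporal
# fluctuation of each site, i.e. electron-vibration disorder).

rng = np.random.default_rng(0)
n_sites, n_frames = 200, 500
frozen_offsets = rng.normal(0.0, 0.10, size=(n_sites, 1))       # eV, "packing"
fluctuations = rng.normal(0.0, 0.05, size=(n_sites, n_frames))  # eV, "phonons"
energies = frozen_offsets + fluctuations

sigma_total = energies.std()                 # disorder from the ensemble average
sigma_static = energies.mean(axis=1).std()   # spread of per-site time averages
sigma_dynamic = energies.std(axis=1).mean()  # mean per-site fluctuation

# The two contributions add in quadrature: total^2 ≈ static^2 + dynamic^2
print(sigma_total, sigma_static, sigma_dynamic)
```

With these assumed widths the static part dominates, mimicking the PC61BM/PC71BM case where static disorder is about twice the dynamic one.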
10. Direct electrochemistry of glucose oxidase and glucose biosensing on a hydroxyl fullerenes modified glassy carbon electrode.
Gao, Yun-Fei; Yang, Tian; Yang, Xiao-Lu; Zhang, Yu-Shuai; Xiao, Bao-Lin; Hong, Jun; Sheibani, Nader; Ghourchian, Hedayatollah; Hong, Tao; Moosavi-Movahedi, Ali Akbar
2014-10-15
Direct electrochemistry of glucose oxidase (GOD) was achieved when a GOD-hydroxyl fullerenes (HFs) nano-complex was immobilized on a glassy carbon (GC) electrode and protected with a chitosan (Chit) membrane. Ultraviolet-visible absorption spectrometry (UV-vis), transmission electron microscopy (TEM), and circular dichroism (CD) spectropolarimetry were utilized for additional characterization of the GOD, GOD-HFs and Chit/GOD-HFs. Chit/HFs may preserve the secondary structure and catalytic properties of GOD. The cyclic voltammograms (CVs) of the modified GC electrode showed a pair of well-defined quasi-reversible redox peaks with a formal potential (E°') of 353 ± 2 mV versus Ag/AgCl at a scan rate of 0.05 V/s. The heterogeneous electron transfer constant (ks) was calculated to be 2.7 ± 0.2 s^-1. The modified electrode response to glucose was linear in the concentration range from 0.05 to 1.0 mM, with a detection limit of 5 ± 1 μM. The apparent Michaelis-Menten constant (Km(app)) was 694 ± 8 μM. Thus, the modified electrode could be applied as a third-generation biosensor for glucose with high sensitivity, selectivity and a low detection limit. Copyright © 2014 Elsevier B.V. All rights reserved.
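The Km(app) quoted above characterises a Michaelis-Menten-type electrocatalytic response, i = i_max·C/(Km + C). A short sketch with synthetic calibration data (assumed values, not the paper's measurements) of how Km is classically extracted via the Lineweaver-Burk double-reciprocal plot:

```python
import numpy as np

# Sketch (synthetic data, NOT the paper's calibration): Lineweaver-Burk
# linearisation of i = i_max * C / (Km + C):
#     1/i = (Km / i_max) * (1/C) + 1/i_max,
# so Km = slope / intercept of the double-reciprocal plot.

km_true, i_max = 694.0, 10.0        # assumed: Km in uM, i_max in uA
conc = np.array([50.0, 100.0, 200.0, 400.0, 700.0, 1000.0])  # uM (0.05-1 mM)
current = i_max * conc / (km_true + conc)

slope, intercept = np.polyfit(1.0 / conc, 1.0 / current, 1)
km_fit = slope / intercept
print(round(km_fit, 1))             # recovers ~694 uM on noise-free data
```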
11. Inflammogenic effect of well-characterized fullerenes in inhalation and intratracheal instillation studies
Yamamoto Kazuhiro
2010-03-01
Full Text Available Abstract Background We used fullerenes, whose dispersion at the nano-level was stabilized by grinding in nitrogen gas in an agitation mill, to conduct an intratracheal instillation study and an inhalation exposure study. Fullerenes were individually dispersed in distilled water including 0.1% Tween 80, and the diameter of the fullerenes was 33 nm. These suspensions were directly injected as a solution in the intratracheal instillation study. The reference material was nickel oxide in distilled water. Wistar male rats intratracheally received a dose of 0.1 mg, 0.2 mg, or 1 mg of fullerenes and were sacrificed after 3 days, 1 week, 1 month, 3 months, and 6 months. In the inhalation study, Wistar rats were exposed to fullerene agglomerates (diameter: 96 ± 5 nm; 0.12 ± 0.03 mg/m3; 6 hours/day for 5 days/week for 4 weeks) and were sacrificed at 3 days, 1 month, and 3 months after the end of exposure. The inflammatory responses and gene expression of cytokine-induced neutrophil chemoattractants (CINCs) were examined in rat lungs in both studies. Results In the intratracheal instillation study, neither the 0.1 mg nor the 0.2 mg fullerene group showed a significant increase in the total cell and neutrophil counts in BALF or in the expression of CINC-1, -2αβ and -3 in the lung, while the high-dose 1 mg group showed only a transient significant increase of neutrophils and expression of CINC-1, -2αβ and -3. In the inhalation study, there were no increases in total cell and neutrophil counts in BALF, or in CINC-1, -2αβ and -3, in the fullerene group. Conclusion These data from the intratracheal instillation and inhalation studies suggest that well-dispersed fullerenes do not have a strong potential to induce neutrophilic inflammation.
12. Effect of C60 fullerene on metabolic and proliferative activity of PKE cell line
I. V. Belochkina
2014-04-01
Full Text Available The effect of C60 fullerene aqueous colloid solution (C60FAS) on the activity of redox and proliferative processes in PKE cells (a transplantable cell line of pig kidney embryo) has been studied. In particular, it was established that the presence of C60 fullerene (127 μM) in the culturing medium of PKE cells during 48 h did not change their ability to reduce the non-toxic AlamarBlue redox indicator or their proliferative activity.
13. On the continuous spectrum electromagnetic radiation in electron-fullerene collision
Amusia, M.Y.
1995-01-01
It is demonstrated that the electromagnetic radiation spectrum in electron-fullerene collisions is dominated by a huge maximum of multielectron nature, similar to that already predicted and observed in photoabsorption. Due to coherence, the intensity of this radiation is much stronger than the sum of the intensities of isolated atoms. Experimental detection of such radiation would be of great importance for understanding the mechanism of its formation and for investigating fullerene structures. A paper describing these results was published
14. Fullerene Soot in Eastern China Air: Results from Soot Particle-Aerosol Mass Spectrometer
Wang, J.; Ge, X.; Chen, M.; Zhang, Q.; Yu, H.; Sun, Y.; Worsnop, D. R.; Collier, S.
2015-12-01
In this work, we present for the first time the observation and quantification of fullerenes in ambient airborne particulate matter using an Aerodyne Soot Particle-Aerosol Mass Spectrometer (SP-AMS) deployed during winter 2015 in suburban Nanjing, a megacity in eastern China. The laser desorption and electron impact ionization techniques employed by the SP-AMS allow us to differentiate various fullerenes from other aerosol components. The mass spectrum of the identified fullerene soot consists of a series of high molecular weight carbon clusters (up to m/z 2000 in this study), almost identical to the spectral features of commercially available fullerene soot, both with C70 and C60 clusters as the first and second most abundant species. This type of soot was observed throughout the entire study period, with an average mass loading of 0.18 μg/m3, accounting for 6.4% of the black carbon mass and 1.2% of the total organic mass. The temporal variation and diurnal pattern of fullerene soot are overall similar to those of black carbon, but clearly differ in some periods. Combining positive matrix factorization, back-trajectory analysis and the meteorological parameters, we identified the petrochemical industrial plants situated upwind of the sampling site as the major source of fullerene soot. In this regard, our findings imply the ubiquitous presence of fullerene soot in the ambient air of industry-influenced areas, especially oil and gas production regions. This study also offers new insights into the characterization of fullerenes from other environmental samples via the advanced SP-AMS technique.
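The quoted fractions imply the co-measured loadings. A back-of-envelope check (the 0.18 μg/m³, 6.4% and 1.2% figures are from the abstract; the derived loadings are implied, not independently reported):

```python
# Sanity-check arithmetic on the abstract's numbers.
fullerene_soot = 0.18               # ug/m^3, average fullerene-soot loading
bc_fraction, org_fraction = 0.064, 0.012

black_carbon = fullerene_soot / bc_fraction   # implied black-carbon loading
organics = fullerene_soot / org_fraction      # implied total organic loading
print(black_carbon, organics)                 # ≈ 2.8 and 15 ug/m^3
```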
15. Evaluation of the fullerene compound DF-1 as a radiation protector
Shankavaram Uma T
2010-05-01
16. Self-organization processes in polysiloxane block copolymers, initiated by modifying fullerene additives
Voznyakovskii, A. P.; Kudoyarova, V. Kh.; Kudoyarov, M. F.; Patrova, M. Ya.
2017-08-01
Thin films of a polyblock polysiloxane copolymer and its composites with a modifying fullerene C60 additive are studied by atomic force microscopy, Rutherford backscattering, and neutron scattering. The atomic force microscopy data show that, with the addition of fullerene to the bulk of the polymer matrix, the initial relief of the film surface becomes smoother as the amount of additive increases. This trend is associated with the self-organization of rigid block sequences, which is initiated by the field effect of the surface of fullerene aggregates and leads to an increase in the number of their domains in the bulk of the polymer matrix. The Rutherford backscattering and neutron scattering data indicate the formation of additional structures with a radius of 60 nm only in films containing fullerene, and their fraction increases with increasing fullerene concentration. A comparative analysis of the data from these methods has shown that such structures are domains of the rigid block rather than individual fullerene aggregates. The interrelation between the structure and the mechanical properties of the polymer films is considered.
17. Automatic production of fullerenes by a JxB arc jet discharge
Mieno, Tetsu
1995-01-01
Effective production of many kinds of fullerenes, including higher fullerenes and endohedral metallofullerenes, is necessary to advance fullerene science and technology. Currently, the DC arc discharge method is the most effective method to produce fullerenes. However, carbon atoms evaporated from the anode tend to deposit on the cathode, where they grow towards the anode and obstruct the control of the arc discharge. Furthermore, the deposited carbon must be removed to maintain continuous fullerene production. Here, to reduce the deposition of carbon on the cathode, a new discharge method is introduced and an experiment performed. When a steady magnetic field is applied perpendicular to the DC current of the arc, ions and electrons are accelerated by the JxB force as a plasma jet in the vertical direction. This plasma flow also accelerates helium convection due to the viscosity effect. Therefore, the carbon ions and neutrals are both blown up by the arc jet before arriving at the cathode. The arc flame in the experiment is actually observed to extend upwards, which clearly indicates the effect of the JxB force.
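The JxB mechanism described above can be put in numbers. A minimal sketch of the magnitude of the body force on the arc column (the current, radius and field values below are assumptions for illustration, not the experiment's operating parameters):

```python
import math

# Sketch of the J x B body force driving the arc jet. For a current density
# J perpendicular to a steady field B, the force density magnitude is
# f = J * B (N/m^3). All numeric values are illustrative assumptions.

def jxb_force_density(current_a, column_radius_m, b_tesla):
    """Force density on a uniform current column in a transverse field."""
    area = math.pi * column_radius_m ** 2
    current_density = current_a / area        # A/m^2
    return current_density * b_tesla          # N/m^3, directed along J x B

# Assumed: a 100 A arc column of 5 mm radius in a 0.1 T transverse field
print(f"{jxb_force_density(100.0, 5e-3, 0.1):.3g} N/m^3")
```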
18. TEMPO functionalized C60 fullerene deposited on gold surface for catalytic oxidation of selected alcohols
Piotrowski, Piotr; Pawłowska, Joanna; Sadło, Jarosław Grzegorz; Bilewicz, Renata; Kaim, Andrzej
2017-01-01
A C60TEMPO10 catalytic system linked to a microspherical gold support through a covalent S-Au bond was developed. The C60TEMPO10@Au composite catalyst had a particle size of 0.5–0.8 μm and was covered with the fullerene derivative of 2.3 nm diameter bearing ten nitroxyl groups; the organic film showed up to 50 nm thickness. The catalytic composite allowed the oxidation under mild conditions of various primary and secondary alcohols to the corresponding aldehyde and ketone analogues with efficiencies as high as 79–98%, thus giving values typical for homogeneous catalysis, while retaining at the same time all the advantages of heterogeneous catalysis, e.g., easy separation from the reaction mixture by filtration. The catalytic activity of the resulting system was studied by means of high pressure liquid chromatography. A redox mechanism was proposed for the process. In the catalytic cycle of the oxidation process, the TEMPO moiety was continuously regenerated in situ with an applied primary oxidant, for example, the O2/Fe3+ system. The new intermediate composite components and the final catalyst were characterized by various spectroscopic methods and thermogravimetry.
19. Theory of interfacial charge-transfer complex photophysics in π-conjugated polymer-fullerene blends
Aryanpour, K.; Psiachos, D.; Mazumdar, S.
2010-03-01
We present a theory of the electronic structure and photophysics of 1:1 blends of derivatives of polyparaphenylenevinylene and fullerenes [1]. Within the same Coulomb-correlated Hamiltonian applied previously to interacting chains of single-component π-conjugated polymers [2], we find an exciplex state that occurs below the polymer's optical exciton. Weak absorption from the ground state occurs to the exciplex. We explain transient photoinduced absorptions in the blend [3], observed for both above-gap and below-gap photoexcitations, within our theory. Photoinduced absorptions for above-gap photoexcitation are from the optical exciton as well as the exciplex, while for below-gap photoexcitation induced absorptions are from the exciplex alone. In neither case are free polarons generated in the time scale of the experiment. Importantly, the photophysics of films of single-component π-conjugated polymers and blends can both be understood by extending Mulliken's theory of ground state charge-transfer to the case of excited state charge-transfer. [1] K. Aryanpour, D. Psiachos, and S. Mazumdar, arXiv:0908.0366 [2] D. Psiachos and S. Mazumdar, Phys. Rev. B. 79 155106 (2009) [3] T. Drori et al., Phys. Rev. Lett. 101, 037402 (2008)
20. Reduction of conspicuous facial pores by topical fullerene: possible role in the suppression of PGE2 production in the skin
Inui, Shigeki; Mori, Ayako; Ito, Masayuki; Hyodo, Sayuri; Itami, Satoshi
2014-01-01
Background Conspicuous facial pores are therapeutic targets for cosmeceuticals. Here we examine the effect of topical fullerene on conspicuous facial pores using a new image analyser called the VISIA® system. Ten healthy Japanese females participated in this study, and they received applications of 1% fullerene lotion to the face twice a day for 8 weeks. Findings Fullerene lotion significantly decreased conspicuous pores by 17.6% (p | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6079186797142029, "perplexity": 9836.311682028554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.23/warc/CC-MAIN-20200804073038-20200804103038-00197.warc.gz"} |
http://www.aanda.org/articles/aa/abs/2004/30/aa0874/aa0874.html | Free access
Issue A&A Volume 422, Number 3, August II 2004 793 - 816 Astrophysical processes http://dx.doi.org/10.1051/0004-6361:20035874
A&A 422, 793-816 (2004)
DOI: 10.1051/0004-6361:20035874
## Local models of stellar convection:
##### Reynolds stresses and turbulent heat transport
P. J. Käpylä1, 2, M. J. Korpi1, 3 and I. Tuominen1
1 Astronomy Division, Department of Physical Sciences, University of Oulu, PO Box 3000, 90014 University of Oulu, Finland
e-mail: petri.kapyla@oulu.fi
2 Kiepenheuer-Institut für Sonnenphysik, Schöneckstrasse 6, 79104 Freiburg, Germany
3 Laboratoire d'Astrophysique, Observatoire Midi-Pyrénées, 14 Av. E. Belin, 31400 Toulouse, France
(Received 16 December 2003 / Accepted 14 March 2004)
Abstract
We study stellar convection using a local three-dimensional MHD model, with which we investigate the influence of rotation and large-scale magnetic fields on the turbulent momentum and heat transport and their role in generating large-scale flows in stellar convection zones. The former is studied by computing the turbulent velocity correlations, known as Reynolds stresses, the latter by calculating the correlation of velocity and temperature fluctuations, both as functions of rotation and latitude. We find that the horizontal correlation, $Q_{\theta\phi}$, capable of generating horizontal differential rotation, attains significant values and is mostly negative in the southern hemisphere for Coriolis numbers exceeding unity, corresponding to equatorward flux of angular momentum. This result is also in accordance with solar observations. The radial component, $Q_{r\phi}$, is negative for slow and intermediate rotation indicating inward transport of angular momentum, while for rapid rotation, the transport occurs outwards. Parametrisation in terms of the mean-field $\Lambda$-effect shows qualitative agreement with the turbulence model of Kitchatinov & Rüdiger (1993) for the horizontal part $\Lambda_{\rm H}$, whereas for the vertical $\Lambda$-effect, $\Lambda_{\rm V}$, agreement exists only for intermediate rotation. The $\Lambda$-coefficients become suppressed in the limit of rapid rotation, this rotational quenching being stronger and occurring at slower rotation for the $\Lambda_{\rm V}$ component than for $\Lambda_{\rm H}$. We have also studied the behaviour of the Reynolds stresses under the influence of a large-scale azimuthal magnetic field of varying strength. We find that the stresses are enhanced by the presence of the magnetic field for field strengths up to and above the equipartition value, without significant quenching. Concerning the turbulent heat transport, our calculations show that the transport in the radial direction is most efficient at the equatorial regions, obtains a minimum at midlatitudes, and shows a slight increase towards the poles.
The latitudinal heat transport does not show a systematic trend as a function of latitude or rotation.
Key words: convection -- hydrodynamics -- Sun: rotation | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8636873960494995, "perplexity": 2219.521164968938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542686.84/warc/CC-MAIN-20161202170902-00383-ip-10-31-129-80.ec2.internal.warc.gz"} |
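The Reynolds stresses discussed in the abstract are correlations of velocity fluctuations, e.g. the horizontal component is the average of the product of the fluctuating horizontal velocities. A minimal sketch of how such a component is estimated (synthetic random fields with a built-in correlation, not the simulation's data):

```python
import numpy as np

# Sketch (synthetic fields, NOT the paper's simulation output): estimate the
# Reynolds stress component Q_theta_phi = <u'_theta u'_phi>, where primes
# denote deviations from the mean and <.> a volume average.

rng = np.random.default_rng(1)
shape = (64, 64, 64)
u_theta = rng.normal(size=shape)
u_phi = 0.3 * u_theta + rng.normal(size=shape)   # built-in correlation of 0.3

u_theta_p = u_theta - u_theta.mean()             # fluctuating parts
u_phi_p = u_phi - u_phi.mean()
q_theta_phi = (u_theta_p * u_phi_p).mean()       # should come out close to 0.3
print(q_theta_phi)
```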
https://www.degruyter.com/view/j/anly.2017.37.issue-4/anly-2017-0047/anly-2017-0047.xml?format=PAP | Show Summary Details
# Analysis
### International mathematical journal of analysis and its applications
CiteScore 2018: 0.72
SCImago Journal Rank (SJR) 2018: 0.363
Source Normalized Impact per Paper (SNIP) 2018: 0.530
Mathematical Citation Quotient (MCQ) 2018: 0.36
Print ISSN: 0174-4747
Volume 37, Issue 4
# Existence of variational solutions for time dependent integrands via minimizing movements
Leah Schätzler
• Corresponding author
• Department Mathematik, Friedrich-Alexander-Universität Erlangen–Nürnberg, Cauerstraße 11, 91058 Erlangen, Germany
Published Online: 2017-10-25 | DOI: https://doi.org/10.1515/anly-2017-0047
## Abstract
We prove the existence of variational solutions to equations of the form
$\partial_t u - \operatorname{div}\bigl(D_\xi f(x,t,Du)\bigr) = 0,$
where the function f merely satisfies a p-growth condition and is convex with respect to the gradient variable. In particular, we do not require any regularity assumption with respect to time. We obtain an existence result for integrands that are Lipschitz continuous in time via the method of minimizing movements. For the general existence result, we show stability of solutions with respect to approximation of the integrands. In this context, we prove a result related to Γ-convergence that is also valid for functionals with $(p,q)$-growth.
Keywords: minimizing movements
MSC 2010: 35K20; 49J40
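Since the abstract rests on the method of minimizing movements, here is a sketch of one step of the scheme in its generic form (the exact function spaces and boundary data are those of the paper, not reproduced here):

```latex
% One minimizing-movements step (generic sketch; the precise functional
% setting is as in the paper). Fix a step size h = T/N and nodes t_j = jh.
% Given u_{j-1}, define u_j as a minimizer of
\[
  \mathbf{F}_j(v) \;=\; \frac{1}{2h}\int_\Omega \lvert v - u_{j-1}\rvert^2 \,\mathrm{d}x
  \;+\; \int_\Omega f(x, t_j, Dv)\,\mathrm{d}x ,
\]
% over admissible v with the prescribed boundary values. Convexity of f in
% the gradient variable together with the p-growth condition yields the
% existence of each u_j; the piecewise constant interpolation of (u_j) then
% converges, as h -> 0, to a variational solution of
% \partial_t u - div(D_\xi f(x,t,Du)) = 0.
```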
## References
• [1]
L. Ambrosio, Minimizing movements, Rend. Accad. Naz. Sci. XL Mem. Mat. Appl. (5) 19 (1995), 191–246. Google Scholar
• [2]
L. Ambrosio, N. Gigli and G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures, 2nd ed., Lect. Math. ETH Zürich, Birkhäuser, Basel, 2008. Google Scholar
• [3]
V. Bögelein, F. Duzaar and P. Marcellini, Parabolic systems with $p,q$-growth: A variational approach, Arch. Ration. Mech. Anal. 210 (2013), no. 1, 219–267.
• [4]
V. Bögelein, F. Duzaar, P. Marcellini and C. Scheven, Doubly nonlinear equations of porous medium type: Existence via minimizing movements, preprint (2017). Google Scholar
• [5]
V. Bögelein, F. Duzaar and C. Scheven, The obstacle problem for parabolic minimizers, J. Evol. Equ. (2017), 10.1007/s00028-017-0384-4. Google Scholar
• [6]
V. Bögelein, T. Lukkari and C. Scheven, The obstacle problem for the porous medium equation, Math. Ann. 363 (2015), no. 1–2, 455–499.
• [7]
H. Brézis and F. E. Browder, Strongly nonlinear parabolic initial-boundary value problems, Proc. Natl. Acad. Sci. USA 76 (1979), no. 1, 38–40.
• [8]
F. E. Browder and H. Brézis, Strongly nonlinear parabolic variational inequalities, Proc. Natl. Acad. Sci. USA 77 (1980), no. 2, 713–715.
• [9]
M. Carozza, J. Kristensen and A. Passarelli di Napoli, Regularity of minimizers of autonomous convex variational integrals, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 13 (2014), no. 4, 1065–1089. Google Scholar
• [10]
M. Carozza, J. Kristensen and A. Passarelli di Napoli, Regularity of minimizers of variational integrals with wide range of anisotropy, Report no. OxPDE-14/01, University of Oxford, 2014, https://www.maths.ox.ac.uk/system/files/attachments/OxPDE%20%2014.01.pdf.
• [11]
G. Dal Maso, An Introduction to Γ-Convergence, Progr. Nonlinear Differential Equations Appl. 8, Birkhäuser, Boston, 1993. Google Scholar
• [12]
E. DiBenedetto, Degenerate Parabolic Equations, Universitext, Springer, New York, 1993. Google Scholar
• [13]
I. Ekeland and R. Témam, Convex Analysis and Variational Problems, Engl. ed., Class. Appl. Math. 28, Society for Industrial and Applied Mathematics, Philadelphia, 1999. Google Scholar
• [14]
U. Gianazza and G. Savaré, Abstract evolution equations on variable domains: An approach by minimizing movements, Ann. Sc. Norm. Super. Pisa Cl. Sci. (4) 23 (1996), no. 1, 149–178. Google Scholar
• [15]
A. Lichnewsky and R. Temam, Pseudosolutions of the time-dependent minimal surface problem, J. Differential Equations 30 (1978), no. 3, 340–364.
• [16]
J.-L. Lions, Quelques méthodes de résolution des problèmes aux limites non linéaires, Dunod, Paris, 1969. Google Scholar
• [17]
P. Marcellini, Approximation of quasiconvex functions, and lower semicontinuity of multiple integrals, Manuscripta Math. 51 (1985), no. 1–3, 1–28.
• [18]
L. Schätzler, Existence of variational solutions for time dependent integrands via minimizing movements, Masterarbeit, 2017. Google Scholar
• [19]
R. E. Showalter, Monotone Operators in Banach Space and Nonlinear Partial Differential Equations, Math. Surveys Monogr. 49, American Mathematical Society, Providence, 1997. Google Scholar
Accepted: 2017-09-21
Published Online: 2017-10-25
Published in Print: 2017-11-01
Citation Information: Analysis, Volume 37, Issue 4, Pages 199–222, ISSN (Online) 2196-6753, ISSN (Print) 0174-4747,
© 2017 Walter de Gruyter GmbH, Berlin/Boston. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6918442845344543, "perplexity": 3932.0306364385624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574058.75/warc/CC-MAIN-20190920175834-20190920201834-00246.warc.gz"} |
http://www.latex-community.org/forum/viewtopic.php?f=45&t=9264
Minipage Alignment
Information and discussion about graphics, figures & tables in LaTeX documents.
Minipage Alignment
I'm trying to get three minipages to align properly on my custom titlepage, the first two vertically aligned along the first line and the third one lying below the first. But it's not coming out quite right, the first two won't align along the "Mentors" line and the third minipage comes out below the third but it's indented. But if I don't put the \vspace command in, it runs right below the minipage, which I think looks awkward.
Code:

```latex
\begin{minipage}{0.5\textwidth}
\begin{flushleft}
\normalsize\textbf{Industry Mentors:}\\
Person #1 \\
Person #2\\
Person #3 \\
\end{flushleft}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{flushright}
\normalsize \textbf{Academic Mentor}\\
Person #4
\end{flushright}
\end{minipage}
\vspace{0.5cm}
\begin{minipage}[c]{0.5\textwidth}
\begin{flushleft}
{ \normalsize \textbf{Student Researchers:}\\
Person #5 (project manager)
Person #6\\
Person #7\\
Person #8\\
\end{flushleft}
\end{minipage}
```
mdk31
Posts: 20
Joined: Wed Mar 11th, 2009
Re: Minipage Alignment
To make the first two minipages line up, use the [t] option to align them both at the top vertically.
I wasn't exactly sure what you meant about the third one being indented. If I had to guess though, it's probably that a space is being put in because you didn't comment out the linefeed at the end of the \vspace command. Might as well put in the \noindent command as well.
Try something like:
Code:

```latex
\documentclass{article}
\begin{document}
\noindent%
\begin{minipage}[t]{0.5\textwidth}
\begin{flushleft}
\normalsize\textbf{Industry Mentors:}\\
Person \#1 \\
Person \#2\\
Person \#3 \\
\end{flushleft}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{flushright}
\normalsize\textbf{Academic Mentor}\\
Person \#4
\end{flushright}
\end{minipage}
\vspace{0.5cm}%
\noindent%
\begin{minipage}[t]{0.5\textwidth}
\begin{flushleft}
\normalsize \textbf{Student Researchers:}\\
Person \#5 (project manager)\\
Person \#6\\
Person \#7\\
Person \#8\\
\end{flushleft}
\end{minipage}
\end{document}
```
However, since you didn't provide a full, compilable example, it may well be that there are differences between what I have here and what you had in mind. (Your code used # where it needs \#, and there's a stray { or a missing } on one of the lines.)
frabjous
Posts: 2065
Joined: Fri Mar 6th, 2009
Location: Amherst, MA
Re: Minipage Alignment
Bless you, thanks, this works perfectly! I can't tell you how many times this forum has helped me with a LaTeX problem.
mdk31
Posts: 20
Joined: Wed Mar 11th, 2009
Re: Minipage Alignment
Hi mdk31,
great to hear that it's working.
Generally, if a problem has been solved, please mark the topic as "solved". This can be done by editing the initial post and changing the topic icon to the green checkmark.
Kind regards,
Stefan
Stefan_K | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6581622362136841, "perplexity": 6085.764033928514}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936469077.97/warc/CC-MAIN-20150226074109-00188-ip-10-28-5-156.ec2.internal.warc.gz"} |
### Hints of physics behind standard model?
The LHCb collaboration is studying the decay of $B$ mesons in order to search for violations of standard model rules. In particular, LHCb has measured a ratio, named $R(D^*)$, between two decay modes of the $\overline{B}^0$, and it finds a deviation from the standard model prediction that is compatible with other similar measurements:
In the SM all charged leptons, such as taus ($\tau$) or muons ($\mu$), interact in an identical fashion (or, in physicists' language, have the same "couplings"). This property is called "lepton universality". However, differences in mass between the leptons must be accounted for, and affect decays involving these particles. The $\tau$ lepton is much heavier than the $\mu$ lepton and therefore the SM prediction for the ratio $R(D^*)$ is substantially smaller than 1. This ratio is considered to be precisely calculable thanks to the cancellation of uncertainties associated with the $B$ to $D^*$ meson transition.
But there is another hint of new physics. At the end of July, Nature Physics published a new paper from the LHCb collaboration that bears on the possible existence of a new particle:
The LHCb collaboration published in Nature Physics a paper based on run 1 data which reports the determination of the parameter $|V_{ub}|$ describing the transition of a $b$ quark to a $u$ quark. This measurement was made by studying a particular decay of the $\Lambda_b^0$ baryon. Other measurements of $|V_{ub}|$ by previous experiments had returned two sets of inconsistent results, depending on which method was used to determine the parameter. Theorists had suggested that this discrepancy could be explained by the presence a new particle contributing to the decay process, which affected the result differently, depending on the measurement method. Today's result from LHCb removes the need for this new particle, while the puzzle of why the original sets of measurements do not agree persists.
where $|V_{ub}|$ is connected to the Cabibbo-Kobayashi-Maskawa matrix. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786820411682129, "perplexity": 706.3539025937242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121121.5/warc/CC-MAIN-20170423031201-00399-ip-10-145-167-34.ec2.internal.warc.gz"} |
# Zeitschrift für Kristallographie - Crystalline Materials
Editor-in-Chief: Pöttgen, Rainer
Ed. by Antipov, Evgeny / Bismayer, Ulrich / Boldyreva, Elena V. / Huppertz, Hubert / Petrícek, Václav / Tiekink, E. R. T.
Online ISSN 2196-7105
Volume 232, Issue 1-3 (Feb 2017)
# Thermal annealing of natural, radiation-damaged pyrochlore
Peter Zietlow / Tobias Beirau (Department of Earth Sciences, University of Hamburg, 20146 Hamburg, Germany; Department of Geological Sciences, Stanford University, Stanford, CA 94305-2115, USA) / Boriana Mihailova / Lee A. Groat (Department of Earth, Ocean and Atmospheric Sciences, University of British Columbia, Vancouver, BC V6T 1Z4, Canada) / Thomas Chudy (Department of Earth, Ocean and Atmospheric Sciences, University of British Columbia, Vancouver, BC V6T 1Z4, Canada) / Anna Shelyug (Peter A. Rock Thermochemistry Laboratory and Nanomaterials in the Environment, Agriculture, and Technology Organized Research Unit, University of California Davis, Davis, CA 95616, USA) / Alexandra Navrotsky (Peter A. Rock Thermochemistry Laboratory and Nanomaterials in the Environment, Agriculture, and Technology Organized Research Unit, University of California Davis, Davis, CA 95616, USA) / Rodney C. Ewing / Jochen Schlüter (Institute of Geological Sciences, Faculty of Science, Masaryk University, 611 37 Brno, Czech Republic) / Ulrich Bismayer
Published Online: 2016-08-30 | DOI: https://doi.org/10.1515/zkri-2016-1965
## Abstract
Radiation damage in minerals is caused by the α-decay of incorporated radionuclides, such as U and Th and their decay products. The effect of thermal annealing (400–1000 K) on radiation-damaged pyrochlores has been investigated by Raman scattering, X-ray powder diffraction (XRD), and combined differential scanning calorimetry/thermogravimetry (DSC/TG). The analyses of three natural radiation-damaged pyrochlore samples from Miass/Russia [6.4 wt% Th, 23.1·10¹⁸ α-decay events per gram (dpg)], Panda Hill/Tanzania (1.6 wt% Th, 1.6·10¹⁸ dpg), and Blue River/Canada (10.5 wt% U, 115.4·10¹⁸ dpg) are compared with a crystalline reference pyrochlore from Schelingen (Germany). The type of structural recovery depends on the initial degree of radiation damage (Panda Hill 28%, Blue River 85% and Miass 100% according to XRD), as the recrystallization temperature increases with increasing degree of amorphization. Raman spectra indicate reordering on the local scale during annealing-induced recrystallization. As Raman modes around 800 cm−1 are sensitive to radiation damage (M. T. Vandenborre, E. Husson, Comparison of the force field in various pyrochlore families. I. The A2B2O7 oxides. J. Solid State Chem. 1983, 50, 362; S. Moll, G. Sattonnay, L. Thomé, J. Jagielski, C. Decorse, P. Simon, I. Monnet, W. J. Weber, Irradiation damage in Gd2Ti2O7 single crystals: ballistic versus ionization processes. Phys. Rev. B 2011, 84, 64115), the degree of local order was deduced from the ratio of the sum of the integrated intensities of the Raman bands between 605 and 680 cm−1 divided by the sum of the integrated intensities of the bands between 810 and 860 cm−1. The most radiation-damaged pyrochlore (Miass) shows an abrupt recovery of both its short-range (Raman) and long-range order (X-ray) between 800 and 850 K, while the weakly damaged pyrochlore (Panda Hill) begins to recover at considerably lower temperatures (near 500 K), extending over a temperature range of ca. 300 K, up to 800 K (Raman).
The pyrochlore from Blue River shows in its initial state an amorphous X-ray diffraction pattern superimposed by weak Bragg maxima, which indicates the existence of ordered regions in a damaged matrix. In contrast to the other studied pyrochlores, Raman spectra of the Blue River sample show the appearance of local modes between 700 and 800 cm−1 above 560 K, resulting from its high content of U and Ta impurities. DSC measurements confirmed the observed structural recovery upon annealing. While the annealing-induced ordering of Panda Hill begins at a lower temperature (ca. 500 K), the recovery of the highly damaged pyrochlore from Miass occurs at 800 K. The Blue River pyrochlore shows a multi-step recovery, which is similarly seen by XRD. Thermogravimetry showed a continuous mass loss on heating for all radiation-damaged pyrochlores (Panda Hill ca. 1%, Blue River ca. 1.5%, Miass ca. 2.9%).
## Introduction
Metamict minerals, that is, those that have sustained a degree of radiation-induced structural disorder from the decay of incorporated radionuclides, are often annealed in order to re-establish their original crystalline structure [1], [2], [3], [4]. However, recent studies of radiation damage induced by ion-beam irradiation (e.g. [5], [6], [7]) show that materials and minerals may respond by different disordering mechanisms, including decomposition into new phases, cation and anion disordering to derivative crystalline structures, and the formation of amorphous domains in the recoil-atom cascade. Thus, the thermal recovery of the crystalline state in pyrochlore will depend on the degree of atomic-scale ordering and the microstructure of the damaged domains within the crystalline matrix. In this study, we investigate the recovery of radiation damage in pyrochlores that have different levels of radiation damage.
The pyrochlore structure ($Fd\bar{3}m$) with the ideal formula $^{\mathrm{VIII}}A_2\,^{\mathrm{VI}}B_2\,^{\mathrm{IV}}X_6\,^{\mathrm{IV}}Y$ is an anion-deficient derivative of the fluorite structure (AX2) type with ordered cation sites [8], [9], [10], [11]. Pyrochlore consists of corner-sharing BX6 octahedra and A2Y chains with eight-fold-coordinated A cations [11], [12], [13], [14] (see Figure 1). The 48f position is occupied by the X-anion, while the Y-site anion is located at the 8b position and anion vacancies with respect to the fluorite structure are at the 8a position [13]. X-site oxygen atoms have a variable x parameter from 0.375 (undistorted A-site cubes) to 0.4375 (undistorted B-site octahedra, shown in Figure 1). Materials with pyrochlore structure display a large variety of properties that have important technical applications, e.g. catalytic abilities, luminescence, piezoelectricity, ferro- and ferri-magnetism, and giant magnetoresistance [15].
Fig. 1:
(a) Pyrochlore structure with A-site cations (small dark gray), B-site cations (centers of octahedra), X-site oxygen anions (large gray), Y-site anions (black) and vacancies compared to fluorite structure (‘missing’ anion in white). The arrow marks channels of A-site cations along [110]. (b) A structural fragment along [111] visualizing a ring of corner-sharing BX6 octahedra.
Natural pyrochlore often occurs with a large variety of incorporated cations. Mainly Na, Ca, U, Th, Y, and REEs occupy the A-site, but also minor elements like Fe2+, Mn, Sn2+, Sb, Bi, Sr, Ba, Pb, K, and Cs have been found [16]. The B-site can be occupied by Nb, Ta, Ti, Fe3+, Zr, and Sn4+ [16]. O can occupy the X- as well as the Y-site, whereas OH and F only occupy the latter [16]. The pyrochlore supergroup is divided, based on the B-site cations, into three subgroups, namely pyrochlore with Nb5+, betafite with Ti4+, and microlite with Ta5+ as the major B-site cation [11], [17]. Natural pyrochlores can incorporate up to 30 wt% UO2 and 9 wt% ThO2 [18]. Compositions with pyrochlore structure, hosting actinides, e.g. U, Th, Cm, and Pu, have been successfully synthesized [19], [20], [21], [22], [23], [24], [25]. Chakoumakos and Ewing [26] have proposed that actinides with higher valence states like Np6+ and Pu6+ can be incorporated into an ideal or defect pyrochlore structure at the B-site.
Resulting mainly from the α-decay events of the incorporated actinides, the periodically ordered atomic structure can become disordered, often leading to amorphization (metamictization). This structural damage process has been described in great detail (e.g. [13], [27], [28], [29], [30]). The α-decay of the unstable nucleus generates two different types of particles: an α-particle, a ${}_{2}^{4}\mathrm{He}^{2+}$ core with an energy of ~4.5–5.8 MeV (for actinides), and a heavy recoil nucleus with an energy of ~70–100 keV. The smaller α-particle displaces only several hundred atoms, mostly close to the end of its trajectory at ~15–22 μm, inducing Frenkel defects in the structure by elastic collisions. The heavier recoil nucleus displaces, in spite of its lower kinetic energy (~86 keV for the 235U recoil from decay of 239Pu), several thousand atoms in its path of ~30–40 nm through the crystal structure. In zircon ~5000 atoms are displaced per decay event [31], [32]. This difference in the number of displaced atoms is attributed to the fact that the α-particle deposits most of its energy by ionization processes, whereas the recoil nucleus loses its energy by elastic collisions. Therefore, the recoil nucleus introduces atomic recoil (collision) cascades into the ordered structure. The overlap of these disordered aperiodic regions finally saturates and the long-range order is lost. In pyrochlore, radiation-induced structural changes can be described by the direct-impact model [15], [33]. Accordingly, recoil-related discrete regions consisting of several thousand displaced atoms generate percolation paths by their overlap and create a composite structure of coexisting aperiodic and crystalline regions that are enriched in defects.
From the thermodynamical point of view, the radiation-damaged structural state is metastable and can (at least to a certain extent) be recovered by thermal annealing due to activated epitaxial recrystallization or nucleation and crystal growth, depending on the original damage structure. Therefore, the degree of order of metamict natural mineral structures depends on the thermal history with respect to geological processes that can affect, e.g. crystallization and chemical composition.
Using TEM and X-ray diffraction techniques, Lumpkin and Ewing [16] were able to follow the dose-related structural damage process in natural pyrochlore. At the earliest stage, isolated α-recoil tracks show little effect. With increasing dose these tracks overlap, producing aperiodic regions (1–5 nm), and a "mottled" diffraction contrast could be observed. This leads to the coexistence of aperiodic and crystalline nanoregions, resulting with further structural damage in a predominantly amorphous matrix with embedded crystalline "islands". Finally, when the lattice fringe periodicity is completely lost and diffraction patterns show diffuse scattering bands, the fully metamict state has been attained. Critical amorphization doses are reported in the order of 10¹⁸–10²⁰ α-decay events per gram [16]. Ion irradiation studies on synthetic rare earth titanates have been carried out by Lian et al. [34], who also described several steps of increasing disorder prior to amorphization, detected by high-resolution transmission electron microscopy (HRTEM). The pyrochlores also showed an increase of their ionic conductivity.
Synthetic and to a smaller extent natural pyrochlores with a broad range of different cation substitutes have been thoroughly investigated [16], [34], [35], [36], [37], [38], [39], [40], [41]. Gregg et al. [42] observed additional Raman modes between 700 and 800 cm−1 related to U-impurities in the pyrochlore structure.
The annealing-induced recrystallization behavior of moderately to fully metamict, altered natural pyrochlores has been followed among others by differential thermal and thermogravimetric analysis (DTA/TG), revealing strong exothermic reactions and a noticeable weight loss [37], [38], [43].
Because of their long-term stability with respect to radiation-induced damage, pyrochlore-type compounds have been considered as potential hosts for a variety of radionuclides [13], [15], [30]. Therefore, important properties like the critical amorphization dose [18], [36], [44], [45], [46] and the leaching behavior [18], [47], [48], [49], [50], [51], [52], [53] have been studied in detail. To obtain better understanding of the behavior of pyrochlore under extreme conditions, combined swift heavy-ion irradiation and high-pressure studies have been performed (see [54]). Extensive work has been done on the formation enthalpies of synthetic titanates (e.g. [55], [56]). Nevertheless, knowledge about the thermodynamic stability and physical properties of disordered pyrochlore is still limited [37], [38], [57].
The local structure of synthetic pyrochlore-type materials have been extensively studied by Raman spectroscopy and the observed Raman signals could be assigned to certain phonon modes on the basis of model calculations [6], [7], [10], [45], [48], [49], [52], [58], [59], [60], [61], [62], [63], [64], [65], [66], [67], [68], [69], [70], [71].
The motivation of this paper is to address the question of how the initial degree of radiation-induced structural damage affects the thermally-induced recrystallization of pyrochlore on different length scales. Therefore, four natural fluorcalciopyrochlores with overall comparable chemistry, but different degrees of structural damage (highly crystalline to fully metamict), were investigated. In order to better understand the influence of U- and Ta-impurities, one sample with a high U/Ta content was selected. Local and longer-range probes, such as Raman spectroscopy and powder X-ray diffraction (XRD), respectively, have been used to follow the structural reorganization, while the overall recrystallization behavior has been determined by simultaneous differential scanning calorimetry and thermogravimetric analysis (DSC-TG). The results obtained in this study reveal a direct dependence of the short- and long-range recrystallization on the initial degree of structural damage, as well as the occurrence of local modes due to U and Ta contamination.
## Electron microprobe analysis
The chemical composition of the pyrochlore was determined by electron microprobe analysis (CAMECA Camebax microbeam SX SEM system), averaging over 9–65 analysis points with an acceleration voltage of 15 keV and a probe current of 20 nA. The beam diameter was set at 1 μm and ZAF correction was used for intensity-mass correlation. Standards used were LiF (F-Kα), albite (Na-Kα), andradite (Ca-Kα, Fe-Kα), MnTiO3 (Ti-Kα), SrTiO3 (Sr-Lα), ZrSiO4 (Zr-Lα), Nb2O5 (Nb-Lα), ree3 (La-Lα), ree2 (Ce-Lβ), ree4 (Nd-Lβ), Ta2O5 (Ta-Lα), Th glass (Th-Mα) and UO2 (U-Mβ).
## X-ray powder diffraction
Powder XRD patterns were recorded using a Philips X'Pert powder diffractometer with Bragg-Brentano geometry and Cu-Kα radiation. Multistep annealing of the ground pyrochlores was carried out in a Thermo Scientific Laboratory Chamber Furnace K114. The temperature was controlled by an AHLBORN THERM 2420 temperature-measuring device equipped with a NiCr–Ni thermocouple, which ensured a thermal stability of ±2 K. The samples were annealed for 1 h in air in the temperature range of 500–1000 K. Diffraction measurements were performed after a relaxation time of 30 min.
## Spectroscopy
Raman spectra were recorded from the crystallographic plane (111) of each sample. The measurements were performed on polished plane-parallel specimens using a Horiba Jobin-Yvon T64000 triple monochromator system operating in the subtractive regime and equipped with a liquid-N2-cooled charge-coupled device (CCD) detector and an Olympus BX41 microscope. Spectra were collected in backscattering geometry without an analyzer of the scattered light, using the 514.5 nm line of an Ar+-ion laser and a long-working-distance objective with magnification 50×. The Raman spectroscopic system was always calibrated to the position of the Si peak at 520.5 cm−1 with a precision of ±0.35 cm−1. The measured spectra were reduced by the Bose-Einstein occupation factor {Ireduced = Imeasured/[n(ω, T) + 1], n(ω, T) = 1/[exp(ℏω/kT) − 1]} and fitted by Lorentzian functions using the software package Origin 8.5, to determine the peak positions, full widths at half maximum (FWHM), and integrated intensities. Before annealing and after the last annealing step the samples were checked for photoluminescence with the 488 nm laser line and for homogeneity at different spots on the polished (111) surfaces. For vibrational mode assignment, spectra measured with parallel polarization were compared to those with crossed polarization of the incident and scattered beams. As the structurally damaged samples showed varying 810 cm−1 intensities before annealing, areas with a low 605-to-810 cm−1 intensity ratio were chosen for analysis. The samples were annealed in a Linkam TS1200EV-10/5 stage with annealing times of 1 h and a heating/cooling ramp of 15 min per temperature step. Raman measurements were performed at room temperature after each annealing step.
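As a minimal sketch of the reduction step described above, the snippet below applies the Bose-Einstein correction to a measured Stokes intensity. The function name and the example values are our own (not from the paper); hc/k ≈ 1.43877 cm·K converts a Raman shift in cm−1 into the dimensionless exponent ℏω/kT.

```python
import math

HC_OVER_K = 1.43877  # h*c/k_B in cm*K: converts a Raman shift (cm^-1) to hw/kT

def bose_einstein_reduce(shift_cm1, intensity, temp_k=295.0):
    """Reduce a measured Stokes Raman intensity by the Bose-Einstein
    occupation factor: I_reduced = I_measured / (n(w, T) + 1),
    with n(w, T) = 1 / (exp(h*c*w / (k*T)) - 1)."""
    n = 1.0 / math.expm1(HC_OVER_K * shift_cm1 / temp_k)
    return intensity / (n + 1.0)

# The thermal correction matters most at low Raman shifts: at room temperature
# a band at 65 cm^-1 is scaled down far more strongly than one at 800 cm^-1.
print(bose_einstein_reduce(65.0, 100.0))   # strongly reduced
print(bose_einstein_reduce(800.0, 100.0))  # nearly unchanged
```

The subsequent Lorentzian fitting would be done on the reduced spectrum (here with Origin 8.5; any least-squares fitter serves the same purpose).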
In order to verify whether in general the compositional disorder in pyrochlore violates the phonon selection rules, additional infrared spectra from the crystalline Schelingen reference sample were collected with a Bruker Equinox Fourier Transform Infrared spectrometer. The conventional pellet technique was used for FTIR. The matrix material was KBr (1 mg powdered sample and 200 mg KBr powder).
## Thermal analysis
Simultaneous DSC-TG analyses were performed with a Netzsch STA 449 C instrument. The samples were annealed in a Pt pan in air from room temperature up to 1000 K with a heating rate of 10 K/min. A weak weight loss due to volatile species, as has been observed in other radiation-damaged minerals (e.g. [27]), could not be excluded.
## Sample description
The samples (Figure 2) were obtained from the Centrum für Naturkunde – Mineralogical Museum Hamburg, (referred as Schelingen and Miass), the Pacific Museum of the Earth of the University British Columbia (referred as Panda Hill) and the collection of Thomas Chudy from the University of British Columbia (referred as Blue River).
Fig. 2:
Pyrochlores from (a) Schelingen, Germany, (b) Panda Hill, Tanzania, (c) Blue River, Canada, and (d) Miass, Russia.
All samples were octahedral in form, but the most radiation-damaged samples showed a glassy luster (Figure 2). The Schelingen pyrochlore originates from the Kaiserstuhl carbonatite complex in southwest Germany. With an age of 16±2 Ma [72] it was the youngest sample in this study. Schelingen pyrochlore appeared as brown idiomorphic octahedrons of millimeter size. Panda Hill pyrochlore samples were from the Mbeya mountain range in southwest Tanzania. The corresponding carbonatite complex has been dated to an age of 116±6 Ma [73]. The Panda Hill pyrochlores were brown in color with an idiomorphic octahedral form of 1–3 cm diameter. The initially already partially recrystallized Blue River pyrochlore sample originated from the Upper Fir carbonatite complex in the Monashee Mountains, Canada, whose deposit age is 328±30 Ma (U-Pb dating of zircon) according to the carbonatite data base [74]. The black Blue River sample showed glassy reflection on preserved octahedral surfaces. In addition, black Miass pyrochlores from the Ilmen Mountains in the southern Urals, Russia, showed a glassy luster of their idiomorphic octahedral shape. Ages were reported as 432±12 Ma [75]. Schelingen pyrochlore was selected as a crystalline reference, the Panda Hill sample as a weakly radiation-damaged, the Blue River pyrochlore as a heavily damaged, containing some ordered crystalline areas, and Miass pyrochlore as a fully-amorphous sample.
The pyrochlore samples had Nb contents ranging from 32.5 to 42.8 wt% at the B-cation site with some content of Ti (0.6–6.0 wt%) or Ta (0.1–14.9 wt%). Ca (7.4–13.1 wt%) and Na (1.6–4.3 wt%) are assigned as A-cations. At the 8a Wyckoff positions there are O and F. All probed pyrochlores have a rather high F content of 2–4 wt%. Incorporated OH/H2O is considered as the main contribution to fractions missing to 100%.
The results of the electron microprobe analyses are listed in Table 1. Atoms per formula unit were calculated with respect to two B-site cations, assuming that each element has only one oxidation state and that charge is balanced by molecular OH. The approximated formulas of the analyzed pyrochlores (□ = vacancy) are:
• Schelingen: (Na0.3Ca1.4Fe0.1REE0.2)(Ti0.1Zr0.1Nb1.9)O6(O0.8(OH)0.1F0.4),
• Panda Hill: (Na0.6Ca1.2□0.2)(Ti0.2Nb1.7)O6(O0.1F0.8□0.1),
• Blue River: (Na0.6Ca0.8U0.2□0.4)(Ti0.2Nb1.5Ta0.3)O6(O0.2F0.5□0.3),
• Miass: (Na0.7Ca1.0REE0.2Th0.1)(Ti0.5Nb1.5)O6(O0.1F0.9).
Tab. 1:
Chemical composition of pyrochlore samples in wt% oxides.
For detailed composition see Table 1.
The only sample containing larger amount of U and Ta is from Blue River, while the other pyrochlore samples rather contain Th- and Ti-impurities. From the actinide content measured by EMP (Table 1) and geologic age the total radiation dose can be calculated using α-decays of the 232Th-, 235U- and 238U-decay series as given by Holland and Gottfried [76]
$D_{\alpha} = 8\,\frac{c_{\mathrm{U}}\, N_{\mathrm{A}}\, 0.9928}{M_{238}\cdot 10^{6}}\left(e^{\lambda_{238} t} - 1\right) + 7\,\frac{c_{\mathrm{U}}\, N_{\mathrm{A}}\, 0.0072}{M_{235}\cdot 10^{6}}\left(e^{\lambda_{235} t} - 1\right) + 6\,\frac{c_{\mathrm{Th}}\, N_{\mathrm{A}}}{M_{232}\cdot 10^{6}}\left(e^{\lambda_{232} t} - 1\right)$ (1)
with U and Th concentrations cU and cTh in ppm, Avogadro's number NA, molecular weights of the parent isotopes M238, M235 and M232, decay constants λ238, λ235 and λ232 [77], and the reported geologic age t in years (Table 2). Actinide contents range from the qualitative detection limit to 10.6 wt%, resulting in doses of <0.1–115.4·10¹⁸ decay events per gram, considering the reported crystallization ages of the carbonatite host rocks. After the radiation damage was acquired, geological conditions may have further altered the partially amorphous or amorphous state of the samples. More recent deformation events of the host rocks were the greenschist-facies metamorphism of the Rocky Mountains and the Urals [75], where temperatures needed for structural healing could have been reached and the crystal structure might have been (partially) reset. Doses shown in Table 2 are the maximum lifetime doses. Microprobe analyses averaged over 9–65 points and showed only minor chemical deviation, except for the Panda Hill sample with narrow chemical zoning towards the sample rim.
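Equation (1) is straightforward to evaluate numerically. The sketch below is our own (function name, variable names and the rounded decay constants are assumptions, with λ values after Steiger & Jäger as cited via [77]); fed with the U content and age quoted for Blue River, it reproduces the order of magnitude of the doses in Table 2.

```python
import math

# Decay constants in 1/yr (rounded standard values) and molar masses in g/mol
LAMBDA = {"U238": 1.55125e-10, "U235": 9.8485e-10, "Th232": 4.9475e-11}
MOLAR = {"U238": 238.05, "U235": 235.04, "Th232": 232.04}
N_A = 6.02214e23  # Avogadro's number

def alpha_dose(c_u_ppm, c_th_ppm, age_yr):
    """Cumulative alpha-decay dose in events per gram after Eq. (1).
    c_u_ppm / c_th_ppm are U and Th concentrations in ppm by weight,
    age_yr is the geologic age t in years."""
    d = 8 * c_u_ppm * N_A * 0.9928 / (MOLAR["U238"] * 1e6) * math.expm1(LAMBDA["U238"] * age_yr)
    d += 7 * c_u_ppm * N_A * 0.0072 / (MOLAR["U235"] * 1e6) * math.expm1(LAMBDA["U235"] * age_yr)
    d += 6 * c_th_ppm * N_A / (MOLAR["Th232"] * 1e6) * math.expm1(LAMBDA["Th232"] * age_yr)
    return d

# Blue River: ~10.5 wt% U (105000 ppm), age 328 Ma -> dose on the order of
# 1.15e20 alpha-decay events per gram (the U contribution alone).
print(f"{alpha_dose(105000, 0, 328e6):.3e}")
```

`math.expm1` is used instead of `exp(x) - 1` to stay accurate for the small λt values of young samples.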
Tab. 2:
Ages and doses of pyrochlore samples.
## Long-range order: effect of radiation damage and subsequent thermal annealing
The X-ray powder diffraction patterns of the three investigated virgin, radiation damaged pyrochlores show three broad amorphous background features centered at 28.9°, 50.0° and 58.4° 2θ (Panda Hill), 30.3°, 51.5° and 61.4° 2θ (Blue River), and 30.3°, 50.9° and 59.4° 2θ (Miass) (Figure 3). As these features decrease in intensity with increasing annealing temperature, resulting from structural reorganization, the evolution of the amorphous fraction (Xamorph) could be determined by
$X_{\mathrm{amorph}} = \frac{I_{\mathrm{amorph}}}{I_{\mathrm{amorph}} + I_{\mathrm{Bragg}}}$ (2)
where Iamorph and IBragg are the integrated intensities of the amorphous background and the Bragg signals, respectively (errors are in the order of 5%) (Figure 4a). The natural crystalline sample from Schelingen served as a reference with 9% disorder, probably due to impurities, which remained constant on annealing (Figure 4a). The resulting amorphous fractions of the untreated samples are 28% (Panda Hill), 85% (Blue River) and 100% (Miass). Annealing at T = 1000 K reduced the amorphous fractions to 14% (Panda Hill), 8% (Blue River) and 23% (Miass), with an experimental error of ±2%.
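Equation (2) amounts to a one-line calculation. The helper below uses our own naming, and the integrated intensities in the loop are purely hypothetical; they merely illustrate how the fractions plotted in Figure 4a follow from the fitted XRD intensities.

```python
def amorphous_fraction(i_amorph, i_bragg):
    """Amorphous fraction after Eq. (2): X_amorph = I_amorph / (I_amorph + I_Bragg)."""
    return i_amorph / (i_amorph + i_bragg)

# Hypothetical integrated intensities (arbitrary units) for an annealing series;
# only the ratio matters, mirroring the trend of a damaged sample on heating.
series = [(295, 850.0, 150.0), (700, 400.0, 600.0), (1000, 80.0, 920.0)]
for temp_k, i_am, i_br in series:
    print(f"{temp_k} K: {amorphous_fraction(i_am, i_br):.2f}")
```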
Fig. 3:
Diffractograms of samples (a) Schelingen (a: 295 K, b: 500 K, c: 700 K, d: 750 K, e: 770 K, f: 780 K), (b) Panda Hill (a: 295 K, b: 500 K, c: 700 K, d: 710 K, e: 720 K, f: 800 K, g: 900 K, h: 1000 K), (c) Blue River (a: 295 K, b: 500 K, c: 550 K, d: 600 K, e: 650 K, f: 700 K, g: 710 K, h: 720 K, i: 730 K, j: 800 K, k: 900 K, l: 1000 K) and (d) Miass (a: 295 K, b: 780 K, c: 790 K, d: 800 K, e: 810 K, f: 820 K, g: 830 K, h: 900 K, i: 1000 K); annealing for 1 h in air. ‘*’ indicates impurities.
Fig. 4:
(a) Amorphous fraction deduced from the XRD amorphous background and the total integrated intensities according to Iamorph/(Iamorph+IBragg) and (b) FWHM of individual (440) powder XRD signals for pyrochlores from Schelingen (<0.1·10¹⁸ dpg), Panda Hill (▲, 1.6·10¹⁸ dpg), Blue River (□, 115.4·10¹⁸ dpg) and Miass (●, 23.2·10¹⁸ dpg), respectively. The Miass pyrochlore shows no (440) signal exceeding the background noise below 800 K. Lines are guides for the eye.
The diffraction signals of all three radiation damaged samples sharpen and the amorphous background decreases significantly during the step-wise thermal annealing, indicating recrystallization on the mesoscopic length scale. Evolution of the full width at half maximum (FWHM) of the prominent (440) diffraction signal is shown as a function of temperature in Figure 4b. In weakly damaged Panda Hill pyrochlore it gradually decreases from 0.2° to 0.1° 2θ accompanied by a decreasing amorphous fraction from ~24% to 10% between 500 and 700 K.
The FWHM of the strongly radiation-damaged Blue River sample is stepwise reduced from 0.3° to 0.1° 2θ between ~650 and 1000 K. The amorphous fraction starts to decrease from ~80% to <10% between ~500 and 800 K, but shows a strong reduction at 650 K.
Metamict Miass pyrochlore remains fully amorphous on the XRD length scale until 790 K. Up to this temperature the diffraction patterns show only a broad Gaussian-shaped amorphous background centered near 30.3° 2θ. The FWHM of this background feature decreases, following a linear trend, during step-wise annealing from room temperature up to 790 K. The recrystallization occurs rather abruptly between ca. 790 and 850 K, indicated by a sharpening of the diffraction signals and a decrease of the amorphous background. Only a small and constant amorphous fraction (~25%) remains visible at temperatures above ca. 850 K.
In addition to the broad background signal, the weakly damaged Panda Hill and strongly damaged Blue River pyrochlores show narrow Bragg diffraction signals, which may indicate the presence of small crystalline clusters in the virgin samples. The intensities of these Bragg signals increase on heating between 500 and 700 K and remain constant above 730 K. In the Miass pyrochlore a NaNbO3 phase starts to occur around 820 K. With further annealing up to 1000 K this fraction increases, but remains minor compared to the pyrochlore phase. The Panda Hill and Blue River samples show no evidence of secondary phases up to annealing temperatures of 1000 K, except for a single unidentified diffraction signal at 26.7° in the Panda Hill sample. In all three samples a small amorphous fraction remains visible up to 1000 K. The obtained results show a positive correlation between the degree of structural damage on the long-range scale and the recrystallization behavior, according to the annealing-induced sharpening of the diffraction signals.
## Short-range order: effect of radiation damage and subsequent thermal annealing
First-order infrared absorption and Raman scattering spectra of pure pyrochlore can be described by the corresponding optically active irreducible representations at the center of the Brillouin zone [78]. According to group theory, pyrochlore has six Raman and seven infrared active, 12 inactive optical and one acoustic mode [78]. The Γ-point phonon modes associated with the occupied Wyckoff positions in pyrochlore are given in Table 3.
Tab. 3:
Vibrational modes of the pyrochlore structure ($Fd\bar{3}m$).
The experimentally observed Raman signals are strongly broadened and show considerable overlap. In addition, the metamict Miass pyrochlore shows photoluminescence in the whole measured spectral range. Observed modes are centered at 65, 105, 140, 180, 275 (T2g), 365 (Eg), 430, 495 (Ag), 540 (T2g), 605 (T2g), 680, 810 and 860 cm−1. The Raman band intensities between 450 and 1000 cm−1 are considerably lower in cross-polarized than in parallel-polarized geometry, indicating symmetric stretching A-modes.
Various studies assigned the fundamental vibrational modes in the range between 140 and 650 cm−1 to different chemical compositions as shown for comparison in Figure 5 (after [5], [6], [7], [10], [42], [45], [48], [49], [52], [54], [57], [58], [59], [60], [61], [62], [63], [64], [65], [66], [67], [71], [79], [80], [81], [82], [83], [84], [85], [86]). The observed broad mode above 700 cm−1 was assigned either to the T2g vibration [64], to a forbidden mode or a combination of excitations [10], [58]. Frost et al. [87] attributed the complexity of the excitation pattern in the region 600–700 cm−1 to Ti–O stretching vibrations combined with peaks due to Nb–O and Y–O stretching bands.
Fig. 5:
Experimentally observed peak positions of the Raman bands of pyrochlores and their mode assignments [58]1; [64]2; [61]3; [66]4; [62]5; [57]6; [67]7; [63]8; [60]9; [52]10; [59]11; [71]12; [79]13; [10]14; [80]15; [81]16; [82]17; [7]18; [45]19; [6]20; [65]21; [48], [49]22; [54], [83]23; [84]24; [5]25; [42]26; [85]27; [86]28.
Structural recrystallization on the local length scale can be followed via the ratio of the summed band intensities in the ranges 605–680 cm−1 and 810–860 cm−1 at the different annealing steps. This ratio varies inversely with the FWHM of the diffraction signals.
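The intensity-ratio indicator described above can be sketched numerically. The snippet below builds a synthetic, baseline-corrected spectrum from two Gaussian bands standing in for the 605–680 and 810–860 cm−1 features (all band parameters are invented) and integrates each spectral window:

```python
import numpy as np

# Synthetic, baseline-corrected Raman spectrum: two Gaussian bands standing in
# for the 605-680 cm^-1 and 810-860 cm^-1 features. Amplitudes and widths are
# invented for illustration only.
wavenumber = np.linspace(400.0, 1000.0, 2001)   # cm^-1, uniform grid
dx = wavenumber[1] - wavenumber[0]

def gaussian(x, center, fwhm, amplitude):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return amplitude * np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

intensity = (gaussian(wavenumber, 605.0, 40.0, 1.0)
             + gaussian(wavenumber, 810.0, 50.0, 0.5))

def integrated_intensity(lo, hi):
    """Approximate the integral of the spectrum over [lo, hi] cm^-1."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    return float(np.sum(intensity[mask]) * dx)

# Local-order indicator: lower-band region divided by higher-band region.
ratio = integrated_intensity(580.0, 700.0) / integrated_intensity(760.0, 900.0)
print(f"intensity ratio: {ratio:.2f}")
```

In practice the bands would first be fitted and the ratio tracked as a function of annealing temperature, as in Figure 7.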
The Schelingen pyrochlore, which has the highest degree of crystallinity among the studied pyrochlore samples, shows five main Raman bands at 180, 275, 540, 605 and 810 cm−1 (Figures 6–8). The signals are strongly broadened due to structural vacancies and chemical heterogeneity. The latter was observed by microprobe analysis, which confirmed the presence of several different cations occupying the two structural cation positions.
Fig. 6:
Parallel polarized Raman spectra of weakly damaged (a) Panda Hill, (b) Blue River and (c) Miass pyrochlores at room temperature (ref: reference sample) and after annealing at given temperatures for 1 h individually; the Raman spectrum of non-amorphous Schelingen pyrochlore is shown for comparison.
Fig. 7:
Ratio of the sum of integrated intensities of the fitted modes at 605 and 680 cm−1 divided by the sum of integrated intensities of the modes at 810 and 860 cm−1 for the weakly amorphous Panda Hill (▲), intermediate Blue River (□, excluding the sharp additional modes) and metamict Miass (●) pyrochlores (dashed lines are guides for the eye).
Fig. 8:
Comparison of Raman and FTIR spectra of the untreated Schelingen sample.
The weakly damaged Panda Hill pyrochlore shows a band pattern in the virgin sample which approaches that of the Schelingen sample on annealing. The 810 cm−1 signal clearly decreases with annealing temperature but does not disappear completely up to 1000 K.
In contrast to all the other studied samples, the radiation-damaged Blue River pyrochlore shows four additional sharp signals between 700 and 800 cm−1 after annealing at temperatures >560 K, although the material appears disordered on the long-range length scale (Figure 3c). The main differences from the other studied pyrochlores are its larger Ta and U contents. Sharp modes between 700 and 800 cm−1 have been assigned to U–O [42] and Nb/Ta–O stretching [88]. Local modes can occur in systems with a large amount of interacting impurities that do not form proper phases [89]. This is in excellent agreement with the observed Raman signals in the same wavenumber region in the Blue River sample (Figure 6b). In order to evaluate the general intensity evolution on annealing, these additional sharp signals have been excluded from the calculated intensity ratio (Figure 7). The high U/Ta impurity concentrations (Table 1) are assumed to dilute the Ca and Nb positions, respectively. In this respect, the Blue River pyrochlore is an excellent example for studying local mode behavior in natural pyrochlore.
While Ta5+ cations have an ionic radius similar to Nb5+, their masses (180.9 amu and 92.9 amu, respectively [77]) differ considerably. Despite differences in the compositions of the partially amorphous samples, all signal positions could be clearly determined at higher annealing temperatures; moreover, the spectra indicate a generally weak influence of the chemistry on the Raman shift (e.g. substitution of Nb by Ta and of A-site cations by U in the Blue River sample).
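The mass argument can be made semi-quantitative with a deliberately crude diatomic-oscillator estimate (not part of the original study): a stretching frequency scales as 1/√μ with the reduced mass μ, so substituting Nb by the roughly twice-as-heavy Ta lowers a hypothetical M–O mode frequency only modestly, because μ is dominated by the light oxygen atom.

```python
import math

# Diatomic harmonic-oscillator estimate: omega is proportional to 1/sqrt(mu),
# with mu the reduced mass of the M-O pair. Atomic masses in amu; the force
# constant is assumed unchanged on substitution (a simplification).
m_O, m_Nb, m_Ta = 16.0, 92.9, 180.9

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

# Frequency ratio of a Ta-O versus an otherwise identical Nb-O oscillator.
ratio = math.sqrt(reduced_mass(m_Nb, m_O) / reduced_mass(m_Ta, m_O))
print(f"omega(Ta-O) / omega(Nb-O) ~ {ratio:.3f}")
```

Under these assumptions the shift is only about 4%, consistent with the observation that the Ta-for-Nb substitution has little influence on the measured Raman shifts.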
In the metamict Miass sample, the Raman signal at 810 cm−1 is the only one exceeding the photoluminescence. The collected Raman spectra consist of strongly broadened overlapping bands. The photoluminescence disappears with the onset of recrystallization at temperatures above 800 K. With increasing crystallinity the 605 cm−1 mode (T2g) becomes the strongest signal, while the mode near 810 cm−1 decreases sharply.
Vandenborre et al. [58] assumed the 810 cm−1 band to be a combination band. A further possible explanation for the appearance of the Raman band near 810 cm−1 is broken selection rules. This would be related to a lower symmetry in the real structure compared to the ideal arrangement, which would be the case for the relevant short-range coordination if additional vacancies and impurities distort the local symmetry. To verify this assumption, additional IR spectroscopic measurements were conducted on the crystalline reference sample from Schelingen. Like the Raman bands, the infrared bands appear strongly broadened due to impurities and defects. There are two intensity maxima, centered at 553 and 710 cm−1, respectively (Figure 8). Both show right-hand shoulders (centered at 620 and 664 cm−1, and at 753, 829 and 895 cm−1, respectively).
The comparison of Raman and infrared spectra also confirms the violation of the selection rules in pyrochlore. Because the pyrochlore space group is centrosymmetric, simultaneously Raman- and IR-active modes are symmetry-forbidden. Figure 8 shows that the crystalline reference sample has infrared maxima centered at 553, 620, 664 and 829 cm−1, corresponding to Raman shifts centered at 540, 605, 680 and 810 cm−1. Six of the seven fundamental infrared-active modes were observed between 70 and 523 cm−1 in chemically comparable niobate pyrochlores [64], [66]. The seventh mode was assigned to an infrared band between 550 and 850 cm−1, which could correspond to the observed band at 553 cm−1 in the Schelingen sample. Additional infrared bands within this range are either not fundamental modes or are related to clusters of substitutional ions with lighter masses or stronger bonds.
## Thermal analysis of the structural reorganization process
The annealing-induced thermal behavior of the three investigated radiation-damaged pyrochlore samples from Panda Hill, Blue River and Miass has additionally been studied by combined DSC-TG analysis (Figure 9). The obtained results correlate well with the above-mentioned investigations of the annealing-related short- and long-range ordering behavior, enabling a complete picture of the underlying recrystallization processes.
Fig. 9:
Differential scanning calorimetry curves (solid line, left axis) and thermogravimetry (dotted line, right axis) of pyrochlores from Panda Hill (a), Blue River (b) and Miass (c).
The weakly damaged Panda Hill sample shows a broad, poorly defined exothermic DSC signal between ~450 and 800 K, with a maximum around 620 K (Figure 9a). The observed overall mass loss during annealing up to ca. 1000 K is around 0.6%. The temperature range of the observed exothermic reaction, accompanied by an increase in weight loss (Figure 9a), is in very good agreement with the findings of the Raman spectroscopic and powder XRD measurements. This indicates that the major structural reorganization of this relatively weakly disordered sample takes place between ca. 500 and 800 K (Figures 4, 7, 9a).
The relatively strongly radiation-damaged Blue River sample shows a broad exothermic DSC feature in a temperature range comparable to Panda Hill (~435 to 800 K), with at least two prominent maxima: a broader one at ~610 K and a sharply defined one at ~785 K (Figure 9b). The measured mass loss increases noticeably between 560 and 653 K, with an overall decrease of 1.5%. Raman spectroscopy revealed an onset of the recrystallization process on the local length scale around 550 K, with a strong increase up to ~620 K (Figures 4, 7). This is consistent with the DSC results, which show a first broadened exothermic maximum around 610 K (Figure 9b). The Raman measurements indicate further structural ordering up to ca. 800 K, reflected by the second sharp and well-defined DSC maximum around 785 K (Figures 7, 9b). This ongoing recrystallization is also indicated by the occurrence of the four additional Raman peaks between 730 and 800 cm−1 above 560 K (Figures 7, 9b). This observation is consistent with the behavior of the FWHM of the main (440) pyrochlore diffraction signal, which indicates the reestablishment of the long-range order at least up to ~1000 K (Figure 9b). The main decrease of the amorphous fraction takes place between ca. 650 and 800 K. Hence, the intermediately damaged pyrochlore structure first shows a strong recovery on the local length scale, followed by the reestablishment of the long-range order at slightly higher annealing temperatures.
The DSC curve of the metamict Miass sample (completely amorphous on the XRD length scale) shows a strong, distinct exothermic peak between 800 and 900 K with a sharp maximum around 840 K (Figure 9c). The observed overall mass loss during annealing is around 2.9%, with a distinct drop around 490 K. This temperature range is in good agreement with the results of the Raman spectroscopic and powder XRD measurements, which indicate annealing-induced recrystallization between ~800 and 850 K. The observed change of the shape of the Raman signal between 820 and 850 K (Figures 4, 9c) coincides very well with the relatively high enthalpy of the exothermic DSC peak in this temperature region (Figure 9c).
The initially existing crystalline clusters in the radiation-damaged pyrochlores (Figure 3) coincide with the observed decrease of the recrystallization temperature (Figures 7, 9). This indicates a reduction of the recrystallization energy when crystalline seeds are present. HRTEM studies by Lumpkin et al. [14] revealed the existence of crystalline “islands” in structurally and chemically comparable pyrochlore samples, with the crystalline fraction decreasing with increasing radiation dose. Similar behavior has been observed, e.g., in zircon [90] and titanite [27].
## Conclusion
This study revealed a correlation between the initial structural damage state and the recrystallization behavior of metamict natural pyrochlore. Considerable differences occur during thermally induced recrystallization depending on the degree of damage. After annealing at T = 1000 K, the amorphous fraction was reduced from 28% to 14% in Panda Hill, from 85% to 8% in Blue River and from 100% to 23% in Miass.
The observed differences in the recrystallization lead to a picture of a highly recrystallization-resistant metamict pyrochlore (Miass) and a significantly less resistant structure with some initial crystalline clusters (Blue River) (Figures 3, 4). Recrystallization of the fully metamict phase takes place in a very narrow temperature window and at the highest annealing temperature of all probed samples. The initially existing crystalline clusters in the Blue River sample result in a stepwise recrystallization process (Figures 4, 9). The weakly radiation-damaged pyrochlore recrystallizes rather continuously, starting at low annealing temperatures. Thermogravimetry showed that the recrystallization is not accompanied by a significant weight loss.
The observed Raman bands of all samples deviate from the theoretically predicted modes and are very broad, due to chemical impurities and radiation damage. Nevertheless, the degree of local order could be successfully obtained from the ratio between the integrated intensities of prominent spectral features (e.g. the Raman bands around 605–680 cm−1 and 810–860 cm−1 for pyrochlore). According to X-ray diffraction, which probes the longer-range order, thermal recrystallization happens in the temperature intervals 500–700 K for Panda Hill, 650–800 K for Blue River and 800–850 K for Miass, while the local Raman probe finds recrystallization in the ranges 500–800 K, 550–620 K and 820–850 K, respectively.
High amounts of U and Ta lead to the formation of local modes in the Blue River pyrochlore. This effect occurs above 560 K when the material is still disordered (Figure 3c) and impurity interactions of U/Ta-O stretching modes become prominent (Figure 6b). This has been observed in natural pyrochlore for the first time.
## Acknowledgments
Financial support by the DFG (SPP 1415) is gratefully acknowledged (P.Z. and U.B.). T.B. is grateful for the support by the DAAD with funds from the BMBF and the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n° 605728 (P.R.I.M.E. – Postdoctoral position) and also to the University of Hamburg. We would like to thank Stefanie Heidrich for microprobe analysis, Joachim Ludwig for powder XRD measurements and Peter Stutz for sample preparation. The thermal analysis at UC Davis was supported by the U.S. Department of Energy through the Energy Frontier Research Center “Materials Science of Actinides” under Award Number DE-SC001089.
## References
• [1]
A. Pabst, The metamict state. Am. Mineral. 1952, 37, 137. Google Scholar
• [2]
V. Bouska, A systematic review of metamict minerals. Acta Univ. Carol. Geol. 1970, 3, 143. Google Scholar
• [3]
R. C. Ewing, The metamict state: 1993 – the centennial. Nucl. Instrum. Meth. B 1994, 91, 22. Google Scholar
• [4]
G. R. Lumpkin, T. Geisler-Wierwille, Minerals and Natural Analogues, 1st ed., Elsevier Inc, 2012. Google Scholar
• [5]
B. P. Mandal, M. Pandey, A. K. Tyagi, Gd2Zr2O7 pyrochlore: potential host matrix for some constituents of thoria based reactor’s waste. J. Nucl. Mater. 2010, 406, 238. Google Scholar
• [6]
G. Sattonnay, S. Moll, L. Thomé, C. Decorse, C. Legros, P. Simon, J. Jagielski, I. Jozwik, I. Monnet, Phase transformations induced by high electronic excitation in ion-irradiated Gd2(ZrxTi1–x)2O7 pyrochlores. J. Appl. Phys. 2010, 108, 103512. Google Scholar
• [7]
S. Moll, G. Sattonnay, L. Thomé, J. Jagielski, C. Decorse, P. Simon, I. Monnet, W. J. Weber, Irradiation damage in Gd2Ti2O7 single crystals: Ballistic versus ionization processes. Phys. Rev. B 2011, 84, 64115. Google Scholar
• [8]
B. C. Chakoumakos, Systematics of the pyrochlore structure type, ideal A2B2X6Y. J. Solid State Chem. 1984, 53, 120. Google Scholar
• [9]
D. D. Hogarth, C. T. Williams, P. Jones, Primary zoning in pyrochlore group minerals from carbonatites. Mineral. Mag. 2000, 64, 683. Google Scholar
• [10]
M. Glerup, O. F. Nielsen, F. W. Poulsen, The structural transformation from the pyrochlore structure, A2B2O7, to the fluorite structure, AO2, studied by Raman spectroscopy and defect chemistry modeling. J. Solid State Chem. 2001, 160, 25. Google Scholar
• [11]
D. Atencio, R. Gieré, M. B. Andrade, A. G. Christy, P. M. Kartashov, The pyrochlore supergroup of minerals: nomenclature. Can. Mineral. 2010, 48, 673. Google Scholar
• [12]
R. C. Ewing, A. Meldrum, L. Wang, S. Wang, Radiation-induced amorphization. Rev. Mineral. Geochem. 2000, 39, 319. Google Scholar
• [13]
R. C. Ewing, Actinides and radiation effects: impact on the back-end of the nuclear fuel cycle. Mineral. Mag. 2011, 75, 2359. Google Scholar
• [14]
G. R. Lumpkin, R. C. Ewing, Y. Eyal, Preferential leaching and natural annealing of alpha-recoil tracks in metamict betafite and samarskite. J. Mater. Res. 1988, 3, 357. Google Scholar
• [15]
R. C. Ewing, W. J. Weber, J. Lian, Nuclear waste disposal-pyrochlore (A2B2O7): Nuclear waste form for the immobilization of plutonium and “minor” actinides. J. Appl. Phys. 2004, 95, 5949. Google Scholar
• [16]
G. R. Lumpkin, R. C. Ewing, Alpha-decay damage in minerals of the pyrochlore group. Phys. Chem. Miner. 1988, 16, 2. Google Scholar
• [17]
D. D. Hogarth, Classification and nomenclature of the pyrochlore group. Am. Mineral. 1977, 62, 403. Google Scholar
• [18]
G. R. Lumpkin, Alpha-decay damage and aqueous durability of actinide host phases in natural systems. J. Nucl. Mater. 2001, 289, 136. Google Scholar
• [19]
J. W. Wald, P. Offermann, Scientific Basis for Nuclear Waste Management V, (Ed. W. Lutze) Elsevier, New York, p. 369, 1982. Google Scholar
• [20]
W. J. Weber, J. W. Wald, Hj. Matzke, Self-radiation damage in actinide host phases of nuclear waste forms. Mater. Res. Soc. Symp. Proc. 1985, 8, 679. Google Scholar
• [21]
W. J. Weber, J. W. Wald, Hj. Matzke, Mater. Lett. 1985, 3, 173. Google Scholar
• [22]
W. J. Weber, R. C. Ewing, C. R. A. Catlow, T. D. de la Rubia, L. W. Hobbs, C. Kinoshita, H. Matzke, A. T. Motta, M. Nastasi, E. K. H. Salje, E. R. Vance, S. J. Zinkle, Radiation effects in crystalline ceramics for the immobilization of high-level nuclear waste and plutonium. J. Mat. Res. 1998, 13, 1434. Google Scholar
• [23]
N. K. Kulkarni, S. Sampath, V. Venugopal, Preparation and characterisation of Pu-pyrochlore: [La1–xPux]2Zr2O7 (x=0–1). J. Nucl. Mater. 2000, 281, 248. Google Scholar
• [24]
N. P. Laverov, S. V. Yudintsev, S. V. Stefanovsky, Y. N. Jang, New actinide matrix with pyrochlore structure. Dokl. Earth Sci. 2001, 381, 1053. Google Scholar
• [25]
N. P. Laverov, S. V. Yudintsev, T. S. Yudintseva, S. V. Stefanovsky, R. C. Ewing, J. Lian, S. Utsunomiya, L. A. Wang, Effect of radiation on properties of confinement matrices for immobilization of actinide-bearing wastes. Geol. Ore Deposit. 2003, 45, 423. Google Scholar
• [26]
B. C. Chakoumakos, R. C. Ewing, Crystal chemical constraints on the formation of actinide pyrochlores. Mat. Res. Soc. Symp. Proc. 1985, 44, 641. Google Scholar
• [27]
F. C. Hawthorne, L. A. Groat, M. Raudsepp, N. A. Ball, M. Kimata, F. D. Spike, R. Gaba, N. M. Halden, G. R. Lumpkin, R. C. Ewing, R. B. Greegor, F. W. Lytle, G. R. Ercit, G. R. Rossman, F. J. Wicks, R. A. Ramik, B. L. Sheriff, M. E. Fleet, C. McCammon, Alpha-decay damage in titanite. Am. Mineral. 1991, 76, 370. Google Scholar
• [28]
R. C. Ewing, W. J. Weber, F. Clinard, Radiation effects in nuclear waste forms for high-level radioactive waste. Prog. Nucl. Energ. 1995, 29, 63. Google Scholar
• [29]
K. O. Trachenko, M. T. Dove, E. K. H. Salje, Atomistic modelling of radiation damage in zircon. J. Phys. Condens. Matter 2001, 13, 1947. Google Scholar
• [30]
R. C. Ewing, Displaced by radiation. Nature 2007, 445, 161. Google Scholar
• [31]
I. Farnan, H. Cho, W. J. Weber, Quantification of actinide α-radiation damage in minerals and ceramics. Nature 2007, 445, 190. Google Scholar
• [32]
E. K. H. Salje, R. D. Taylor, D. J. Safarik, J. C. Lashley, L. A. Groat, U. Bismayer, R. J. Evans, R. Friedman, Evidence for direct impact damage in metamict titanite CaTiSiO5. J. Phys. Condens. Matter 2012, 24, 052202. Google Scholar
• [33]
W. J. Weber, Models and mechanisms of irradiation-induced amorphization in ceramics. Nucl. Instrum. Meth. B 2000, 166, 98. Google Scholar
• [34]
J. Lian, K. B. Helean, B. J. Kennedy, L. M. Wang, A. Navrotsky, R. C. Ewing, The order-disorder transition in ion-irradiated pyrochlore. Acta Mater. 2003, 51, 1493. Google Scholar
• [35]
A. A. Digeos, J. A. Valdez, K. E. Sickafus, A. R. Boccaccini, S. Atiq, R. W. Grimes, Glass matrix/pyrochlore phase composites for nuclear wastes encapsulation. J. Mater. Sci. 2003, 38, 1597. Google Scholar
• [36]
D. M. Strachan, R. D. Scheele, E. C. Buck, A. E. Kozelisky, R. L. Sell, R. J. Elovich, W. C. Buchmiller, Radiation damage effects in candidate titanates for Pu disposition: zirconolite. J. Nucl. Mater. 2005, 345, 109. Google Scholar
• [37]
G. R. Lumpkin, R. C. Ewing, B. C. Chakoumakos, Alpha-recoil damage in zirconolite (CaZrTi2O7). J. Mater. Res. 1986, 1, 564. Google Scholar
• [38]
G. R. Lumpkin, E. M. Foltyn, R. C. Ewing, Thermal recrystallization of alpha-recoil damaged minerals of the pyrochlore structure type. J. Nucl. Mater. 1986, 139, 113. Google Scholar
• [39]
M. Jafar, P. Sengupta, S. N. Achary, A. K. Tyagi, Phase evolution and microstructural studies in CaZrTi2O7(zirconolite)–Sm2Ti2O7(pyrochlore) system. J. Eur. Ceram. Soc. 2014, 34, 4373. Google Scholar
• [40]
J. Lima-de-Faria, Heat treatment of metamict euxenites, polymignites, yttrotantalites, samarskites, pyrochlores, and allanites. Min. Mag. 1958, 31, 937. Google Scholar
• [41]
R. C. Ewing, Environmental impact of the nuclear fuel cycle. In: Energy, Waste, and the Environment: A Geochemical Perspective (Eds. R. Giere and P. Stille) Geological Society, London, p. 7, Special Publications, Vol. 236, 2004. Google Scholar
• [42]
D. J. Gregg, Y. Zhang, Z. Zhang, I. Karatchevtseva, M. G. Blackford, G. Triani, G. R. Lumpkin, E. R. Vance, Crystal chemistry and structures of uranium-doped gadolinium zirconates. J. Nucl. Mater. 2013, 438, 144. Google Scholar
• [43]
N. Tomašić, V. Bermanec, A. Gajović, R. Linari, Metamict minerals: an insight into a relic crystal structure using XRD, Raman spectroscopy, SAED and HRTEM. Croat. Chem. Acta 2008, 81, 391. Google Scholar
• [44]
R. C. Ewing, The nuclear fuel cycle: a role for mineralogy and geochemistry. Elements 2006, 2, 331. Google Scholar
• [45]
S. Park, M. Lang, C. L. Tracy, J. Zhang, F. Zhang, C. Trautmann, P. Kluth, M. D. Rodriguez, R. C. Ewing, Swift heavy ion irradiation-induced amorphization of La2Ti2O7. Nucl. Instrum. Meth. B 2014, 326, 145. Google Scholar
• [46]
S. V. Yudintsev, T. S. Livshits, J. Zhang, R. C. Ewing, The behavior of rare-earth pyrochlores and perovskites under ion irradiation. Dokl. Earth Sci. 2015, 461, 247. Google Scholar
• [47]
S. S. Shoup, C. E. Bamberger, T. J. Haverlock, J. R. Peterson, Aqueous leachability of lanthanide and plutonium titanates. J. Nucl. Mater. 1997, 240, 112. Google Scholar
• [48]
B. D. Begg, N. J. Hess, D. E. McCready, S. Thevuthasan, W. J. Weber, Heavy-ion irradiation effects in Gd2(Ti2–xZrx)O7 pyrochlores. J. Nucl. Mater. 2001, 289, 188. Google Scholar
• [49]
B. D. Begg, N. J. Hess, W. J. Weber, R. Devanathan, J. P. Icenhower, S. Thevuthasan, B. P. McGrail, Heavy-ion irradiation effects on structures and acid dissolution of pyrochlores. J. Nucl. Mater. 2001, 288, 208. Google Scholar
• [50]
Y. Zhang, K. P. Hart, W. L. Bourcier, R. A. Day, M. Colella, B. Thomas, Z. Aly, A. Jostsons, Kinetics of uranium release from Synroc phases. J. Nucl. Mater. 2001, 289, 254. Google Scholar
• [51]
J. P. Icenhower, E. A. Rodriguez, D. M. Strachan, J. L. Steele, M. M. Lindberg, Dissolution Kinetics of Titanate-Based Ceramic Waste Forms: Results from Single-Pass Flow Tests on Radiation Damaged Specimens, Pacific Northwest National Laboratory, Richland, 2003. Google Scholar
• [52]
T. Geisler, J. Berndt, H. W. Meyer, K. Pollok, A. Putnis, Low-temperature aqueous alteration of crystalline pyrochlore: correspondence between nature and experiment. Mineral. Mag. 2004, 68, 905. Google Scholar
• [53]
G. R. Lumpkin, K. L. Smith, R. Giere, C. T. Williams, Geochemical Behaviour of Host Phases for Actinides and Fission Products in Crystalline Ceramic Nuclear Waste Forms. Geological Society, Special Publications, London, Vol. 236, 2004, p. 89. Google Scholar
• [54]
M. Lang, F. Zhang, J. Zhang, J. Wang, J. Lian, W. J. Weber, B. Schuster, C. Trautmann, R. Neumann, R. C. Ewing, Review of A2B2O7 pyrochlore response to irradiation and pressure. Nucl. Instr. Meth. Phys. Res. B 2010, 268, 2951. Google Scholar
• [55]
K. B. Helean, A. Navrotzky, G. R. Lumpkin, M. Collela, J. Lian, R. C. Ewing, B. Ebbinghaus, J. G. Catalano, Enthalpies of formation of U-, Th-, Ce-brannerite: implications for plutonium immobilization. J. Nucl. Mater. 2003, 320, 231. Google Scholar
• [56]
K. B. Helean, S. V. Ushakov, C. E. Brown, A. Navrotsky, J. Lian, R. C. Ewing, J. M. Farmer, L. A. Boatner, Formation enthalpies of rare earth titanate pyrochlore. J. Solid State Chem. 2004, 177, 1858. Google Scholar
• [57]
Y. Li, X. Zhu, T. A. Kassab, Atomic-scale microstructures, Raman spectra and dielectric properties of cubic pyrochlore-typed Bi1.5MgNb1.5O7 dielectric ceramics. Ceram. Int. 2014, 40, 8125. Google Scholar
• [58]
M. T. Vandenborre, E. Husson, J. P. Chatry, D. Michel, Rare-earth titanates and stannates of pyrochlore structure; vibrational spectra and force fields. J. Raman Spectrosc. 1983, 14, 63. Google Scholar
• [59]
B. Mihailova, S. Stoyanov, V. Gaydarov, M. Gospodinov, L. Konstantinov, Raman spectroscopy study of pyrochlore Pb2Sc0.5Tal.5O6.5 crystals. Solid State Commun. 1997, 103, 623. Google Scholar
• [60]
S. Kamba, V. Porokhonskyy, A. Pashkin, V. Bovtun, J. Petzelt, J. Nino, S. Trolier-McKinstry, M. Lanagan, C. Randall, Anomalous broad dielectric relaxation in Bi1.5Zn1.0Nb1.5O7 pyrochlore. Phys. Rev. B 2002, 66, 1. Google Scholar
• [61]
C. M. Ronconi, O. L. Alves, Structural evolution and optical properties of Cd2Nb2O7 films prepared by metallo-organic decomposition. Thin Solid Films 2003, 441, 121. Google Scholar
• [62]
W. Hong, D. Huiling, Y. Xi, Structural study of Bi2O3-ZnO-Nb2O5 based pyrochlores. Mat. Sci. Eng. B 2003, 99, 20. Google Scholar
• [63]
Q. Wang, H. Wang, X. Yao, Structure, dielectric and optical properties of Bi1.5ZnNb1.5–xTaxO7 cubic pyrochlores. J. Appl. Phys. 2007, 101, 104116. Google Scholar
• [64]
M. Fischer, T. Malcherek, U. Bismayer, P. Blaha, K. Schwarz, Structure and stability of Cd2Nb2O7 and Cd2Ta2O7 explored by ab initio calculations. Phys. Rev. B 2008, 78, 14108. Google Scholar
• [65]
T. T. A. Lummen, I. P. Handayani, M. C. Donker, D. Fausti, G. Dhalenne, P. Berthet, A. Revcolevschi, P. H. M. Van Loosdrecht, Phonon and crystal field excitations in geometrically frustrated rare earth titanates. Phys. Rev. B 2008, 77, 1. Google Scholar
• [66]
D. J. Arenas, L. V. Gasparov, W. Qiu, J. C. Nino, C. H. Patterson, D. B. Tanner, Raman study of phonon modes in bismuth pyrochlores. Phys. Rev. B 2010, 82, 1. Google Scholar
• [67]
A. N. Radhakrishnan, P. P. Rao, K. S. M. Linsa, M. Deepa, P. Koshy, Influence of disorder-to-order transition on lattice thermal expansion and oxide ion conductivity in (CaxGd1–x)2(Zr1–xMx)2O7 pyrochlore solid solutions. Dalton Trans. 2011, 40, 3839. Google Scholar
• [68]
D. S. D. Gunn, N. L. Allan, H. Foxhall, J. H. Harding, J. A. Purton, W. Smith, M. J. Stein, I. T. Todorov, K. P. Travis, Novel potentials for modelling defect formation and oxygen vacancy migration in Gd2Ti2O7 and Gd2Zr2O7 pyrochlores. J. Mater. Chem. 2012, 22, 4675. Google Scholar
• [69]
V. S. Urusov, A. E. Grechanovsky, N. N. Eremin, Molecular dynamics study of self-radiation damage in Gd2Zr2O7 and Gd2Ti2O7 pyrochlores. Dokl. Phys. 2014, 59, 263. Google Scholar
• [70]
A. Archer, H. R. Foxhall, N. L. Allan, D. S. D. Gunn, J. H. Harding, I. T. Todorov, K. P. Travis, J. A. Purton, Order parameter and connectivity topology analysis of crystalline ceramics for nuclear waste immobilization. J. Phys. Condens. Matter 2014, 26, 485011. Google Scholar
• [71]
P. K. Kulriya, R. Kumari, R. Kumar, V. Grover, R. Shukla, A. K. Tyagi, D. K. Avasthi, In-situ high temperature irradiation setup for temperature dependent structural studies of materials under swift heavy ion irradiation. Nucl. Instrum. Meth. B 2015, 342, 98. Google Scholar
• [72]
H. Lippolt, J. W. Gentner, W. Wimmenauer, Altersbestimmungen nach der Kalium-Argon-Methode an tertiären Eruptivgesteinen Südwestdeutschlands. Jahreshefte des Geologischen Landesamtes Baden-Württemberg 1963, 6, 507. Google Scholar
• [73]
K. Bell, J. Blenkinsop, Nd and Sr isotopic compositions of East African carbonatites: implications for mantle heterogeneity. Geology 1987, 15, 99. Google Scholar
• [74]
V. Berger, D. Singer, G. Orris, Carbonatites of the world, explored deposits of Nb and REE – database and grade and tonnage models. US Geological Survey, Reston, Virginia, 2009, 1. Google Scholar
• [75]
A. A. Krasnobaev, A. I. Rusin, P. M. Valizer, S. V. Busharina, Zirconology of calcite carbonatite of the Vishnevogorsk massif, southern Urals. Dok. Earth Sci. 2010, 431, 390. Google Scholar
• [76]
H. Holland, D. Gottfried, The effect of nuclear radiation on the structure of zircon. Acta Crystallogr. 1955, 8, 291. Google Scholar
• [77]
M. E. Wieser, N. Holden, T. B. Coplen, J. K. Böhlke, M. Berglund, W. A. Brand, P. De Bièvre, M. Gröning, R. D. Loss, J. Meija, T. Hirata, T. Prohaska, R. Schoenberg, G. O’Connor, T. Walczyk, S. Yoneda, X.-K. Zhu, Atomic weights of the elements 2001 (IUPAC Technical Report). Pure Appl. Chem. 2013, 85, 1047. Google Scholar
• [78]
E. Kroumova, M. I. Aroyo, J. M. Perez-Mato, A. Kirov, C. Capillas, S. Ivantchev, H. Wondratschek, Bilbao crystallographic server: useful databases and tools for phase-transition studies. Phase Transit. 2003, 76, 155. Google Scholar
• [79]
J. Zhang, J. Lian, F. Zhang, J. Wang, A. F. Fuentes, R. C. Ewing, Intrinsic structural disorder and radiation response of nanocrystalline Gd2(Ti0.65Zr0.35)2O7 Pyrochlore. J. Phys. Chem. C 2010, 114, 11810. Google Scholar
• [80]
M. Mączka, J. Hanuza, K. Hermanowicz, A. F. Fuentes, K. Matsuhira, Z. Hiroi, Temperature-dependent Raman scattering studies of the geometrically frustrated pyrochlores Dy2Ti2O7, Gd2Ti2O7 and Er2Ti2O7. J. Raman Spectrosc. 2008, 39, 5537. Google Scholar
• [81]
S. Saha, S. Prusty, S. Singh, R. Suryanarayanan, A. Revcolevschi, A. K. Sood, Anomalous temperature dependence of phonons and photoluminescence bands in pyrochlore Er2Ti2O7: signatures of structural deformation at 130 K Surajit. J. Phys. Condens. Matter 2011, 23, 445402. Google Scholar
• [82]
M. Mączka, M. Sanjuán, A. Fuentes, L. Macalik, J. Hanuza, K. Matsuhira, Z. Hiroi, Temperature-dependent studies of the geometrically frustrated pyrochlores Ho2Ti2O7 and Dy2Ti2O7. Phys. Rev. B 2009, 79, 1. Google Scholar
• [83]
M. Lang, J. Lian, J. Zhang, F. Zhang, W. J. Weber, C. Trautmann, R. C. Ewing, Single-ion tracks in Gd2Zr2–xTixO7 pyrochlores irradiated with swift heavy ions. Phys. Rev. B 2009, 79, 1. Google Scholar
• [84]
G. Sattonnay, N. Sellami, L. Thomé, C. Legros, C. Grygiel, I. Monnet, J. Jagielski, I. Jozwik-Biala, P. Simon, Structural stability of Nd2Zr2O7 pyrochlore ion-irradiated in a broad energy range. Acta Mater. 2013, 61, 6492. Google Scholar
• [85]
N. J. Hess, B. D. Begg, S. D. Conradson, D. E. McCready, P. L. Gassman, W. J. Weber, Spectroscopic investigations of the structural phase transition in Gd2(Ti1–yZry)2O7 pyrochlores. J. Phys. Chem. B 2002, 106, 4663. Google Scholar
• [86]
B. P. Mandal, N. Garg, S. M. Sharma, A. K. Tyagi, Solubility of ThO2 in Gd2Zr2O7: XRD, SEM and Raman spectroscopic studies. J. Nucl. Mater. 2009, 392, 95. Google Scholar
• [87]
R. L. Frost, S. J. Palmer, B. J. Reddy, Raman spectroscopic study of the uranyl titanate mineral euxenite (Y, Ca, U, Ce, Th)(Nb, Ta, Ti)2O6. J. Raman Spectrosc. 2011, 42, 1160. Google Scholar
• [88]
L. Francis, P. P. Rao, M. Thomas, S. K. Mahesh, V. R. Reshmi, T. S. Sreene, Structural influence on the photoluminescence properties of Eu3+ doped Gd3MO7 (M = Nb, Sb, and Ta) red phosphors. Phys. Chem. Chem. Phys. 2014, 16, 17108. Google Scholar
• [89]
I. F. Chang, S. S. Mitra, Long wavelength optical phonons in mixed crystals. Adv. Phys. 1971, 20, 359. Google Scholar
• [90]
T. Murakami, B. C. Chakoumakos, R. C. Ewing, G. R. Lumpkin, W. J. Weber, Alpha-decay event damage in zircon. Am. Mineral. 1991, 76, 1510. Google Scholar
Accepted: 2016-07-19
Published Online: 2016-08-30
Published in Print: 2017-02-01
Citation Information: Zeitschrift für Kristallographie - Crystalline Materials, ISSN (Online) 2196-7105, ISSN (Print) 2194-4946.
©2017 Walter de Gruyter GmbH, Berlin/Boston. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7856375575065613, "perplexity": 12008.054361617968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820556.7/warc/CC-MAIN-20171017013608-20171017033608-00474.warc.gz"} |
https://askbot.fedoraproject.org/en/answers/118336/revisions/ | # Revision history
I found a similar issue to yours on the Fedora forum.
I am not actually using the master volume to increase and decrease the sound (a driver issue), so I could not test this myself. PabloTwo suggested changing the owner of /var/lib/alsa/asound.state to your own user:
chown <user>:<user> /var/lib/alsa/asound.state

In your case that is:

chown kainblock:kainblock /var/lib/alsa/asound.state

(If you are unsure of your user name, check it with whoami.)
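The same fix can be scripted without hard-coding a user name. This is only a sketch: the real target is /var/lib/alsa/asound.state and changing its owner requires root, so the demonstration below uses a scratch file instead.

```shell
# Stand-in demonstration of the suggested fix: chown a file to the
# current user. On Fedora each user has a same-named primary group,
# so "user:user" and "user:$(id -gn)" are equivalent there.
user="$(whoami)"
group="$(id -gn)"
tmpfile="$(mktemp)"            # scratch stand-in for asound.state
chown "${user}:${group}" "$tmpfile"
ls -l "$tmpfile"               # the owner column should now show $user
rm -f "$tmpfile"
```

For the real file, run the same invocation as root, e.g. `sudo chown "$(whoami)":"$(whoami)" /var/lib/alsa/asound.state`.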
http://www.barnesandnoble.com/w/convex-polytopes-branko-grunbaum/1101515282?ean=9780387404097 | # Convex Polytopes / Edition 2
Paperback (Print)
### Overview
"The original edition [...] inspired a whole generation of grateful workers in polytope theory. Without it, it is doubtful whether many of the subsequent advances in the subject would have been made. The many seeds it sowed have since grown into healthy trees, with vigorous branches and luxuriant foliage. It is good to see it in print once again." —Peter McMullen, University College London
### Editorial Reviews
##### From the Publisher
"The appearance of Grünbaum's book Convex Polytopes in 1967 was a moment of grace to geometers and combinatorialists. The special spirit of the book is very much alive even in those chapters where the book's immense influence made them quickly obsolete. Some other chapters promise beautiful unexplored land for future research. The appearance of the new edition is going to be another moment of grace. Kaibel, Klee and Ziegler were able to update the convex polytope saga in a clear, accurate, lively, and inspired way." (Gil Kalai, The Hebrew University of Jerusalem)
"The original book of Grünbaum has provided the central reference for work in this active area of mathematics for the past 35 years...I first consulted this book as a graduate student in 1967; yet, even today, I am surprised again and again by what I find there. It is an amazingly complete reference for work on this subject up to that time and continues to be a major influence on research to this day." (Louis J. Billera, Cornell University)
"The original edition of Convex Polytopes inspired a whole generation of grateful workers in polytope theory. Without it, it is doubtful whether many of the subsequent advances in the subject would have been made. The many seeds it sowed have since grown into healthy trees, with vigorous branches and luxuriant foliage. It is good to see it in print once again." (Peter McMullen, University College London)
From the reviews of the second edition:
"Branko Grünbaum’s book is a classical monograph on convex polytopes … . As was noted by many researchers, for many years the book provided a central reference for work in the field and inspired a whole generation of specialists in polytope theory. … Every chapter of the book is supplied with a section entitled ‘Additional notes and comments’ … these notes summarize the most important developments with respect to the topics treated by Grünbaum. … The new edition … is an excellent gift for all geometry lovers." (Alexander Zvonkin, Mathematical Reviews, 2004b)
### Product Details
• ISBN-13: 9780387404097
• Publisher: Springer New York
• Publication date: 10/1/2003
• Series: Graduate Texts in Mathematics Series , #221
• Edition description: 2nd ed. 2003. Softcover reprint of the original 2nd ed. 2003
• Edition number: 2
• Pages: 471
• Product dimensions: 9.21 (w) x 6.14 (h) x 1.14 (d)
$$\left[\frac{1}{2}d\right]$$-Neighborly d-polytopes.- 7.3 Exercises.- 7.4 Remarks.- 7.5 Additional notes and comments.- 8 Euler’s relation.- 8.1 Euler’s theorem.- 8.2 Proof of Euler’s theorem.- 8.3 A generalization of Euler’s relation.- 8.4 The Euler characteristic of complexes.- 8.5 Exercises.- 8.6 Remarks.- 8.7 Additional notes and comments.- 9 Analogues of Euler’s relation.- 9.1 The incidence equation.- 9.2 The Dehn-Sommerville equations.- 9.3 Quasi-simplicial polytopes.- 9.4 Cubical polytopes.- 9.5 Solutions of the Dehn-Sommerville equations.- 9.6 The f-vectors of neighborly d-polytopes.- 9.7 Exercises.- 9.8 Remarks.- 9.9 Additional notes and comments.- 10 Extremal problems concerning numbers of faces.- 10.1 Upper bounds for f_i, i ≥ 1, in terms of f_0.- 10.2 Lower bounds for f_i, i ≥ 1, in terms of f_0.- 10.3 The sets f(P^3) and f(P_s^3).- 10.4 The set f(P^4).- 10.5 Exercises.- 10.6 Additional notes and comments.- 11 Properties of boundary complexes.- 11.1 Skeletons of simplices contained in B(P).- 11.2 A proof of the van Kampen-Flores theorem.- 11.3 d-Connectedness of the graphs of d-polytopes.- 11.4 Degree of total separability.- 11.5 d-Diagrams.- 11.6 Additional notes and comments.- 12 k-Equivalence of polytopes.- 12.1 k-Equivalence and ambiguity.- 12.2 Dimensional ambiguity.- 12.3 Strong and weak ambiguity.- 12.4 Additional notes and comments.- 13 3-Polytopes.- 13.1 Steinitz’s theorem.- 13.2 Consequences and analogues of Steinitz’s theorem.- 13.3 Eberhard’s theorem.- 13.4 Additional results on 3-realizable sequences.- 13.5 3-Polytopes with circumspheres and circumcircles.- 13.6 Remarks.- 13.7 Additional notes and comments.- 14 Angle-sums relations; the Steiner point.- 14.1 Gram’s relation for angle-sums.- 14.2 Angle-sums relations for simplicial polytopes.- 14.3 The Steiner point of a polytope (by G. C. Shephard).- 14.4 Remarks.- 14.5 Additional notes and comments.- 15 Addition and decomposition of polytopes.- 15.1 Vector addition.- 15.2 Approximation of polytopes by vector sums.- 15.3 Blaschke addition.- 15.4 Remarks.- 15.5 Additional notes and comments.- 16 Diameters of polytopes (by Victor Klee).- 16.1 Extremal diameters of d-polytopes.- 16.2 The functions Δ and Δ_b.- 16.3 W_v paths.- 16.4 Additional notes and comments.- 17 Long paths and circuits on polytopes.- 17.1 Hamiltonian paths and circuits.- 17.2 Extremal path-lengths of polytopes.- 17.3 Heights of polytopes.- 17.4 Circuit codes.- 17.5 Additional notes and comments.- 18 Arrangements of hyperplanes.- 18.1 d-Arrangements.- 18.2 2-Arrangements.- 18.3 Generalizations.- 18.4 Additional notes and comments.- 19 Concluding remarks.- 19.1 Regular polytopes and related notions.- 19.2 k-Content of polytopes.- 19.3 Antipodality and related notions.- 19.4 Additional notes and comments.- Tables.- Addendum.- Errata for the 1967 edition.- Additional Bibliography.- Index of Terms.- Index of Symbols.
https://www.aimsciences.org/article/doi/10.3934/cpaa.2020098 | # American Institute of Mathematical Sciences
April 2020, 19(4): 2235-2255. doi: 10.3934/cpaa.2020098
## Lyapunov exponents and Oseledets decomposition in random dynamical systems generated by systems of delay differential equations
1 Faculty of Pure and Applied Mathematics, Wrocław University of Science and Technology, PL-50-370 Wrocław, Poland 2 Departamento de Matemática Aplicada, E.I. Industriales, Universidad de Valladolid, Paseo del Cauce 59, 47011 Valladolid, Spain
Dedicated to Professor Tomás Caraballo on occasion of his Sixtieth Birthday
Received September 2018 Revised October 2019 Published January 2020
Fund Project: The first author is supported by the NCN grant Maestro 2013/08/A/ST1/00275 and the last two authors are partly supported by MICIIN/FEDER under project RTI2018-096523-B-100 and EU Marie-Skłodowska-Curie ITN Critical Transitions in Complex Systems (H2020-MSCA-ITN-2014 643073 CRITICS).
Linear skew-product semidynamical systems generated by random systems of delay differential equations are considered, both on a space of continuous functions and on a space of $p$-summable functions. The main result states that in both cases the Lyapunov exponents are identical, and that the Oseledets decompositions are related by natural embeddings.
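As background for the abstract's terminology (this is the standard definition, not a formula quoted from the paper): for a linear skew-product semiflow with cocycle $U_\omega(t)$ over a metric dynamical system $(\theta_t)$, the Lyapunov exponent of an initial state $u \neq 0$ over the fiber $\omega$ is

```latex
\lambda(\omega, u) \;=\; \limsup_{t \to \infty} \frac{1}{t}\, \ln \lVert U_{\omega}(t)\, u \rVert ,
```

and Oseledets' theorem provides, for almost every $\omega$, a measurable invariant splitting of the state space into finitely many subspaces on which this limit exists and takes one of finitely many values; the paper's main result matches these exponents and splittings across the two state spaces.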
Citation: Janusz Mierczyński, Sylvia Novo, Rafael Obaya. Lyapunov exponents and Oseledets decomposition in random dynamical systems generated by systems of delay differential equations. Communications on Pure & Applied Analysis, 2020, 19 (4) : 2235-2255. doi: 10.3934/cpaa.2020098
https://www.zbmath.org/serials/?q=se%3A127 | # zbMATH — the first resource for mathematics
## Journal of Computational Physics
Short Title: J. Comput. Phys.
Publisher: Elsevier (Academic Press), Amsterdam
ISSN: 0021-9991
Online: http://www.sciencedirect.com/science/journal/00219991
Documents Indexed: 13,904 Publications (since 1966)
References Indexed: 12,897 Publications with 430,047 References
#### Latest Issues
417 (2020) 416 (2020) 415 (2020) 414 (2020) 413 (2020) 412 (2020) 411 (2020) 410 (2020) 409 (2020) 406 (2020) 405 (2020) 404 (2020) 403 (2020) 402 (2020) 401 (2020) 400 (2020) 399 (2019) 398 (2019) 397 (2019) 396 (2019) 395 (2019) 394 (2019) 393 (2019) 392 (2019) 391 (2019) 390 (2019) 389 (2019) 388 (2019) 387 (2019) 386 (2019) 385 (2019) 384 (2019) 383 (2019) 382 (2019) 381 (2019) 380 (2019) 378 (2019) 377 (2019) 376 (2019) 375 (2018) 374 (2018) 373 (2018) 372 (2018) 371 (2018) 370 (2018) 369 (2018) 368 (2018) 367 (2018) 366 (2018) 365 (2018) 364 (2018) 363 (2018) 362 (2018) 361 (2018) 360 (2018) 359 (2018) 358 (2018) 357 (2018) 356 (2018) 355 (2018) 354 (2018) 353 (2018) 352 (2018) 351 (2017) 350 (2017) 349 (2017) 348 (2017) 347 (2017) 346 (2017) 345 (2017) 344 (2017) 343 (2017) 342 (2017) 341 (2017) 340 (2017) 339 (2017) 338 (2017) 337 (2017) 336 (2017) 335 (2017) 334 (2017) 333 (2017) 332 (2017) 331 (2017) 330 (2017) 329 (2017) 328 (2017) 327 (2016) 326 (2016) 325 (2016) 324 (2016) 323 (2016) 322 (2016) 321 (2016) 320 (2016) 319 (2016) 318 (2016) 317 (2016) 316 (2016) 315 (2016) ...and 584 more Volumes
#### Authors
96 Shu, Chi-Wang 83 Karniadakis, George Em 58 Nordström, Jan 57 Xu, Kun 55 Osher, Stanley Joel 48 Adams, Nikolaus A. 47 Shashkov, Mikhail J. 46 Colella, Phillip 39 Balsara, Dinshaw S. 39 Boyd, John Philip 39 Dumbser, Michael 39 Zabaras, Nicholas J. 38 Gibou, Frédéric 38 Jenny, Patrick 35 Greengard, Leslie F. 35 Hesthaven, Jan S. 34 Knoll, Dana A. 33 Moin, Parviz 33 Toro, Eleuterio F. 33 Turkel, Eli L. 32 Sagaut, Pierre 31 Chacón, Luis 30 Fedkiw, Ronald P. 30 Jin, Shi 29 Koumoutsakos, Petros D. 29 Shen, Jie 28 Cai, Wei 28 Degond, Pierre 28 Tchelepi, Hamdi A. 27 Carpenter, Mark H. 27 Lowengrub, John Samuel 27 Qiu, Jianxian 27 Smolarkiewicz, Piotr K. 27 Xiao, Feng 27 Xiu, Dongbin 26 Brackbill, Jeremiah U. 26 Henshaw, William D. 26 Hou, Thomas Yizhao 25 Abgrall, Rémi 25 Chung, Tsz Shun Eric 25 Morel, Jim E. 25 Nishikawa, Hiroaki 25 Peskin, Charles S. 25 Sethian, James A. 25 Shu, Chang 24 Banks, Jeffrey W. 24 Khoo, Boo Cheong 24 Lin, Guang 24 Loubère, Raphaël 23 Majda, Andrew J. 23 Yee, Helen C. 22 Bao, Weizhu 22 E, Weinan 22 Efendiev, Yalchin R. 22 Koren, Barry 22 Pope, Stephen B. 22 Shadid, John N. 22 Sherwin, Spencer J. 22 Wang, Hong 22 Ying, Lexing 21 Christlieb, Andrew J. 21 Fornberg, Bengt 21 Giraldo, Francis X. 21 Huang, Weizhang 21 Luo, Lishi 21 Shyy, Wei 21 Tang, Huazhong 21 Vasilyev, Oleg V. 20 Lai, Mingchih 20 Larsen, Edward W. 20 Luo, Hong 20 Merriman, Barry 20 Schwendeman, Donald W. 20 Tornberg, Anna-Karin 20 Zhao, Hongkai 19 Antoine, Xavier 19 Bell, John B. 19 Cockburn, Bernardo 19 Engquist, Bjorn E. 19 Iaccarino, Gianluca 19 Iollo, Angelo 19 Monaghan, Joseph J. 19 Munz, Claus-Dieter 19 Najm, Habib N. 19 Rokhlin, Vladimir 19 van der Vegt, Jaap J. W. 19 Wei, Guowei 18 Bruno, Oscar P. 18 Colonius, Tim 18 Lapenta, Giovanni 18 Mattsson, Ken 18 Mavriplis, Dimitri J. 18 Nguyen, Ngoc Cuong 18 Qiu, Jingmei 18 Quartapelle, Luigi 18 Saurel, Richard 18 Sjogreen, Bjorn 18 Strain, John A. 18 Tsai, Yen-Hsi Richard 18 van Leer, Bram ...and 16,570 more Authors
#### Fields
8,463 Numerical analysis (65-XX) 7,383 Fluid mechanics (76-XX) 3,178 Partial differential equations (35-XX) 1,228 Statistical mechanics, structure of matter (82-XX) 935 Optics, electromagnetic theory (78-XX) 733 Mechanics of deformable solids (74-XX) 547 Geophysics (86-XX) 499 Classical thermodynamics, heat transfer (80-XX) 463 Quantum theory (81-XX) 416 Biology and other natural sciences (92-XX) 327 Ordinary differential equations (34-XX) 268 Probability theory and stochastic processes (60-XX) 253 Integral equations (45-XX) 207 Statistics (62-XX) 192 Computer science (68-XX) 149 Dynamical systems and ergodic theory (37-XX) 147 Calculus of variations and optimal control; optimization (49-XX) 145 Astronomy and astrophysics (85-XX) 113 Special functions (33-XX) 107 Mechanics of particles and systems (70-XX) 93 Approximations and expansions (41-XX) 59 Harmonic analysis on Euclidean spaces (42-XX) 56 Operations research, mathematical programming (90-XX) 55 Systems theory; control (93-XX) 48 Relativity and gravitational theory (83-XX) 43 Information and communication theory, circuits (94-XX) 40 Linear and multilinear algebra; matrix theory (15-XX) 32 Integral transforms, operational calculus (44-XX) 29 Functions of a complex variable (30-XX) 26 Differential geometry (53-XX) 25 Global analysis, analysis on manifolds (58-XX) 22 Potential theory (31-XX) 20 Difference and functional equations (39-XX) 17 Real functions (26-XX) 16 Convex and discrete geometry (52-XX) 15 History and biography (01-XX) 15 Operator theory (47-XX) 14 Combinatorics (05-XX) 11 General and overarching topics; collections (00-XX) 11 Functional analysis (46-XX) 9 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 8 Group theory and generalizations (20-XX) 6 Number theory (11-XX) 6 Measure and integration (28-XX) 4 Nonassociative rings and algebras (17-XX) 4 Topological groups, Lie groups (22-XX) 4 Sequences, series, summability (40-XX) 2 Commutative algebra (13-XX) 2 
Abstract harmonic analysis (43-XX) 2 Geometry (51-XX) 2 Algebraic topology (55-XX) 2 Manifolds and cell complexes (57-XX) 1 Field theory and polynomials (12-XX) 1 Associative rings and algebras (16-XX) 1 Category theory; homological algebra (18-XX)
#### Citations contained in zbMATH Open
11,466 Publications have been cited 220,583 times in 60,069 Documents.
Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. Zbl 0659.65132
Osher, Stanley; Sethian, James A.
1988
Approximate Riemann solvers, parameter vectors, and difference schemes. Zbl 0474.65066
Roe, P. L.
1981
Efficient implementation of weighted ENO schemes. Zbl 0877.65065
Jiang, Guang-Shan; Shu, Chi-Wang
1996
Efficient implementation of essentially nonoscillatory shock-capturing schemes. Zbl 0653.65072
Shu, Chi-Wang; Osher, Stanley
1988
Compact finite difference schemes with spectral-like resolution. Zbl 0759.65006
Lele, Sanjiva K.
1992
Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method. Zbl 1364.65223
van Leer, Bram
1979
Volume of fluid (VOF) method for the dynamics of free boundaries. Zbl 0462.76020
Hirt, C. W.; Nichols, B. D.
1981
High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. Zbl 0511.76031
Ghia, U.; Ghia, K. N.; Shin, C. T.
1982
A level set approach for computing solutions to incompressible two-phase flow. Zbl 0808.76077
Sussman, Mark; Smereka, Peter; Osher, Stanley
1994
A perfectly matched layer for the absorption of electromagnetic waves. Zbl 0814.65129
Berenger, Jean-Pierre
1994
Uniformly high order accurate essentially non-oscillatory schemes. III. Zbl 0652.65067
Harten, Ami; Engquist, Bjorn; Osher, Stanley; Chakravarthy, Sukumar
1987
Numerical analysis of blood flow in the heart. Zbl 0403.76100
Peskin, Charles S.
1977
A fast algorithm for particle simulations. Zbl 0629.65005
Greengard, L.; Rokhlin, V.
1987
A continuum method for modeling surface tension. Zbl 0775.76110
Brackbill, J. U.; Kothe, D. B.; Zemach, C.
1992
Efficient implementation of essentially nonoscillatory shock-capturing schemes. II. Zbl 0674.65061
Shu, Chi-Wang; Osher, Stanley
1989
High resolution schemes for hyperbolic conservation laws. Zbl 0565.65050
Harten, Ami
1983
Weighted essentially non-oscillatory schemes. Zbl 0811.65076
Liu, Xu-Dong; Osher, Stanley; Chan, Tony
1994
The numerical simulation of two-dimensional fluid flow with strong shocks. Zbl 0573.76057
Woodward, Paul; Colella, Phillip
1984
Application of a fractional-step method to incompressible Navier-Stokes equations. Zbl 0582.76038
Kim, John; Moin, Parviz
1985
The Runge-Kutta discontinuous Galerkin method for conservation laws. I: Multidimensional systems. Zbl 0920.65059
Cockburn, Bernardo; Shu, Chi-Wang
1998
Finite difference/spectral approximations for the time-fractional diffusion equation. Zbl 1126.65121
Lin, Yumin; Xu, Chuanju
2007
A numerical method for solving incompressible viscous flow problems. Zbl 0149.44802
Chorin, Alexandre Joel
1967
A non-oscillatory Eulerian approach to interfaces in multimaterial flows (the ghost fluid method). Zbl 0957.76052
Fedkiw, Ronald P.; Aslam, Tariq; Merriman, Barry; Osher, Stanley
1999
A multiscale finite element method for elliptic problems in composite materials and porous media. Zbl 0880.73065
Hou, Thomas Y.; Wu, Xiao-Hui
1997
A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws. Zbl 0387.76063
Sod, Gary A.
1978
A front-tracking method for viscous, incompressible multi-fluid flows. Zbl 0758.76047
Unverdi, Salih Ozen; Tryggvason, Grétar
1992
TVB Runge-Kutta local projection discontinuous Galerkin finite element method for conservation laws. III: One-dimensional systems. Zbl 0677.65093
Cockburn, Bernardo; Lin, San-Yih; Shu, Chi-Wang
1989
Flow patterns around heart valves: A numerical method. Zbl 0244.92002
Peskin, Charles S.
1972
A high-order accurate discontinuous finite element method for the numerical solution of the compressible Navier-Stokes equations. Zbl 0871.76040
Bassi, F.; Rebay, S.
1997
Fully multidimensional flux-corrected transport algorithms for fluids. Zbl 0416.76002
Zalesak, Steven T.
1979
An arbitrary Lagrangian-Eulerian computing method for all flow speeds. Zbl 0292.76018
Hirt, C. W.; Amsden, A. A.; Cook, J. L.
1974
Structural optimization using sensitivity analysis and a level-set method. Zbl 1136.74368
Allaire, Grégoire; Jouve, François; Toader, Anca-Maria
2004
Boundary conditions for direct simulations of compressible viscous flows. Zbl 0766.76084
Poinsot, T. J.; Lele, S. K.
1992
Simulating free surface flows with SPH. Zbl 0794.76073
Monaghan, J. J.
1994
A front-tracking method for the computations of multiphase flow. Zbl 1047.76574
Tryggvason, G.; Bunner, B.; Esmaeeli, A.; Juric, D.; Al-Rawahi, N.; Tauber, W.; Han, J.; Nas, S.; Jan, Y.-J.
2001
Adaptive mesh refinement for hyperbolic partial differential equations. Zbl 0536.65071
Berger, Marsha J.; Oliger, Joseph
1984
Combined immersed-boundary finite-difference methods for three-dimensional complex flow simulations. Zbl 0972.76073
Fadlun, E. A.; Verzicco, R.; Orlandi, P.; Mohd-Yusof, J.
2000
A spectral element method for fluid dynamics: Laminar flow in a channel expansion. Zbl 0535.76035
Patera, Anthony T.
1984
Local adaptive mesh refinement for shock hydrodynamics. Zbl 0665.76070
Berger, M. J.; Colella, P.
1989
High-order splitting methods for the incompressible Navier-Stokes equations. Zbl 0738.76050
Karniadakis, George Em; Israeli, Moshe; Orszag, Steven A.
1991
Dispersion-relation-preserving finite difference schemes for computational acoustics. Zbl 0790.76057
Tam, Christopher K. W.; Webb, Jay C.
1993
The piecewise parabolic method (PPM) for gas-dynamical simulations. Zbl 0531.76082
Colella, Phillip; Woodward, Paul R.
1984
Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy. Zbl 0961.65078
Balsara, Dinshaw S.; Shu, Chi-Wang
2000
Exponential time differencing for stiff systems. Zbl 1005.65069
Cox, S. M.; Matthews, P. C.
2002
Non-oscillatory central differencing for hyperbolic conservation laws. Zbl 0697.65068
1990
A second-order projection method for the incompressible Navier-Stokes equations. Zbl 0681.76030
Bell, John B.; Colella, Phillip; Glaz, Harland M.
1989
Jacobian-free Newton-Krylov methods: a survey of approaches and applications. Zbl 1036.65045
Knoll, D. A.; Keyes, D. E.
2004
Solution of the implicitly discretized fluid flow equations by operator- splitting. Zbl 0619.76024
Issa, R. I.
1986
Level set methods: An overview and some recent results. Zbl 0988.65093
Osher, Stanley; Fedkiw, Ronald P.
2001
Reconstructing volume tracking. Zbl 0933.76069
Rider, William J.; Kothe, Douglas B.
1998
Modeling low Reynolds number incompressible flows using SPH. Zbl 0889.76066
Morris, Joseph P.; Fox, Patrick J.; Zhu, Yi
1997
Towards the ultimate conservative difference scheme. II: Monotonicity and conservation combined in a second-order scheme. Zbl 0276.65055
van Leer, Bram
1974
Rapid solution of integral equations of classical potential theory. Zbl 0629.65122
Rokhlin, V.
1985
Flux-corrected transport. I: SHASTA, a fluid transport algorithm that works. Zbl 0251.76004
Boris, Jay P.; Book, David L.
1973
Towards the ultimate conservative difference scheme. IV: A new approach to numerical convection. Zbl 0339.76056
van Leer, Bram
1977
A coupled level set and volume-of-fluid method for computing 3D and axisymmetric incompressible two-phase flows. Zbl 0977.76071
Sussman, Mark; Puckett, Elbridge Gerry
2000
A multiphase Godunov method for compressible multifluid and multiphase flows. Zbl 0937.76053
Saurel, Richard; Abgrall, Rémi
1999
New high-resolution central schemes for nonlinear conservation laws and convection-diffusion equations. Zbl 0987.65085
Kurganov, Alexander; Tadmor, Eitan
2000
An immersed boundary method with direct forcing for the simulation of particulate flows. Zbl 1138.76398
Uhlmann, Markus
2005
Calculation of two-phase Navier-Stokes flows using phase-field modeling. Zbl 0966.76060
Jacqmin, David
1999
An immersed boundary method with formal second-order accuracy and reduced numerical viscosity. Zbl 0954.76066
Lai, Ming-Chih; Peskin, Charles S.
2000
Weighted essentially non-oscillatory schemes on triangular meshes. Zbl 0926.65090
Hu, Changqing; Shu, Chi-Wang
1999
Preconditioning techniques for large linear systems: A survey. Zbl 1015.65018
Benzi, Michele
2002
A PDE-based fast local level set method. Zbl 0964.76069
Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo
1999
Differential quadrature: A technique for the rapid solution of nonlinear partial differential equations. Zbl 0247.65061
Bellman, Richard; Kashef, B. G.; Casti, J.
1972
An improved weighted essentially non-oscillatory scheme for hyperbolic conservation laws. Zbl 1136.65076
Borges, Rafael; Carmona, Monique; Costa, Bruno; Don, Wai Sun
2008
Flux vector splitting of the inviscid gasdynamic equations with application to finite-difference methods. Zbl 0468.76066
Steger, Joseph L.; Warming, R. F.
1981
A second-order accurate numerical approximation for the fractional diffusion equation. Zbl 1089.65089
Tadjeran, Charles; Meerschaert, Mark M.; Scheffler, Hans-Peter
2006
An immersed-boundary finite volume method for simulations of flow in complex geometries. Zbl 1057.76039
Kim, Jungwoo; Kim, Dongjoo; Choi, Haecheon
2001
Balancing source terms and flux gradients in high-resolution Godunov methods: The quasi-steady wave-propagation algorithm. Zbl 0931.76059
LeVeque, Randall J.
1998
The accuracy and stability of an implicit solution method for the fractional diffusion equation. Zbl 1072.65123
Langlands, T. A. M.; Henry, B. I.
2005
A fictitious domain approach to the direct numerical simulation of incompressible viscous flow past moving rigid bodies: application to particulate flow. Zbl 1047.76097
Glowinski, R.; Pan, T. W.; Hesla, T. I.; Joseph, D. D.; Périaux, J.
2001
Modeling a no-slip flow boundary with an external force field. Zbl 0768.76049
Goldstein, D.; Handler, R.; Sirovich, L.
1993
Accurate projection methods for the incompressible Navier-Stokes equations. Zbl 1153.76339
Brown, David L.; Cortez, Ricardo; Minion, Michael L.
2001
Modeling uncertainty in flow simulations via generalized polynomial chaos. Zbl 1047.76111
Xiu, Dongbin; Karniadakis, George Em
2003
A hybrid particle level set method for improved interface capturing. Zbl 1021.76044
Enright, Douglas; Fedkiw, Ronald; Ferziger, Joel; Mitchell, Ian
2002
Mapped weighted essentially non-oscillatory schemes: Achieving optimal order near critical points. Zbl 1072.65114
Henrick, Andrew K.; Aslam, Tariq D.; Powers, Joseph M.
2005
Fully conservative higher order finite difference schemes for incompressible flow. Zbl 0932.76054
Morinishi, Y.; Lund, T. S.; Vasilyev, O. V.; Moin, P.
1998
Exact non-reflecting boundary conditions. Zbl 0671.65094
Keller, Joseph B.; Givoli, Dan
1989
On Godunov-type methods near low densities. Zbl 0709.76102
Einfeldt, B.; Munz, C. D.; Roe, P. L.; Sjögreen, B.
1991
An accurate Cartesian grid method for viscous incompressible flows with complex immersed boundaries. Zbl 0957.76043
Ye, T.; Mittal, R.; Udaykumar, H. S.; Shyy, W.
1999
Modelling merging and fragmentation in multiphase flows with SURFER. Zbl 0809.76064
Lafaurie, Bruno; Nardone, Carlo; Scardovelli, Ruben; Zaleski, Stéphane; Zanetti, Gianluigi
1994
A study of numerical methods for hyperbolic conservation laws with stiff source terms. Zbl 0682.76053
LeVeque, R. J.; Yee, H. C.
1990
Compact finite difference method for the fractional diffusion equation. Zbl 1179.65107
Cui, Mingrong
2009
Numerical simulation of interfacial flows by smoothed particle hydrodynamics. Zbl 1028.76039
Colagrossi, Andrea; Landrini, Maurizio
2003
A new difference scheme for the time fractional diffusion equation. Zbl 1349.65261
Alikhanov, Anatoly A.
2015
Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes. Zbl 0832.65098
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul
1994
How to prevent pressure oscillations in multicomponent flow calculations: A quasi conservative approach. Zbl 0847.76060
Abgrall, Rémi
1996
A boundary condition capturing method for Poisson’s equation on irregular domains. Zbl 0958.65105
Liu, Xu-Dong; Fedkiw, Ronald P.; Kang, Myungjoo
2000
Summation by parts for finite difference approximations for $$d/dx$$. Zbl 0792.65011
Strand, Bo
1994
A unified framework for the construction of one-step finite volume and discontinuous Galerkin schemes on unstructured meshes. Zbl 1147.65075
Dumbser, Michael; Balsara, Dinshaw S.; Toro, Eleuterio F.; Munz, Claus-Dieter
2008
A level set formulation of Eulerian interface capturing methods for incompressible fluid flows. Zbl 0847.76048
Chang, Y. C.; Hou, T. Y.; Merriman, B.; Osher, S.
1996
Time dependent boundary conditions for hyperbolic systems. Zbl 0619.76089
Thompson, Kevin W.
1987
Structural boundary design via level set and immersed interface methods. Zbl 0994.74082
Sethian, J. A.; Wiegmann, Andreas
2000
Computational design for long-term numerical integration of the equations of fluid motion: two-dimensional incompressible flow. Part I. Zbl 0147.44202
Arakawa, Akio
1966
An accurate adaptive solver for surface-tension-driven interfacial flows. Zbl 1280.76020
Popinet, Stéphane
2009
Preconditioned methods for solving the incompressible and low speed compressible equations. Zbl 0633.76069
Turkel, Eli
1987
Fast parallel algorithms for short-range molecular dynamics. Zbl 0830.65120
Plimpton, Steve
1995
Nodal high-order methods on unstructured grids. I: Time-domain solution of Maxwell’s equations. Zbl 1014.78016
Hesthaven, J. S.; Warburton, T.
2002
A novel thermal model for the lattice Boltzmann method in incompressible limit. Zbl 0919.76068
He, Xiaoyi; Chen, Shiyi; Doolen, Gary D.
1998
What is the fractional Laplacian? A comparative review with new results. Zbl 1453.35179
Lischke, Anna; Pang, Guofei; Gulian, Mamikon; Song, Fangying; Glusa, Christian; Zheng, Xiaoning; Mao, Zhiping; Cai, Wei; Meerschaert, Mark M.; Ainsworth, Mark; Karniadakis, George Em
2020
Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders. Zbl 1454.65184
Lee, Kookjin; Carlberg, Kevin T.
2020
A parallel-in-time iterative algorithm for Volterra partial integro-differential problems with weakly singular kernel. Zbl 1437.65237
Gu, Xian-Ming; Wu, Shu-Lin
2020
Data-driven POD-Galerkin reduced order model for turbulent flows. Zbl 1437.76015
Hijazi, Saddam; Stabile, Giovanni; Mola, Andrea; Rozza, Gianluigi
2020
Learning macroscopic parameters in nonlinear multiscale simulations using nonlocal multicontinua upscaling techniques. Zbl 1436.76041
Vasilyeva, Maria; Leung, Wing T.; Chung, Eric T.; Efendiev, Yalchin; Wheeler, Mary
2020
Deep learning observables in computational fluid dynamics. Zbl 1436.76051
Lye, Kjetil O.; Mishra, Siddhartha; Ray, Deep
2020
Weak adversarial networks for high-dimensional partial differential equations. Zbl 1436.65156
Zang, Yaohua; Bao, Gang; Ye, Xiaojing; Zhou, Haomin
2020
A second-order and nonuniform time-stepping maximum-principle preserving scheme for time-fractional Allen-Cahn equations. Zbl 1440.65116
Liao, Hong-lin; Tang, Tao; Zhou, Tao
2020
An immersed finite element method for elliptic interface problems in three dimensions. Zbl 1440.65207
Guo, Ruchi; Lin, Tao
2020
Random batch methods (RBM) for interacting particle systems. Zbl 1453.82065
Jin, Shi; Li, Lei; Liu, Jian-Guo
2020
Two classes of linearly implicit local energy-preserving approach for general multi-symplectic Hamiltonian PDEs. Zbl 1453.65437
Cai, Jiaxiang; Shen, Jie
2020
Corner treatments for high-order local absorbing boundary conditions in high-frequency acoustic scattering. Zbl 1453.65340
Modave, A.; Geuzaine, C.; Antoine, X.
2020
Solving electrical impedance tomography with deep learning. Zbl 1453.65041
Fan, Yuwei; Ying, Lexing
2020
Recurrent neural network closure of parametric POD-Galerkin reduced-order models based on the Mori-Zwanzig formalism. Zbl 1436.65093
Wang, Qian; Ripamonti, Nicolò; Hesthaven, Jan S.
2020
Subcell flux limiting for high-order Bernstein finite element discretizations of scalar hyperbolic conservation laws. Zbl 1436.65141
Kuzmin, Dmitri; Quezada de Luna, Manuel
2020
Fisher information regularization schemes for Wasserstein gradient flows. Zbl 1437.65055
Li, Wuchen; Lu, Jianfeng; Wang, Li
2020
Local meshless methods for second order elliptic interface problems with sharp corners. Zbl 1437.65208
2020
Dynamically orthogonal tensor methods for high-dimensional nonlinear PDEs. Zbl 1453.65280
Dektor, Alec; Venturi, Daniele
2020
4D large scale variational data assimilation of a turbulent flow with a dynamics error model. Zbl 1436.76010
Chandramouli, Pranav; Memin, Etienne; Heitz, Dominique
2020
A deep-learning-based surrogate model for data assimilation in dynamic subsurface flow problems. Zbl 1436.76058
Tang, Meng; Liu, Yimin; Durlofsky, Louis J.
2020
Moving surface mesh-incorporated particle method for numerical simulation of a liquid droplet. Zbl 1435.76057
Matsunaga, Takuya; Koshizuka, Seiichi; Hosaka, Tomoyuki; Ishii, Eiji
2020
Vertex approximate gradient discretization preserving positivity for two-phase Darcy flows in heterogeneous porous media. Zbl 1435.76042
Brenner, K.; Masson, R.; Quenjel, E. H.
2020
Regularized integral equation methods for elastic scattering problems in three dimensions. Zbl 1436.65211
Bruno, Oscar P.; Yin, Tao
2020
Solution of Stokes flow in complex nonsmooth 2D geometries via a linear-scaling high-order adaptive integral equation scheme. Zbl 1436.65202
Wu, Bowei; Zhu, Hai; Barnett, Alex; Veerapaneni, Shravan
2020
Implicit shock tracking using an optimization-based high-order discontinuous Galerkin method. Zbl 1436.76035
Zahr, M. J.; Shi, A.; Persson, P.-O.
2020
A domain decomposition method for the time-dependent Navier-Stokes-Darcy model with Beavers-Joseph interface condition and defective boundary condition. Zbl 1436.65135
Qiu, Changxin; He, Xiaoming; Li, Jian; Lin, Yanping
2020
On the solution of a generalized Higgs boson equation in the de Sitter space-time through an efficient and Hamiltonian scheme. Zbl 1437.65109
Muñoz-Pérez, Luis F.; Macías-Díaz, J. E.
2020
A neural network scheme for recovering scattering obstacles with limited phaseless far-field data. Zbl 1437.68157
Yin, Weishi; Yang, Wenhong; Liu, Hongyu
2020
A low-rank projector-splitting integrator for the Vlasov-Maxwell equations with divergence correction. Zbl 1453.65357
Einkemmer, Lukas; Ostermann, Alexander; Piazzola, Chiara
2020
Decoupled, non-iterative, and unconditionally energy stable large time stepping method for the three-phase Cahn-Hilliard phase-field model. Zbl 1453.76223
Zhang, Jun; Yang, Xiaofeng
2020
A roadmap for discretely energy-stable schemes for dissipative systems based on a generalized auxiliary variable with guaranteed positivity. Zbl 1453.65276
Yang, Zhiguo; Dong, Suchuan
2020
Instantaneous control of interacting particle systems in the mean-field limit. Zbl 1454.82025
Burger, Martin; Pinnau, René; Totzeck, Claudia; Tse, Oliver; Roth, Andreas
2020
A conservative, consistent, and scalable meshfree mimetic method. Zbl 1435.65221
Trask, Nathaniel; Bochev, Pavel; Perego, Mauro
2020
Controlling oscillations in high-order discontinuous Galerkin schemes using artificial viscosity tuned by neural networks. Zbl 1435.65156
Discacciati, Niccolò; Hesthaven, Jan S.; Ray, Deep
2020
Fully implicit hybrid two-level domain decomposition algorithms for two-phase flows in porous media on 3D unstructured grids. Zbl 1435.76038
Luo, Li; Liu, Lulu; Cai, Xiao-Chuan; Keyes, David E.
2020
Stability analysis of hierarchical tensor methods for time-dependent PDEs. Zbl 1435.65149
Rodgers, Abram; Venturi, Daniele
2020
Fast Calderón preconditioning for Helmholtz boundary integral equations. Zbl 1435.65218
Fierro, Ignacia; Jerez-Hanckes, Carlos
2020
Exponential sum approximation for Mittag-Leffler function and its application to fractional Zener wave equation. Zbl 1436.65023
Lam, P. H.; So, H. C.; Chan, C. F.
2020
FFT-based high order central difference schemes for three-dimensional Poisson’s equation with various types of boundary conditions. Zbl 1436.65159
Feng, Hongsong; Zhao, Shan
2020
A hybrid approach to couple the discrete velocity method and method of moments for rarefied gas flows. Zbl 1436.76054
Yang, Weiqi; Gu, Xiao-Jun; Wu, Lei; Emerson, David R.; Zhang, Yonghao; Tang, Shuo
2020
A purely frequency based Floquet-Hill formulation for the efficient stability computation of periodic solutions of ordinary differential systems. Zbl 1437.65074
Guillot, Louis; Lazarus, Arnaud; Thomas, Olivier; Vergez, Christophe; Cochelin, Bruno
2020
Learning constitutive relations from indirect observations using deep neural networks. Zbl 1437.65192
Huang, Daniel Z.; Xu, Kailai; Farhat, Charbel; Darve, Eric
2020
Sparse polynomial chaos expansions using variational relevance vector machines. Zbl 1437.62114
Tsilifis, Panagiotis; Papaioannou, Iason; Straub, Daniel; Nobile, Fabio
2020
Using hierarchical matrices in the solution of the time-fractional heat equation by multigrid waveform relaxation. Zbl 1437.65131
Hu, Xiaozhe; Rodrigo, Carmen; Gaspar, Francisco J.
2020
Data driven approximation of parametrized PDEs by reduced basis and neural networks. Zbl 1437.65224
Dal Santo, Niccolò; Deparis, Simone; Pegolotti, Luca
2020
On Lagrangian schemes for porous medium type generalized diffusion equations: a discrete energetic variational approach. Zbl 1437.76041
Liu, Chun; Wang, Yiwei
2020
Constraint energy minimizing generalized multiscale finite element method for nonlinear poroelasticity and elasticity. Zbl 1437.74026
Fu, Shubin; Chung, Eric; Mai, Tina
2020
A Hermite WENO scheme with artificial linear weights for hyperbolic conservation laws. Zbl 1437.76033
Zhao, Zhuang; Qiu, Jianxian
2020
Stability-enhanced AP IMEX-LDG schemes for linear kinetic transport equations under a diffusive scaling. Zbl 1440.65147
Peng, Zhichao; Cheng, Yingda; Qiu, Jing-Mei; Li, Fengyan
2020
A fourth-order kernel-free boundary integral method for implicitly defined surfaces in three space dimensions. Zbl 1440.65158
Xie, Yaning; Ying, Wenjun
2020
The conservative splitting domain decomposition method for multicomponent contamination flows in porous media. Zbl 1453.65305
Liang, Dong; Zhou, Zhongguo
2020
A weighted meshfree collocation method for incompressible flows using radial basis functions. Zbl 1453.65364
Wang, Lihua; Qian, Zhihao; Zhou, Yueting; Peng, Yongbo
2020
3-d topology optimization of modulated and oriented periodic microstructures by the homogenization method. Zbl 1453.74072
Geoffroy-Donders, Perle; Allaire, Grégoire; Pantz, Olivier
2020
Range-separated tensor decomposition of the discretized Dirac delta and elliptic operator inverse. Zbl 1453.65035
Khoromskij, Boris N.
2020
A hierarchical butterfly LU preconditioner for two-dimensional electromagnetic scattering problems involving open surfaces. Zbl 1453.65455
Liu, Yang; Yang, Haizhao
2020
Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks. Zbl 1454.65130
Geneva, Nicholas; Zabaras, Nicholas
2020
A discontinuous Galerkin method for the simulation of compressible gas-gas and gas-water two-medium flows. Zbl 1453.76063
Cheng, Jian; Zhang, Fan; Liu, Tiegang
2020
Filtered stochastic Galerkin methods for hyperbolic equations. Zbl 1453.62579
Kusch, Jonas; McClarren, Ryan G.; Frank, Martin
2020
High-order Runge-Kutta discontinuous Galerkin methods with a new type of multi-resolution WENO limiters. Zbl 1453.65351
Zhu, Jun; Qiu, Jianxian; Shu, Chi-Wang
2020
Fast matrix splitting preconditioners for higher dimensional spatial fractional diffusion equations. Zbl 1453.65062
Bai, Zhong-Zhi; Lu, Kang-Ya
2020
Recovering elastic inclusions by shape optimization methods with immersed finite elements. Zbl 1453.74073
Guo, Ruchi; Lin, Tao; Lin, Yanping
2020
Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. Zbl 1453.68165
Jagtap, Ameya D.; Kawaguchi, Kenji; Karniadakis, George Em
2020
A stabilized semi-implicit Fourier spectral method for nonlinear space-fractional reaction-diffusion equations. Zbl 1453.65370
Zhang, Hui; Jiang, Xiaoyun; Zeng, Fanhai; Karniadakis, George Em
2020
A discontinuous Galerkin method for the Aw-Rascle traffic flow model on networks. Zbl 1453.65310
Buli, Joshua; Xing, Yulong
2020
A hybrid method for linearized wave radiation and diffraction problem by a three dimensional floating structure in a polynya. Zbl 1436.65152
Li, Z. F.; Shi, Y. Y.; Wu, G. X.
2020
A path conservative finite volume method for a shear shallow water model. Zbl 1436.76036
Chandrashekar, Praveen; Nkonga, Boniface; Meena, Asha Kumari; Bhole, Ashish
2020
A sharp interface method for an immersed viscoelastic solid. Zbl 1435.76040
Puelz, Charles; Griffith, Boyce E.
2020
Reactive fluid flow topology optimization with the multi-relaxation time lattice Boltzmann method and a level-set function. Zbl 1435.76064
Dugast, Florian; Favennec, Yann; Josset, Christophe
2020
Deep learning seismic substructure detection using the frozen Gaussian approximation. Zbl 1435.86002
Hateley, James C.; Roberts, Jay; Mylonakis, Kyle; Yang, Xu
2020
Eulerian/Lagrangian formulation for the elasto-capillary deformation of a flexible fibre. Zbl 1435.76024
Lecrivain, Gregory; Pacheco Grein, Taisa Beatriz; Yamamoto, Ryoichi; Hampel, Uwe; Taniguchi, Takashi
2020
Linear stability eigenmodal analysis for steady and temporally periodic boundary-layer flow configurations using a velocity-vorticity formulation. Zbl 1435.76023
Morgan, Scott; Davies, Christopher
2020
Variational training of neural network approximations of solution maps for physical models. Zbl 1435.68292
Li, Yingzhou; Lu, Jianfeng; Mao, Anqi
2020
Meshfree methods on manifolds for hydrodynamic flows on curved surfaces: a generalized moving least-squares (GMLS) approach. Zbl 1435.76059
Gross, B. J.; Trask, N.; Kuberry, P.; Atzberger, P. J.
2020
Constraint-aware neural networks for Riemann problems. Zbl 1435.76046
Magiera, Jim; Ray, Deep; Hesthaven, Jan S.; Rohde, Christian
2020
Conservative finite-volume framework and pressure-based algorithm for flows of incompressible, ideal-gas and real-gas fluids at all speeds. Zbl 1435.76043
Denner, Fabian; Evrard, Fabien; van Wachem, Berend G. M.
2020
A fast integral equation method for the two-dimensional Navier-Stokes equations. Zbl 1435.76020
af Klinteberg, Ludvig; Askham, Travis; Kropinski, Mary Catherine
2020
Computational multiscale methods for first-order wave equation using mixed CEM-GMsFEM. Zbl 1435.76036
Chung, Eric; Pun, Sai-Mang
2020
A new efficient momentum preserving level-set/VOF method for high density and momentum ratio incompressible two-phase flows. Zbl 1436.76043
Zuzio, Davide; Orazzo, Annagrazia; Estivalèzes, Jean-Luc; Lagrange, Isabelle
2020
A second-order measure of boundary oscillations for overhang control in topology optimization. Zbl 1436.65075
2020
Modeling tissue perfusion in terms of 1d-3d embedded mixed-dimension coupled problems with distributed sources. Zbl 1436.76064
Koch, Timo; Schneider, Martin; Helmig, Rainer; Jenny, Patrick
2020
Bound preserving and energy dissipative schemes for porous medium equation. Zbl 1436.65101
Gu, Yiqi; Shen, Jie
2020
Efficient nonlinear optimal smoothing and sampling algorithms for complex turbulent nonlinear dynamical systems with partial observations. Zbl 1436.37091
Chen, Nan; Majda, Andrew J.
2020
Exponential integrators for stochastic Maxwell’s equations driven by Itô noise. Zbl 1436.60061
Cohen, David; Cui, Jianbo; Hong, Jialin; Sun, Liying
2020
A linearity preserving nodal variation limiting algorithm for continuous Galerkin discretization of ideal MHD equations. Zbl 1436.76030
Mabuza, Sibusiso; Shadid, John N.; Cyr, Eric C.; Pawlowski, Roger P.; Kuzmin, Dmitri
2020
Iterative algorithms for the post-processing of high-dimensional data. Zbl 1436.65054
Espig, Mike; Hackbusch, Wolfgang; Litvinenko, Alexander; Matthies, Hermann G.; Zander, Elmar
2020
Second order threshold dynamics schemes for two phase motion by mean curvature. Zbl 1436.65153
Zaitzeff, Alexander; Esedoḡlu, Selim; Garikipati, Krishna
2020
Stochastic simulation algorithms for solving narrow escape diffusion problems by introducing a drift to the target. Zbl 1436.65151
Sabelfeld, Karl
2020
Stochastic conformal schemes for damped stochastic Klein-Gordon equation with additive noise. Zbl 1436.35334
Song, Mingzhan; Qian, Xu; Shen, Tianlong; Song, Songhe
2020
Energy conservative SBP discretizations of the acoustic wave equation in covariant form on staggered curvilinear grids. Zbl 1436.65108
2020
A practical finite difference scheme for the Navier-Stokes equation on curved surfaces in $$\mathbb{R}^3$$. Zbl 1436.76048
Yang, Junxiang; Li, Yibao; Kim, Junseok
2020
Fast evaluation of the Biot-Savart integral using FFT for electrical conductivity imaging. Zbl 1436.65021
Yazdanian, Hassan; Saturnino, Guilherme B.; Thielscher, Axel; Knudsen, Kim
2020
RANS turbulence model development using CFD-driven machine learning. Zbl 1436.76011
Zhao, Yaomin; Akolekar, Harshal D.; Weatheritt, Jack; Michelassi, Vittorio; Sandberg, Richard D.
2020
Direct reconstruction method for discontinuous Galerkin methods on higher-order mixed-curved meshes. II: Surface integration. Zbl 1437.76025
You, Hojun; Kim, Chongam
2020
Optimal experimental design for prediction based on push-forward probability measures. Zbl 1437.62297
Butler, T.; Jakeman, J. D.; Wildey, T.
2020
A scalable computational platform for particulate Stokes suspensions. Zbl 1437.76042
Yan, Wen; Corona, Eduardo; Malhotra, Dhairya; Veerapaneni, Shravan; Shelley, Michael
2020
Low-dissipation centred schemes for hyperbolic equations in conservative and non-conservative form. Zbl 1437.65119
Toro, E. F.; Saggiorato, B.; Tokareva, S.; Hidalgo, A.
2020
High-order ALE gas-kinetic scheme with WENO reconstruction. Zbl 1437.76031
Pan, Liang; Zhao, Fengxiang; Xu, Kun
2020
A fast implicit solver for semiconductor models in one space dimension. Zbl 1437.65132
Laiu, M. Paul; Chen, Zheng; Hauck, Cory D.
2020
Topology optimization of thermal fluid-structure systems using body-fitted meshes and parallel computing. Zbl 1437.74021
Feppon, F.; Allaire, G.; Dapogny, C.; Jolivet, P.
2020
Boussinesq-Peregrine water wave models and their numerical approximation. Zbl 1437.76022
Katsaounis, Theodoros; Mitsotakis, Dimitrios; Sadaka, Georges
2020
...and 1103 more Documents
#### Cited by 56,207 Authors
232 Shu, Chi-Wang 219 Dehghan Takht Fooladi, Mehdi 162 Simos, Theodore E. 156 Karniadakis, George Em 117 Liu, Fawang 108 Dumbser, Michael 103 Nordström, Jan 100 Boyd, John Philip 96 Chung, Tsz Shun Eric 96 Shen, Jie 90 Shu, Chang 89 Hughes, Thomas J. R. 89 Wang, Hong 87 Liu, Wing Kam 86 Sagaut, Pierre 85 Băleanu, Dumitru I. 84 Turner, Ian William 81 Hesthaven, Jan S. 80 Xu, Kun 79 Jin, Shi 79 Tezduyar, Tayfun E. 78 Sheu, Tony Wen-Hann 77 Adams, Nikolaus A. 77 Quarteroni, Alfio M. 76 Bazilevs, Yuri 76 Huang, Yunqing 74 Shashkov, Mikhail J. 73 Osher, Stanley Joel 72 Codina, Ramon 72 He, Yinnian 72 Khoo, Boo Cheong 72 Seaïd, Mohammed 71 Majda, Andrew J. 71 Toro, Eleuterio F. 71 Yang, Xiaofeng 70 Kim, Junseok 70 Oñate Ibáñez de Navarra, Eugenio 70 Qiu, Jianxian 69 Li, Jichun 69 Sherwin, Spencer J. 67 Verzicco, Roberto 66 Bao, Weizhu 65 Glowinski, Roland 64 Du, Qiang 64 Farhat, Charbel H. 64 Rebholz, Leo G. 63 Bhrawy, Ali Hassan 62 Abgrall, Rémi 61 Liu, Gui-Rong 61 Sun, Zhizhong 58 Chen, Wen 58 Schwab, Christoph 58 Takizawa, Kenji 58 Wall, Wolfgang A. 58 Ying, Lexing 57 Wheeler, Mary Fanett 56 Abbaszadeh, Mostafa 56 Cockburn, Bernardo 56 Doha, Eid H. 56 Feng, Xinlong 56 Lowengrub, John Samuel 56 Mohanty, Ranjan Kumar 56 Sun, Shuyu 56 Wang, Cheng 55 Degond, Pierre 55 Efendiev, Yalchin R. 55 Guermond, Jean-Luc 55 Hou, Thomas Yizhao 55 Löhner, Rainald 55 Vuik, Cornelis 55 Zhang, Zhimin 54 Greengard, Leslie F. 54 Zhang, Jun 52 Gunzburger, Max D. 52 Guo, Ben-Yu 52 Jenny, Patrick 52 Oden, John Tinsley 52 Wang, Yushun 51 Idelsohn, Sergio Rodolfo 51 Li, Zhilin 51 Moin, Parviz 51 Pullin, Dale I. 50 Bürger, Raimund 50 Huang, Ting-Zhu 50 Kurganov, Alexander 50 Lin, Guang 49 Antoine, Xavier 49 Givoli, Dan 49 Macías-Díaz, Jorge Eduardo 49 Nicholls, David P. 49 Shi, Baochang 49 Turkel, Eli L. 49 Zhang, Luming 48 Belytschko, Ted Bohdan 48 Dawson, Clint N. 48 Jameson, Antony 48 Khaliq, Abdul Q. M. 48 Manzini, Gianmarco 48 Munz, Claus-Dieter 48 Qian, Jianliang ...and 56,107 more Authors
#### Cited in 938 Journals
9,262 Journal of Computational Physics 3,362 Computers and Fluids 3,320 Computer Methods in Applied Mechanics and Engineering 3,264 Journal of Fluid Mechanics 2,023 Journal of Computational and Applied Mathematics 1,807 Journal of Scientific Computing 1,753 Computers & Mathematics with Applications 1,583 Applied Mathematics and Computation 1,375 Physics of Fluids 1,345 Applied Numerical Mathematics 1,045 International Journal for Numerical Methods in Fluids 993 SIAM Journal on Scientific Computing 969 Engineering Analysis with Boundary Elements 882 International Journal for Numerical Methods in Engineering 834 Applied Mathematical Modelling 622 Computational Mechanics 569 Mathematics of Computation 519 Numerische Mathematik 455 European Journal of Mechanics. B. Fluids 440 Computer Physics Communications 419 SIAM Journal on Numerical Analysis 402 Mathematics and Computers in Simulation 394 Numerical Algorithms 390 Physica D 379 International Journal of Computer Mathematics 371 Numerical Methods for Partial Differential Equations 336 Computational Geosciences 326 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 318 International Journal of Computational Fluid Dynamics 313 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 313 Mathematical Problems in Engineering 296 Journal of Mathematical Analysis and Applications 296 Applied Mathematics Letters 292 Advances in Computational Mathematics 272 International Journal of Heat and Mass Transfer 267 International Journal of Numerical Methods for Heat & Fluid Flow 265 Communications in Nonlinear Science and Numerical Simulation 252 Wave Motion 241 International Journal of Computational Methods 236 Communications in Numerical Methods in Engineering 235 BIT 218 Multiscale Modeling & Simulation 216 Computational and Applied Mathematics 210 Journal of Engineering Mathematics 204 Journal of Mathematical Physics 203 Mathematical and Computer Modelling 201 Acta Mechanica 199 Journal of Statistical Physics 192 Journal of Mathematical Chemistry 171 Mathematical Methods in the Applied Sciences 166 Applied Mathematics and Mechanics. (English Edition) 159 Flow, Turbulence and Combustion 155 SIAM/ASA Journal on Uncertainty Quantification 151 Advances in Difference Equations 150 Theoretical and Computational Fluid Dynamics 148 Acta Mechanica Sinica 144 Combustion Theory and Modelling 144 Comptes Rendus. Mathématique. Académie des Sciences, Paris 143 Computational Mathematics and Mathematical Physics 138 Inverse Problems in Science and Engineering 137 Chaos, Solitons and Fractals 134 SIAM Journal on Applied Mathematics 132 Inverse Problems 129 Journal of Differential Equations 129 International Journal of Modern Physics C 128 Abstract and Applied Analysis 123 Numerical Linear Algebra with Applications 122 Fluid Dynamics 119 Transport Theory and Statistical Physics 119 Chaos 115 Archives of Computational Methods in Engineering 109 Applicable Analysis 109 Linear Algebra and its Applications 108 ZAMP. Zeitschrift für angewandte Mathematik und Physik 105 Physics of Fluids, A 105 Nonlinear Dynamics 101 Bulletin of Mathematical Biology 101 Journal of Theoretical Biology 98 Calcolo 98 Journal of Computational Acoustics 97 Journal of Turbulence 96 Journal of Applied Mathematics 94 Matematicheskoe Modelirovanie 93 Journal of Nonlinear Science 92 Discrete and Continuous Dynamical Systems. Series B 91 Fractional Calculus & Applied Analysis 88 Journal of Mathematical Biology 88 SIAM Journal on Mathematical Analysis 86 Archive for Rational Mechanics and Analysis 86 Computing 86 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 85 International Journal of Engineering Science 85 Japan Journal of Industrial and Applied Mathematics 84 European Journal of Operational Research 84 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 84 Applied and Computational Harmonic Analysis 83 East Asian Journal on Applied Mathematics 81 International Journal of Applied and Computational Mathematics 80 Journal of Non-Newtonian Fluid Mechanics 79 Shock Waves ...and 838 more Journals
#### Cited in 61 Fields
30,656 Numerical analysis (65-XX) 28,835 Fluid mechanics (76-XX) 15,917 Partial differential equations (35-XX) 6,338 Mechanics of deformable solids (74-XX) 2,810 Statistical mechanics, structure of matter (82-XX) 2,557 Classical thermodynamics, heat transfer (80-XX) 2,412 Biology and other natural sciences (92-XX) 2,401 Optics, electromagnetic theory (78-XX) 1,978 Ordinary differential equations (34-XX) 1,833 Geophysics (86-XX) 1,459 Probability theory and stochastic processes (60-XX) 1,291 Calculus of variations and optimal control; optimization (49-XX) 1,290 Dynamical systems and ergodic theory (37-XX) 1,048 Computer science (68-XX) 921 Statistics (62-XX) 920 Quantum theory (81-XX) 907 Operations research, mathematical programming (90-XX) 858 Integral equations (45-XX) 758 Approximations and expansions (41-XX) 716 Real functions (26-XX) 578 Systems theory; control (93-XX) 535 Mechanics of particles and systems (70-XX) 489 Information and communication theory, circuits (94-XX) 476 Linear and multilinear algebra; matrix theory (15-XX) 440 Special functions (33-XX) 411 Harmonic analysis on Euclidean spaces (42-XX) 350 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 321 Operator theory (47-XX) 285 Astronomy and astrophysics (85-XX) 210 Global analysis, analysis on manifolds (58-XX) 208 Differential geometry (53-XX) 188 Integral transforms, operational calculus (44-XX) 180 Relativity and gravitational theory (83-XX) 157 Functions of a complex variable (30-XX) 144 Potential theory (31-XX) 144 Functional analysis (46-XX) 116 Difference and functional equations (39-XX) 111 Combinatorics (05-XX) 85 Number theory (11-XX) 82 General and overarching topics; collections (00-XX) 61 Convex and discrete geometry (52-XX) 59 History and biography (01-XX) 49 Measure and integration (28-XX) 28 Topological groups, Lie groups (22-XX) 24 Manifolds and cell complexes (57-XX) 21 Mathematics education (97-XX) 18 Algebraic geometry (14-XX) 17 Field theory and polynomials (12-XX) 16 Sequences, series, summability (40-XX) 16 Algebraic topology (55-XX) 15 Geometry (51-XX) 14 Group theory and generalizations (20-XX) 12 Mathematical logic and foundations (03-XX) 12 Several complex variables and analytic spaces (32-XX) 9 Commutative algebra (13-XX) 8 Nonassociative rings and algebras (17-XX) 8 Abstract harmonic analysis (43-XX) 7 General topology (54-XX) 5 Associative rings and algebras (16-XX) 2 Order, lattices, ordered algebraic structures (06-XX) 2 Category theory; homological algebra (18-XX)
http://mathoverflow.net/questions/26582/reference-for-the-geometry-of-horospheres?sort=newest | Reference for the geometry of horospheres
Dear all, I am looking for a reference to a proof of the following well-known fact (cited for example by B. Farb in "Relatively hyperbolic groups", Geom. Funct. Anal. 8 (1998), no. 5, 810–840).
Suppose $X$ is the universal covering of a negatively curved Riemannian manifold, let $O$ be an open horoball in $X$ and let $H=\partial O$ be the horospherical boundary of $O$. Also suppose that $\gamma\colon [0,1]\to X\setminus O$ is a rectifiable path such that $d(\gamma (t), H)\geq k>0$ for every $t\in [0,1]$, and let $\pi\colon X\setminus O\to H$ be the (well-defined) nearest-point projection. Then, there exists $\alpha>0$ (only depending on the curvature of $X$) such that the length $L(\pi\circ\gamma)$ of $\pi\circ\gamma$ is bounded above by $e^{-\alpha k} L(\gamma)$.
Of course, this fact can be reduced to the computation of the Lipschitz constant of the projection of a horosphere onto another horosphere having the same basepoint. When $X$ is the real hyperbolic $n$-space, such a computation is very easy, and it is likely that the variable curvature case can be reduced to the hyperbolic case via some comparison theorem. However, I was wondering if there is some standard reference I could rely on.
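For context, here is a sketch of the model computation in the hyperbolic plane (curvature $-1$, so one may take $\alpha=1$); this is an illustration, not taken from the cited paper. In the upper half-plane model, take the horoball $O=\{y>c\}$ based at $\infty$, so $H=\{y=c\}$ and the nearest-point projection is vertical, $\pi(x,y)=(x,c)$. Outside $O$ one has $y\le c$ and $d((x,y),H)=\log(c/y)$, so $d(\gamma(t),H)\ge k$ forces $y(t)\le ce^{-k}$. Then:

```latex
\begin{align*}
L(\pi\circ\gamma)
  &= \int \frac{|x'(t)|}{c}\,dt
   = \int \frac{y(t)}{c}\cdot\frac{|x'(t)|}{y(t)}\,dt \\
  &\le e^{-k}\int \frac{\sqrt{x'(t)^2+y'(t)^2}}{y(t)}\,dt
   = e^{-k}\,L(\gamma).
\end{align*}
```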
I would write "Applying the comparison for a triangle with one vertex running to infinity, we get ..." – Anton Petrunin May 31 '10 at 18:15
Yes, probably it is not too difficult to make such an argument work. Anyway, a little issue arises since one edge of the comparison triangles involved is not geodesic, but lies on a horosphere... – Roberto Frigerio Jun 1 '10 at 7:50
Dear Anton, on second thoughts I think your approach can easily lead to a solution. Even if the edge staying far from the infinity is not geodesic, one can approximate $\gamma$ and $\pi\circ\gamma$ with suitable polygonal'' paths approximating the length, then use your argument on the small segments, and finally put the estimates together. – Roberto Frigerio Jun 7 '10 at 12:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748652577400208, "perplexity": 250.31010801519733}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929832.32/warc/CC-MAIN-20150521113209-00158-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1015018 | uu.seUppsala University Publications
Searches for Higgs bosons with hadronically decaying τ-leptons: Using Grid and Cloud computing techniques
Uppsala University, Disciplinary Domain of Science and Technology, Physics, Department of Physics and Astronomy, High Energy Physics. ATLAS Collaboration. (ATLAS)ORCID iD: 0000-0003-0407-0957
2016 (English)Doctoral thesis, comprehensive summary (Other academic)
##### Abstract [en]
This thesis describes a measurement of the Standard Model Higgs boson coupling to fermions in decays to two τ-leptons, a search for charged Higgs bosons in decays to a τ-lepton and a neutrino, as well as the reconstruction, identification and triggering of hadronically decaying τ-leptons. The data considered are collected by the ATLAS experiment at the Large Hadron Collider.
The reconstruction and identification of hadronically decaying τ-leptons in the ATLAS experiment during Run 2 of the Large Hadron Collider are described, and the performance of the τ trigger is measured in events with top–antitop-quark pairs using data from 13 TeV proton–proton collisions, corresponding to an integrated luminosity of 3.2 fb-1, collected in 2015, and 11.5 fb-1, collected in 2016.
Hadronically decaying τ-leptons are of importance to many physical processes involving Higgs bosons. The coupling of the Standard Model Higgs boson to fermions is measured in decays to two τ-leptons using 7 TeV data corresponding to an integrated luminosity of 4.5 fb-1, collected in 2011, and 8 TeV data corresponding to an integrated luminosity of 20.3 fb-1, collected in 2012. The signal strength is measured to be μ = 1.4, corresponding to an excess over the background-only model of 4.5σ.
Charged Higgs bosons are searched for in decays to a τ-lepton and a neutrino, where the τ-lepton decays hadronically. 13 TeV data corresponding to an integrated luminosity of 14.7 fb-1, collected in 2015 and 2016, are used. No excess over the Standard Model background is observed, and the 95% confidence-level exclusion limits on σ(pp → [b]tH+) × BR(H+ → τν) are set to 2.0 pb–8 fb in the range 200–2000 GeV.
##### Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2016. , p. 111
##### Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 1434
##### Keywords [en]
LHC, ATLAS, Higgs, Charged Higgs, Tau lepton, 2HDM, MSSM
##### National Category
Subatomic Physics
##### Research subject
Physics with specialization in Elementary Particle Physics
##### Identifiers
ISBN: 978-91-554-9706-4 (print). OAI: oai:DiVA.org:uu-304291. DiVA id: diva2:1015018
##### Public defence
2016-11-21, Polhemsalen, 10134, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, 13:00 (English)
##### Supervisors
Available from: 2016-10-31 Created: 2016-10-03 Last updated: 2016-11-17
##### List of papers
1. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine
2014 (English) In: 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP2013), Parts 1-6, 2014, Vol. 513, p. 032073. Conference paper, Published paper (Refereed)
##### Abstract [en]
With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. The new cloud technologies also come with new challenges, and one such challenge is the contextualization of computing resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform, Google Compute Engine (GCE), upload of users' virtual machine images is not possible. This precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.
##### Series
Journal of Physics Conference Series, ISSN 1742-6588 ; 513
##### National Category
Physical Sciences
##### Identifiers
urn:nbn:se:uu:diva-235859 (URN)10.1088/1742-6596/513/3/032073 (DOI)000342287200154 ()
##### Conference
20th International Conference on Computing in High Energy and Nuclear Physics (CHEP), OCT 14-18, 2013, Amsterdam, NETHERLANDS
##### Note
Group Author(s): ATLAS Collaboration
Available from: 2014-11-12 Created: 2014-11-11 Last updated: 2016-10-03Bibliographically approved
2. Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector
2015 (English) In: Journal of High Energy Physics (JHEP), ISSN 1126-6708, E-ISSN 1029-8479, no 4, article id 117. Article in journal (Refereed), Published
##### Abstract [en]
Results of a search for H → ττ decays are presented, based on the full set of proton-proton collision data recorded by the ATLAS experiment at the LHC during 2011 and 2012. The data correspond to integrated luminosities of 4.5 fb-1 and 20.3 fb-1 at centre-of-mass energies of √s = 7 TeV and √s = 8 TeV respectively. All combinations of leptonic (τ → lνν̄ with l = e, μ) and hadronic (τ → hadrons + ν) tau decays are considered. An excess of events over the expected background from other Standard Model processes is found with an observed (expected) significance of 4.5 (3.4) standard deviations. This excess provides evidence for the direct coupling of the recently discovered Higgs boson to fermions. The measured signal strength, normalised to the Standard Model expectation, of μ = 1.43 +0.43/−0.37 is consistent with the predicted Yukawa coupling strength in the Standard Model.
##### National Category
Physical Sciences
##### Identifiers
urn:nbn:se:uu:diva-255002 (URN)10.1007/JHEP04(2015)117 (DOI)000353557400001 ()
##### Note
ATLAS Collaboration, for complete list of authors see http://dx.doi.org/10.1103/PhysRevLett.114.161801
Available from: 2015-06-12 Created: 2015-06-12 Last updated: 2017-12-04Bibliographically approved
3. Commissioning of the reconstruction of hadronic tau lepton decays in ATLAS using pp collisions at √s = 13 TeV
2015 (English)Report (Refereed)
##### Keywords
tau, ATLAS, LHC, trigger
##### National Category
Subatomic Physics
##### Research subject
Physics with specialization in Elementary Particle Physics
##### Identifiers
urn:nbn:se:uu:diva-304620 (URN)
Available from: 2016-10-06 Created: 2016-10-06 Last updated: 2016-10-06
4. Search for charged Higgs bosons produced in association with a top quark and decaying via H± → τν using pp collision data recorded at √s = 13 TeV by the ATLAS detector
2016 (English) In: Physics Letters B, ISSN 0370-2693, E-ISSN 1873-2445, Vol. 759, p. 555-574. Article in journal (Refereed), Published
##### Abstract [en]
Charged Higgs bosons produced in association with a single top quark and decaying via H± → τν are searched for with the ATLAS experiment at the LHC, using proton–proton collision data at √s = 13 TeV corresponding to an integrated luminosity of 3.2 fb−1. The final state is characterised by the presence of a hadronic τ decay and missing transverse momentum, as well as a hadronically decaying top quark, resulting in the absence of high-transverse-momentum electrons and muons. The data are found to be consistent with the expected background from Standard Model processes. A statistical analysis leads to 95% confidence-level upper limits on the production cross section times branching fraction, σ(pp → [b]tH±) × BR(H± → τν), between 1.9 pb and 15 fb, for charged Higgs boson masses ranging from 200 to 2000 GeV. The exclusion limits for this search surpass those obtained with the proton–proton collision data recorded at √s = 8 TeV.
##### Keywords
Higgs, charged higgs, atlas, tau, mssm, 2hdm
##### National Category
Subatomic Physics
##### Research subject
Physics with specialization in Elementary Particle Physics
##### Identifiers
urn:nbn:se:uu:diva-304285 (URN)10.1016/j.physletb.2016.06.017 (DOI)000380409200074 ()
##### Note
ATLAS Collaboration, for complete list of authors see http://dx.doi.org/10.1016/j.physletb.2016.06.017
Available from: 2016-10-03 Created: 2016-10-03 Last updated: 2017-11-30Bibliographically approved
5. Charged Higgs boson searches in ATLAS and CMS
2016 (English) In: Charged Higgs searches from ATLAS and CMS, 2016, Vol. LHCP2016, article id 085. Conference paper, Published paper (Other academic)
##### Abstract [en]
Results from searches for charged Higgs bosons in ATLAS and CMS are presented. The searches cover singly charged Higgs boson decays in the $H^+ \rightarrow \tau\nu$, $H^+ \rightarrow t\bar{b}$, $H^+ \rightarrow c\bar{s}$, and $H^+ \rightarrow W^+Z$ modes, and doubly charged Higgs boson decays in the $\Phi^{++}\Phi^{--} \rightarrow 4l$ and $\Phi^{++}\Phi^{-} \rightarrow 3l$ modes, with 7 TeV and 8 TeV data from Run 1 of the LHC. The first search for charged Higgs bosons with 13 TeV data in the $H^{+} \rightarrow \tau\nu$ decay mode with ATLAS is also presented. The results are interpreted in various theoretical models.
##### Series
Proceedings, Fourth Annual Large Hadron Collider Physics (LHCP2016): Lund, Sweden, June 13-18, 2016 ; 085
##### Keywords
higgs, charged higgs, mssm, 2hdm, susy, supersymmetry
##### National Category
Subatomic Physics
##### Research subject
Physics with specialization in Elementary Particle Physics
##### Identifiers
urn:nbn:se:uu:diva-304286 (URN)
##### Conference
Fourth Annual Large Hadron Collider Physics
Available from: 2016-10-03 Created: 2016-10-03 Last updated: 2016-10-06
6. Reconstruction and identification of hadronically decaying tau leptons with the ATLAS experiment
2016 (English) In: Reconstruction and identification of hadronically decaying tau leptons with the ATLAS experiment, 2016, Vol. LHCP2016, article id 211. Conference paper, Published paper (Other academic)
##### Abstract [en]
Tau leptons are important to many physical processes in high-energy physics. They are used for measurements of Standard Model processes, and searches for new physics beyond the Standard Model. With their high mass, tau leptons are prime signatures for e.g. Higgs boson decays to fermions. In these proceedings, the reconstruction and identification algorithms for hadronically decaying tau leptons in Run-2 of the LHC are presented, along with the identification performance in 13 TeV data collected in 2015.
##### Series
Proceedings, Fourth Annual Large Hadron Collider Physics (LHCP2016): Lund, Sweden, June 13-18, 2016 ; 211
##### Keywords
tau, tau lepton, hadronic decay, atlas
##### National Category
Subatomic Physics
##### Research subject
Physics with specialization in Elementary Particle Physics
##### Identifiers
urn:nbn:se:uu:diva-304287 (URN)
##### Conference
Fourth Annual Large Hadron Collider Physics
Available from: 2016-10-03 Created: 2016-10-03 Last updated: 2016-10-06
7. Search for charged Higgs bosons in the τ+jets final state using 14.7 fb-1 of pp collision data recorded at √s = 13 TeV with the ATLAS experiment
2016 (English)Conference paper, Published paper (Refereed)
##### Abstract [en]
The experimental observation of charged Higgs bosons, H±, which are predicted by several models with an extended Higgs sector, would indicate physics beyond the Standard Model. This note presents the results of a search for charged Higgs bosons in 14.7 fb−1 of pp collision data at √s = 13 TeV recorded by the ATLAS detector at the LHC. The search targets the τ+jets channel in top-quark-associated H± production with a hadronically decaying W boson and τ lepton in the final state. No evidence of a charged Higgs boson is found. For the mass range of mH± = 200–2000 GeV, upper limits are set on the production cross section of the charged Higgs boson with the subsequent decay H± → τν in a range of 2.0–0.008 pb.
##### Keywords
higgs, charged higgs, atlas, lhc, tau, mssm, 2hdm
##### National Category
Subatomic Physics
##### Research subject
Physics with specialization in Elementary Particle Physics
##### Identifiers
urn:nbn:se:uu:diva-304284 (URN)
##### Conference
38th International Conference on High Energy Physics, Chicago, IL, USA, 03 - 10 Aug 2016
Available from: 2016-10-03 Created: 2016-10-03 Last updated: 2016-10-06
#### Open Access in DiVA
##### File information
File name: FULLTEXT01.pdf. File size: 1596 kB. Checksum: SHA-512
Type: fulltext. Mimetype: application/pdf
#### Search in DiVA
Öhman, Henrik
##### By organisation
High Energy Physics
##### On the subject
Subatomic Physics
• rtf | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.942076563835144, "perplexity": 7245.883956670567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00498.warc.gz"} |
https://root-forum.cern.ch/t/brazilian-flag-like-plot-for-cross-section-upper-limit/44631 | # Brazilian flag - like plot for cross-section upper limit
Hello,
I am having trouble finding a tutorial that explains how to obtain, with RooStats, an upper-limit plot for the production cross-section as a function of some other parameter, like the one attached.
I have found many tutorials with CLs on the y axis and the POI on the x axis, and indeed I managed to produce that kind of plot for my study. However, it would be helpful to understand how to use your tools to obtain a plot like the one attached, with the upper limit on the cross-section on the y axis.
Please let me know if you have any suggestions.
Hi @ccazzaniga,
The idea I got after reading your post is that you are trying to find a way to draw something similar to those green/yellow areas. If my assumption is right, I think @moneta is probably the right person to give some hints on how to do that.
Cheers,
J.
Hi
You need to run the limit calculation, such as the one performed by the `StandardHypoTestInvDemo.C` macro (see ROOT: tutorials/roostats/StandardHypoTestInvDemo.C File Reference),
at different model parameter points (k(lambda)).
The macro will return, for each k(lambda), an observed limit, an expected limit, and the +/- 1 and 2 sigma expected limits, and these you can use as input to your final plot as above.
Unfortunately I don't have an example for this; I should probably do one for a simple model and add it to the tutorials. There is one example for computing the significance as a function of the mass, see
https://twiki.cern.ch/twiki/bin/view/RooStats/RooStatsExercisesMarch2015#Exercise_6b_Compute_significance
but in that case it is not a limit calculation, just a simple hypothesis test to get the significance.
Cheers
Lorenzo
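To illustrate the data flow Lorenzo describes with a minimal sketch (the numbers below are made up; in practice each entry would come from one run of the inversion macro at a fixed parameter point), the per-point results just need to be collected into five curves. The ±2σ band is then filled in yellow and the ±1σ band in green, with the expected limit dashed and the observed limit solid, e.g. using `TGraphAsymmErrors`/`TGraph` in ROOT or `fill_between` in matplotlib:

```python
# Hypothetical per-point limit results; each dict would come from one
# hypothesis-test-inversion run at a fixed model parameter point.
results = [
    {"point": 200.0,  "obs": 1.80, "m2": 0.90, "m1": 1.20, "exp": 1.60, "p1": 2.20, "p2": 3.00},
    {"point": 1000.0, "obs": 0.05, "m2": 0.02, "m1": 0.03, "exp": 0.04, "p1": 0.06, "p2": 0.09},
    {"point": 500.0,  "obs": 0.30, "m2": 0.15, "m1": 0.20, "exp": 0.28, "p1": 0.40, "p2": 0.55},
]

def band_arrays(results):
    """Sort by parameter point and return x plus the five limit curves."""
    rs = sorted(results, key=lambda r: r["point"])
    x = [r["point"] for r in rs]
    curves = {k: [r[k] for r in rs] for k in ("obs", "m2", "m1", "exp", "p1", "p2")}
    return x, curves

x, c = band_arrays(results)
print(x)         # -> [200.0, 500.0, 1000.0]
print(c["exp"])  # -> [1.6, 0.28, 0.04]
# Fill between c["m2"] and c["p2"] (yellow), then c["m1"] and c["p1"] (green);
# draw c["exp"] dashed and c["obs"] solid on top.
```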
Many thanks for pointing me the right path ! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9401496052742004, "perplexity": 497.02583392739916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988828.76/warc/CC-MAIN-20210507211141-20210508001141-00133.warc.gz"} |
https://www.physicsforums.com/threads/gre-change.121495/ | # GRE Change?
1. May 20, 2006
### JasonJo
GRE Change???
I have been hearing that the GRE will be undergoing changes in its format and syllabus. I tried searching on gre.org but I couldn't get any specifics. Does anyone have the details?
EDIT: I'm referring to the General GRE
Last edited: May 20, 2006
2. May 20, 2006
### Gokul43201
Staff Emeritus | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.832321286201477, "perplexity": 7161.643224833794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00138-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://openstudy.com/updates/5112ed16e4b07c1a5a6499db | Here's the question you clicked on:
## ikshjbdckoj 2 years ago maths mind! come in this one!!
• This Question is Closed
1. Zelda: And question is?
2. ikshjbdckoj: 21. Choose the correct solution graph for the inequality. 12x + 4 ≥ 16 or 3x – 5 ≤ –14 (1 point)
3. Zelda: Attach a picture?
4. mathstudent55: Do you know how to solve an inequality?
5.–9. ikshjbdckoj: [attachments and a drawing of the answer-choice graphs]
10. mathsmind: as for q21 it was the 3rd graph
11. ikshjbdckoj: thank you. I got it :)
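A quick numerical check of question 21 (the algebra is sketched in the comments): the solution set is two rays, matching a graph with arrows to the left of –3 and to the right of 1.

```python
def satisfies(x):
    # Question 21 is an "or" compound inequality: either branch suffices.
    return (12 * x + 4 >= 16) or (3 * x - 5 <= -14)

# Algebra: 12x + 4 >= 16  =>  x >= 1;   3x - 5 <= -14  =>  x <= -3.
# So the solution set is the union of two rays: x <= -3 or x >= 1.
print([x for x in (-4, -3, -1, 0, 1, 2) if satisfies(x)])  # -> [-4, -3, 1, 2]
```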
12.–14. ikshjbdckoj: [drawings] these drawings right here are for the solve question I put up.
15.–16. mathsmind: for the absolute value question $\left| x-\frac{ 1 }{ 5 } \right|=2$
17.–18. mathsmind: 11/5, -9/5
19. ikshjbdckoj: it doesn't have that as an option.
20. mathsmind: oh, that is =3, not 2, sorry
21. mathsmind: 16/5, -14/5
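The corrected equation, |x − 1/5| = 3, can be solved exactly with Python's `fractions` module; both candidate solutions satisfy the original equation:

```python
from fractions import Fraction

# |x - 1/5| = 3 splits into two linear cases:
a = Fraction(1, 5)
solutions = [a + 3, a - 3]          # x - 1/5 = 3  or  x - 1/5 = -3
print(solutions)                     # -> [Fraction(16, 5), Fraction(-14, 5)]
print(all(abs(x - a) == 3 for x in solutions))  # -> True
```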
22. ikshjbdckoj: 23. A new book is published and the table shows the number of people who showed up for a book signing in the first, second, third, and fourth month of its release. Which graph could represent the data shown in the table?
23.–24. ikshjbdckoj: mathsmind, look at that picture I sent you for number 23; it has the graph for this question. And here are the answers to number 23.
25. mathsmind: wait, I have a call
26.–30. ikshjbdckoj: ok. [attachments with the answer choices]
31. ikshjbdckoj: I think I got this. I picked the 2nd graph.
32. ikshjbdckoj: Hey mathsmind, when you get back on we have to go really fast through this test, because I only have a few hours to complete the rest of it, and I have to get it done!!
33. ikshjbdckoj: 24. The table shows the relationship between two variables. Is the relationship a linear function?
34. mathsmind: back, sorry, that was one of my students
35.–36. ikshjbdckoj: [drawing of the table] yes or no?
37. mathsmind: it's the 2nd graph
38. ikshjbdckoj: that's what I picked. Do you know whether that table I showed you is linear or not?
39.–43. (they sort out that this is question 24, not 25)
44. mathsmind: yes it is
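The actual table from question 24 is only available as a drawing, so the data below are hypothetical stand-ins; the test itself is general: a table is linear exactly when equal x-steps always produce equal y-steps (constant slope).

```python
def is_linear(points):
    # Constant slope between consecutive points <=> the table is linear.
    pts = sorted(points)
    slopes = [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
    return all(abs(s - slopes[0]) < 1e-9 for s in slopes)

print(is_linear([(0, 1), (1, 3), (2, 5), (3, 7)]))  # -> True (slope 2 everywhere)
print(is_linear([(1, 5), (2, 25), (3, 125)]))       # -> False (exponential growth)
```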
45. ikshjbdckoj: 25. The ordered pairs (1, 5), (2, 25), (3, 125), (4, 625), and (5, 3125) represent a function. What is a rule that represents this function? (1 point) y = x^5, y = 5^x, y = 5x, y = x + 5
46.–47. mathsmind: $y=x^5$ ... do you mean that? that is not the answer
48. mathsmind: $y=5^x$
50. ikshjbdckoj: [drawing] yes, I have these as my answers.
51.–52. mathsmind: y=5^x; it's the first one
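Question 25 can be settled by testing each candidate rule against every ordered pair:

```python
pairs = [(1, 5), (2, 25), (3, 125), (4, 625), (5, 3125)]

# Check each candidate rule against all the pairs.
print(all(5 ** x == y for x, y in pairs))  # -> True:  y = 5^x fits every pair
print(all(x ** 5 == y for x, y in pairs))  # -> False: already 1**5 = 1, not 5
print(all(5 * x == y for x, y in pairs))   # -> False: 5*2 = 10, not 25
```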
53. ikshjbdckoj: 26. Suppose noodle soup is on sale for $0.75 a can and you have a coupon for $0.50 off your total purchase. Write a function rule for the cost of n cans of soup. (1 point) C(n) = 0.75n – 0.5, C(n) = 0.5n – 0.75, C(n) = (0.75 + n) – 0.5, C(n) = (0.5 + n) – 0.75
54. mathsmind: when x=1, 5^1=5 -> (1,5)... (still explaining question 25)
55.–56. ikshjbdckoj: huh?? / mathsmind: first one
57. ikshjbdckoj: 28. Tell whether the sequence is arithmetic. If it is, what is the common difference? –12, –7, –2, 3, . . . (1 point) yes; 5 / yes; 8 / yes; 3 / no
58. mathsmind: 0.75n-0.5
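The rule for question 26 is easy to sanity-check: each can costs $0.75, and the $0.50 coupon is subtracted once from the total.

```python
def cost(n):
    # Question 26: n cans at $0.75 each, with one $0.50 coupon off the total.
    return 0.75 * n - 0.5

print(cost(1))  # -> 0.25
print(cost(4))  # -> 2.5 (four cans are $3.00, minus the $0.50 coupon)
```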
59. ikshjbdckoj: (repeats question 28)
60. mathsmind: 5
61.–62. mathsmind: look, you have to know how to work this out: -7-(-12)=5
63. mathsmind: how old are you? which year?
64. ikshjbdckoj: I understand how to work out that equation, I just have a hard time figuring out how to put it into an equation. And I'm 15, 9th grade.
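The arithmetic-sequence check from question 28 in code form:

```python
seq = [-12, -7, -2, 3]

# A sequence is arithmetic iff all consecutive differences are equal.
diffs = [b - a for a, b in zip(seq, seq[1:])]
print(diffs)                 # -> [5, 5, 5]
print(len(set(diffs)) == 1)  # -> True: common difference 5
```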
65. ikshjbdckoj: I have to send a graph with this problem. 29. What is the graph of the function rule? y = |2x| + 1 (1 point)
66. mathsmind: no don't send the graph
67. ikshjbdckoj: ok
68. ikshjbdckoj: why dont you want me to send the graph?
69. mathsmind: the graph will be V
70. mathsmind: because i can figure the graphs in my head
71. mathsmind: as u can see this is like y=x and y=-x
72. ikshjbdckoj: yes
73. mathsmind: next
74. ikshjbdckoj: I don't know which graph it is though....There are 2 that are going up.
75. mathsmind: ok send the graph
76. ikshjbdckoj: [1 Attachment]
77. mathsmind: next time u do ur work early not in 1 day ok
78. ikshjbdckoj: [1 Attachment]
79. ikshjbdckoj: Yeah I'm definitley gonna do that. This is very frustrating when I wait at the last minute and I don't know how to do anything :(
80. mathsmind: never get to that habit, u have to get rid of it now and thats an order ok
81. ikshjbdckoj: ok
82. mathsmind: when u add +1 it shifts up by 1 unit in y axes
83. ikshjbdckoj: Okay it's very hard for me to understand stuff through typing and texting so is the answer the first graph? cause I think it is.
84. mathsmind: yup
85. ikshjbdckoj: yayyyyyyyy!
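The "+1 shifts up by 1 unit" rule from post 82 can be checked pointwise for y = |2x| + 1. A quick sketch (not part of the thread):

```python
# y = |2x| + 1 is the V-shape y = |2x| moved up one unit: vertex at (0, 1).
def f(x):
    return abs(2 * x) + 1

print([(x, f(x)) for x in (-2, -1, 0, 1, 2)])
# [(-2, 5), (-1, 3), (0, 1), (1, 3), (2, 5)]
assert min(f(x) for x in range(-10, 11)) == 1   # lowest point is y = 1
```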
86. ikshjbdckoj: 30. For the data in the table, does y vary directly with x? If it does, write an equation for the direct variation. Table: x = 3, 6, 8; y = 6, 18, 24. (1 point) yes; y = 2x / yes; y = 3x / yes; y = 4x / no; y does not vary directly with x. heres the table: [drawing]
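For question 30, direct variation means y = kx for a single constant k, so y/x must be the same in every row. Checking the table (a quick sketch, not part of the thread) gives ratios 2, 3, 3, so y does not vary directly with x here:

```python
# Direct variation y = k*x requires a constant ratio y/x across the table.
table = [(3, 6), (6, 18), (8, 24)]
ratios = [y / x for x, y in table]
print(ratios)                   # [2.0, 3.0, 3.0]
print(len(set(ratios)) == 1)    # False -> not direct variation
```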
http://mathoverflow.net/feeds/question/71524 | How elementary can we go? - MathOverflow (RSS feed snapshot, 2013-05-21)

Question (Asaf Karagila, 2011-07-28):

It is a theorem of A. Levy that if $\kappa$ is an *inaccessible cardinal*, then $V_\kappa\prec_{\Sigma_1} V$, namely $V_\kappa$ is an elementary submodel when considering only $\Sigma_1$ sentences.

One might expect that the "amount" of elementarity will grow quickly as we progress with large cardinal axioms; however, for the next step, $V_\kappa\prec_{\Sigma_2}V$, we need to get much higher. To assure this level of elementarity a *supercompact* is enough (is it too strong? judging by the stage at which this theorem appears in Jech's and Kanamori's textbooks, if it is too strong then it is not strong by much).

To have $\Sigma_3$ we need to go even further, to *extendible* cardinals (again, this might be too strong; I am not too familiar with this notion yet).

- Is there a known large cardinal notion which gives $\Sigma_4$ elementarity of $V_\kappa$? What about larger $n$?
- I would expect complete elementarity to fail due to some argument in the spirit of Kunen's inconsistency theorem; is this true?
- Are there results in the reverse direction? Namely, if $\kappa$ is such that $V_\kappa\prec_{\Sigma_k}V$, does $\kappa$ have to be inaccessible/supercompact/extendible/etc.?

We use all sorts of set-theoretic notions to measure how far $V$ is from an inner model (forcing axioms, large cardinals, how the cardinals behave in the inner model compared to $V$, sharps and covering theorems, etc.). Assuming the answer to the first question is not "It is inconsistent.", is there a useful way to use this approach to measure the difference between $V$ and its inner models?

Answer (Andreas Blass, 2011-07-28):

The second and third of the bulleted questions are answered by an old theorem of Montague and Vaught. Suppose $\mu$ is the first inaccessible cardinal. Then there is $\kappa<\mu$ such that $V_\kappa\prec V_\mu$. Thus, from the point of view of $V_\mu$, there is an elementary submodel of the universe of the form $V_\kappa$, even though there is no inaccessible cardinal.

Answer (Ali Enayat, 2011-07-28):

The following result answers the **third bullet-item question** in the negative.

**Proposition.** Suppose $(M,\in)$ is a transitive model of $ZF$ of uncountable cofinality. Then there is some ordinal $\alpha$ in $M$ of countable cofinality such that $(V_{\alpha})^M$ is a full elementary submodel of $M$.

**Proof:** Use the reflection theorem to produce an increasing sequence $\alpha_k$ for each $k \in \omega$ such that $(V_{\alpha_k})^M$ is a $\Sigma_k$-elementary submodel of $M$. The desired $\alpha$ is the union of the $\alpha_k$'s. **QED**

So it is quite possible to have $\kappa$ such that $V_\kappa$ is a full elementary submodel of $V$, without $\kappa$ being even regular, let alone inaccessible.

Answer (Joel David Hamkins, 2011-07-28):

The hypothesis that $V_\kappa$ is $\Sigma_k$ elementary or even fully elementary in $V$ is much weaker than you say.

One can see part of this quite easily by observing that for any inaccessible cardinal $\delta$ we have $V_\delta\models\text{ZFC}$, and there is a club of ordinals $\alpha$ with $V_\alpha\prec V_\delta$. In particular, if $\delta$ is Mahlo, then there is a stationary set of inaccessible cardinals $\kappa$ with $V_\kappa$ fully elementary in $V_\delta$.

In particular, if we lived inside $V_\delta$, we would believe that there is a stationary proper class of inaccessible cardinals $\kappa$ with $V_\kappa$ as fully elementary in the universe as desired.

It turns out that although we can express $V_\kappa\prec_{\Sigma_k} V$ as a first-order assertion of $\kappa$ and $k$, it is not possible to express full elementarity $V_\kappa\prec V$ as a single first-order assertion of set theory. Instead, we may use a scheme.

Thus, we introduce $\kappa$ as a constant symbol, and consider the scheme, denoted "$V_\kappa\prec V$", asserting of every formula $\varphi$ that $$\forall x\in V_\kappa\ (\varphi(x) \iff V_\kappa\models\varphi[x]\ ).$$ If we add the assumption that $\kappa$ is inaccessible, then this is known as the Levy scheme.

**Theorem.** The following are equiconsistent over ZFC.

- The Levy scheme. That is, the scheme "$V_\kappa\prec V$" plus "$\kappa$ is inaccessible."
- "ORD is Mahlo". That is, the scheme asserting of every definable (with parameters) proper class club that it contains an inaccessible cardinal.

Proof. The first implies that $V_\kappa$ satisfies ORD is Mahlo, since $\kappa$ will be a limit point and hence an element of any such club as defined in $V$ using parameters below $\kappa$. If the second is consistent, then so is the first, by a compactness argument using the reflection theorem. QED

Meanwhile, if you drop the inaccessibility requirement, then you can attain the following, which many set theorists find surprising.

**Theorem.** The scheme "$V_\kappa\prec V$" is equiconsistent merely with ZFC.

Proof. If ZFC is consistent, then so is every finite fragment of the scheme $V_\kappa\prec V$, by the reflection theorem. QED

One can even attain a proper class club $C\subset\text{ORD}$ of cardinals, with each $\kappa\in C$ satisfying the scheme $V_\kappa\prec V$, without going beyond ZFC in consistency strength.

Both versions of the axiom $V_\kappa\prec V$ were important in my paper on the maximality principle (http://de.arxiv.org/abs/math.LO/0009240), the principle asserting that any statement that is forceable in such a way that it remains true in all further extensions is already true. It turned out that one can force the maximality principle only from a model of $V_\kappa\prec V$ (and you need $\kappa$ inaccessible for the boldface maximality principle).
http://mathhelpforum.com/discrete-math/191797-equivalence-classes-print.html | # Equivalence classes
• November 13th 2011, 10:34 AM
Lowoctave
Equivalence classes
Hello there,
I have troubles with finding equivalence classes for my homework.
Question goes like this:
Let A={1,2,3,4} and B = {2,4}. P(A) is power set. Relation R defined on P(A) by
X R Y, X, Y elements of P(A) if X ∩ B = Y ∩ B. Now, i have to show that R is equivalence relation on P(A), which i did. and i have to find equivalence classes for R.
Now, im stuck here.. im not too sure how start it. I was wondering if you could guide me please.
EDIT: SOLVED.
• November 13th 2011, 10:59 AM
FernandoRevilla
Re: Equivalence classes
Perhaps the following outline can help you. Given $X\in P(A)$ we have four cases:
$(i)\;X\cap B=\emptyset\quad (ii)\;X\cap B=\{2\}\quad (iii)\;X\cap B=\{4\}\quad (iv)\;X\cap B=B\quad$
So , $[\;\emptyset\;]=\{\;\emptyset,\{1\},\{3\},\{1,3\}\;\}$ , etc
• November 13th 2011, 11:57 AM
Lowoctave
Re: Equivalence classes
I have only one question sir,
how did you find [∅] = {∅, {1}, {3}, {1,3}}? I have troubles understanding that..
• November 13th 2011, 12:10 PM
FernandoRevilla
Re: Equivalence classes
Quote:
Originally Posted by Lowoctave
how did you find [∅] = {∅, {1}, {3}, {1,3}}? I have troubles understanding that..
We have found all elements $Y\in P(A)$ such that $\emptyset\cap B=Y\cap B$ that is , all elements $Y\in P(A)$ such that $\emptyset\; R\; Y$ .
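FernandoRevilla's four cases can also be enumerated mechanically: group every subset of A by its intersection with B. A quick sketch (not part of the thread) that reproduces the four classes:

```python
from itertools import combinations

A = [1, 2, 3, 4]
B = {2, 4}

# The 16 subsets of A (its power set).
power_set = [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

# X R Y iff X ∩ B = Y ∩ B, so each class is indexed by the value of X ∩ B.
classes = {}
for X in power_set:
    classes.setdefault(frozenset(X & B), []).append(X)

for key in sorted(classes, key=sorted):
    print(sorted(key), "->", [sorted(m) for m in classes[key]])
# Four classes of four subsets each, e.g. [] -> [[], [1], [3], [1, 3]]
```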
• November 13th 2011, 01:19 PM
Lowoctave
Re: Equivalence classes
Alrighty! makes sense! thank you very much sir:) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9220782518386841, "perplexity": 1810.3000102375734}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678674071/warc/CC-MAIN-20140313024434-00065-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://stacks.math.columbia.edu/tag/02ZI | Definition 8.5.1. A stack in groupoids over a site $\mathcal{C}$ is a category $p : \mathcal{S} \to \mathcal{C}$ over $\mathcal{C}$ such that
1. $p : \mathcal{S} \to \mathcal{C}$ is fibred in groupoids over $\mathcal{C}$ (see Categories, Definition 4.35.1),
2. for all $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$, for all $x, y\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{S}_ U)$ the presheaf $\mathit{Isom}(x, y)$ is a sheaf on the site $\mathcal{C}/U$, and
3. for all coverings $\mathcal{U} = \{ U_ i \to U\}$ in $\mathcal{C}$, all descent data $(x_ i, \phi _{ij})$ for $\mathcal{U}$ are effective.
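For orientation, a reminder of the convention used elsewhere in the Stacks project (this remark is not part of the definition quoted above): a descent datum $(x_i, \phi_{ij})$ relative to the covering $\mathcal{U} = \{U_i \to U\}$ consists of objects $x_i \in \mathop{\mathrm{Ob}}\nolimits(\mathcal{S}_{U_i})$ together with isomorphisms $\phi_{ij} : \text{pr}_0^*x_i \to \text{pr}_1^*x_j$ in $\mathcal{S}_{U_i \times_U U_j}$ satisfying the cocycle condition

$$\text{pr}_{02}^*\phi_{ik} = \text{pr}_{12}^*\phi_{jk} \circ \text{pr}_{01}^*\phi_{ij}$$

over $U_i \times_U U_j \times_U U_k$. Effectivity in (3) then means that every such datum is isomorphic to the canonical descent datum attached to some object $x \in \mathop{\mathrm{Ob}}\nolimits(\mathcal{S}_U)$.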
http://eeer.org/journal/view.php?number=1019 | Environ Eng Res > Volume 25(1); 2020 > Article
Sandhibigraha, Chakraborty, Bandyopadhyay, and Bhunia: A kinetic study of 4-chlorophenol biodegradation by the novel isolated Bacillus subtilis in batch shake flask
### Abstract
In this work, a 4-chlorophenol (4-CP)-degrading bacterial strain, Bacillus subtilis (B. subtilis) MF447840.1, was isolated from the drain outside the Hyundai car service center, Agartala, Tripura, India. The 16S rDNA technique was used for genomic identification of the bacterial species, and the isolated strain was phylogenetically related to B. subtilis. This strain was capable of breaking down both phenol and 4-CP at a concentration of 1,000 mg/L. The isolated strain could also metabolize five other aromatic molecules, namely 2-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol, 4-nitrophenol, and pentachlorophenol, for its growth. An extensive investigation was performed to characterize the kinetics of cell growth and 4-CP degradation in a batch study utilizing 4-CP as the substrate. Various unstructured models were applied to evaluate the intrinsic kinetic parameters. Levenspiel's model demonstrated the highest R2 value (0.997) among the analyzed models. The values of the specific growth rate (μ), saturation constant (KS), and yield coefficient (YX/S) were 0.11 h−1, 39.88 mg/L, and 0.53 g/g, respectively. The isolated strain degraded 1,000 mg/L of 4-CP within 40 h. Therefore, B. subtilis MF447840.1 is considered a potential candidate for 4-CP degradation.
### 1. Introduction
Rapid industrial advancement, population growth, and socio-economic development have enormously increased the anthropogenic impact on natural ecosystems, which is mainly responsible for environmental pollution [1]. 4-chlorophenol (4-CP) is listed as a priority contaminant by the US Environmental Protection Agency owing to its carcinogenic and recalcitrant character [2]. 4-CP is widely used in various industries, including the pulp and paper, leather tanning, biocide, dye, and herbicide industries, as well as in the chlorination of intake water and waste water [3]. The effluent concentration of 4-CP in several manufacturing units varies within 100 to 1,000 mg/L [4]. 4-CP pollution severely affects the aquatic environment and may cause health problems such as diarrhea, amebiasis, dermal damage, respiratory problems, and several types of cancer [5]. Consequently, preventing the contamination of water resources and protecting public health through the conservation of water resources and the containment of disease are essential objectives of wastewater treatment [6, 7].
The toxicity of 4-CP underlines the importance of its removal from industrial effluents. Several techniques have been investigated for the treatment of phenolic wastewater, such as adsorption, chemical oxidation, photo-degradation, solvent extraction, and advanced oxidation [8, 9]. However, the accumulation of toxic intermediate products and the low elimination efficiency of 4-CP are outstanding bottlenecks of the existing traditional treatment strategies. In contrast, biological treatment is advantageous over conventional treatment processes [10], owing to its high removal efficiency and the absence of toxic intermediate formation [11]. Therefore, inexpensive, eco-friendly biodegradation processes are desirable [12]. The formation of chloride ions (Cl−) during chlorophenol biodegradation aggravates toxicity to bacterial cells and increases the resistance of the aromatic ring toward cleavage [13].
Several species of Bacillus and Pseudomonas have been reported to biodegrade 4-CP; however, most species can break down 4-CP only at low concentrations. Therefore, the screening of potential microbial strain(s) is an essential task for the efficient biodegradation of 4-CP. As the degradation potential of a biocatalyst may be assessed based on its kinetic parameters, knowledge of growth and biodegradation kinetics is essential for forecasting the effluent quality and optimizing the reactor-operating criteria to meet discharge standards [14]. Most research studies have focused on the isolation, screening, and characterization of microorganisms for the biodegradation of 4-CP. A few studies have been performed using isolated bacterial strains for biodegradation of 1,000 mg/L of 4-CP; however, very few studies have been directed at understanding the kinetics of the 4-CP biodegradation process. To the best of our knowledge, the bacterial strain used in this study is the first bacterial species able to metabolize both 4-CP and phenol at a concentration of 1,000 mg/L, as well as five other aromatic compounds. This metabolic versatility makes the isolated strain an effective microorganism for the bioremediation of industrial effluents contaminated with diverse types of phenolic compounds. Hence, the potential significance of these parameters has been highlighted in this work through experiments with the isolated bacterial strain.
In this study, a bacterial strain was isolated from drainage water from a Hyundai car service center, Agartala, Tripura, India, and subjected to genetic characterization. The strain was capable of growing on 4-CP and utilizing this substrate at a concentration as high as 1,000 mg/L. The bacterial growth and biodegradation kinetics of 4-CP were assessed through various kinetic parameters using different mathematical models.
### 2. Materials and Methods

### 2.1. Materials
4-CP (Sigma, USA), sodium chloride (Himedia, India), potassium nitrate (Himedia, India), nutrient broth (Himedia, India), magnesium sulfate (Sigma, USA), and agar powder (Himedia, India) were used in the current investigation. All reagents used were of analytical grade and commercially available in India. GraphPad Prism 5 software was applied for nonlinear regression analysis to evaluate the model parameters used for the kinetic analysis.
### 2.2. Culture Medium for Isolation Study
The isolation study was carried out using an inorganic medium supplemented with a trace element solution. The medium was composed of 0.5 g/L of ammonium nitrate (NH4NO3), 0.2 g/L of magnesium sulfate (MgSO4·7H2O), 0.5 g/L of dipotassium phosphate (K2HPO4), 0.5 g/L of monopotassium phosphate (KH2PO4), and 0.02 g/L of calcium chloride (CaCl2·2H2O). The trace element solution was supplemented to the inorganic medium at 10 mL/L and contained 0.3 g/L of ferrous sulfate (FeSO4·7H2O), 0.05 g/L of manganese sulfate (MnSO4·H2O), 0.1 g/L of cobalt chloride (CoCl2·6H2O), 0.034 g/L of sodium molybdate (Na2MoO4·2H2O), 0.04 g/L of zinc sulfate (ZnSO4), and 0.05 g/L of copper sulfate (CuSO4·5H2O) [15]. 4-CP was used at the required concentration as the sole source of carbon and energy. The bacterial strain was routinely transferred at ten-day intervals to inorganic medium supplemented with 4-CP as the sole source of energy. A pH of 7.4 was initially maintained during the preparation of the medium, and incubation was performed at 37°C for 48 h. The working volume of the medium was 50 mL for all experiments conducted in 250-mL Erlenmeyer flasks.
### 2.3. Isolation and Screening of 4-CP-degrading Strain
The bacterial strain used for 4-CP degradation was isolated from the drain outside the Hyundai denting and painting car service center, Agartala, Tripura, India. Ten milliliters of the sample was added to 50 mL of inorganic medium supplemented with 200 mg/L of 4-CP and kept at 37°C for 48 h in an incubator shaker at 120 rpm. At each step, 5 mL of the previously incubated culture was transferred to fresh inorganic medium under the same growth conditions, except that the 4-CP concentration was raised stepwise up to 1,000 mg/L in increments of 100 mg/L. At each transfer, the absorbance of the culture medium (OD600) was monitored at 600 nm from time to time. Acclimatization at each concentration of 4-CP was carried out three times. After acclimatization, the culture was incubated on solid inorganic medium containing 1,000 mg/L of 4-CP. The streak plate method was applied to obtain purified colonies. The purified colonies were further screened for the best 4-CP degradation capability and used for subsequent study.
### 2.4. 16S rDNA Gene Sequencing, Phylogenetic Analysis, and Thermodynamic Properties Analysis
Genomic analysis of the isolated strain was performed using the 16S rDNA technique. DNA was extracted from the isolated species, and its quality was assessed on a 1.2% agarose gel. The isolated DNA was amplified with a 16S rRNA-specific primer using a thermal cycler. The PCR amplicon was enzymatically purified and further subjected to Sanger sequencing. The bi-directional DNA sequencing reaction of the PCR amplicon was carried out with 8F and 1492R primers using the BDT v3.1 cycle sequencing kit on an ABI 3730xl Genetic Analyzer. The consensus sequence of the 16S rDNA was deposited in the NCBI GenBank catalog, and a similarity search was performed via the online BLAST platform (www.ncbi.nlm.nih.gov). Based on the highest identity values, the first fifteen sequences were chosen and aligned using ClustalW, a multiple-alignment software package. The distance matrix was prepared via the Ribosomal Database, and a maximum-likelihood phylogenetic tree was created. MEGA 5 was used to determine the evolutionary distances and bootstrap values by the Jukes-Cantor model of the neighbor-joining technique [16]. The thermodynamic characteristics of the bacterial strain were computed with an online package (https://cail.cn/biotool/oligo/index.html).
### 2.5. Substrate Utilization Study

The metabolic versatility of the strain was estimated by inoculating it into inorganic medium supplemented with various phenolic compounds, namely phenol, 2-chlorophenol, 4-CP, 2,4-dichlorophenol, 2,4,6-trichlorophenol, 4-nitrophenol, and pentachlorophenol. The phenolic compounds were supplemented into the inorganic medium at concentrations of 100–1,000 mg/L. Ten milliliters of cells (OD600 ≈ 0.1) previously acclimatized at 1,000 mg/L of 4-CP was added to each flask and kept at a temperature of 37°C for 48 h in an incubator at 120 rpm. The residual concentrations of the phenolic compounds were estimated after 48 h of incubation by the spectrophotometric method [11, 17].
### 2.6. Batch Kinetics of 4-CP Degradation
To evaluate the kinetic parameters, a batch shake-flask experiment was performed by adding ten milliliters of cells (OD600 ≈ 0.1) previously acclimatized at 1,000 mg/L of 4-CP into 50 mL of optimized inorganic medium containing the trace element solution, and the flasks were kept at 37°C for 48 h in an incubator at 150 rpm. The composition of the medium used for the biodegradation study was as follows: 0.75 g/L of ammonium nitrate (NH4NO3), 0.25 g/L of magnesium sulfate (MgSO4·7H2O), 0.75 g/L of dipotassium phosphate (K2HPO4), 0.25 g/L of monopotassium phosphate (KH2PO4), and 0.02 g/L of calcium chloride (CaCl2·2H2O). The trace element solution was supplemented as for the seed culture. 4-CP at a concentration of 1 g/L was used in the medium as the sole source of carbon and energy. Sampling was performed at 4-h intervals for measurement of residual 4-CP and cell concentration. It was presumed that aeration was sufficient to provide the required oxygen levels and did not limit growth. It was also assumed that the growth of the isolated bacterium depends on the concentration of 4-CP in the medium at the specified preliminary pH, temperature, and rate of aeration [18].
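The batch data from this section are later fitted with unstructured growth models; the paper reports μ = 0.11 h−1 and KS = 39.88 mg/L. As an illustrative sketch only (the "observations" below are synthetic points generated from those reported constants, not the paper's measurements), a Monod-type fit can be reproduced with a plain least-squares grid search:

```python
# Illustrative sketch: fit the Monod model mu(S) = mu_max * S / (Ks + S)
# to specific-growth-rate data by a coarse least-squares grid search.
# The "observations" are synthetic, generated from the reported constants
# mu_max = 0.11 1/h and Ks = 39.88 mg/L; they are NOT measured data.

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

S_data = [25, 50, 100, 200, 400, 800, 1000]         # 4-CP, mg/L
mu_data = [monod(S, 0.11, 39.88) for S in S_data]   # synthetic mu, 1/h

def sse(mu_max, Ks):
    return sum((monod(S, mu_max, Ks) - mu) ** 2
               for S, mu in zip(S_data, mu_data))

best = min((sse(m / 1000.0, k / 100.0), m / 1000.0, k / 100.0)
           for m in range(50, 201)           # mu_max grid: 0.050 .. 0.200 1/h
           for k in range(1000, 8001, 4))    # Ks grid: 10.00 .. 80.00 mg/L
_, mu_fit, Ks_fit = best
print(mu_fit, Ks_fit)   # recovers 0.11 and 39.88
```

The Levenspiel form used in the paper multiplies the Monod core by an inhibition factor, commonly written as (1 − S/Sm)^n; the same grid-search idea extends to it by adding Sm and n to the grid.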
### 2.7. Analytical Procedures
A UV-visible spectrophotometer was used for measurement of cell concentration by taking the absorbance at a wavelength of 600 nm. The optical densities (OD600) of the broth culture were converted to dry cell weight by means of a calibration curve of dry cell weight versus optical density (OD600). The broth was centrifuged at 10,000 g for 10 min to separate the cell biomass, and the sediment was washed and re-suspended [19]. It was further filtered through a pre-washed 0.45-μm filter paper and dried at 105°C until a steady mass was reached. For measurement of residual 4-CP, the broth was centrifuged for 10 min at 10,000 g, and a 0.22-μm filter was used for filtration of the supernatant. The absorbance of the filtrate was measured at a wavelength of 298 nm using the UV-visible spectrophotometer for assessment of the remaining 4-CP concentration [11, 20].
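The OD600-to-dry-cell-weight conversion described above amounts to a linear calibration. A minimal sketch (the calibration pairs below are made-up placeholders, not the paper's data):

```python
# Linear calibration DCW = a * OD + b by ordinary least squares.
# The pairs below are illustrative placeholders, not measured values.
od  = [0.1, 0.2, 0.4, 0.6, 0.8, 1.0]        # optical density at 600 nm
dcw = [0.04, 0.08, 0.16, 0.24, 0.32, 0.40]  # assumed dry cell weight, g/L

n = len(od)
mean_x = sum(od) / n
mean_y = sum(dcw) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(od, dcw)) / \
    sum((x - mean_x) ** 2 for x in od)
b = mean_y - a * mean_x

print(a, b)   # slope and intercept of the calibration line
```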
### 3. Results and Discussion

### 3.1. Isolation and Characterization of 4-CP-degrading Strain
A 4-CP-degrading microbial strain was isolated from the wastewater effluent of a Hyundai car service facility, Agartala, India. As the organic matter present in an effluent influences the concentration and variety of microorganisms in the waste water, organic-matter-rich waste water was chosen in the present study. The waste water effluent from car servicing was also expected to be rich in phenolic compounds; therefore, the isolation of 4-CP-degrading microorganisms was carried out from this effluent. Twenty-five effluent samples were collected from fifteen different places for the isolation of 4-CP-degrading bacteria. Ten milliliters of waste water was mixed with 50 mL of inorganic medium supplemented with the trace element solution. Briefly, 200 mg/L of 4-CP was added to the medium, and the concentration of 4-CP was gradually increased. After two days of inoculation, twenty-five diverse bacterial isolates were separated from the 25 waste water samples gathered from the 15 sites. Standard techniques were applied to obtain pure cultures [21, 22]. It was evident that only six bacterial strains could survive in the presence of 500 mg/L of 4-CP in the medium. Moreover, two bacterial strains, namely ChE_BE_1 and ChE_BE_2, showed significant growth at a 4-CP concentration of 1,000 mg/L.
The strain ChE_BE_1 was chosen for the present study. This bacterial strain was maintained by sub-culturing on Petri plates containing inorganic medium supplemented with 1,000 mg/L of 4-CP and 1.5% agar. The stock culture was preserved at 4°C on a slant containing the same medium. The ChE_BE_1 strain was initially identified as a gram-positive bacterium through biochemical characterization. To the best of our knowledge, ChE_BE_1 is a potential isolate capable of metabolizing 1,000 mg/L of 4-CP in the medium and may serve as a candidate for the management of industrial wastewater contaminated with a high concentration of 4-CP.
### 3.2. 16S rDNA Gene Sequencing, Phylogenetic Analysis, and Thermodynamic Properties Analysis
The 16S rDNA technique was employed for genomic identification of the ChE_BE_1 strain isolated from the wastewater effluent. Genomic DNA of this strain was extracted and amplified by PCR with a 16S rRNA-specific primer. A single discrete PCR amplicon band of 1,500 bp was observed when resolved on a 1.2% agarose gel (Fig. 1). The PCR amplicon was purified to remove contaminants. Forward and reverse DNA sequencing reactions of the PCR amplicon were carried out with DF and DR primers using the BDT v3.1 cycle sequencing kit on an ABI 3730xl Genetic Analyzer. The consensus sequence of the 1,470-bp 16S rDNA was generated from the forward and reverse sequence data using aligner software (Table S1). The consensus sequence of the 16S rDNA gene was submitted to the NCBI GenBank record (www.ncbi.nlm.nih.gov) under accession number MF447840.1 [16].
The 1,470-bp consensus 16S rDNA sequence was used for a BLAST search against the NCBI GenBank database. Information on closely related homologs is given in the alignment table (Table 1). Strain Che_BE_1 was phylogenetically most closely related to Bacillus subtilis (B. subtilis) strain ZAP018-1 (GenBank number: KU587801.1), B. subtilis strain F3-3 (GenBank number: KT735214.1), B. subtilis strain PCD3 (GenBank number: KY910140.1), B. subtilis strain FL39 (GenBank number: KY818957.1), B. subtilis strain IP18 (GenBank number: KY621529.1), B. subtilis strain JL02 (GenBank number: KY400283.1), B. subtilis strain JL12 (GenBank number: KY400272.1), Bacillus tequilensis strain 7PJ-7 (GenBank number: KR708845.1), B. subtilis (GenBank number: KX058074.1), B. subtilis strain D12-5 (GenBank number: KT955740.1), B. subtilis strain VASB19/TS (GenBank number: KT427429.1), B. subtilis strain 6R3-15 (GenBank number: LC155964.1), Bacterium strain CDSHGTR2A-21 (GenBank number: KU743239.1), and Bacterium strain CDSHGTGPM-16 (GenBank number: KU743237.1), with 98% sequence identity in each case. Hence, the isolated strain was identified as B. subtilis. The phylogenetic tree constructed using the MEGA 5 tool is shown in Fig. 2. The evolutionary history was inferred using the neighbor-joining method [23]. The bootstrap consensus tree inferred from 1,000 replicates [24] was taken to represent the evolutionary history of the taxa analyzed [23]. Evolutionary distances were computed using the Jukes-Cantor method [25] and are expressed in units of the number of base substitutions per site. Rate variation among sites was modeled with a gamma distribution (shape parameter = 1) [16]. The thermodynamic properties were obtained from an online tool (https://cail.cn/biotool/oligo/index.html). The results showed that B. subtilis MF447840.1 has a GC content of 55%.
The Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS) were found to be −2,464.4 kcal mol−1, −13,040.1 kcal mol−1, and 34,081.6 cal mol−1 K−1, respectively.
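The GC content reported above can be reproduced directly from a sequence string; the minimal sketch below uses a placeholder fragment, not the actual 1,470-bp consensus deposited under accession MF447840.1:

```python
def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

# Placeholder fragment only; the real input would be the consensus
# sequence from Table S1 (GenBank accession MF447840.1).
fragment = "AGAGTTTGATCCTGGCTCAG"
print(f"GC content: {gc_content(fragment):.1f}%")  # 50.0% for this fragment
```

The same function applied to the full consensus sequence would yield the 55% figure quoted from the online tool.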
### 3.3. Metabolic Versatility of B. subtilis MF447840.1

The metabolic versatility of B. subtilis MF447840.1 in the presence of various phenolic compounds is illustrated in Fig. 3. The results showed that the isolated strain was able to degrade 94.79% of phenol, 84.96% of 2-chlorophenol, 90.90% of 4-CP, 92.82% of 2,4-dichlorophenol, 75.53% of 2,4,6-trichlorophenol, 89.52% of 4-nitrophenol, and 90.02% of pentachlorophenol after 48 h of incubation at 37°C. Fig. 3 indicates that pentachlorophenol was the most toxic compound for the isolated strain, followed by 2,4,6-trichlorophenol, 4-nitrophenol, 2,4-dichlorophenol, 2-chlorophenol, 4-CP, and phenol. Since the ability to degrade various phenolic compounds and utilize them for growth is an intrinsic property of a microorganism, different degradation levels were observed after 48 h of incubation in inorganic media. Bacillus species are well known for the degradation of phenolic compounds; however, in most cases degradation has been reported only at lower concentrations [26, 27]. Several other bacterial species, such as Pseudomonas sp. [28], Alcaligenes sp. [29, 30], and Rhodococcus sp. [31], can also assimilate a wide variety of phenolic compounds, but their versatility across different aromatic compounds is limited. Researchers are therefore looking for new microbial species that can degrade a wide variety of aromatic pollutants at high concentrations. To the best of our knowledge, B. subtilis MF447840.1 is the first bacterial species reported to metabolize both 4-CP and phenol, as well as five other aromatic compounds, at high concentrations within 48 h of incubation. This metabolic versatility makes B. subtilis MF447840.1 an effective microorganism for the bioremediation of industrial effluents contaminated with diverse phenolic compounds.
### 3.4. Kinetics of Biodegradation of 4-CP
Biodegradation of 4-CP by B. subtilis MF447840.1 was conducted in batch cultivation with the optimized inorganic medium supplemented with 1,000 mg/L of 4-CP. The experimental biomass data were plotted against time, together with the residual substrate (4-CP) concentration (Fig. 4). The specific growth rate (μ) of the isolated bacterium at different 4-CP concentrations was calculated during the exponential phase using Eq. (1) [32].
##### (1)
$\mu = \frac{1}{X}\frac{dX}{dt}$
Where X is the cell concentration (g/L) at time t (h), and μ is the specific growth rate (h−1) [32]. The Monod model is usually applied to describe the relationship between the specific growth rate (μ) and the concentration of the limiting substrate, as in Eq. (2). Generally, Monod or Haldane models have been used to characterize the biodegradation kinetics of phenolic compounds; however, in some cases these equations are not adequate to represent the biodegradation process under transient conditions [33].
##### (2)
$\mu = \frac{\mu_{max}\, S}{S + K_S}$
Where μmax is the maximum specific growth rate (h−1), KS is the saturation constant (g/L), and S is the substrate concentration (g/L). In this study, GraphPad Prism 5 was used to fit the nonlinear equations by non-linear regression analysis. The kinetic parameters μ, μmax, and KS were computed by fitting the experimental data. A plot of μ versus substrate concentration indicated substrate inhibition over the concentration range investigated (Fig. 5). Consequently, an effort was made to fit the experimental data to the available kinetic models for substrate inhibition, which were investigated and compared in the present work (Table 2) [34–37]. The specific growth rate (μ) was plotted against substrate concentration in Fig. 5 to determine the kinetic parameters. By trial and error, the experimental data were found to fit reasonably well to Han and Levenspiel's model [36] for cell growth and 4-CP utilization. The model assumes that substrate inhibition has a power-law form and incorporates a critical substrate concentration above which growth ceases [36, 38].
##### (3)
$\mu = \frac{\mu_{max}\, S \left[1-\frac{S}{S_m}\right]^{n}}{S + K_S \left[1-\frac{S}{S_m}\right]^{m}}$
In Han and Levenspiel's model, Eq. (3), μ and μmax are the specific growth rate (h−1) and the maximum specific growth rate (h−1), respectively. Sm is the critical inhibitor concentration (mg/L) above which bacterial growth stops, and m and n are empirical constants. The fitted values of μmax and KS were 0.11 h−1 and 39.88 mg/L, respectively. The critical inhibitor concentration for B. subtilis MF447840.1 was found to be 1 g/L. The values of m and n were 1.57 and 1.046, respectively, with a correlation coefficient (R2) of 0.9993. The high correlation coefficient indicates that Han and Levenspiel's model is more suitable than the other substrate-inhibition models tested in the present study. The yield coefficient Yx/s was calculated by plotting ΔX versus ΔS at various time intervals; the slope of the resulting straight line gives Yx/s. From the experimental data, Yx/s was measured as 0.53 g of biomass/g of 4-CP. As 4-CP is environmentally toxic, the Yx/s value obtained in this study was relatively low. Generally, Yx/s values for bacterial cultures lie in the range of 0.6 to 0.72 g/g [39], and a yield of 0.65 g/g has been reported for Pseudomonas putida with phenol as the sole carbon source [40]. As Yx/s is an intrinsic parameter of a microorganism, its value varies between organisms; the yield also depends on the type and concentration of substrate and on environmental parameters [41]. It has been reported that 4-CP is degraded by an extracellular enzyme secreted by the microorganism. Although the experimental data were collected over 40 h to evaluate the degradation efficiency of B. subtilis, only the log-phase data were used for the calculation of Yx/s. 4-CP is considered the limiting substrate required for growth.
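The substrate inhibition implied by these fitted parameters can be visualized directly. The sketch below evaluates Eq. (3) with the reported values (μmax = 0.11 h−1, KS = 39.88 mg/L, Sm = 1,000 mg/L, m = 1.57, n = 1.046) and locates the concentration at which μ peaks; this is a numerical illustration of the model, not a re-analysis of the experimental data:

```python
def han_levenspiel(S, mu_max=0.11, Ks=39.88, Sm=1000.0, m=1.57, n=1.046):
    """Eq. (3): mu = mu_max*S*(1 - S/Sm)**n / (S + Ks*(1 - S/Sm)**m).
    Defaults are the parameter values fitted in the paper."""
    f = 1.0 - S / Sm
    return mu_max * S * f**n / (S + Ks * f**m)

# Evaluate mu over 1..999 mg/L and find where growth is fastest.
curve = [(S, han_levenspiel(float(S))) for S in range(1, 1000)]
S_peak, mu_peak = max(curve, key=lambda point: point[1])

print(f"mu peaks at {mu_peak:.4f} 1/h around S = {S_peak} mg/L; "
      f"mu(999) = {han_levenspiel(999.0):.2e} 1/h (growth stops near Sm)")
```

The curve rises roughly hyperbolically at low S, peaks well below μmax, and collapses toward zero as S approaches the critical concentration Sm — the substrate-inhibition behavior reported in Fig. 5.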
Furthermore, maintenance was not considered in the calculation of Yx/s, as it is assumed to be low during the log phase [32]. The simulated biomass data were calculated using Eq. (3) and used to obtain the predicted residual amount of 4-CP through Eq. (4).
##### (4)
$-\frac{dS}{dt} = \frac{1}{Y_{X/S}}\frac{dX}{dt}$
Where YX/S is the yield coefficient (g of biomass produced/g of 4-CP), and −dS/dt is the degradation rate (g L−1 h−1). Both the experimental and predicted values of biomass and residual 4-CP are shown in Fig. 4. The experimental data agree reasonably well with the predictions, indicating that Han and Levenspiel's model is appropriate for the present case.
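Between sampling points, the mass balance of Eq. (4) integrates to S(t) = S0 − (X(t) − X0)/YX/S. The sketch below applies this with the reported yield (0.53 g/g) and initial 4-CP (1,000 mg/L) to an assumed biomass time series; the biomass values are illustrative, not the paper's measurements:

```python
# Eq. (4) integrated: residual substrate from biomass growth and yield.
Yxs = 0.53   # g biomass / g 4-CP (yield reported in the paper)
S0 = 1.0     # g/L, i.e., 1,000 mg/L initial 4-CP
X = [0.05, 0.10, 0.20, 0.35, 0.48, 0.58]   # assumed biomass series, g/L

residual = [S0 - (x - X[0]) / Yxs for x in X]
for x, s in zip(X, residual):
    print(f"X = {x:.2f} g/L -> predicted residual 4-CP = {max(s, 0.0) * 1000:.0f} mg/L")
```

With these illustrative numbers the biomass gain over the run is exactly YX/S · S0, so the predicted residual falls from 1,000 mg/L to zero, mirroring the complete degradation shown in Fig. 4.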
##### (5)
$q = -\frac{1}{X}\frac{dS}{dt} = \frac{1}{Y_{X/S}}\frac{dX}{dt}\frac{1}{X} = \frac{\mu}{Y_{X/S}}$
Where q and μ are the specific degradation rate of 4-CP (h−1) and the specific growth rate (h−1), respectively. The simulated values of μ were calculated using Eq. (3) and used to obtain the predicted q through Eq. (5). Both the experimental and predicted values of μ and q are shown in Fig. 5(a) and (b). The predicted data from Han and Levenspiel's model agree reasonably well with the experimental data, confirming that the model is suitable for the present study. Table 3 summarizes the degradation efficiency of 4-CP by various microorganisms. To our knowledge, no microorganism reported to date degrades 4-CP at a concentration of 1,000 mg/L. Jiang et al. [42] investigated the biodegradation of 4-CP by the mutant strain Candida tropicalis (C. tropicalis) CTM 2 under batch conditions, where 400 mg/L of 4-CP was degraded within 59.5 h. Basak et al. [43] used C. tropicalis PHB5 and found that 949.7 mg/L of 4-CP was completely degraded after 60 h in batch culture. In the present study, 1,000 mg/L of 4-CP was completely degraded by B. subtilis MF447840.1 within 40 h. Therefore, B. subtilis MF447840.1 may be considered a potential candidate for the degradation of 4-CP.
### 4. Conclusions
Twenty-five different bacteria were isolated from twenty-five wastewater samples collected from fifteen different sites. A phylogenetic study was performed to identify the most potent 4-CP-degrading strain. Genomic identification showed that the isolated strain was phylogenetically associated with B. subtilis and was assigned GenBank accession number MF447840.1. An in-depth kinetic investigation was performed to assess the effect of various 4-CP concentrations on the growth of B. subtilis. The intrinsic kinetic parameters were computed by non-linear regression analysis with the best-fit unstructured Han and Levenspiel model. The experimental data on the substrate-concentration dependence of the specific growth rate agree well with the simulated data obtained from this model. B. subtilis MF447840.1 is the first bacterial species reported to metabolize both 4-CP and phenol at a concentration of 1,000 mg/L, as well as five other aromatic compounds. This metabolic versatility makes B. subtilis MF447840.1 a potent microorganism for the biodegradation of industrial effluents contaminated with various phenolic substances. The present study may thus open new avenues for research on the biological treatment of industrial effluents containing phenolic compounds, and the isolated microorganism can further be exploited in industrial-scale applications.
### Supplementary Materials
##### Table S1.
Consensus Sequence of Che_BE_1 (1,470 bp)
eer-2018-416-suppl.pdf
### Acknowledgments
This material is based upon work supported by the National Institute of Technology, Agartala, India. The authors acknowledge the National Institute of Technology, Agartala, Ministry of Human Resource Development, Government of India, for fellowship support (0000-0003-4637-991X).
### References
1. O’Connell DW, Birkinshaw C, O’Dwyer TF. Heavy metal adsorbents prepared from the modification of cellulose: A review. Bioresour Technol. 2008;99:6709–6724.
2. Patel BP, Kumar A. Biodegradation of 4-chlorophenol in an airlift inner loop bioreactor with mixed consortium: Effect of HRT, loading rate and biogenic substrate. 3 Biotech. 2016;6:1–9.
3. Wang Q, Li Y, Li J, Wang Y, Wang C, Wang P. Experimental and kinetic study on the cometabolic biodegradation of phenol and 4-chlorophenol by psychrotrophic Pseudomonas putida LY1. Environ Sci Pollut Res. 2015;22:565–573.
4. Cooper V, Nicell J. Removal of phenols from a foundry wastewater using horseradish peroxidase. Water Res. 1996;30:954–964.
5. Igbinosa EO, Odjadjare EE, Chigor VN, et al. Toxicological profile of chlorophenols and their derivatives in the environment: The public health perspective. Sci World J. 2013;2013:1–11.
6. Hu P, Huang J, Ouyang Y, et al. Water management affects arsenic and cadmium accumulation in different rice cultivars. Environ Geochem Health. 2013;35:767–778.
7. Arao T, Kawasaki A, Baba K, Mori S, Matsumoto S. Effects of water management on cadmium and arsenic accumulation and dimethylarsinic acid concentrations in Japanese rice. Environ Sci Technol. 2009;43:9361–9367.
8. Durruty I, Okada E, González JF, Murialdo SE. Multisubstrate monod kinetic model for simultaneous degradation of chlorophenol mixtures. Biotechnol Bioprocess Eng. 2011;16:908–915.
9. Ra JS, Oh S-Y, Lee BC, Kim SD. The effect of suspended particles coated by humic acid on the toxicity of pharmaceuticals, estrogens, and phenolic compounds. Environ Int. 2008;34:184–192.
10. Akinpelu EA, Adetunji AT, Ntwampe SKO, Nchu F, Mekuto L. Performance of Fusarium oxysporum EKT01/02 isolate in cyanide biodegradation system. Environ Eng Res. 2018;23:223–227.
11. Basak B, Bhunia B, Dutta S, Chakraborty S, Dey A. Kinetics of phenol biodegradation at high concentration by a metabolically versatile isolated yeast Candida tropicalis PHB5. Environ Sci Pollut Res. 2014;21:1444–1454.
12. Geed S, Kureel M, Giri B, Singh R, Rai B. Performance evaluation of Malathion biodegradation in batch and continuous packed bed bioreactor (PBBR). Bioresour Technol. 2017;227:56–65.
13. Sahoo NK, Pakshirajan K, Ghosh PK. Evaluation of 4-bromophenol biodegradation in mixed pollutants system by Arthrobacter chlorophenolicus A6 in an upflow packed bed reactor. Biodegradation. 2014;25:705–718.
14. Yadav M, Srivastva N, Singh RS, Upadhyay SN, Dubey SK. Biodegradation of chlorpyrifos by Pseudomonas sp. in a continuous packed bed bioreactor. Bioresour Technol. 2014;165:265–269.
15. Yan J, Jianping W, Hongmei L, Suliang Y, Zongding H. The biodegradation of phenol at high initial concentration by the yeast Candida tropicalis. Biochem Eng J. 2005;24:243–247.
16. Uday USP, Majumdar R, Tiwari ON, et al. Isolation, screening and characterization of a novel extracellular xylanase from Aspergillus niger (KP874102.1) and its application in orange peel hydrolysis. Int J Biol Macromol. 2017;105:401–409.
17. Leszczynska D, Bogatu C, Beqa L, Veerepalli R. Simultaneous determination of chlorophenols from quaternary mixtures using multivariate calibration. Chem Bull “POLITEHNICA” Univ (Timisoara). 2010;55:5–8.
18. Wang L, Li Y, Yu P, Xie Z, Luo Y, Lin Y. Biodegradation of phenol at high concentration by a novel fungal strain Paecilomyces variotii JH6. J Hazard Mater. 2010;183:366–371.
19. Tosu P, Luepromchai E, Suttinun O. Activation and immobilization of phenol-degrading bacteria on oil palm residues for enhancing phenols degradation in treated palm oil mill effluent. Environ Eng Res. 2015;20:141–148.
20. Hossain SG, McLaughlan RG. Oxidation of chlorophenols in aqueous solution by excess potassium permanganate. Water Air Soil Pollut. 2012;223:1429–1435.
21. Kim J, Min KA, Cho KS, Lee IS. Enhanced bioremediation and modified bacterial community structure by barn yard grass in diesel-contaminated soil. Environ Eng Res. 2007;12:37–45.
22. Nongbri BB, Syiem MB. Diversity analysis and molecular typing of cyanobacteria isolated from various ecological niches in the state of Meghalaya, North-East India. Environ Eng Res. 2012;17:21–26.
23. Saitou N, Nei M. The neighbor-joining method: A new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987;4:406–425.
24. Felsenstein J. Confidence limits on phylogenies: An approach using the bootstrap. Evolution. 1985;39:783–791.
25. Kimura M. A simple method for estimating evolutionary rates of base substitutions through comparative studies of nucleotide sequences. J Mol Evol. 1980;16:111–120.
26. Tallur P, Megadi V, Kamanavalli C, Ninnekar H. Biodegradation of p-cresol by Bacillus sp. strain PHN 1. Curr Microbiol. 2006;53:529–533.
27. Tallur P, Megadi V, Ninnekar H. Biodegradation of p-cresol by immobilized cells of Bacillus sp. strain PHN 1. Biodegradation. 2009;20:79–83.
28. Hasan SA, Jabeen S. Degradation kinetics and pathway of phenol by Pseudomonas and Bacillus species. Biotechnol Biotechnol Equip. 2015;29:45–53.
29. Kumar A, Bhunia B, Dasgupta D, et al. Optimization of culture condition for growth and phenol degradation by Alcaligenes faecalis JF339228 using Taguchi Methodology. Desalin Water Treat. 2013;51:3153–3163.
30. Mandal S, Bhunia B, Kumar A, et al. A statistical approach for optimization of media components for phenol degradation by Alcaligenes faecalis using Plackett-Burman and response surface methodology. Desalin Water Treat. 2013;51:6058–6069.
31. Khan F, Pal D, Vikram S, Cameotra SS. Metabolism of 2-chloro-4-nitroaniline via novel aerobic degradation pathway by Rhodococcus sp. strain MB-P1. PLoS One. 2013;8:e62178
32. Bhunia B, Basak B, Bhattacharya P, Dey A. Kinetic studies of alkaline protease from Bacillus licheniformis NCIM-2042. J Microbiol Biotechnol. 2012;22:1758–1766.
33. Lobo CC, Bertola NC, Contreras EM, Zaritzky NE. Monitoring and modeling 4-chlorophenol biodegradation kinetics by phenol-acclimated activated sludge by using open respirometry. Environ Sci Pollut Res. 2018;25:21272–21285.
34. Edwards VH. The influence of high substrate concentrations on microbial kinetics. Biotechnol Bioeng. 1970;12:679–712.
35. Wang S-J, Loh K-C. Modeling the role of metabolic intermediates in kinetics of phenol biodegradation. Enzyme Microb Technol. 1999;25:177–184.
36. Han K, Levenspiel O. Extended Monod kinetics for substrate, product, and cell inhibition. Biotechnol Bioeng. 1988;32:430–447.
37. Luong J. Generalization of Monod kinetics for analysis of growth data with substrate inhibition. Biotechnol Bioeng. 1987;29:242–248.
38. Okpokwasili G, Nweke C. Microbial growth and substrate utilization kinetics. African J Biotechnol. 2006;5:305–317.
39. Livingston AG, Chase HA. Modeling phenol degradation in a fluidized-bed bioreactor. AIChE J. 1989;35:1980–1992.
40. Kumar A, Kumar S, Kumar S. Biodegradation kinetics of phenol and catechol using Pseudomonas putida MTCC 1194. Biochem Eng J. 2005;22:151–159.
41. Bhunia B, Basak B, Bhattacharya P, Dey A. Process engineering studies to investigate the effect of temperature and pH on kinetic parameters of alkaline protease production. J Biosci Bioeng. 2013;115:86–89.
42. Jiang Y, Nanqi R, Xun C, Di W, Liyan Q, Sen L. Biodegradation of phenol and 4-chlorophenol by the mutant strain CTM 2. Chinese J Chem Eng. 2008;16:796–800.
43. Basak B, Bhunia B, Dutta S, Dey A. Enhanced biodegradation of 4-chlorophenol by Candida tropicalis PHB5 via optimization of physicochemical parameters using Taguchi orthogonal array approach. Int Biodeterior Biodegrad. 2013;78:17–23.
44. Yano T, Koga S. Dynamic behavior of the chemostat subject to substrate inhibition. Biotechnol Bioeng. 1969;11:139–153.
45. Wang J, Ma X, Liu S, Sun P, Fan P, Xia C. Biodegradation of phenol and 4-chlorophenol by Candida tropicalis W1. Procedia Environ Sci. 2012;16:299–303.
46. Liu Y, Liu J, Li C, Wen J, Ban R, Jia X. Metabolic profiling analysis of the degradation of phenol and 4-chlorophenol by Pseudomonas sp. cbp1-3. Biochem Eng J. 2014;90:316–323.
##### Fig. 1
1.2% Agarose gel showing single 1,500 bp of 16S rDNA amplicon. Lane 1:100 bp DNA ladder; Lane 2: 16S rDNA amplicon.
##### Fig. 2
Phylogenic analysis of isolated strain B. subtilis MF447840.1.
##### Fig. 3
The metabolic versatility of B. subtilis MF447840.1 in the presence of various phenolic compounds.
##### Fig. 4
Time course of growth and 4-CP utilization by B. subtilis MF447840.1 at initial 4-CP concentration 1,000 mg/L.
##### Fig. 5
Relationships between (a) specific growth rates (μ) and initial substrate concentration and between (b) specific degradation rates (q) and initial substrate concentration.
##### Table 1
Sequence Producing Alignments
Accession Description Max score Total score Query coverage E value Max ident
KJ801590.1 Bacillus subtilis, strain ZAP018-1 2,350 2,350 93% 0.0 98%
KU587801.1 Bacillus subtilis, strain BSP-68 2,348 2,348 93% 0.0 98%
KT735214.1 Bacillus subtilis, strain F3-3 2,344 2,344 93% 0.0 98%
KY910140.1 Bacillus subtilis, strain PCD3 2,342 2,342 93% 0.0 98%
KY818957.1 Bacillus subtilis, strain FL39 2,342 2,342 93% 0.0 98%
KY621529.1 Bacillus subtilis, strain IP18 2,342 2,342 93% 0.0 98%
KY400283.1 Bacillus subtilis, strain JL02 2,342 2,342 93% 0.0 98%
KY400272.1 Bacillus subtilis, strain JL12 2,342 2,342 93% 0.0 98%
KR708845.1 Bacillus tequilensis, strain 7PJ-7 2,342 2,342 93% 0.0 98%
KX058074.1 Bacillus subtilis 2,342 2,342 93% 0.0 98%
KT955740.1 Bacillus subtilis subsp. subtilis, strain D12-5 2,342 2,342 93% 0.0 98%
KT427429.1 Bacillus subtilis, strain VASB19/TS 2,342 2,342 93% 0.0 98%
LC155964.1 Bacillus subtilis, strain: 6R3-15 2,342 2,342 93% 0.0 98%
KU743239.1 Bacterium, strain CDSHGTR2A-21 2,342 2,342 93% 0.0 98%
KU743237.1 Bacterium, strain CDSHGTGPM-16 2,342 2,342 93% 0.0 98%
##### Table 2
Estimated Kinetics Model Parameters for Batch Study
Sr. no. Author Equation μmax (hr−1) KS Ki Sm m n R2 Ref.
1 Andrew's Model $\mu = \frac{\mu_{max} S}{S + K_S + S^2/K_i}$ 0.45 245.4 0.019 - - - 0.8263 [34]
2 Edward's Model $\mu = \frac{\mu_{max} S}{S + K_S + (S^2/K_i)(1 + S/K_S)}$ 0.140 61.79 2,195 - - - 0.9248 [34]
3 Haldane's Model $\mu = \frac{\mu_{max} S\,(1-(S/S_m)^n)}{S + K_S(1-(S/S_m)^n)}$ 0.45 245.4 53.23 - - - 0.8263 [35]
4 Levenspiel's Model $\mu = \frac{\mu_{max} S\,[1-(S/S_m)]^n}{S + K_S[1-(S/S_m)]^m}$ 0.110 39.88 - 1 1.57 1.046 0.9993 [36]
5 Luong's Model $\mu = \frac{\mu_{max} S\,(1-(S/S_m)^n)}{S + K_S}$ 0.102 32.02 - 1 - 1.15 0.997 [37]
6 Teissier's Model $\mu = \mu_{max}\left[\exp(-S/K_i) - \exp(-S/K_S)\right]$ 1.008 183.4 228.8 - - - 0.938 [34]
7 Webb's Model $\mu = \frac{\mu_{max} S\,(1 + mS/K_i)}{S + K_S + S^2/K_i}$ 0.464 0.010 52.20 - 1.35 × 10−16 - 0.8267 [34]
8 Yano's Model $\mu = \frac{\mu_{max} S}{S + K_S + (S^2/K_i)(1 + S/K_i)}$ 0.2186 119.3 325.8 - - - 0.09124 [44]
##### Table 3
Degradation Efficiency of 4-CP by Various Microorganisms
Sr. no Strain Initial 4-Chlorophenol concentration (mg/L) Time (h) Temperature % removal Ref.
1 Candida tropicalis 400 50.5 30°C 100% [42]
2 Candida tropicalis W1 150 20 30°C 100% [45]
3 Candida tropicalis PHB5 950 60 30°C 99.97% [43]
4 Pseudomonas sp. cbp1-3 100 40 30°C 100% [46]
5 Bacillus subtilis MF447840.1 1,000 40 37°C 100% Present study
https://www.physicsforums.com/threads/finding-the-charge-on-a-capacitor.640117/ | # Finding the charge on a capacitor
• Thread starter nautola
• Start date
• #1
## Homework Statement
http://screencast.com/t/B4865iOOLzX
What is the charge on the 32 μF centered-upper capacitor?
Answer in units of μC
## Homework Equations
Q = CV
1/C = 1/C1 + 1/C2 + ...
## The Attempt at a Solution
I found the charge across the entire thing and then divided that by 2, since the charge should be the same on all the plates. But that was wrong.
## Answers and Replies
• #2
gneill
Mentor
What's the potential difference between nodes c and d?
What do you know about the charge distribution on capacitors connected in series?
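For anyone revisiting this thread, the arithmetic behind the hint can be sketched numerically. The circuit figure is a dead link, so apart from the 32 μF value from the problem statement, the capacitor partner and source voltage below are assumed example values:

```python
# Hedged sketch: charge on capacitors in series.
# Only the 32 uF comes from the problem; the 16 uF partner and the 12 V
# source are assumed for illustration, since the circuit figure is gone.

def series_capacitance(*caps):
    """Equivalent capacitance of series capacitors: 1/C_eq = sum(1/C_i)."""
    return 1.0 / sum(1.0 / c for c in caps)

C1, C2 = 32e-6, 16e-6   # farads
V = 12.0                # volts across the series pair (assumed)

C_eq = series_capacitance(C1, C2)
Q = C_eq * V  # in series, the SAME charge Q appears on every capacitor

print(f"C_eq = {C_eq * 1e6:.2f} uF, Q = {Q * 1e6:.1f} uC on each capacitor")
```

The key point of the hint: a series combination carries one common charge, computed once from the equivalent capacitance and the branch voltage, rather than a total charge split in half.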
https://planetmath.org/separateduniformspace | # separated uniform space
Let $X$ be a uniform space with uniformity $\mathcal{U}$. $X$ is said to be separated or Hausdorff if it satisfies the following separation axiom:
$\bigcap\mathcal{U}=\Delta,$
where $\Delta$ is the diagonal relation on $X$ and $\bigcap\mathcal{U}$ is the intersection of all elements (entourages) in $\mathcal{U}$. Since $\Delta\subseteq\bigcap\mathcal{U}$, the separation axiom says that the only elements that belong to every entourage of $\mathcal{U}$ are precisely the diagonal elements $(x,x)$. Equivalently, if $x\neq y$, then there is an entourage $U$ such that $(x,y)\notin U$.
The reason for calling $X$ separated has to do with the following assertion:
$X$ is separated iff $X$ is a Hausdorff space under the topology $T_{\mathcal{U}}$ induced by (http://planetmath.org/TopologyInducedByAUniformStructure) $\mathcal{U}$.
Recall that $T_{\mathcal{U}}=\{A\subseteq X\mid\mbox{for each }x\in A\mbox{, there is }U\in% \mathcal{U}\mbox{, such that }U[x]\subseteq A\}$, where $U[x]$ is some uniform neighborhood of $x$ where, under $T_{\mathcal{U}}$, $U[x]$ is also a neighborhood of $x$. To say that $X$ is Hausdorff under $T_{\mathcal{U}}$ is the same as saying every pair of distinct points in $X$ have disjoint uniform neighborhoods.
###### Proof.
$(\Rightarrow)$. Suppose $X$ is separated and $x,y\in X$ are distinct. Then $(x,y)\notin U$ for some $U\in\mathcal{U}$. Pick $V\in\mathcal{U}$ with $V\circ V\subseteq U$. Set $W=V\cap V^{-1}$, then $W$ is symmetric and $W\subseteq V$. Furthermore, $W\circ W\subseteq V\circ V\subseteq U$. If $z\in W[x]\cap W[y]$, then $(x,z),(y,z)\in W$. Since $W$ is symmetric, $(z,y)\in W$, so $(x,y)=(x,z)\circ(z,y)\in W\circ W\subseteq U$, which is a contradiction.
$(\Leftarrow)$. Suppose $X$ is Hausdorff under $T_{\mathcal{U}}$ and that $(x,y)\in U$ for every $U\in\mathcal{U}$, for some $x,y\in X$. If $x\neq y$, then there exist $V,W\in\mathcal{U}$ such that $V[x]\cap W[y]=\varnothing$. Since $(x,y)\in V$ by assumption, $y\in V[x]$. But $y\in W[y]$, contradicting the disjointness of $V[x]$ and $W[y]$. Therefore $x=y$. ∎
Title: separated uniform space · Canonical name: SeparatedUniformSpace · Date: 2013-03-22 16:42:34 · Author: CWoo (3771) · Version: 5 · Type: Definition · MSC: 54E15 · Synonyms: separating, Hausdorff uniform space
https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/loss-functions/mean-absolute-error | # Mean absolute error
Mean absolute error (MAE) is a loss function used for regression. The loss is the mean, over the data, of the absolute differences between the true and predicted values, or as a formula:

MAE = (1/n) Σ |y − ŷ|

where y is the true value, ŷ is the predicted value, and the sum runs over the n examples.
## Why use mean absolute error
MAE is not sensitive to outliers: given several examples with the same input feature values, the optimal prediction is their median target value. Compare this with Mean Squared Error (MSE), for which the optimal prediction is the mean. A disadvantage of MAE is that the gradient magnitude does not depend on the error size, only on the sign of y − ŷ. As a result, the gradient magnitude is large even when the error is small, which can lead to convergence problems.
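The median-vs-mean distinction can be checked numerically. The sketch below uses a made-up target set with one outlier and scans constant predictions to find the one minimizing each loss:

```python
# Targets with one outlier (100.0); values are illustrative only.
targets = [1.0, 2.0, 3.0, 4.0, 100.0]

def mae(c):
    return sum(abs(t - c) for t in targets) / len(targets)

def mse(c):
    return sum((t - c) ** 2 for t in targets) / len(targets)

# Scan constant predictions on a 0.01 grid over [0, 100].
candidates = [i / 100 for i in range(10001)]
best_mae = min(candidates, key=mae)  # median of targets -> 3.0
best_mse = min(candidates, key=mse)  # mean of targets   -> 22.0

print(f"MAE-optimal constant: {best_mae}, MSE-optimal constant: {best_mse}")
```

The outlier drags the MSE-optimal prediction to 22.0, far from the bulk of the data, while the MAE-optimal prediction stays at the median, 3.0.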
## When to use mean absolute error
Use Mean absolute error when you are doing regression and don’t want outliers to play a big role. It can also be useful if you know that your distribution is multimodal, and it’s desirable to have predictions at one of the modes, rather than at the mean of them.
Example: When doing image reconstruction, MAE encourages less blurry images compared to MSE. This is used for example in the paper Image-to-Image Translation with Conditional Adversarial Networks by Isola et al. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9182103872299194, "perplexity": 494.9053366587814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543850.90/warc/CC-MAIN-20191212130009-20191212154009-00362.warc.gz"} |
https://www.bts.dot.gov/archive/publications/commodity_flow_survey/2007/states/delaware/table_03
United States Department of Transportation
Table 3. Shipment Characteristics by Mode of Transportation and Distance Shipped for State of Origin: 2007
Tuesday, July 3, 2012
Estimates are based on the 2007 Commodity Flow Survey. Because of rounding, estimates may not be additive.
All modes
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 34,757 100 25,679 100 4,422 100 15.2 - 18.3 - 18.9 -
Less than 50 miles 10,833 31.2 14,928 58.1 257 5.8 13.4 4 18.2 2.9 23.2 1.3
50 - 99 miles 2,492 7.2 4,182 16.3 413 9.3 13.1 2.1 42.2 3.7 39.7 3.7
100 - 249 miles 6,727 19.4 2,804 10.9 577 13.1 16.4 2 22.3 2.3 24.7 3.7
250 - 499 miles S S 1,881 7.3 879 19.9 S S 40.1 1.7 45.8 4.1
500 - 749 miles 2,431 7 S S S S 27.1 1.6 S S S S
750 - 999 miles 1,297 3.7 262 1 274 6.2 32.1 0.9 20.4 0.3 20.7 1.8
1,000 - 1,499 miles 1,350 3.9 208 0.8 308 7 39.1 1 23 0.4 24.8 1.6
1,500 - 2,000 miles S S S S S S S S S S S S
More than 2,000 miles 1,986 5.7 246 1 718 16.2 38.9 1.7 25.2 0.8 26.2 3.9
Single modes
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 29,235 100 23,453 100 3,688 100 18.1 - 18.8 - 19.8 -
Less than 50 miles 10,094 34.5 14,442 61.6 240 6.5 14.2 5.1 18.7 4.1 24.6 1.6
50 - 99 miles 1,901 6.5 3,797 16.2 323 8.8 10.2 1.5 46.8 4 46.3 3.1
100 - 249 miles 4,904 16.8 1,697 7.2 337 9.2 23 2.1 18.9 1.3 17.6 2.6
250 - 499 miles S S 1,849 7.9 861 23.4 S S 40.7 1.7 46.7 4.9
500 - 749 miles 2,084 7.1 S S S S 29.6 2.5 S S S S
750 - 999 miles 886 3 219 0.9 229 6.2 30.4 0.6 20.4 0.3 21.1 1.7
1,000 - 1,499 miles 984 3.4 166 0.7 234 6.3 49.3 1.1 19.2 0.4 20 1.7
1,500 - 2,000 miles S S S S S S S S S S S S
More than 2,000 miles 1,495 5.1 186 0.8 538 14.6 42.3 1.7 26.5 0.7 27.5 4.2
Truck3
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 23,000 100 17,283 100 2,331 100 21 - 20.5 - 12.5 -
Less than 50 miles 7,423 32.3 10,205 59 219 9.4 12 4.2 18.5 4.8 27.7 1.2
50 - 99 miles 1,891 8.2 3,773 21.8 321 13.8 10.4 2.1 47.2 5.4 46.7 3.6
100 - 249 miles 4,740 20.6 1,317 7.6 255 10.9 24.2 2.4 14 1.3 13.8 1.3
250 - 499 miles S S 1,123 6.5 467 20.1 S S 23.7 1 18.8 1.9
500 - 749 miles 1,422 6.2 388 2.2 292 12.5 34.2 1.4 15.1 0.5 14.9 2.2
750 - 999 miles 524 2.3 195 1.1 199 8.5 33 0.7 24.9 0.3 26.5 2.1
1,000 - 1,499 miles 534 2.3 135 0.8 182 7.8 26.5 0.6 18.6 0.2 17.5 1.6
1,500 - 2,000 miles 89 0.4 20 0.1 40 1.7 39 0.3 47.3 0.1 48.7 1
More than 2,000 miles 758 3.3 127 0.7 355 15.2 29.7 1.2 18.4 0.3 18.6 3.1
For-hire truck
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 16,642 100 7,771 100 1,916 100 30.1 - 39.7 - 14.4 -
Less than 50 miles 3,040 18.3 3,124 40.2 S S 24.3 4 47.8 3.8 S S
50 - 99 miles 976 5.9 S S S S 27.6 1.4 S S S S
100 - 249 miles 3,979 23.9 892 11.5 171 8.9 29 2.8 14.3 3.2 13.5 1
250 - 499 miles S S 963 12.4 403 21 S S 28.3 2.1 22.3 2.1
500 - 749 miles 1,422 8.5 388 5 292 15.3 34.2 2.6 15.1 1.6 14.9 3.1
750 - 999 miles 523 3.1 195 2.5 199 10.4 33 1.8 24.9 1.1 26.5 2.3
1,000 - 1,499 miles 534 3.2 135 1.7 182 9.5 26.5 1 18.6 0.6 17.5 2
1,500 - 2,000 miles 89 0.5 20 0.3 40 2.1 39 0.8 47.3 0.3 48.7 1.3
More than 2,000 miles 758 4.6 127 1.6 355 18.5 29.7 1.8 18.4 0.6 18.6 3.6
Private truck
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 6,358 100 9,512 100 415 100 13.6 - 11.3 - 18.2 -
Less than 50 miles 4,383 68.9 7,080 74.4 118 28.6 16.4 4.9 13.5 6.6 10.9 5.2
50 - 99 miles 916 14.4 1,846 19.4 147 35.5 23.2 3 46.2 6.1 40.6 6.6
100 - 249 miles 760 12 425 4.5 84 20.3 18.9 2.4 27.6 0.9 28.7 3.2
250 - 499 miles 299 4.7 160 1.7 65 15.6 19 1.2 15.3 0.4 15.3 3.3
500 - 749 miles - - - - - - - - - - - -
750 - 999 miles S S S S S S S S S S S S
1,000 - 1,499 miles - - - - - - - - - - - -
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles - - - - - - - - - - - -
Rail
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total S S 501 100 468 100 S S 27 - 48.2 -
Less than 50 miles S S 97 19.3 3 0.7 S S 36.4 16.1 39 5.9
50 - 99 miles S S S S S S S S S S S S
100 - 249 miles S S S S S S S S S S S S
250 - 499 miles S S 39 7.8 19 4.1 S S 37.2 3.2 31.9 3.8
500 - 749 miles 141 8.6 77 15.3 70 15 47.3 19.6 23.5 16.1 23.8 15.5
750 - 999 miles S S 24 4.8 S S S S 33.6 17.6 S S
1,000 - 1,499 miles S S S S S S S S S S S S
1,500 - 2,000 miles S S S S S S S S S S S S
More than 2,000 miles S S S S S S S S S S S S
Water
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total S S S S S S S S S S S S
Less than 50 miles S S S S S S S S S S S S
50 - 99 miles S S S S S S S S S S S S
100 - 249 miles - - - - - - - - - - - -
250 - 499 miles S S S S S S S S S S S S
500 - 749 miles S S S S S S S S S S S S
750 - 999 miles - - - - - - - - - - - -
1,000 - 1,499 miles - - - - - - - - - - - -
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles - - - - - - - - - - - -
Shallow draft
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total S S S S S S S S S S S S
Less than 50 miles S S S S S S S S S S S S
50 - 99 miles S S S S S S S S S S S S
100 - 249 miles - - - - - - - - - - - -
250 - 499 miles - - - - - - - - - - - -
500 - 749 miles - - - - - - - - - - - -
750 - 999 miles - - - - - - - - - - - -
1,000 - 1,499 miles - - - - - - - - - - - -
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles - - - - - - - - - - - -
Deep draft
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 845 100 1,269 100 834 100 10.2 - 10.4 - 12.7 -
Less than 50 miles - - - - - - - - - - - -
50 - 99 miles - - - - - - - - - - - -
100 - 249 miles - - - - - - - - - - - -
250 - 499 miles S S S S S S S S S S S S
500 - 749 miles S S S S S S S S S S S S
750 - 999 miles - - - - - - - - - - - -
1,000 - 1,499 miles - - - - - - - - - - - -
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles - - - - - - - - - - - -
Air (incl truck and air)
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 1,014 100 3 100 4 100 29 - 27 - 32.7 -
Less than 50 miles S S S S S S S S S S S S
50 - 99 miles 2 0.2 S S S S 32 0.3 S S S S
100 - 249 miles 52 5.2 - 7.8 - 2.1 38.5 9.6 30.5 5.7 31.4 4
250 - 499 miles 216 21.3 - 15.4 - 4.8 34.5 4.7 33.2 7.8 32.9 7.7
500 - 749 miles 69 6.8 - 8.5 - 4.9 42.9 4.7 34.9 4.7 35.9 2.5
750 - 999 miles 280 27.6 1 23.1 1 17.1 43.1 8.4 45.3 6.4 46.2 5.5
1,000 - 1,499 miles 131 12.9 - 13.6 1 13.4 42.4 3.8 40.1 6 40.6 6.8
1,500 - 2,000 miles S S S S - 6.3 S S S S 48.6 5
More than 2,000 miles 153 15.1 1 23.3 2 50.7 43.4 6.6 47.6 6 48.1 9
Pipeline4
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 2,727 100 4,395 100 S S 44.6 - 45.4 - S S
Less than 50 miles 2,633 96.6 4,139 94.2 S S 45.2 1.7 46.4 2.5 S S
50 - 99 miles - - 21 0.5 S S 9.7 - 12.2 0.7 S S
100 - 249 miles S S S S S S S S S S S S
250 - 499 miles - - - - - - - - - - - -
500 - 749 miles - - - - - - - - - - - -
750 - 999 miles - - - - - - - - - - - -
1,000 - 1,499 miles - - - - - - - - - - - -
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles - - - - - - - - - - - -
Multiple modes
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 5,031 100 2,118 100 681 100 16.3 - 45.8 - 35 -
Less than 50 miles S S S S S S S S S S S S
50 - 99 miles 582 11.6 S S S S 41.1 4.8 S S S S
100 - 249 miles 1,535 30.5 1,070 50.5 229 33.7 25.8 6 46.7 10.3 47.7 11
250 - 499 miles 496 9.9 30 1.4 17 2.4 19.7 3.3 34.2 2.9 40 2.8
500 - 749 miles 313 6.2 S S S S 30.6 2.2 S S S S
750 - 999 miles 362 7.2 S S S S 45.4 2.4 S S S S
1,000 - 1,499 miles 364 7.2 S S S S 29.9 1.5 S S S S
1,500 - 2,000 miles S S S S S S S S S S S S
More than 2,000 miles 487 9.7 S S S S 33.3 2.2 S S S S
Parcel, U.S.P.S. or courier
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 3,218 100 129 100 75 100 22.5 - 29.6 - 46.3 -
Less than 50 miles 238 7.4 16 12.1 1 0.7 14.2 2.8 27.9 6.4 23.7 0.4
50 - 99 miles 323 10 17 13.1 2 2.3 25.5 2.5 35.9 2.6 38.5 0.7
100 - 249 miles 795 24.7 41 31.9 9 12.1 26 4.6 34.4 6 36.4 5.7
250 - 499 miles 477 14.8 20 15.4 8 10.2 20.2 2.7 32.7 2.2 31.9 3.3
500 - 749 miles 224 7 S S S S 31.9 1.2 S S S S
750 - 999 miles 286 8.9 S S S S 35 1.7 S S S S
1,000 - 1,499 miles 288 9 S S S S 32.5 1.2 S S S S
1,500 - 2,000 miles S S S S S S S S S S S S
More than 2,000 miles 342 10.6 S S S S 37.2 2.1 S S S S
Truck and rail
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 391 100 192 100 S S 45.2 - 45.8 - S S
Less than 50 miles - - - - - - - - - - - -
50 - 99 miles S S S S S S S S S S S S
100 - 249 miles S S S S S S S S S S S S
250 - 499 miles S S S S S S S S S S S S
500 - 749 miles S S 55 28.6 50 17.2 S S 47.1 16.2 44 16.2
750 - 999 miles S S S S S S S S S S S S
1,000 - 1,499 miles S S S S S S S S S S S S
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles 143 36.5 S S S S 47.2 15.6 S S S S
Truck and water
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total S S S S S S S S S S S S
Less than 50 miles S S S S S S S S S S S S
50 - 99 miles S S S S S S S S S S S S
100 - 249 miles S S S S S S S S S S S S
250 - 499 miles S S S S S S S S S S S S
500 - 749 miles - - - - - - - - - - - -
750 - 999 miles - - - - - - - - - - - -
1,000 - 1,499 miles S S S S S S S S S S S S
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles S S S S S S S S S S S S
Rail and water
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total S S S S S S S S S S S S
Less than 50 miles - - - - - - - - - - - -
50 - 99 miles - - - - - - - - - - - -
100 - 249 miles S S S S S S S S S S S S
250 - 499 miles - - - - - - - - - - - -
500 - 749 miles - - - - - - - - - - - -
750 - 999 miles - - - - - - - - - - - -
1,000 - 1,499 miles - - - - - - - - - - - -
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles - - - - - - - - - - - -
Other multiple modes
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total S S 1,478 100 S S S S 48 - S S
Less than 50 miles S S S S S S S S S S S S
50 - 99 miles S S S S S S S S S S S S
100 - 249 miles S S S S 154 60 S S S S 47.3 19.2
250 - 499 miles - - - - - - - - - - - -
500 - 749 miles - - - - - - - - - - - -
750 - 999 miles - - - - - - - - - - - -
1,000 - 1,499 miles - - - - - - - - - - - -
1,500 - 2,000 miles - - - - - - - - - - - -
More than 2,000 miles - - - - - - - - - - - -
Other and unknown modes
Distance Shipped | Value 2007 (million $) | Percent | Tons 2007 (thousands) | Percent | Ton-miles1 2007 (millions) | Percent | Value CV2 | Value Std. Error of % | Tons CV | Tons Std. Error of % | Ton-miles CV | Ton-miles Std. Error of %
Total 491 100 S S 53 100 44.8 - S S 46.8 -
Less than 50 miles 91 18.5 S S S S 42.6 7.3 S S S S
50 - 99 miles S S S S S S S S S S S S
100 - 249 miles S S S S S S S S S S S S
250 - 499 miles S S S S S S S S S S S S
500 - 749 miles 34 6.9 S S S S 32.5 14.5 S S S S
750 - 999 miles S S S S S S S S S S S S
1,000 - 1,499 miles S S S S S S S S S S S S
1,500 - 2,000 miles 1 0.1 - 0.2 - 0.8 1.1 10.9 20.8 1.4 4.6 2
More than 2,000 miles 5 1 3 2.3 S S 27.2 7.7 41.2 11.5 S S
KEY: S = Estimate does not meet publication standards because of high sampling variability or poor response quality. - = Zero or Less than half the unit shown; thus, it has been rounded to zero.
1 Ton-miles estimates are based on estimated distances traveled along a modeled transportation network.
2 Coefficient of Variation.
3 "Truck" as a single mode includes shipments that were made by only private truck, only for-hire truck, or a combination of private truck and for-hire truck.
4 Estimates for pipeline exclude shipments of crude petroleum.
NOTES: Rows are not shown if all cells for that particular state by mode have no data. For example, since Wyoming by "Water" has no data for any mileage grouping, the entire mode of "Water" for that state was removed from this table. However, if a state by mode has at least one row of data for one of the mileage groupings, then the entire state by mode is shown even though there may be rows absent of data within that mode. For example, West Virginia by "Water" has a few rows absent of data but since the other rows within that mode do have data, that state by mode will be shown in its entirety for all the mileage groupings. Value-of-shipments estimates have not been adjusted for price changes. Estimated measures of sampling variability for each estimate known as coefficients of variation (CV) are also provided in these tables. More information on sampling error, confidentiality protection, nonsampling error, sample design, and definitions may be found at http://www.bts.gov/publications/commodity_flow_survey/.
SOURCE: U.S. Department of Transportation, Research and Innovative Technology Administration, Bureau of Transportation Statistics and U.S. Department of Commerce, U.S. Census Bureau, 2007 Economic Census: Transportation Commodity Flow Survey, December 2009. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8201539516448975, "perplexity": 751.5489807241718}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710421.14/warc/CC-MAIN-20221210074242-20221210104242-00006.warc.gz"} |
https://xorshammer.com/2008/08/13/loebs-theorem/

# Löb’s Theorem: Santa Claus and Provability
Consider the following argument for the existence of Santa Claus (which is called Curry’s paradox):
Let S be the sentence
If S is true, then Santa Claus exists.
Lemma: S is true.
Proof. S is of the form “If P, then Q.” so to show S we just have to assume P and show Q. So, assume that S is true (with the goal of showing that Santa Claus exists). Since we’ve assumed that S is true, it follows that if S is true, then Santa Claus exists (since that’s exactly what S says). Then, since S is true, Santa Claus exists. $\square$
Corollary: Santa Claus exists.
Proof. Since S is true, it is the case that if S is true, then Santa Claus exists (since that’s exactly what S says). Therefore, since S is true, Santa Claus exists. $\square$
Since Santa Claus doesn’t exist, this argument seems to just prove that informal reasoning combined with self-reference can often lead you astray. Can we extract any interesting theorems out of this argument? It turns out that we can.
Let PA denote Peano Arithmetic, which is a first-order theory of arithmetic. In the course of proving his incompleteness theorems, Gödel found a way to represent PA-provability within PA itself. For every sentence A, there is a sentence Prov(A) such that PA proves A iff PA proves Prov(A). Prov has the following two further properties:
1. For all A and B, PA proves that Prov(A→B) together with Prov(A) implies Prov(B).
2. For all A, PA proves that Prov(A) implies Prov(Prov(A)).
Furthermore, Gödel was able to prove his diagonalization lemma, which implies that you can use self-reference as long as it is in reference to provability. That is, you can formulate the sentence “This sentence is not provable” (i.e., there is an S such that PA proves S ↔ ~Prov(S)) but you cannot formulate the sentence “This sentence is not true” (i.e., there is no S such that PA proves S ↔ ~S).
Now let’s try to redo the above argument that Santa Claus exists in the context of PA-provability.
Let A be the assertion that Santa Claus exists. As above, let S assert “If S is provable, then A is true.” In other words, we use Gödel diagonalization to find an S such that PA proves S ↔ (Prov(S) → A).
In proving the following lemma, it turns out that we need an extra assumption, which I’ve put in italics.
Lemma: S is provable in PA.
Proof. We’ll work within PA. Assume Prov(S). Then, by property 1 above, we have Prov(Prov(S) → A), and it again follows from property 1 that Prov(Prov(S)) implies Prov(A). Then, if we knew that Prov(A) implied A, we would conclude that A is true and thus that S is. $\square$
Corollary: A is provable in PA.
Proof. We’ll work within PA. Since we know S from above, we know that Prov(S) → A. However, we also know from the above that Prov(S) holds. Therefore, A holds. $\square$
So, we have proved that A is provable in PA given that PA shows that Prov(A) implies A (we used this assumption in the proof of the above Lemma). This is Löb’s theorem.
Löb’s Theorem: For any sentence A, if PA proves Prov(A) implies A, then PA proves A.
We can use Löb’s Theorem to answer the following natural question: Gödel proved his incompleteness theorem by finding a G such that PA proves G ↔ ~Prov(G). (G essentially says “G is not provable in PA.”) It follows that, given that PA only proves true things, G must be true and unprovable. But what about the sentence H which says “H is provable in PA”? Is it true and provable, or false and unprovable?
Löb’s Theorem provides the answer. Since, by definition, PA proves Prov(H) → H, it turns out that H is both true and provable.
For more information, see the Stanford Encyclopedia of Philosophy’s entry on provability logic and George Boolos’s book “The Logic of Provability”.
https://www.physicsforums.com/threads/32n-3-0-mod-p-formula-for-n.125837/

# 32n + 3 = 0 mod p formula for n
1. Jul 12, 2006
### ramsey2879
If p is a prime > 2 then there is an easy way to solve 32n + 3 = 0 mod p:
if p = 1 mod 8 then m = (p-1)/4 and n = m(m+1)/2 mod p
if p = 3 mod 8 then m = (p-3)/4 and n = m(m+1)/2 mod p
if p = 5 mod 8 then m = (p-5)/8*2 + 1 and n = m(m+1)/2 mod p
if p = 7 mod 8 then m = (p-7)/8*2 + 1 and n = m(m+1)/2 mod p
Strange how triangular numbers relate to primes!
Can anyone give a proof for this relation?
Last edited: Jul 12, 2006
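The four cases above are easy to confirm by brute force — a quick Python sketch (not part of the original thread) checking every odd prime below 1000:

```python
# Verify: for each odd prime p, build m from p mod 8 as in the post,
# set n = m(m+1)/2 (a triangular number), and check 32n + 3 ≡ 0 (mod p).

def is_prime(k):
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

for p in range(3, 1000, 2):
    if not is_prime(p):
        continue
    r = p % 8
    if r == 1:
        m = (p - 1) // 4
    elif r == 3:
        m = (p - 3) // 4
    elif r == 5:
        m = (p - 5) // 8 * 2 + 1
    else:  # r == 7
        m = (p - 7) // 8 * 2 + 1
    n = m * (m + 1) // 2
    assert (32 * n + 3) % p == 0, p

print("all odd primes below 1000 check out")
```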
2. Jul 12, 2006
### matt grime
If you just substitute in the expression for n in terms of p it drops out.
e.g. if you take the p = 1 mod 8 example:
32(p-1)(p+3)/(2·4·4) + 3 = -3 + 3 = 0 mod p.
The others are the same.
It doesn't appear too hard to generate other relations like this. Of course had you not chosen 32 and 3 you would not have this explicit relation with triangular numbers. You have dropped into the trap of looking at your probabilities a posteriori. Note that 32/4=8 and you're looking at p mod 8.
Last edited: Jul 12, 2006
3. Jul 12, 2006
### shmoe
p doesn't have to be prime either, those solutions still work.
edit- you can also combine the p=1 and 5 mod 8 cases, both have m=(p-1)/4. Likewise for the other two, i.e. just look at p mod 4.
Last edited: Jul 12, 2006
4. Jul 12, 2006
### ramsey2879
You're right. The formula for n is much simpler if you look at it mod 4.
I stumbled onto this by chance. Looking at the problem of finding n that solves 32n + 3 = 0 mod p, the solution escapes any reasoned thought.
As you have shown, the proof of the solution to this problem is quite simple for odd p. Thanks
It is not the case for even p. For some even p there is no solution at all, and I haven't found a relation that determines which even p have solutions or a formula for those cases.
5. Jul 13, 2006
### matt grime
There are no solutions for any even p.
You're trying to find n and t so that 32n+3=pt. If p is even then the RHS is even and the LHS is always odd so there are no solutions.
6. Jul 15, 2006
### Playdo
Couldn't we also do it this way?
Rewrite the equation as $32n+3 = kp$. Rewrite that as
(1) $$kp - 32n = 3$$
Now this linear Diophantine equation has a solution (n,p) only when k is relatively prime to 32. The rrs mod 32 is {1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31}. Let one of these be denoted by r.
$$(32t+r)p - 32n = 3$$
So for any prime P and reduced residue of 32, R we have
$$Pt - n = \frac {3-RP}{32}$$
Which can be shown either to exist or not as a diophantine equation depending on whether 32|(3-RP).
So although this is not your typical closed form solution, you can check easily to see if a given residue and a given prime allow a solution. When 32|(3-RP) we know that there is at least one solution t=3-RP and $n = P(3-RP)-\frac {3-RP}{32}$. Then we can write all of them as

$$t = 3-RP + s$$

$$n = P(3-RP)-\frac {3-RP}{32} - s$$

where s is an integer of our choosing. This is interesting in that it shows us that while t and n are discrete linear in the residues, t is discrete linear in the primes but n is discrete quadratic in the primes. That begs the following question; clearly

$$-RP^2+(3 -\frac {R}{32})P-(\frac {3}{32} +s+n)=0$$

So since s can be any integer, for every s there is a correspondence established between the set of primes and some subset of the integers, represented by n. If one were searching for a prime in a given interval this last equation could be used with n as a search parameter. The question is: will all integer P be prime or will there be pseudo-primes? For a given interval and value for s, any primes found should be denoted s-primes. So then are there intervals for which no primes are found unless s is very very large? We would not want to use this equation to search for primes on intervals where the number of possible values for s or n became very large. Those questions would have to be addressed.

Another question I find interesting here is this. Is it true that $\{z \mid z=\frac {3-RP}{32},\ r \in \text{rrs mod } 32,\ P \text{ prime}\}$ is actually the set of integers? Is it further true that $\{z \mid z=\frac {3-RP}{B},\ r \in \text{rrs mod } B,\ P \text{ prime},\ \gcd(B,P)=1\}$ is actually the set of integers?
Last edited: Jul 15, 2006
7. Jul 15, 2006
### matt grime
Your last (two) questions are trivially false. They are equivalent to the statements that for every integer z, 3-32z is p, a prime, times some number in the reduced residue system mod 32 and that is obviously false.
You can insert forced spaces in latex with \ , a slash followed by a space. Things like rrs, mod and prime should be typeset in roman, not in italics. I think we can use \text{foo} to do that here, or if not the {\rm foo} might work.
$$\left\{ z\ | z \in \mathbb{Z},\ z=\frac{3-rp}{32},\ r \in \text{rrs mod} 32, \ p\ \text{prime}\right\}$$
Last edited: Jul 15, 2006
8. Jul 15, 2006
### shmoe
This equation isn't hard to solve. There is a solution exactly when p is odd (prime is irrelevant), and in this case it is a unique residue class mod p.
It's not hard to find either, if p=4n+1 say, then ((p-1)/4)*4=-1 mod p, ((p+3)/4)*4=3 mod p, and one of (p-1)/4 or (p+3)/4 is even, so we know ((p-1)/4)*((p+3)/4)/2 is an integer, and when multiplied by 32 is -3 mod p, so this is our solution. (similar if p=4n+3). All others lie in the same residue class mod p.
And when it's not, you've got nothing. This is in fact usually the case, for any odd number p, there is only one choice of r out of the 16 that will work.
Of course you can actually find the only working r if you want, just find the inverse of p mod 32 and multiply by 3. You can find n this way, with the 16 cases for p mod 32. They'll reduce to the above mod 4 considerations, but not before much work.
You want that to be -p*s in your n, not -s. This is just getting all n in this residue class mod p.
So you're hoping to fix s and r, then try putting in values of n and hope the resulting p is a prime? Have you actually tried doing this to see what happens? Can you make these p's land in whatever interval you were looking for primes in? Are they often integers, let alone prime?
9. Jul 15, 2006
### Playdo
No, because we are just solving the standard linear Diophantine equation in t and n, and the parameter s is multiplied by the gcd of the coefficients, which in this case is 1.
Those last two statements were intended to evoke just the response you gave. What sets are they?
And if for any odd p one and only one of the residues works for a solution how are the residues ordered by consecutive odd p? Are we talking about all possible permutations that must pass or are we talking about one particular ordering of the residues that is repeated over and over for consecutive odd p.
I have to praise you though on noticing that it suffices to use p odd and not prime. It is an important point that sometimes by restricting or enlarging perspective we find proofs that may not exist on other sets.
Just for you and to help allay my blues over not finding a job yet, I am going to sit down and go over that Goldbach proof I was talking about. It will probably fizzle, but I am guessing I am going to end up with a question about the distribution of primes similar to Wilson's Theorem, between p and 2p there is at least one other prime. Only I think it will have ot be more restrictive. I will probably see if you have any insights on that. I'll try to post tomorrow night.
10. Jul 16, 2006
### shmoe
You were looking for solutions n and t of:
$$Pt - n = \frac {3-RP}{32}$$
Then gave a solution in terms of P and R under the condition 32|(3-RP), these were fine. You then claim for any integer s, t+s and n-s will give a solution as well, but if you sub these in, the left hand side is:
$$P(t+s)-(n-s)=Pt+Ps-n+s=\frac{3-RP}{32}+Ps+s$$
That's not a solution to the equation you are after (unless s=0 of course).
You'll want to check that theorem on linear diophantine equations again. I think you are after something like:
if gcd(a,b)|c then ax+by=c has infinitely many solutions. fix any specific solution x_0, y_0 say, then all solutions are given by:
$$x=x_0+\frac{b}{\gcd (a,b)}t,\ y=y_0-\frac{a}{\gcd (a,b)}t$$
as t ranges over the integers. See the difference?
What response are you talking about?
Is this about the original equation or after you introduced the 'r'? I'm guessing you're asking about the necessary r's given p's and of course this is cyclic. r depends only on p mod 32 (remember how I said to find this r). You can work out the order easily enough, by hand or mathematica will do it in a second.
That's not Wilson's theorem, it's Bertrand's Postulate.
11. Aug 5, 2006
### ramsey2879
Thanks for your concise treatment. Let me re-phrase it.
Let $$p \equiv a \mod 4$$, so that for odd $$p$$, $$a$$ is either 1 or 3.
Then $$\frac{(p-a)(p-a+4)}{32}$$ is a triangular number, $$n$$.
Furthermore, in either case $$a(a-4) = -3$$, so $$32n \equiv a(a-4) \equiv -3 \mod p$$.
Last edited: Aug 5, 2006
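This mod-4 rephrasing can likewise be checked numerically for every odd p, prime or not — a quick sketch (not part of the original thread):

```python
# For odd p, let a = p mod 4 (so a is 1 or 3). Then n = (p-a)(p-a+4)/32
# is the triangular number m(m+1)/2 with m = (p-a)/4, and 32n ≡ -3 (mod p).
for p in range(3, 2000, 2):
    a = p % 4
    n = (p - a) * (p - a + 4) // 32
    m = (p - a) // 4
    assert n == m * (m + 1) // 2      # n is triangular
    assert (32 * n + 3) % p == 0      # 32n ≡ -3 (mod p)

print("holds for every odd p below 2000")
```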
12. Aug 5, 2006
### ramsey2879
You're right, I confused two separate issues.
Formerly, I was looking for "a" mod p such that 4a(8a+1) = a mod p. There is a unique solution for all odd p, which reduces to the present problem. For some even p, not all, there are solutions to this former question also.
https://fr.maplesoft.com/support/help/Maple/view.aspx?path=ProgrammingGuide%2FPreface | Preface - Maple Help
Preface
Technical computation forms the heart of problem solving in mathematics, engineering, and science. To help you, Maple™ offers a vast repository of mathematical algorithms covering a wide range of applications.
At the core of Maple, the symbolic computation engine is second to none in terms of scalability and performance. Indeed, symbolics was the core focus when Maple was first conceived at the University of Waterloo in 1980 and to this day Maple continues to be the benchmark software for symbolic computing.
Together with a large repository of numeric functionality, including industry-standard libraries such as the Intel® Math Kernel Library (MKL), Automatically Tuned Linear Algebra Software (ATLAS), and the C Linear Algebra PACKage (CLAPACK), as well as a broad selection of routines from the Numerical Algorithms Group (NAG®) libraries, you can rely on Maple to support you across many domains and applications. Using its unique hybrid technology, Maple integrates the symbolic and numeric worlds to solve diverse problems more efficiently and with higher accuracy.
The Maple user interface allows you to harness all this computational power by using context-sensitive menus, task templates, and interactive assistants. The first steps are intuitively easy to use and quickly lead you into the captivating, creative, and dynamic world of Maple.
As you get more proficient, you will want to explore more deeply and directly access all of the computational power available to you. You can accomplish this through the Maple programming language. Combining elements from procedural languages (such as Pascal), functional languages (such as Lisp) and object-oriented languages (such as Java™ ), Maple provides you with an exceptionally simple yet powerful language to write your own programs. High-level constructs such as map allow you to express in a single statement what would take ten lines of code in a language like C.
Maple allows you to quickly focus and reliably solve problems with easy access to over 5000 algorithms and functions developed over 30 years of cutting-edge research and development.
Maple's user community is now over two million people. Together we have built large collections of Maple worksheets and Maple programs, much of which is freely available on the web for you to reuse or learn from. The majority of the mathematical algorithms you find in Maple today are written in the Maple Programming Language. As a Maple user, you write programs using the same basic tools that the Maple developers themselves use. Moreover you can easily view most of the code in the Maple library and you can even extend the Maple system, tying your programs in with existing functionality.
This guide will lead you from your first steps in Maple programming to writing sophisticated routines and packages, allowing you to tackle problems in mathematics, engineering, and science effectively and efficiently. You will quickly progress towards proficiency in Maple programming, allowing you to harness the full power of Maple.
Have fun!
Audience
This guide provides information for users who are new to Maple programming, as well as experienced Maple programmers. Before reading this guide, you should be familiar with the following.
• The Maple help system
• How to use Maple interactively
• The Maple User Manual
https://math.stackexchange.com/questions/2422951/is-there-a-proof-concerning-the-absolute-distance-of-a-goldbach-pair-from-a-give | # Is there a proof concerning the absolute distance of a Goldbach pair from a given number?
I am looking at the Goldbach Conjecture because I think it's interesting. One thing I've noticed using brute force programming methods is this:
It seems, at least up to numbers with 6 or 7 digits, that given two primes that form a "Goldbach pair", the average relative distance of these two numbers from the number N/2 appears to be N/4.
Let me rephrase this a few ways for clarity, and provide some examples.
Consider the Goldbach pairs for the number (N=8). They are (3,5), and that's it. There's only one pair for (N=8). The numbers "3" and "5" are both (N/2)+1/4*(N/2) and (N/2)-1/4*(N/2). Consider the Goldbach pairs for the number (N=10). They are (3,7) and (5,5). For the purposes of what I am saying here, discount all instances where the Goldbach pairs consist of two of the same numbers. So exclude (5,5) from what I am saying. The numbers "3" and "7" are both (N/2)+2/5*(N/2) and (N/2)-2/5*(N/2).
All Goldbach pairs will be (N/2)+C*(N/2) and (N/2)-C*(N/2)
What I am finding using my program is the results for C seem to tend toward .5 or 1/2 "on average".
Is there a proof that C must be this way? Is there a good explanation for this?
Basically what it seems is that given a random Goldbach pair, it has the same probability of being 1 away from a given N/2 as it does of being any other distance from N/2, which also seems relevant to the twin primes conjecture.
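A minimal sketch of the experiment described (my own reconstruction in Python, not the poster's program): for each Goldbach pair (p, N-p) with p < N/2, write p = (N/2)(1-C), record C, and average over pairs, excluding the symmetric pair (N/2, N/2).

```python
# Reconstruction of the experiment (assumed details: the sieve limit and
# the exclusion of the symmetric pair), not the poster's actual code.

def primes_upto(n):
    """Boolean sieve: sieve[i] is True iff i is prime."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sieve

def mean_C(N, sieve):
    """Average C over Goldbach pairs (p, N-p) with p < N/2."""
    half = N / 2
    cs = [(half - p) / half
          for p in range(3, N // 2) if sieve[p] and sieve[N - p]]
    return sum(cs) / len(cs) if cs else None

sieve = primes_upto(100000)
print(mean_C(8, sieve))        # 0.25, matching the (3, 5) example
print(mean_C(100000, sieve))   # the question reports averages near 0.5
```

(The comments below the question suggest the average should in fact skew slightly towards 1, less so as N grows, so the printed value is worth comparing at several N.)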
• you can connect the twin prime conjecture to goldbach as a statement of things like 12m = (6m+1) + (6m-1) infinitely often. also I think this might have to do with expected value. etc. we do know that the distance has to be equal to both parts of any pair. – user451844 Sep 9 '17 at 18:21
• The average $C$ you have found is what you would also expect if your experiment was, "take two random odd numbers (no matter whether they are prime or not) that sum to $N$". So what you have observed is that restricting to primes does not appear to cause a new pattern to appear -- or at least not one that can be observed by your methods. This absence of a pattern does not sound like something one would expect to be able to prove at our current state of understanding. – Henning Makholm Sep 9 '17 at 18:27
• The random model for the primes is : The sequence of random variables $X_n = 1_{n \text{ is prime}}$ is independent and $P[X_n = 1] = \frac{1}{\log n},\ P[X_n = 1]+P[X_n=0]=1$. Thus on average you'll find a "large prime pair" $n-m,n+m$ with $m \sim \frac{c}{\log^2 n}$ ($c$ some constant) and another "small prime pair" $m_2,2n-m_2$ with $m_2 \sim \frac{c_2}{\log n}$. To make this probabilistically rigorous, we should also look at the variance. – reuns Sep 9 '17 at 18:30
• In fact, much sharper versions of the Goldbach conjecture (essentially stating that we can choose a sum with one of the primes "small") are believed to be true, but a proof that such a version actually is true would imply the Goldbach-conjecture itself and it would not be open anymore. Statistical heuristics however support the sharper conjectures which is the reason why almost all (or all?) mathematicians believe that Goldbach's conjecture is true. – Peter Sep 9 '17 at 18:30
• Plotting the distribution that follows from @reuns's model, it looks like $C$ ought to skew slightly towards $1$, but less so the larger $N$ becomes. – Henning Makholm Sep 9 '17 at 18:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707272410392761, "perplexity": 469.6611107213873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257259.71/warc/CC-MAIN-20190523143923-20190523165923-00337.warc.gz"} |
http://www.scholarpedia.org/article/Bayesian_statistics | # Bayesian statistics
Post-publication activity
Curator: David Spiegelhalter
Bayesian statistics is a system for describing epistemological uncertainty using the mathematical language of probability. In the 'Bayesian paradigm,' degrees of belief in states of nature are specified; these are non-negative, and the total belief in all states of nature is fixed to be one. Bayesian statistical methods start with existing 'prior' beliefs, and update these using data to give 'posterior' beliefs, which may be used as the basis for inferential decisions.
### Background
In 1763, Thomas Bayes published a paper on the problem of induction, that is, arguing from the specific to the general. In modern language and notation, Bayes wanted to use Binomial data comprising $$r$$ successes out of $$n$$ attempts to learn about the underlying chance $$\theta$$ of each attempt succeeding. Bayes' key contribution was to use a probability distribution to represent uncertainty about $$\theta\ .$$ This distribution represents 'epistemological' uncertainty, due to lack of knowledge about the world, rather than 'aleatory' probability arising from the essential unpredictability of future events, as may be familiar from games of chance.
Modern 'Bayesian statistics' is still based on formulating probability distributions to express uncertainty about unknown quantities. These can be underlying parameters of a system (induction) or future observations (prediction).
### Bayes' Theorem
In its raw form, Bayes' Theorem is a result in conditional probability, stating that for two random quantities $$y$$ and $$\theta\ ,$$ $p(\theta|y) = p(y|\theta) p(\theta) / p(y),$
where $$p(\cdot)$$ denotes a probability distribution, and $$p(\cdot|\cdot)$$ a conditional distribution. When $$y$$ represents data and $$\theta$$ represents parameters in a statistical model, Bayes Theorem provides the basis for Bayesian inference. The 'prior' distribution $$p(\theta)$$ (epistemological uncertainty) is combined with 'likelihood' $$p(y|\theta)$$ to provide a 'posterior' distribution $$p(\theta|y)$$ (updated epistemological uncertainty): the likelihood is derived from an aleatory sampling model $$p(y|\theta)$$ but considered as function of $$\theta$$ for fixed $$y\ .$$
While the theorem itself is innocuous, practical use of the Bayesian approach requires consideration of complex practical issues, including the source of the prior distribution, the choice of a likelihood function, computation and summary of the posterior distribution in high-dimensional problems, and making a convincing presentation of the analysis.
Bayes theorem can be thought of as a way of coherently updating our uncertainty in the light of new evidence. The use of a probability distribution as a 'language' to express our uncertainty is not an arbitrary choice: it can in fact be determined from deeper principles of logical reasoning or rational behavior; see Jaynes (2003) or Lindley (1953). In particular, De Finetti (1937) showed that making the qualitative assumption of exchangeability of binary observations (i.e. that their joint distribution is unaffected by label-permutation) is equivalent to assuming they are each independent conditional on some unknown parameter $$\theta\ ,$$ where $$\theta$$ has a prior distribution and is the limiting frequency with which the events occur.
### Use of Bayes' Theorem: a simple example
Figure 1: Prior, likelihood and posterior distributions for $$\theta\ ,$$ the rate of infections per 10,000 bed-days. The posterior distribution is a formal compromise between the likelihood, summarizing the evidence in the data alone, and the prior distribution, which summarizes external evidence which suggested higher rates.
Suppose a hospital has around 200 beds occupied each day, and we want to know the underlying risk that a patient will be infected by MRSA (methicillin-resistant Staphylococcus aureus). Looking back at the first six months of the year, we count $$y=$$ 20 infections in 40,000 bed-days. A simple estimate of the underlying risk $$\theta$$ would be 20/40,000 $$=$$ 5 infections per 10,000 bed-days. This is also the maximum-likelihood estimate, if we assume that the observation $$y$$ is drawn from a Poisson distribution with mean $$\theta N$$ where $$N = 4$$ is the number of bed-days/$$10,000,$$ so that $p(y|\theta) = (\theta N)^y e^{-\theta N}/y!\ .$
However, other evidence about the underlying risk may exist, such as the previous year's rates or rates in similar hospitals which may be included as part of a hierarchical model (see below). Suppose this other information, on its own, suggests plausible values of $$\theta$$ of around 10 per 10,000, with 95% of the support for $$\theta$$ lying between 5 and 17. This judgement about $$\theta$$ may be expressed as a prior probability distribution. Say, for convenience, the Gamma$$(a,b)$$ family of distributions is chosen to formally describe our knowledge about $$\theta\ .$$ This family has density $p(\theta) = b^a \theta^{a-1}e^{-b\theta}/\Gamma(a)\ ;$ choosing $$a=10$$ and $$b=1$$ gives a prior distribution with appropriate properties, as shown in Figure 1.
Figure 1 also shows a density proportional to the likelihood function, under an assumed Poisson model. Using Bayes Theorem, the posterior distribution $$p(\theta|y)$$ is $\propto \theta^y e^{-\theta N} \theta^{a-1}e^{-b\theta} \propto \theta^{y+a-1}e^{-\theta (N+b)}\ ,$ i.e. a Gamma$$(y+a,N+b)$$ distribution - this closed-form posterior, within the same parametric family as the prior, is an example of a conjugate Bayesian analysis. Figure 1 shows that this posterior is primarily influenced by the likelihood function but is 'shrunk' towards the prior distribution to reflect that the expectation based on external evidence was of a higher rate than that actually observed. This can be thought of as an automatic adjustment for 'Regression to the mean', in that the prior distribution will tend to counteract chance highs or lows in the data.
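The conjugate update above is one line of arithmetic; a plain-Python sketch (my own, not part of the article) makes the shrinkage explicit:

```python
# Gamma-Poisson conjugate update for the MRSA example (a sketch; the
# numbers a, b, y, N are the ones used in the text above).

a, b = 10, 1     # prior Gamma(shape a, rate b): mean a/b = 10 per 10,000
y, N = 20, 4     # 20 infections in 4 units of 10,000 bed-days

a_post, b_post = a + y, b + N     # posterior is Gamma(a + y, b + N)
post_mean = a_post / b_post       # 6.0: between the MLE and the prior mean
post_sd = a_post ** 0.5 / b_post  # about 1.1
mle = y / N                       # 5.0, the likelihood-only estimate
print(post_mean, post_sd, mle)
```

The posterior mean of 6.0 sits between the data estimate (5.0) and the prior mean (10), illustrating the "shrinkage towards the prior" shown in Figure 1.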
## Prior distributions
The prior distribution is central to Bayesian statistics and yet remains controversial unless there is a physical sampling mechanism to justify a choice of $$p(\theta)\ .$$ One option is to seek 'objective' prior distributions that can be used in situations where judgemental input is supposed to be minimized, such as in scientific publications. While progress in Objective Bayes methods has been made for simple situations, a universal theory of priors that represent zero or minimal information has been elusive.
A complete alternative is the fully subjectivist position, which compels one to elicit priors on all parameters based on the personal judgement of appropriate individuals. A pragmatic compromise recognizes that Bayesian statistical analyses must usually be justified to external bodies and therefore the prior distribution should, as far as possible, be based on convincing external evidence or at least be guaranteed to be weakly informative: of course, exactly the same holds for the choice of functional form for the sampling distribution which will also be a subject of judgement and will need to be justified. Bayesian analysis is perhaps best seen as a process for obtaining posterior distributions or predictions based on a range of assumptions about both prior distributions and likelihoods: arguing in this way, sensitivity analysis and reasoned justification for both prior and likelihood become vital.
Sets of prior distributions can themselves share unknown parameters, forming hierarchical models. These feature strongly within applied Bayesian analysis and provide a powerful basis for pooling evidence from multiple sources in order to reach more precise conclusions. Essentially a compromise is reached between the two extremes of assuming the sources are estimating (a) precisely the same, or (b) totally unrelated, parameters. The degree of pooling is itself estimated from the data according to the similarity of the sources, but this does not avoid the need for careful judgement about whether the sources are indeed exchangeable, in the sense that we have no external reasons to believe that certain sources are systematically different from others.
## Prediction
One of the strengths of the Bayesian paradigm is its ease in making predictions. If current uncertainty about $$\theta$$ is summarized by a posterior distribution $$p(\theta|y)\ ,$$ a predictive distribution for any quantity $$z$$ that depends on $$\theta$$ through a sampling distribution $$p(z|\theta)$$ can be obtained as follows; $p(z|y) = \int p(z|\theta) p(\theta|y)\,\,d\theta$ provided that $$y$$ and $$z$$ are conditionally independent given $$\theta\ ,$$ which will generally hold except in time series or spatial models.
In the MRSA example above, suppose we wanted to predict the number of infections $$z$$ over the next six months, or 40,000 bed-days. This prediction is given by $p(z|y) = \int \frac{(\theta N)^z e^{-\theta N}}{z!} \,\,\, \frac{(N+b)^{y+a} \theta^{y+a-1} e^{-\theta (N+b)}}{\Gamma(y+a)} \,\,d\theta = \frac{\Gamma(z+y+a)}{\Gamma(y+a)z!} p^{y+a}(1-p)^z\ ,$ where $$p = (N+b)/(2N+b)\ .$$ This Negative Binomial predictive distribution for $$z$$ is shown in Figure 2.
Figure 2: Predictive distribution for number of infections in the next six months, expressed as a Negative Binomial$$(a+y,\frac{b+N}{b+2N})$$ distribution with $$a =$$ 10, $$b =$$ 1, $$y=$$ 20, $$N=$$4. The mean is 24 and the standard deviation is about 6.6, and the probability that there are more than 20 infections is about 70%. Essentially, more infections are predicted for the second six months, because external evidence suggests the observations were lucky in the first half of the year.
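The predictive distribution can be evaluated directly; the sketch below (standard library only, my own code rather than the article's) computes its mean and the probability of more than 20 further infections.

```python
# Negative Binomial predictive from the derivation above: r = y + a
# "successes", p = (N + b)/(2N + b); here r = 30 and p = 5/9.
from math import exp, lgamma, log

a, b, y, N = 10, 1, 20, 4
r = y + a                      # 30
p = (N + b) / (2 * N + b)      # 5/9

def pred_pmf(z):
    # p(z|y) = Gamma(z + r) / (Gamma(r) z!) * p^r * (1 - p)^z
    return exp(lgamma(z + r) - lgamma(r) - lgamma(z + 1)
               + r * log(p) + z * log(1 - p))

mean = sum(z * pred_pmf(z) for z in range(500))   # r(1-p)/p = 24
tail = sum(pred_pmf(z) for z in range(21, 500))   # P(more than 20 cases)
print(mean, tail)
```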
## Making Bayesian Decisions
For inference, a full report of the posterior distribution is the correct and final conclusion of a statistical analysis. However, this may be impractical, particularly when the posterior is high-dimensional. Instead, posterior summaries are commonly reported, for example the posterior mean and variance, or particular tail areas. If the analysis is performed with the goal of making a specific decision, measures of utility, or loss functions can be used to derive the posterior summary that is the 'best' decision, given the data.
In Decision Theory, the loss function describes how bad a particular decision would be, given a true state of nature. Given a particular posterior, the Bayes rule is the decision which minimizes the expected loss with respect to that posterior. If a rule is admissible (meaning that no other rule does at least as well for every state of nature and strictly better for at least one), it can be shown, under regularity conditions, to be a Bayes rule for some proper prior and utility function, or a limit of such rules.
Many intuitively-reasonable summaries of posteriors can also be motivated as Bayes rules. The posterior mean for some parameter $$\theta$$ is the Bayes rule when the loss function is the square of the distance from $$\theta$$ to the decision. As noted, for example, by Schervish (1995), quantile-based credible intervals can be justified as a Bayes rule for a bivariate decision problem, and Highest Posterior Density intervals can be justified as a Bayes rule for a set-valued decision problem.
As a specific example, suppose we had to provide a point prediction for the number of MRSA cases in the next 6 months. For every case that we over-estimate, we will lose 10 units of wasted resources, but for every case that we under-estimate we will lose 50 units through having to make emergency provision. Our selected estimate is that $$t$$ which will minimise the expected total cost, given by $\sum_{z=0}^{t-1} 10(t-z)p(z|y) + \sum_{z=t+1}^\infty 50(z-t)p(z|y)$
The optimal choice of $$t$$ can be calculated to be 30, considerably more than the expected value 24, reflecting our fear of under-estimation.
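A sketch of the calculation (my own code, matching the loss function above; pred_pmf is the Negative Binomial predictive from the example, with $$r = y+a = 30$$ and $$p = 5/9$$):

```python
# Minimise the expected loss over integer point predictions t.
from math import exp, lgamma, log

r, p = 30, 5 / 9   # predictive Negative Binomial from the MRSA example

def pred_pmf(z):
    return exp(lgamma(z + r) - lgamma(r) - lgamma(z + 1)
               + r * log(p) + z * log(1 - p))

def expected_loss(t, zmax=500):
    # 10 units per over-estimated case, 50 units per under-estimated case
    return sum((10 * (t - z) if z < t else 50 * (z - t)) * pred_pmf(z)
               for z in range(zmax))

best = min(range(60), key=expected_loss)
print(best)   # 30, as in the text: well above the predictive mean of 24
```

Because under-estimation is five times as costly as over-estimation, the optimal point prediction is roughly the 5/6 quantile of the predictive distribution rather than its mean.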
## Computation for Bayesian statistics
Bayesian analysis requires evaluating expectations of functions of random quantities as a basis for inference, where these quantities may have posterior distributions which are multivariate or of complex form, or often both. This meant that for many years Bayesian statistics was essentially restricted to conjugate analysis, where the mathematical forms of the prior and likelihood are jointly chosen to ensure that the posterior may be evaluated with ease. Numerical integration methods based on analytic approximations or quadrature were developed in the 1970s and 1980s with some success, but a revolutionary change occurred in the early 1990s with the adoption of indirect methods, notably Markov chain Monte Carlo (MCMC).
### The Monte Carlo method
Any posterior distribution $$p(\theta|y)$$ may be approximated by taking a very large random sample of realizations of $$\theta$$ from $$p(\theta|y)\ ,$$ and approximating properties of $$p(\theta|y)$$ by the corresponding summaries of the realizations. For example, the posterior mean and variance of $$\theta$$ may be approximated by the mean and variance of a large number of realizations from $$p(\theta|y)\ .$$ Similarly, quantiles of the realizations estimate quantiles of the posterior, and the mode of a smoothed histogram of the realizations may be used to estimate the posterior mode.
Samples from the posterior can be generated in several ways, without exact knowledge of $$p(\theta|y)\ .$$ Direct methods include rejection sampling, which generates independent proposals for $$\theta\ ,$$ and accepts them at a rate whereby those retained are proportional to the desired posterior. Importance sampling can also be used to numerically evaluate relevant integrals; by appropriately weighting independent samples from a user-chosen distribution on $$\theta\ ,$$ properties of the posterior $$p(\theta|y)$$ can be estimated.
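As an illustration (my own sketch, reusing the Gamma(30, rate 5) posterior from the MRSA example, which can be sampled directly), posterior summaries can be read off a large sample:

```python
# Plain Monte Carlo: summarise the posterior by summarising draws from it.
import random

random.seed(1)
draws = [random.gammavariate(30, 1 / 5) for _ in range(100_000)]

post_mean = sum(draws) / len(draws)   # close to the exact value 6.0
draws.sort()
ci95 = (draws[2_500], draws[97_500])  # approximate central 95% interval
print(post_mean, ci95)
```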
### Markov Chain Monte Carlo (MCMC)
Realizations from the posterior used in Monte Carlo methods need not be independent, or generated directly. If the conditional distribution of each parameter is known (conditional on all other parameters), one simple way to generate a possibly-dependent sample from the posterior is via Gibbs Sampling. This algorithm generates one parameter at a time; as it sequentially updates each parameter, the entire parameter space is explored. It is appropriate to start from multiple starting points in order to check convergence, and in the long run, the 'chains' of realizations produced will reflect the posterior of interest.
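A toy Gibbs sampler (an illustrative target of my choosing, not one from the article): a bivariate normal with correlation 0.8, sampled using only its two full conditionals.

```python
# Gibbs sampling sketch: alternate draws from the full conditionals
# x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
import random

random.seed(3)
rho = 0.8
s = (1 - rho ** 2) ** 0.5   # conditional standard deviation

x = y = 0.0
xs = []
for _ in range(50_000):
    x = random.gauss(rho * y, s)   # draw x given the current y
    y = random.gauss(rho * x, s)   # draw y given the new x
    xs.append(x)

mean_x = sum(xs) / len(xs)                # marginal mean, about 0
var_x = sum(v * v for v in xs) / len(xs)  # marginal variance, about 1
print(mean_x, var_x)
```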
More general versions of the same argument include the Metropolis-Hastings algorithm; developing practical algorithms to approximate posterior distributions for complex problems remains an active area of research.
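A random-walk Metropolis sketch (my own illustration; it targets the Gamma(30, rate 5) posterior from the MRSA example, using the density only up to a normalizing constant, which is all the algorithm requires):

```python
# Random-walk Metropolis: propose symmetric steps and accept with
# probability min(1, pi(proposal)/pi(current)).
import math, random

random.seed(2)

def log_post(theta):
    # log of the Gamma(30, rate 5) density, up to an additive constant
    return 29 * math.log(theta) - 5 * theta if theta > 0 else -math.inf

theta, chain = 5.0, []
for _ in range(50_000):
    prop = theta + random.gauss(0, 1.0)   # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                      # accept; otherwise keep theta
    chain.append(theta)

est = sum(chain[5_000:]) / len(chain[5_000:])   # discard a burn-in
print(est)   # close to the exact posterior mean of 6.0
```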
## Applications of Bayesian statistical methods
Explicitly Bayesian statistical methods tend to be used in three main situations. The first is where one has no alternative but to include quantitative prior judgments, due to lack of data on some aspect of a model, or because the inadequacies of some evidence have to be acknowledged through making assumptions about the biases involved. These situations can occur when a policy decision must be made on the basis of a combination of imperfect evidence from multiple sources, an example being the encouragement of Bayesian methods by the Food and Drug Administration (FDA) division responsible for medical devices.
The second situation is with moderate-size problems with multiple sources of evidence, where hierarchical models can be constructed on the assumption of shared prior distributions whose parameters can be estimated from the data. Common application areas include meta-analysis, disease mapping, multi-centre studies, and so on. With weakly-informative prior distributions the conclusions may often be numerically similar to classic techniques, even if the interpretations may be different.
The third area concerns where a huge joint probability model is constructed, relating possibly thousands of observations and parameters, and the only feasible way of making inferences on the unknown quantities is through taking a Bayesian approach: examples include image processing, spam filtering, signal analysis, and gene expression data. Classical model-fitting fails, and MCMC or other approximate methods become essential.
There is also extensive use of Bayesian ideas of parameter uncertainty but without explicit use of Bayes theorem. If a deterministic prediction model has been constructed, but some of the parameter inputs are uncertain, then a joint prior distribution can be placed on those parameters and the resulting uncertainty propagated through the model, often using Monte Carlo methods, to produce a predictive probability distribution. This technique is used widely in risk analysis, health economic modelling and climate projections, and is sometimes known as probabilistic sensitivity analysis.
Another setting where the 'updating' inherent in the Bayesian approach is suitable is in machine-learning; simple examples can be found in modern software for spam filtering, suggesting which books or movies a user might enjoy given his or her past preferences, or ranking schemes for millions of on-line gamers. Formal inference may only be approximately carried out, but the Bayesian perspective allows a flexible and adaptive response to each additional item of information.
## Open Areas in Bayesian Statistics
The philosophical rationale for using Bayesian methods was largely established and settled by the pioneering work of De Finetti, Savage, Jaynes and Lindley. However, widespread concerns remain over how to apply these methods in practice, where sensitivity to assumptions can detract from the rhetorical force of the Bayesian approach's claim to epistemological validity.
### Hypothesis testing and model choice
Jeffreys (1939) developed a procedure for using data $$y$$ to test between alternative scientific hypotheses $$H_0$$ and $$H_1\ ,$$ by computing the Bayes factor $$p(y|H_0)/p(y|H_1)\ .$$ He suggested thresholds for strength of evidence for or against the hypotheses. The Bayes factor can be combined with the prior odds $$p(H_0)/p(H_1)$$ to give posterior probabilities of each hypothesis, that can be used to weight predictions in Bayesian Model Averaging (BMA). Although BMA can be an effective pragmatic device for prediction, the use of posterior model probabilities for scientific hypothesis-testing is controversial even among the Bayesian community, for both philosophical and practical reasons: first, it may not make sense to talk of probabilities of hypotheses that we know are not strictly 'true', and second, the calculation of the Bayes factor can be extremely sensitive to apparently innocuous prior assumptions about parameters within each hypothesis. For example, the ordinate of a widely dispersed uniform prior distribution would be irrelevant for estimation within a single model, but becomes crucial when comparing models.
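The sensitivity to prior dispersion can be seen in a two-line calculation (my own illustration of the effect, often associated with the Jeffreys-Lindley paradox; the model is $$y \sim N(\theta, 1)$$ with $$H_0: \theta = 0$$ and, under $$H_1$$, $$\theta \sim \mbox{Uniform}(-L, L)$$):

```python
# Bayes factor p(y|H0)/p(y|H1): widening the prior under H1 (larger L)
# inflates the evidence for H0 without bound, even though L barely
# matters for estimation within H1.
from math import erf, exp, pi, sqrt

def bf01(y, L):
    m0 = exp(-y * y / 2) / sqrt(2 * pi)   # p(y | H0), a N(0, 1) density
    # p(y | H1) = (Phi(L - y) - Phi(-L - y)) / (2L), Phi the normal cdf
    m1 = 0.5 * (erf((L - y) / sqrt(2))
                - erf((-L - y) / sqrt(2))) / (2 * L)
    return m0 / m1

for L in (1, 10, 100):
    print(L, bf01(1.0, L))   # roughly 1, 4.8 and 48: H0 'wins' by default
```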
It has also been argued that model choice is not necessarily the same as identifying the 'true' model, particularly as in most circumstances no true model exists and so posterior model probabilities are not interpretable or useful. Instead, other criteria, such as the Akaike Information Criterion or the Deviance Information Criterion, are concerned with selecting models that are expected to make good short-term predictions.
### Robustness and reporting
In the uncommon situation that the data are extensive and of simple structure, the prior assumptions will be unimportant and the assumed sampling model will be uncontroversial. More generally we would like to report that any conclusions are robust to reasonable changes in both prior and assumed model: this has been termed inference robustness to distinguish it from the frequentist idea of robustness of procedures when applied to different data. (Frequentist statistics uses the properties of statistical procedures over repeated applications to make inference based on the data at hand)
Bayesian statistical analysis can be complex to carry out, and explicitly includes both qualitative and quantitative judgement. This suggests the need for agreed standards for analysis and reporting, but these have not yet been developed. In particular, audiences should ideally fully understand the contribution of the prior distribution to the conclusions, the reasonableness of the prior assumptions, the robustness to alternative models and priors, and the adequacy of the computational methods.
### Model criticism
In the archetypal Bayesian paradigm there is no need for testing whether a single model adequately fits the data, since we should be always comparing two competing models using hypothesis-testing methods. However there has been recent growth in techniques for testing absolute adequacy, generally involving the simulation of replicate data and checking whether specific characteristics of the observed data match those of the replicates. Procedures for model criticism in complex hierarchical models are still being developed. It is also reasonable to check there is not strong conflict between different data sources or between prior and data, and general measures of conflict in complex models is also a subject of current research.
## Connections and comparisons with other schools of statistical inference
At a simple level, 'classical' likelihood-based inference closely resembles Bayesian inference using a flat prior, making the posterior and likelihood proportional. However, this underestimates the deep philosophical differences between Bayesian and frequentist inference; Bayesians make statements about the relative evidence for parameter values given a dataset, while frequentists compare the relative chance of datasets given a parameter value.
The incompatibility of these two views has long been a source of contention between different schools of statisticians; there is little agreement over which is 'right', 'most appropriate' or even 'most useful'. Nevertheless, in many cases, estimates, intervals, and other decisions will be extremely similar for Bayesian and frequentist analyses. Bernstein von Mises Theorems give general results proving approximate large-sample agreement between Bayesian and frequentist methods, for large classes of standard parametric and semi-parametric models. A notable exception is in hypothesis testing, where default Bayesian and frequentist methods can give strongly discordant conclusions. Also, establishing Bayesian interpretations of non-model based frequentist analyses (such as Generalized Estimating Equations) remains an open area.
Some qualities sought in non-Bayesian inference (such as adherence to the likelihood principle and exploitation of sufficiency) are natural consequences of following a Bayesian approach. Also, many Bayesian procedures can, quite straightforwardly, be calibrated to have desired frequentist properties, such as intervals with 95% coverage. This can be useful when justifying Bayesian methods to external bodies such as regulatory agencies, and we might expect an increased use of 'hybrid' techniques in which a Bayesian interpretation is given to the inferences, but the long-run behaviour of the procedure is also taken into account.
## References
• Thomas Bayes (1763), "An Essay towards solving a Problem in the Doctrine of Chances" Phil. Trans. Royal Society London
• B. de Finetti, La Prévision: ses lois logiques, ses sources subjectives (1937) Annales de l'Institut Henri Poincaré, 7: 1-68. Translated as Foresight: Its Logical Laws, Its Subjective Sources, in Kyburg, H. E. and Smokler, H. E. eds., (1964). Studies in Subjective Probability. Wiley, New York, 91-158
• E.T. Jaynes Probability Theory: The Logic of Science (2003) Cambridge University Press, Cambridge, UK
• H. Jeffreys (1939) Theory of Probability Oxford, Clarendon Press
• D.V. Lindley: Statistical Inference (1953) Journal of the Royal Statistical Society, Series B, 16: 30-76
• Schervish, M. J. (1995) Theory of Statistics. Springer-Verlag, New York.
• Bernardo and Smith (1994) Bayesian Theory, Wiley
• Berger (1993) Statistical Decision Theory and Bayesian Analysis, Springer-Verlag
• Carlin and Louis (2008) Bayesian Methods for Data Analysis (Third Edition) Chapman and Hall/CRC
• Gelman, Carlin, Stern and Rubin (2003) Bayesian Data Analysis (Second Edition) Chapman and Hall/CRC
• Gelman and Hill (2007) Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press
• Lindley (1991) Making Decisions (2nd Edition) Wiley
• Robert (2007) The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation (Second Edition), Springer-Verlag | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905261218547821, "perplexity": 722.2320045654034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299894.32/warc/CC-MAIN-20220129002459-20220129032459-00667.warc.gz"} |
http://libros.duhnnae.com/2017/aug6/15030048389-High-sensitivity-magnetic-imaging-using-an-array-of-spins-in-diamond-Condensed-Matter-Mesoscale-and-Nanoscale-Physics.php | High sensitivity magnetic imaging using an array of spins in diamond - Condensed Matter > Mesoscale and Nanoscale Physics
Abstract: We present a solid state magnetic field imaging technique using a two-dimensional array of spins in diamond. The magnetic sensing spin array is made of nitrogen-vacancy (NV) centers created at shallow depths. Their optical response is used for measuring external magnetic fields in close proximity. Optically detected magnetic resonance (ODMR) is read out from a 60×60 $\mu$m field of view in a multiplexed manner using a CCD camera. We experimentally demonstrate full two-dimensional vector imaging of the magnetic field produced by a pair of current-carrying micro-wires. The presented widefield NV magnetometer offers, in addition to its high magnetic sensitivity of 20 nT/$\sqrt{Hz}$ and vector reconstruction, an unprecedented spatio-temporal resolution and functionality at room temperature.
Author: Steffen Steinert, Florian Dolde, Philipp Neumann, Andrew Aird, Boris Naydenov, Gopalakrishnan Balasubramanian, Fedor Jelezko, Joe
Source: https://arxiv.org/
https://crypto.stackexchange.com/questions/5271/finding-partial-pre-image-of-md5-hash/5317 | # Finding partial pre-image of MD5 hash
I have the following requirement for hashing using MD5.
H(A,B,C,X);
Where values A,B & C are given. However X is not given.
I would like to find out what value of X would give a hash beginning with 32 '1' bits,
meaning H(A,B,C,X) begins with 32 '1' bits.
I can brute force by testing all kinds of characters for X until I get 32 '1' bits.
However, is there a faster way rather than doing this?
• Short answer: No, there isn't. So use brute-force. But if you use a GPU it'll just take a few seconds to find such an $X$. – CodesInChaos Nov 4 '12 at 17:00
• Are A, B, C, X arbitrary length strings, or is there a length limit? – Paŭlo Ebermann Nov 4 '12 at 20:09
• A,B,C are fixed strings where as X can be any characters up to 100 characters of length – null Nov 5 '12 at 1:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28410622477531433, "perplexity": 1069.4522938702637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783000.84/warc/CC-MAIN-20200128184745-20200128214745-00340.warc.gz"} |
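The brute-force search discussed above can be sketched as follows. The inputs are hypothetical byte strings; the demo uses an 8-bit target so it finishes in a fraction of a second, whereas the real 32-bit target needs about 2^32 hashes (a few seconds on a GPU, as noted in the comments).

```python
import hashlib
from itertools import count

def find_x(a: bytes, b: bytes, c: bytes, n_bits: int = 32) -> bytes:
    """Search for X so that MD5(A || B || C || X) starts with n_bits 1-bits.

    Expected cost is about 2**n_bits MD5 evaluations; no known structural
    weakness of MD5 gives a cheaper partial preimage of this form.
    """
    fixed = hashlib.md5(a + b + c)      # hash the fixed prefix once...
    target = (1 << n_bits) - 1
    for i in count():
        h = fixed.copy()                # ...and reuse its midstate per trial
        x = str(i).encode()
        h.update(x)
        if int.from_bytes(h.digest(), "big") >> (128 - n_bits) == target:
            return x

x = find_x(b"A", b"B", b"C", n_bits=8)  # ~256 trials on average
```

Reusing the hash midstate via `copy()` is the one worthwhile micro-optimisation: the fixed part A||B||C is only processed once rather than on every candidate.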
https://www.zbmath.org/?q=an:1423.11029 | The denominators of power sums of arithmetic progressions
# zbMATH — the first resource for mathematics
The denominators of power sums of arithmetic progressions. (English) Zbl 1423.11029
The authors study the denominators of polynomials that represent the power sums of arithmetic progressions: ${\mathcal S}_{m,r}^n(x)=\sum_{k=0}^{x-1}(km+r)^n=r^n+(m+r)^n+\dots+((x-1)m+r)^n.$ They extend their earlier results for the case of power sums (when $$r=0,m=1$$). Specifically, they give a simple explicit criterion for the integrality of the coefficients of these polynomials, and show further applications to the sequence of denominators of the Bernoulli polynomials.
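The integrality question can be explored numerically. The sketch below assumes nothing beyond the definition above: it recovers the exact rational coefficients of ${\mathcal S}_{m,r}^n$ as a polynomial in $x$ by interpolating sample values, then inspects their common denominator.

```python
from fractions import Fraction
from math import lcm

def power_sum(m, r, n, x):
    """S_{m,r}^n(x) = sum_{k=0}^{x-1} (k*m + r)**n for integer x >= 0."""
    return sum((k * m + r) ** n for k in range(x))

def interp_coeffs(points):
    """Exact coefficients c[0] + c[1]*x + ... through the given points,
    via Gauss-Jordan elimination on the Vandermonde system in rationals."""
    d = len(points)
    A = [[Fraction(x) ** j for j in range(d)] + [Fraction(y)]
         for x, y in points]
    for col in range(d):
        piv = next(i for i in range(col, d) if A[i][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for i in range(d):
            if i != col and A[i][col] != 0:
                f = A[i][col]
                A[i] = [u - f * v for u, v in zip(A[i], A[col])]
    return [row[-1] for row in A]

# S_{1,0}^2(x) = 0^2 + 1^2 + ... + (x-1)^2, a degree-3 polynomial in x:
m, r, n = 1, 0, 2
coeffs = interp_coeffs([(x, power_sum(m, r, n, x)) for x in range(n + 2)])
# coeffs == [0, 1/6, -1/2, 1/3], so the common denominator here is 6
denom = lcm(*[c.denominator for c in coeffs])
```

Varying `m` and `r` in the same way is a quick way to see when the coefficients are all integral, which is exactly the situation the paper's criterion characterises.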
##### MSC:
11B25 Arithmetic progressions
11B68 Bernoulli and Euler numbers and polynomials
##### Keywords:
Bernoulli polynomial; denominator
OEIS
##### References:
[1] G. Almkvist and A. Meurman, Values of Bernoulli polynomials and Hurwitz's zeta function at rational points, C. R. Math. Acad. Sci. Soc. R. Can. 13 no. 2-3 (1991), 104-108. · Zbl 0731.11014
[2] A. Bazsó and I. Mező, On the coefficients of power sums of arithmetic progressions, J. Number Theory 153 (2015), 117-123.
[3] A. Bazsó, Á. Pintér, and H. M. Srivastava, A refinement of Faulhaber's theorem concerning sums of powers of natural numbers, Appl. Math. Lett. 25 no. 3 (2012), 486-489.
[4] H. Cohen, Number Theory, Volume II: Analytic and Modern Tools, GTM 240, Springer-Verlag, New York, 2007. · Zbl 1119.11002
[5] B. C. Kellner, On a product of certain primes, J. Number Theory 179 (2017), 126-141. · Zbl 1418.11045
[6] B. C. Kellner and J. Sondow, Power-sum denominators, Amer. Math. Monthly 124 (2017), 695-709. · Zbl 1391.11052
[7] N. E. Nørlund, Vorlesungen über Differenzenrechnung, J. Springer, Berlin, 1924.
[8] V. V. Prasolov, Polynomials, D. Leites, transl., 2nd edition, ACM 11, Springer-Verlag, Berlin, 2010.
[9] A. M. Robert, A Course in p-adic Analysis, GTM 198, Springer-Verlag, New York, 2000. · Zbl 0947.11035
[10] H. G. Senge and E. G. Straus, PV-numbers and sets of multiplicity, Period. Math. Hungar. 3 (1973), 93-100. · Zbl 0248.12004
[11] N. J. A. Sloane, ed., The On-Line Encyclopedia of Integer Sequences, http://oeis.org. · Zbl 1044.11108
[12] C. L. Stewart, On the representation of an integer in two different bases, J. Reine Angew. Math. 319 (1980), 63-72.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8582502007484436, "perplexity": 1348.3407734703035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00017.warc.gz"} |
http://read.somethingorotherwhatever.com/entry/NationalCurveBank | # National Curve Bank
• Published in 2002
In the collections
The National Curve Bank is a resource for students of mathematics. We strive to provide features - for example, animation and interaction - that a printed page cannot offer. We also include geometrical, algebraic, and historical aspects of curves, the kinds of attributes that make the mathematics special and enrich classroom learning.
### BibTeX entry
@article{NationalCurveBank,
title = {National Curve Bank},
abstract = {The National Curve Bank is a resource for students of mathematics. We strive to provide features - for example, animation and interaction - that a printed page cannot offer. We also include geometrical, algebraic, and historical aspects of curves, the kinds of attributes that make the mathematics special and enrich classroom learning.},
url = {http://web.calstatela.edu/curvebank/home/home.htm},
year = 2002,
author = {Shirley B. Gray and Stewart Venit and Russ Abbott},
comment = {},
urldate = {2018-07-02},
collections = {Lists and catalogues,Geometry}
} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5732700228691101, "perplexity": 5886.354071085204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490107.12/warc/CC-MAIN-20190219122312-20190219144312-00636.warc.gz"} |
https://scicomp.stackexchange.com/questions?tab=unanswered&page=1 | # All Questions
2,032 questions with no upvoted or accepted answers
### How to Run MPI-3.0 in shared memory mode like OpenMP
I am parallelizing code to numerically solve a 5 Dimensional population balance model. Currently I have a very good MPICH2 parallelized code in FORTRAN but as we increase parameter values the arrays ...
### Comparing Jacobi and Gauss-Seidel methods for nonlinear iterations
It is well known that for certain linear systems Jacobi and Gauss-Seidel iterative methods have the same convergence behavior, e.g. Stein-Rosenberg Theorem. I am wondering if similar results exist for ...
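For the linear case the two sweeps are easy to compare directly; a minimal sketch on a strictly diagonally dominant system (where both methods are guaranteed to converge):

```python
def jacobi(A, b, x):
    """One Jacobi sweep: every component is updated from the old iterate."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel(A, b, x):
    """One Gauss-Seidel sweep: new values are used as soon as computed."""
    x = x[:]
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]

xj = xg = [0.0, 0.0, 0.0]
for _ in range(60):
    xj = jacobi(A, b, xj)
    xg = gauss_seidel(A, b, xg)

def residual(x):
    return max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
               for i in range(3))
```

For this symmetric tridiagonal example Gauss-Seidel's error contraction per sweep is the square of Jacobi's, which is the classical linear behaviour the question asks about extending to nonlinear iterations.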
### Suggestions for numerical integral over Pólya Distribution
This problem arises from a Bayesian statistical modeling project. In order to compute with my model, I need to perform an integration in which part of the integrand is the "Pólya" or "Dirichlet-...
### What are some good debugging habits for numerical simulation?
I'm currently writing a lid-driven cavity CFD code in Python. Currently, my code has some issues (values jumping near b.c.). I was wondering what are some good habits in debugging numerical codes. ...
### How to construct an effective preconditioner for this particular problem
A quick introduction to my problem I am currently developing a method for simulation of water waves in three dimensions based on potential flow theory. The computational bottleneck of the method is ...
### Fast Automatic Differentiation for numpy?
I would like to use automatic differentiation to calculate gradients to function written in numpy. I've come across a number of packages, including autograd tangent chainer But none of them seem ...
### Finding the smallest root of a function on $[0, \infty)$
I would like to find the smallest real root of a 1-D real-valued function $f(x)$ on the domain $x\in [0,\infty)$. In this problem, I can make the following guarantees on $f$: $f$ does have a root at ...
### Speed and accuracy of Strassen vs Winograd matrix multiplication algorithms
I am doing work which requires as fast matrix multiplication as possible and just want to double-check with this community that the Winograd variant of Strassen's MM algorithm is the fastest practical ...
### Eigenvalue with largest imaginary part
Iterative eigensolvers such as ARPACK, give the option to find a subset of the eigenvalues which have the largest imaginary part. My question is how do these algorithms work. As I understand it, ...
### What can be done with Finite Element Method and not with the Finite Volume Method, and vice versa?
What are some applications where you would absolutely go for either FEM, but not FVM, or vice versa? What are some applications where both methods are equally suited? I worked with the FEM so far and ...
### What is the source of the error in the Sherman-Morrison formula application?
The Sherman-Morrison formula $$(A+uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1+v^TA^{-1}u}$$ results in small errors in relation to the standard matrix inverse operation after each application, ...
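One way to see where the discrepancy comes from is to check the identity in exact rational arithmetic, where it holds with zero error; the observed drift is therefore pure floating-point rounding, amplified whenever the denominator $1+v^TA^{-1}u$ is small. A 2×2 sketch with made-up numbers:

```python
from fractions import Fraction as F

def inv2(M):
    """Exact inverse of a 2x2 matrix over the rationals."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[F(3), F(1)], [F(1), F(2)]]
u = [F(1), F(2)]
v = [F(2), F(1)]

Ainv = inv2(A)
Au = [sum(Ainv[i][k] * u[k] for k in range(2)) for i in range(2)]  # A^{-1} u
vA = [sum(v[k] * Ainv[k][j] for k in range(2)) for j in range(2)]  # v^T A^{-1}
denom = 1 + sum(v[k] * Au[k] for k in range(2))                    # 1 + v^T A^{-1} u
sm = [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(2)]
      for i in range(2)]

direct = inv2([[A[i][j] + u[i] * v[j] for j in range(2)]
               for i in range(2)])
# sm == direct exactly: the formula itself introduces no error.
```

In floating point the same computation incurs cancellation in the subtraction and the division, so repeated rank-one updates of an inverse accumulate roundoff, which is why practical codes periodically recompute the factorization from scratch.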
### Choosing how many iterations to use in VEGAS
I'm using VEGAS integration, specifically the GSL implementation, for some QCD calculations, and I've been investigating the behavior of the algorithm for various numbers of iterations in an attempt ...
### Understanding Boundary Condition in FEM
I am trying to understand Dirichlet and Neumann boundary conditions in FEM and I wanted to know if my inference is correct. To articulate my understanding, lets consider a simple case of TE and TM ...
### Can automatic differentiation be used on the parameters of an optimization problem?
If I wanted to perform an optimization using a Newton-based solver where the Hessian and gradient of a function are known analytically, and then use a package such as Adept to compute a Jacobian ...
### A Question About a Claim from 1991 Computational EM paper about the Cancellation of certain Boundary Terms
Please let me know if this is not the appropriate site for this question. I found questions regarding EFIE/MFIE/CFIE on this site, so I thought my question might fit. I am studying the paper by Putnam ...
### Symmetric sparse direct solvers in scipy
scipy.linalg.solve, in its newer versions, has a parameter assume_a that can be used to specify that the matrix $A$ is symmetric ...
https://www.science.gov/topicpages/p/pade+approximant+method.html | Sample records for pade approximant method
1. Convergence of multipoint Pade approximants of piecewise analytic functions
SciTech Connect
Buslaev, Viktor I
2013-02-28
The behaviour as n → ∞ of multipoint Pade approximants to a function which is (piecewise) holomorphic on a union of finitely many continua is investigated. The convergence of multipoint Pade approximants is proved for a function which extends holomorphically from these continua to a union of domains whose boundaries have a certain symmetry property. An analogue of Stahl's theorem is established for two-point Pade approximants to a pair of functions, either of which is a multivalued analytic function with finitely many branch points. Bibliography: 11 titles.
2. Unfolding the Second Riemann sheet with Pade Approximants: hunting resonance poles
SciTech Connect
Masjuan, Pere
2011-05-23
Based on Pade Theory, a new procedure for extracting the pole mass and width of resonances is proposed. The method is systematic and provides a model-independent treatment for the prediction and the errors of the approximation.
3. Padé approximants and their application to scattering from fluid media.
PubMed
Denis, Max; Tsui, Jing; Thompson, Charles; Chandra, Kavitha
2010-11-01
In this work, a numerical method for modeling the scattered acoustic pressure from fluid occlusions is described. The method is based on the asymptotic series expansion of the pressure expressed in terms of sound speed contrast between the host medium and entrained fluid occlusions. Padé approximants are used to extend the applicability of the result for larger values of sound speed contrast. For scattering from a circular cylinder, an improvement in convergence between the exact and numerical solutions is demonstrated. In the case of scattering from an inhomogeneous medium, a numerical solution with reduced order of Padé approximants is presented.
4. Asymptotic Pade Approximant Predictions: Up to Five Loops in QCD and SQCD
SciTech Connect
Samuel, Mark A.
2003-05-16
We use Asymptotic Pade Approximants (APAP's) to predict the four- and five-loop β functions in QCD and N = 1 supersymmetric QCD (SQCD), as well as the quark mass anomalous dimensions in Abelian and non-Abelian gauge theories. We show how the accuracy of our previous β-function predictions at the four-loop level may be further improved by using estimators weighted over negative numbers of flavours (WAPAP's). The accuracy of the improved four-loop results encourages confidence in the new five-loop β-function predictions that we present. However, the WAPAP approach does not provide improved results for the anomalous mass dimension, or for Abelian theories.
5. PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST
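The core computation both programs revolve around, the matrix exponential of the Markov model's generator, can be sketched with scaling and squaring plus a truncated Taylor series (the idea behind STEM; PAWS instead uses a Pade approximant with scaling). The 2-state fault/repair model below is hypothetical, and the sketch omits the error control a production code would have.

```python
import numpy as np

def expm_taylor(A, terms=16):
    """Scaling-and-squaring matrix exponential via a truncated Taylor series."""
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm))) if norm > 0 else 0)
    B = A / 2 ** s                      # scale so the series converges fast
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for j in range(1, terms + 1):
        term = term @ B / j
        E = E + term
    for _ in range(s):                  # undo the scaling by squaring
        E = E @ E
    return E

# Hypothetical 2-state Markov model: fault rate 2, recovery rate 1.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
P = expm_taylor(Q * 0.5)                # transition probabilities at t = 0.5
# each row of P sums to 1, as required of a stochastic matrix
```

The stiffness mentioned in the abstract (fault rates vastly smaller than recovery rates) makes the generator badly scaled, which is exactly when the scaling step and careful error control matter most.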
6. PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
7. On the use of Pade approximants to represent unsteady aerodynamic loads for arbitrarily small motions of wings
NASA Technical Reports Server (NTRS)
Vepa, R.
1976-01-01
The general behavior of unsteady airloads in the frequency domain is explained. Based on this, a systematic procedure is described whereby the airloads, produced by completely arbitrary, small, time-dependent motions of a thin lifting surface in an airstream, can be predicted. This scheme employs as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite degree polynomials in the Laplace transform variable. Although these approximations have many uses, they are proving especially valuable in the design of automatic control systems intended to modify aeroelastic behavior.
8. A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters δ_j. These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
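The second step, turning a truncated power series into a rational [L/M] Pade approximant, can be sketched generically in exact arithmetic. The exponential-series demo is only illustrative, and the sketch assumes the denominator system is nonsingular (as it is here).

```python
from fractions import Fraction

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].

    Solves sum_{j=0}^{M} b[j]*c[L+k-j] = 0 for k = 1..M with b[0] = 1
    (the denominator), then reads off a[i] = sum_j b[j]*c[i-j].
    """
    A = [[c[L + k - j] if 0 <= L + k - j < len(c) else Fraction(0)
          for j in range(1, M + 1)] + [-c[L + k]]
         for k in range(1, M + 1)]
    # Gauss-Jordan solve for b[1..M]
    for col in range(M):
        piv = next(i for i in range(col, M) if A[i][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for i in range(M):
            if i != col and A[i][col] != 0:
                f = A[i][col]
                A[i] = [u - f * v for u, v in zip(A[i], A[col])]
    b = [Fraction(1)] + [row[-1] for row in A]
    a = [sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
         for i in range(L + 1)]
    return a, b

# Demo: [2/2] Pade of exp(x) = (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
c = [Fraction(1), Fraction(1), Fraction(1, 2), Fraction(1, 6), Fraction(1, 24)]
a, b = pade(c, 2, 2)
```

Replacing the powers of epsilon in the resulting rational function by free parameters and imposing Galerkin orthogonality, as the abstract describes, then goes beyond this sketch.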
9. Constraints to Dark Energy Using PADE Parameterizations
Rezaei, M.; Malekjani, M.; Basilakos, S.; Mehrabi, A.; Mota, D. F.
2017-07-01
We put constraints on dark energy (DE) properties using PADE parameterization, and compare it to the same constraints using Chevallier-Polarski-Linder (CPL) and ΛCDM, at both the background and the perturbation levels. The DE equation-of-state parameter of the models is derived following the mathematical treatment of PADE expansion. Unlike CPL parameterization, PADE approximation provides different forms of the equation-of-state parameter that avoid the divergence in the far future. Initially we perform a likelihood analysis in order to put constraints on the model parameters using solely background expansion data, and we find that all parameterizations are consistent with each other. Then, combining the expansion and the growth rate data, we test the viability of PADE parameterizations and compare them with CPL and ΛCDM models, respectively. Specifically, we find that the growth rate of the current PADE parameterizations is lower than the ΛCDM model at low redshifts, while the differences among the models are negligible at high redshifts. In this context, we provide for the first time a growth index of linear matter perturbations in PADE cosmologies. Considering that DE is homogeneous, we recover the well-known asymptotic value of the growth index (namely $\gamma_\infty = \frac{3(w_\infty - 1)}{6 w_\infty - 5}$), while in the case of clustered DE, we obtain $\gamma_\infty \simeq \frac{3 w_\infty (3 w_\infty - 5)}{(6 w_\infty - 5)(3 w_\infty - 1)}$. Finally, we generalize the growth index analysis in the case where γ is allowed to vary with redshift, and we find that the form of γ(z) in PADE parameterization extends that of the CPL and ΛCDM cosmologies, respectively.
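The asymptotic growth indices quoted in the abstract are easy to evaluate numerically; a small sketch (note that both expressions reduce to the familiar 6/11 ≈ 0.545 at the cosmological-constant value w = -1):

```python
def gamma_inf(w):
    """Asymptotic growth index for homogeneous DE: 3(w - 1)/(6w - 5)."""
    return 3 * (w - 1) / (6 * w - 5)

def gamma_inf_clustered(w):
    """Asymptotic growth index for clustered DE:
    3w(3w - 5) / ((6w - 5)(3w - 1))."""
    return 3 * w * (3 * w - 5) / ((6 * w - 5) * (3 * w - 1))

# For a cosmological constant (w = -1) both give 6/11:
g = gamma_inf(-1.0)
gc = gamma_inf_clustered(-1.0)
```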
10. An analytic Pade-motivated QCD coupling
SciTech Connect
Martinez, H. E.; Cvetic, G.
2010-08-04
We consider a modification of the Minimal Analytic (MA) coupling of Shirkov and Solovtsov. This modified MA (mMA) coupling reflects the desired analytic properties of the space-like observables. We show that an approximation by Dirac deltas of its discontinuity function ρ is equivalent to a Pade (rational) approximation of the mMA coupling that keeps its analytic structure. We propose a modification to mMA that, as preliminary results indicate, could be an improvement in the evaluation of low-energy observables compared with other analytic couplings.
11. Semiclassical complex angular momentum theory and Pade reconstruction for resonances, rainbows, and reaction thresholds
SciTech Connect
Sokolovski, D.; Msezane, A.Z.
2004-09-01
A semiclassical complex angular momentum theory, used to analyze atom-diatom reactive angular distributions, is applied to several well-known potential (one-particle) problems. Examples include resonance scattering, rainbow scattering, and the Eckart threshold model. Pade reconstruction of the corresponding matrix elements from the values at physical (integral) angular momenta and properties of the Pade approximants are discussed in detail.
12. A three-dimensional parabolic equation model of sound propagation using higher-order operator splitting and Padé approximants.
PubMed
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
13. Renormalization group and Pade applications to perturbative and non-perturbative quantum field theory
Chishtie, Farrukh Ahmed
Padé approximants (PA) have been widely applied in practically all areas of physics. This thesis focuses on developing PA as tools for both perturbative and non-perturbative quantum field theory (QFT). In perturbative QFT, we systematically estimate higher (unknown) loop terms via the asymptotic formula devised by Samuel et al. This algorithm, generally denoted as the asymptotic Padé approximation procedure (APAP), has greatly enhanced scope when it is applied to renormalization-group-(RG-)invariant quantities. A presently-unknown higher-loop quantity can then be matched with the approximant over the entire momentum region of phenomenological interest. Furthermore, the predicted value of the RG coefficients can be compared with the RG-accessible coefficients (at the higher-loop order), allowing a clearer indication of the accuracy of the predicted RG-inaccessible term. This methodology is applied to hadronic Higgs decay rates (H → bb̄ and H → gg, both within the Standard Model and its MSSM extension), Higgs-sector cross-sections (W_L^+ W_L^- → Z_L Z_L), inclusive semileptonic b → u decays (leading to reduced theoretical uncertainties in the extraction of |Vub|), QCD (Quantum Chromodynamics) correlation functions (scalar-fermionic, scalar-gluonic and vector correlators) and the QCD static potential. APAP is also applied directly to RG beta- and gamma-functions in massive φ⁴ theory. In non-perturbative QFT we use Padé summation methods to probe the large-coupling regions of QCD. In analysing all possible Padé approximants to the truncated beta-function for QCD, we are able to probe the singularity structure corresponding to the all-orders beta-function. Noting the consistent ordering of poles and roots for such approximants (regardless of the next unknown higher-loop contribution), we conclude that these approximants are free of defective (pole) behaviour and hence we can safely draw physical conclusions from them. QCD is shown to have a flavour threshold (6
14. PaDe - The particle detection program
Ott, T.; Drolshagen, E.; Koschny, D.; Poppe, B.
2016-01-01
This paper introduces the Particle Detection program PaDe. Its aim is to analyze dust particles in the coma of the Jupiter-family comet 67P/Churyumov-Gerasimenko which were recorded by the two OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras onboard the ESA spacecraft Rosetta, see e.g. Keller et al. (2007). In addition to working with the Rosetta data, the code was modified to work with images from meteors. It was tested with data recorded by the ICCs (Intensified CCD Cameras) of the CILBO-System (Canary Island Long-Baseline Observatory) on the Canary Islands; compare Koschny et al. (2013). This paper presents a new method for the position determination of the observed meteors. The PaDe program was written in Python 3.4. Its original intent is to find the trails of dust particles in space from the OSIRIS images. For that it determines the positions where the trail starts and ends. They were found using a fit following the so-called error function (Andrews, 1998) for the two edges of the profiles. The positions where the intensities fall to the half maximum were found to be the beginning and end of the particle. In the case of meteors, this method can be applied to find the leading edge of the meteor. The proposed method has the potential to increase the accuracy of the position determination of meteors dramatically. Other than the standard method of finding the photometric center, our method is not influenced by any trails or wakes behind the meteor. This paper presents first results of this ongoing work.
15. Random-Phase Approximation Methods
Chen, Guo P.; Voora, Vamsee K.; Agee, Matthew M.; Balasubramani, Sree Ganesh; Furche, Filipp
2017-05-01
Random-phase approximation (RPA) methods are rapidly emerging as cost-effective validation tools for semilocal density functional computations. We present the theoretical background of RPA in an intuitive rather than formal fashion, focusing on the physical picture of screening and simple diagrammatic analysis. A new decomposition of the RPA correlation energy into plasmonic modes leads to an appealing visualization of electron correlation in terms of charge density fluctuations. Recent developments in the areas of beyond-RPA methods, RPA correlation potentials, and efficient algorithms for RPA energy and property calculations are reviewed. The ability of RPA to approximately capture static correlation in molecules is quantified by an analysis of RPA natural occupation numbers. We illustrate the use of RPA methods in applications to small-gap systems such as open-shell d- and f-element compounds, radicals, and weakly bound complexes, where semilocal density functional results exhibit strong functional dependence.
16. Potential of the approximation method
SciTech Connect
Amano, K.; Maruoka, A.
1996-12-31
Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices: For 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s − 1/2, m/(4s)) gates. If a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.
17. Approximation method for the kinetic Boltzmann equation
NASA Technical Reports Server (NTRS)
Shakhov, Y. M.
1972-01-01
The further development of a method for approximating the Boltzmann equation is considered and a case of pseudo-Maxwellian molecules is treated in detail. A method of approximating the collision frequency is discussed along with a method for approximating the moments of the Boltzmann collision integral. Since the return collision integral and the collision frequency are expressed through the distribution function moments, use of the proposed methods makes it possible to reduce the Boltzmann equation to a series of approximating equations.
18. Differential Equations, Related Problems of Pade Approximations and Computer Applications
DTIC Science & Technology
1988-01-01
geometric sense, like the Picard-Fuchs equations satisfied by the variation of periods, possess strong arithmetic properties (global nilpotence) ... result, and the (G, C)-function conditions, one needs the definition of the p-curvature. We consider a system of matrix first-order linear differential ... the system (1.1) in the matrix form df/dx = A f; A ∈ M(Q(x)), one can introduce the p-curvature operators Ip, associated with the system (1.1). The
19. Approximate methods for equations of incompressible fluid
Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.
2017-02-01
Approximate methods on the basis of sequential approximations in the theory of functional solutions to systems of conservation laws is considered, including the model of dynamics of incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.
20. Approximate error conjugation gradient minimization methods
DOEpatents
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
1. Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
3. Variational Bayesian Approximation methods for inverse problems
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture-of-Gaussians models) are considered and details of the algorithms are given.
4. An approximate projection method for incompressible flow
Stevens, David E.; Chan, Stevens T.; Gresho, Phil
2002-12-01
This paper presents an approximate projection method for incompressible flows. This method is derived from Galerkin orthogonality conditions using equal-order piecewise linear elements for both velocity and pressure, hereafter Q1Q1. By combining an approximate projection for the velocities with a variational discretization of the continuum pressure Poisson equation, one eliminates the need to filter either the velocity or pressure fields as is often needed with equal-order element formulations. This variational approach extends to multiple types of elements; examples and results for triangular and quadrilateral elements are provided. This method is related to the method of Almgren et al. (SIAM J. Sci. Comput. 2000; 22: 1139-1159) and the PISO method of Issa (J. Comput. Phys. 1985; 62: 40-65). These methods use a combination of two elliptic solves, one to reduce the divergence of the velocities and another to approximate the pressure Poisson equation. Both Q1Q1 and the method of Almgren et al. solve the second Poisson equation with a weak error tolerance to achieve more computational efficiency. A Fourier analysis of Q1Q1 shows that a consistent mass matrix has a positive effect on both accuracy and mass conservation. A numerical comparison with the widely used Q1Q0 (piecewise linear velocities, piecewise constant pressures) on a periodic test case with an analytic solution verifies this analysis. Q1Q1 is shown to have accuracy comparable to Q1Q0 and good agreement with experiment for flow over an isolated cubic obstacle and dispersion of a point source in its wake.
5. Finite difference methods for approximating Heaviside functions
Towers, John D.
2009-05-01
We present a finite difference method for discretizing a Heaviside function H(u(x)), where u is a level set function u: Rⁿ → R that is positive on a bounded region Ω ⊂ Rⁿ. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x) = 0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x) = 0 is a manifold. Numerical experiments indicate that dependence on the placement of Ω with respect to the grid is quite small for our algorithms. Specifically, a grid shift results in an O(h^p) change in the computed solution
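The quadrature use of an approximate Heaviside described in this abstract can be illustrated with a standard smoothed (sine-regularized) Heaviside. This is a common textbook regularization, not Towers' finite-difference construction; the level set function, grid, and smoothing width below are made-up test choices.

```python
import numpy as np

def smoothed_heaviside(u, eps):
    """Smoothed Heaviside: 0 for u < -eps, 1 for u > eps, with a smooth
    sine-polynomial transition in between (a standard regularization)."""
    H = np.where(u > eps, 1.0, 0.0)
    mask = np.abs(u) <= eps
    smooth = 0.5 * (1.0 + u / eps + np.sin(np.pi * u / eps) / np.pi)
    return np.where(mask, smooth, H)

# Approximate the area of the unit disk by integrating H(u) over a grid,
# with level set u(x, y) = 1 - x^2 - y^2 (positive inside the disk).
h = 0.01
x = np.arange(-2.0, 2.0, h)
X, Y = np.meshgrid(x, x)
u = 1.0 - X**2 - Y**2
area = smoothed_heaviside(u, 3 * h).sum() * h * h   # should be close to pi
```

Tying the smoothing width `eps` to the grid spacing `h` is what makes the quadrature converge as the grid is refined.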
6. Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.
7. Approximate Methods for State-Space Models.
PubMed
Koyama, Shinsuke; Pérez-Bolde, Lucia Castellanos; Shalizi, Cosma Rohilla; Kass, Robert E
2010-03-01
State-space models provide an important body of techniques for analyzing time-series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation histories, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models, which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error which is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
8. Padé spectrum decompositions of quantum distribution functions and optimal hierarchical equations of motion construction for quantum open systems.
PubMed
Hu, Jie; Luo, Meng; Jiang, Feng; Xu, Rui-Xue; Yan, Yijing
2011-06-28
Padé spectrum decomposition is an optimal sum-over-poles expansion scheme of Fermi function and Bose function [J. Hu, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 133, 101106 (2010)]. In this work, we report two additional members to this family, from which the best among all sum-over-poles methods could be chosen for different cases of application. Methods are developed for determining these three Padé spectrum decomposition expansions at machine precision via simple algorithms. We exemplify the applications of present development with optimal construction of hierarchical equations-of-motion formulations for nonperturbative quantum dissipation and quantum transport dynamics. Numerical demonstrations are given for two systems. One is the transient transport current to an interacting quantum-dots system, together with the involved high-order co-tunneling dynamics. Another is the non-Markovian dynamics of a spin-boson system.
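Sum-over-poles Padé schemes such as the one in this abstract build on the basic [L/M] Padé construction from Taylor coefficients. A minimal sketch of that standard construction follows; the `pade` helper name is invented for the example, and this is the textbook linear-algebra recipe, not the optimized decomposition of Hu et al.

```python
import numpy as np
from math import factorial, exp

def pade(taylor, L, M):
    """Compute [L/M] Padé numerator p and denominator q (q[0] = 1)
    from Taylor coefficients taylor[0..L+M], by matching the series
    of f(x) q(x) - p(x) to order x^(L+M)."""
    c = np.asarray(taylor, dtype=float)
    # Denominator: solve sum_{j=1}^{M} q_j c_{L+k-j} = -c_{L+k}, k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator coefficients follow from the Cauchy product of c and q
    p = np.array([sum(c[i - j] * q[j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return p, q

# [2/2] Padé approximant of exp(x) from its Taylor series
c = [1.0 / factorial(n) for n in range(5)]
p, q = pade(c, 2, 2)
approx = np.polyval(p[::-1], 1.0) / np.polyval(q[::-1], 1.0)  # near e
```

For exp(x) this reproduces the classical [2/2] approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12), which at x = 1 gives 19/7 ≈ 2.714.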
9. Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
10. A new approximation method for stress constraints in structural synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, Garret N.; Salajegheh, Eysa
1987-01-01
A new approximation method for dealing with stress constraints in structural synthesis is presented. The finite element nodal forces are approximated and these are used to create an explicit, but often nonlinear, approximation to the original problem. The principal motivation is to create the best approximation possible, in order to reduce the number of detailed finite element analyses needed to reach the optimum. Examples are offered and compared with published results, to demonstrate the efficiency and reliability of the proposed method.
11. Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
13. Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
14. Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
SciTech Connect
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
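The role of an approximate inverse as a preconditioner can be sketched with a simpler stand-in: a truncated Neumann series in place of Benzi and Tuma's factorized incomplete inverse, applied inside a preconditioned Richardson iteration rather than a Krylov solver. The diagonally dominant test matrix is a made-up example.

```python
import numpy as np

def neumann_approx_inverse(A, k=3):
    """Truncated Neumann-series approximate inverse:
    M = (I + N + ... + N^k) D^{-1}, with D = diag(A) and N = I - D^{-1} A.
    A simple stand-in for sparse approximate inverses, not Benzi-Tuma."""
    n = A.shape[0]
    D_inv = np.diag(1.0 / np.diag(A))
    N = np.eye(n) - D_inv @ A
    S, T = np.eye(n), np.eye(n)
    for _ in range(k):
        T = T @ N
        S = S + T
    return S @ D_inv

# Nonsymmetric, diagonally dominant test system
rng = np.random.default_rng(0)
n = 50
A = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

M = neumann_approx_inverse(A, k=3)
x = np.zeros(n)
for _ in range(30):                     # preconditioned Richardson iteration
    r = b - A @ x
    if np.linalg.norm(r) < 1e-13 * np.linalg.norm(b):
        break
    x += M @ r
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Since M A = I − N^(k+1), the error contracts by roughly the (k+1)-th power of the Jacobi iteration matrix norm per step, so a few terms of the series already give rapid convergence on this well-conditioned example.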
15. An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
16. Discontinuous Galerkin method based on non-polynomial approximation spaces
SciTech Connect
Yuan, Ling; Shu, Chi-Wang
2006-10-10
In this paper, we develop discontinuous Galerkin (DG) methods based on non-polynomial approximation spaces for numerically solving time dependent hyperbolic and parabolic and steady state hyperbolic and elliptic partial differential equations (PDEs). The algorithm is based on approximation spaces consisting of non-polynomial elementary functions such as exponential functions, trigonometric functions, etc., with the objective of obtaining better approximations for specific types of PDEs and initial and boundary conditions. It is shown that L² stability and error estimates can be obtained when the approximation space is suitably selected. It is also shown with numerical examples that a careful selection of the approximation space to fit individual PDE and initial and boundary conditions often provides more accurate results than the DG methods based on the polynomial approximation spaces of the same order of accuracy.
17. Mapping biological entities using the longest approximately common prefix method
PubMed Central
2014-01-01
Background The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the Unified Medical Language System (UMLS) and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions The Longest Approximately Common Prefix method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. The Longest Approximately Common Prefix outperforms these nine approximate string matching methods in its Maximum F1 measure when evaluated on three out of the four datasets, and in its average precision on two of the four datasets. PMID:24928653
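The published LACP algorithm runs in linear time; the sketch below is only an illustrative quadratic-time variant capturing the prefix idea, not the published method. The tolerance parameter and the normalization by the longer string length are invented for the example.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,            # deletion
                         cur[j - 1] + 1,         # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def lacp_similarity(s, t, tol=0.2):
    """Illustrative prefix similarity in the spirit of LACP: the longest
    prefix length n whose edit distance stays within tol * n, normalized
    by the longer string length."""
    best = 0
    for n in range(1, min(len(s), len(t)) + 1):
        if edit_distance(s[:n], t[:n]) <= tol * n:
            best = n
    return best / max(len(s), len(t), 1)
```

Prefix-weighted measures of this kind reward terms that agree at the start, which suits biomedical vocabularies where shared stems carry most of the signal.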
18. Comparison of interpolation and approximation methods for optical freeform synthesis
Voznesenskaya, Anna; Krizskiy, Pavel
2017-06-01
Interpolation and approximation methods for freeform surface synthesis are analyzed using the developed software tool. Special computer tool is developed and results of freeform surface modeling with piecewise linear interpolation, piecewise quadratic interpolation, cubic spline interpolation, Lagrange polynomial interpolation are considered. The most accurate interpolation method is recommended. Surface profiles are approximated with the square least method. The freeform systems are generated in optical design software.
19. A simple approximation method for obtaining the spanwise lift distribution
NASA Technical Reports Server (NTRS)
Schrenk, O
1940-01-01
The approximation method described makes possible lift-distribution computations in a few minutes. Comparison with an exact method shows satisfactory agreement. The method is of greater applicability than the exact method and includes also the important case of the wing with end plates.
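Schrenk's rule takes the spanwise lift distribution as the average of the planform chord distribution and an elliptical distribution enclosing the same wing area. A short sketch follows; the wing geometry numbers are made up for illustration.

```python
import numpy as np

def trapezoid(f, y):
    """Simple trapezoidal-rule quadrature."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

def schrenk_distribution(y, chord, span):
    """Schrenk approximation: average of the planform chord distribution
    and an elliptical distribution with the same total wing area S."""
    S = 2.0 * trapezoid(chord, y)                  # both half-wings
    c_ell = (4.0 * S / (np.pi * span)) * np.sqrt(
        np.clip(1.0 - (2.0 * y / span) ** 2, 0.0, None))
    return 0.5 * (chord + c_ell)

# Hypothetical linearly tapered wing: 2 m root chord, 1 m tip chord, 10 m span
span = 10.0
y = np.linspace(0.0, span / 2, 201)                # one half-span
chord = 2.0 - (2.0 * y / span)
c_s = schrenk_distribution(y, chord, span)
```

The averaged distribution preserves total lift (the elliptical term encloses the same area as the planform) while smoothing the planform toward the elliptical optimum; at the tip it is simply half the tip chord.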
20. An approximation method for fractional integro-differential equations
Emiroglu, Ibrahim
2015-12-01
In this work, an approximation method is proposed for fractional-order linear Fredholm-type integro-differential equations with boundary conditions. The Sinc collocation method is applied to the examples and its efficiency and strength are also discussed through some special examples. The results of the proposed method are compared to the available analytic solutions.
1. Double power series method for approximating cosmological perturbations
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
2. Dual methods and approximation concepts in structural synthesis
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.
3. Efficient variational Bayesian approximation method based on subspace optimization.
PubMed
Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas
2015-02-01
Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. In fact, the variational Bayesian problem can be seen as a functional optimization problem. The proposed method is based on the adaptation of subspace optimization methods in Hilbert spaces to the involved function space, in order to solve this optimization problem in an iterative way. The aim is to determine an optimal direction at each iteration in order to get a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem using a total variation prior. Comparisons with state-of-the-art variational Bayesian methods through a numerical example show a notable improvement in computation time.
4. Improved stochastic approximation methods for discretized parabolic partial differential equations
Guiaş, Flavius
2016-12-01
We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes which is used for approximating solutions of ordinary differential equations. This scheme is suited especially for spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination. As a consequence, the order of convergence is increased. We illustrate the features of the improved method on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).
5. Successive approximation method for Caputo q-fractional IVPs
Salahshour, Soheil; Ahmadian, Ali; Chan, Chee Seng
2015-07-01
Recently, Abdeljawad and Baleanu (2011) introduced Caputo q-fractional derivatives and used them to solve a Caputo q-fractional initial value problem. For this purpose, they applied the successive approximation method to obtain an explicit solution, but did not clarify under which conditions the method converges. In this paper, we propose a q-Krasnoselskii-Krein type condition to investigate the convergence of the method.
6. Efficient solution of parabolic equations by Krylov approximation methods
NASA Technical Reports Server (NTRS)
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of a very small dimension to a known vector which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
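The core of the approach — project onto a Krylov subspace via Arnoldi, exponentiate the small Hessenberg matrix, lift back — can be sketched in a few dozen lines. The following minimal pure-Python version is our own illustration, not the paper's code: it approximates exp(τA)v for a small 1D Laplacian and checks it against a dense Taylor-series exponential. All names and sizes are illustrative.

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def arnoldi(A, v, m):
    """Build an orthonormal basis V of span{v, Av, ..., A^(m-1) v} and the
    small m-by-m upper-Hessenberg matrix H with A*V ~ V*H."""
    beta = math.sqrt(sum(vi * vi for vi in v))
    V = [[vi / beta for vi in v]]
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):
            h = sum(wi * vi for wi, vi in zip(w, V[i]))
            H[i][j] = h
            w = [wi - h * vi for wi, vi in zip(w, V[i])]
        if j + 1 < m:
            hnext = math.sqrt(sum(wi * wi for wi in w))
            H[j + 1][j] = hnext
            V.append([wi / hnext for wi in w])
    return V, H, beta

def expm_taylor(A, n_terms=40):
    """Dense matrix exponential by plain Taylor series (fine for small norms)."""
    n = len(A)
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, n_terms):
        T = [[sum(T[i][l] * A[l][j] for l in range(n)) / k
              for j in range(n)] for i in range(n)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

def krylov_expv(A, v, tau, m):
    """Approximate exp(tau*A) v ~ beta * V * exp(tau*H) e1 with an m-dim subspace."""
    V, H, beta = arnoldi(A, v, m)
    E = expm_taylor([[tau * h for h in row] for row in H])
    e1col = [E[i][0] for i in range(m)]                # exp(tau*H) e1
    return [beta * sum(e1col[j] * V[j][i] for j in range(m))
            for i in range(len(v))]

# 6x6 1D Laplacian acting on v = e1; tau kept modest so the reference converges
n = 6
A = [[-2.0 if i == j else (1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
v = [1.0] + [0.0] * (n - 1)
y_krylov = krylov_expv(A, v, 0.1, 4)
y_ref = matvec(expm_taylor([[0.1 * a for a in row] for row in A]), v)
```

Note that only matrix-vector products with the large matrix A are needed, which is exactly the property the abstract highlights for parallelization.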
7. Approximate Design Method for Single Stage Pulse Tube Refrigerators
Pfotenhauer, J. M.; Gan, Z. H.; Radebaugh, R.
2008-03-01
An approximate design method is presented for the design of a single stage Stirling type pulse tube refrigerator. The design method begins from a defined cooling power, operating temperature, average and dynamic pressure, and frequency. Using a combination of phasor analysis, approximate correlations derived from extensive use of REGEN3.2, a few "rules of thumb," and available models for inertance tubes, a process is presented to define appropriate geometries for the regenerator, pulse tube and inertance tube components. In addition, specifications for the acoustic power and phase between the pressure and flow required from the compressor are defined. The process enables an appreciation of the primary physical parameters operating within the pulse tube refrigerator, but relies on approximate values for the combined loss mechanisms. The defined geometries can provide both a useful starting point, and a sanity check, for more sophisticated design methodologies.
8. Multi-level methods and approximating distribution functions
Wilson, D.; Baker, R. E.
2016-07-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
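The telescoping idea behind the multi-level method can be shown in miniature on the simplest possible setting: estimating E[X(1)] for a linear SDE with coupled Euler-Maruyama levels. The papers discussed here treat chemical kinetics with tau-leaping, but the coupling principle (the coarse path reuses the fine path's noise) is the same. The sketch below is our own illustration; the level and sample counts are arbitrary.

```python
import math
import random

random.seed(1)

def coupled_euler_pair(level, M=4, T=1.0, x0=1.0, sigma=0.1):
    """One coupled (fine, coarse) Euler-Maruyama sample of dX = -X dt + sigma dW.
    The coarse path reuses the summed fine-level Brownian increments, which is
    what keeps the variance of the level correction small."""
    n_f = M ** level
    dt_f = T / n_f
    xf = x0
    if level == 0:
        xf += -xf * dt_f + sigma * random.gauss(0.0, math.sqrt(dt_f))
        return xf, 0.0
    n_c = n_f // M
    dt_c = T / n_c
    xc = x0
    for _ in range(n_c):
        dw_sum = 0.0
        for _ in range(M):
            dw = random.gauss(0.0, math.sqrt(dt_f))
            xf += -xf * dt_f + sigma * dw
            dw_sum += dw
        xc += -xc * dt_c + sigma * dw_sum
    return xf, xc

def mlmc_mean(levels=4, samples=(8000, 2000, 500, 200)):
    """Telescoping estimator: E[X_L] = E[X_0] + sum_l E[X_l - X_(l-1)]."""
    est = 0.0
    for level in range(levels):
        acc = 0.0
        for _ in range(samples[level]):
            xf, xc = coupled_euler_pair(level)
            acc += xf - xc            # xc is 0.0 at level 0 by construction
        est += acc / samples[level]
    return est

mlmc_est = mlmc_mean()   # E[X(1)] for dX = -X dt + 0.1 dW, X(0) = 1, is exp(-1)
```

Most of the sampling effort is spent at the cheap coarse levels, while the discretization bias is set by the finest level alone — the cost structure the abstract describes.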
10. Approximate Newton-type methods via theory of control
Yap, Chui Ying; Leong, Wah June
2014-12-01
In this paper, we investigate the possible use of control theory, particularly optimal control theory, to derive numerical methods for unconstrained optimization problems. Based upon this control theory, we derive a Levenberg-Marquardt-like method that guarantees greatest descent in a particular search region. The implementation of this method in its original form requires inversion of a non-sparse matrix or, equivalently, solving a linear system in every iteration. Thus, an approximation of the proposed method via a quasi-Newton update is constructed. Numerical results indicate that the new method is more effective and practical.
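A Levenberg-Marquardt step of the general kind referred to here solves (JᵀJ + λI)δ = -Jᵀr, enlarging λ when a trial step fails to reduce the residual and shrinking it when it succeeds — λ is what controls the "search region." A minimal hand-rolled sketch for a two-parameter exponential fit (our illustration, not the authors' algorithm; model and data are made up):

```python
import math

def levenberg_marquardt(xs, ys, a, b, n_iter=200, lam=1e-2):
    """Minimal Levenberg-Marquardt fit of y = a*exp(b*x): solve
    (J^T J + lam*I) delta = -J^T r for a 2x2 system in closed form,
    accepting steps only when the residual norm decreases."""
    def residuals(a, b):
        return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
    def sq(r):
        return sum(ri * ri for ri in r)
    r = residuals(a, b)
    for _ in range(n_iter):
        # Jacobian of the residuals w.r.t. (a, b)
        J = [(-math.exp(b * x), -a * x * math.exp(b * x)) for x in xs]
        g0 = sum(Ji[0] * ri for Ji, ri in zip(J, r))
        g1 = sum(Ji[1] * ri for Ji, ri in zip(J, r))
        h00 = sum(Ji[0] * Ji[0] for Ji in J) + lam
        h11 = sum(Ji[1] * Ji[1] for Ji in J) + lam
        h01 = sum(Ji[0] * Ji[1] for Ji in J)
        det = h00 * h11 - h01 * h01
        da = -(h11 * g0 - h01 * g1) / det
        db = -(-h01 * g0 + h00 * g1) / det
        try:
            r_new = residuals(a + da, b + db)
        except OverflowError:         # wild trial step: reject and damp harder
            lam *= 10.0
            continue
        if sq(r_new) < sq(r):         # accept step, relax damping
            a, b, r, lam = a + da, b + db, r_new, lam * 0.5
        else:                         # reject step, increase damping
            lam *= 10.0
    return a, b

xs = [0.5 * i for i in range(11)]
ys = [2.0 * math.exp(-x) for x in xs]     # synthetic data from a=2, b=-1
a_fit, b_fit = levenberg_marquardt(xs, ys, 1.0, -0.3)
```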
11. Calculating Resonance Positions and Widths Using the Siegert Approximation Method
ERIC Educational Resources Information Center
Rapedius, Kevin
2011-01-01
Here, we present complex resonance states (or Siegert states) that describe the tunnelling decay of a trapped quantum particle from an intuitive point of view that naturally leads to the easily applicable Siegert approximation method. This can be used for analytical and numerical calculations of complex resonances of both the linear and nonlinear…
12. Using Propensity Score Methods to Approximate Factorial Experimental Designs
ERIC Educational Resources Information Center
Dong, Nianbo
2011-01-01
The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…
14. Methods to approximate reliabilities in single-step genomic evaluation
USDA-ARS?s Scientific Manuscript database
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
15. Spin-1 Heisenberg ferromagnet using pair approximation method
SciTech Connect
Mert, Murat; Mert, Gülistan; Kılıç, Ahmet
2016-06-08
Thermodynamic properties of the spin-1 Heisenberg ferromagnet on the simple cubic lattice have been calculated using the pair approximation method. We introduce the single-ion anisotropy and the next-nearest-neighbor exchange interaction. We found that for a negative single-ion anisotropy parameter, the internal energy is positive and the heat capacity has two peaks.
16. Capturing correlations in chaotic diffusion by approximation methods.
PubMed
Knight, Georgie; Klages, Rainer
2011-10-01
We investigate three different methods for systematically approximating the diffusion coefficient of a deterministic random walk on the line that contains dynamical correlations that change irregularly under parameter variation. Capturing these correlations by incorporating higher-order terms, all schemes converge to the analytically exact result. Two of these methods are based on expanding the Taylor-Green-Kubo formula for diffusion, while the third method approximates Markov partitions and transition matrices by using a slight variation of the escape rate theory of chaotic diffusion. We check the practicability of the different methods by working them out analytically and numerically for a simple one-dimensional map, study their convergence, and critically discuss their usefulness in identifying a possible fractal instability of parameter-dependent diffusion, in the case of dynamics where exact results for the diffusion coefficient are not available.
17. An approximate method for calculating aircraft downwash on parachute trajectories
SciTech Connect
Strickland, J.H.
1989-01-01
An approximate method for calculating velocities induced by aircraft on parachute trajectories is presented herein. A simple system of quadrilateral vortex panels is used to model the aircraft wing and its wake. The purpose of this work is to provide a simple analytical tool which can be used to approximate the effect of aircraft-induced velocities on parachute performance. Performance issues such as turnover and wake recontact may be strongly influenced by velocities induced by the wake of the delivering aircraft, especially if the aircraft is maneuvering at the time of parachute deployment. 7 refs., 9 figs.
18. Approximate method of designing a two-element airfoil
Abzalilov, D. F.; Mardanov, R. F.
2011-09-01
An approximate method is proposed for designing a two-element airfoil. The method is based on reducing an inverse boundary-value problem in a doubly connected domain to a problem in a singly connected domain located on a multisheet Riemann surface. The essence of the method is replacement of channels between the airfoil elements by channels of flow suction and blowing. The shape of these channels asymptotically tends to the annular shape of channels passing to infinity on the second sheet of the Riemann surface. The proposed method can be extended to designing multielement airfoils.
19. Source Localization using Stochastic Approximation and Least Squares Methods
SciTech Connect
Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis
2009-03-05
This paper presents two approaches to locating the source of a chemical plume: nonlinear least squares and stochastic approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate this source. The nonlinear least squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least squares method. SA methods are often better at coping with noisy input information than other search methods.
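A Robbins-Monro-style stochastic approximation scheme of the general kind compared here can be sketched as follows: each iteration draws one noisy reading from a random sensor and takes a decaying-gain gradient step on the squared misfit. Everything below (range-based sensor model, noise level, gain schedule) is an illustrative assumption of ours, not the paper's setup.

```python
import random

random.seed(7)

def noisy_range(source, sensor, sigma=0.05):
    """Simulated reading: distance from sensor to source plus Gaussian noise
    (a stand-in for inverting a concentration-vs-distance model)."""
    dx = source[0] - sensor[0]
    dy = source[1] - sensor[1]
    return (dx * dx + dy * dy) ** 0.5 + random.gauss(0.0, sigma)

def sa_localize(sensors, reading_fn, x0=(1.0, 1.0), n_iter=3000, a0=0.5):
    """Robbins-Monro-style stochastic approximation: one noisy sensor reading
    per iteration, gradient step on 0.5*(d - r)^2 with a decaying gain."""
    x, y = x0
    for k in range(1, n_iter + 1):
        p = random.choice(sensors)
        r = reading_fn(p)
        dx, dy = x - p[0], y - p[1]
        d = (dx * dx + dy * dy) ** 0.5 or 1e-9
        g = (d - r) / d                 # gradient factor of the range misfit
        gain = a0 / k ** 0.7            # Robbins-Monro decaying step size
        x -= gain * g * dx
        y -= gain * g * dy
    return x, y

sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0), (2.0, -1.0)]
true_source = (2.5, 1.5)
src_est = sa_localize(sensors, lambda p: noisy_range(true_source, p))
```

The decaying gain averages the measurement noise out over iterations, which is the property that makes SA robust for noisy inputs; a batch least squares fit would instead minimize the misfit over all readings at once.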
20. Interfacing Relativistic and Nonrelativistic Methods: A Systematic Sequence of Approximations
NASA Technical Reports Server (NTRS)
Dyall, Ken; Langhoff, Stephen R. (Technical Monitor)
1997-01-01
A systematic sequence of approximations for the introduction of relativistic effects into nonrelativistic molecular finite-basis set calculations is described. The theoretical basis for the approximations is the normalized elimination of the small component (ESC) within the matrix representation of the modified Dirac equation. The key features of the normalized method are the retention of the relativistic metric and the ability to define a single matrix U relating the pseudo-large and large component coefficient matrices. This matrix is used to define a modified set of one- and two-electron integrals which have the same appearance as the integrals of the Breit-Pauli Hamiltonian. The first approximation fixes the ratios of the large and pseudo-large components to their atomic values, producing an expansion in atomic 4-spinors. The second approximation defines a local fine-structure constant on each atomic centre, which has the physical value for centres considered to be relativistic and zero for nonrelativistic centres. In the latter case, the 4-spinors are the positive-energy kinetically balanced solutions of the Levy-Leblond equation, and the integrals involving pseudo-large component basis functions on these centres are set to zero. Some results are presented for test systems to illustrate the various approximations.
1. A hybrid approximation method for solving Hutchinson's equation
Marzban, Hamid Reza; Tabrizidooz, Hamid Reza
2012-01-01
The hybrid function approximation method for solving Hutchinson's equation, a nonlinear delay partial differential equation, is investigated. The properties of hybrids of block-pulse functions and Lagrange interpolating polynomials based on Legendre-Gauss-type points are presented and utilized to replace the system of nonlinear delay differential equations resulting from the application of the Legendre pseudospectral method by a system of nonlinear algebraic equations. The validity and applicability of the proposed method are demonstrated through two illustrative examples on Hutchinson's equation.
2. Parallel iterative solvers and preconditioners using approximate hierarchical methods
SciTech Connect
Grama, A.; Kumar, V.; Sameh, A.
1996-12-31
In this paper, we report results on the performance, convergence, and accuracy of a parallel GMRES solver for boundary element methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on a truncated Green's function. Experimental results on a 256-processor Cray T3D are presented.
3. Local Approximation and Hierarchical Methods for Stochastic Optimization
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the
4. A multiscale two-point flux-approximation method
SciTech Connect
Møyner, Olav Lie, Knut-Andreas
2014-10-15
A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common for all such methods is that they rely on a compatible primal–dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regards to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
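The fine-scale building block of such schemes — the classical two-point flux approximation itself — is easy to state: the flux across a face is the pressure drop times a transmissibility formed from the harmonic average of the neighbouring cell permeabilities. A minimal 1D sketch (our illustration of plain TPFA, not the MsTPFA method; unit cell sizes assumed):

```python
def tpfa_pressure(perm, p_left=1.0, p_right=0.0):
    """1D incompressible pressure solve with the two-point flux approximation:
    interior face transmissibility is the harmonic average of the neighbouring
    cell permeabilities (unit cell size); Dirichlet pressures are imposed at
    both ends through half-cell transmissibilities."""
    n = len(perm)
    # transmissibilities: two boundary half-cells plus interior faces
    T = [2.0 * perm[0]] + [
        2.0 * perm[i] * perm[i + 1] / (perm[i] + perm[i + 1]) for i in range(n - 1)
    ] + [2.0 * perm[-1]]
    # assemble the tridiagonal mass-conservation system A p = b
    lower = [0.0] * n
    diag = [0.0] * n
    upper = [0.0] * n
    b = [0.0] * n
    for i in range(n):
        diag[i] = T[i] + T[i + 1]
        if i > 0:
            lower[i] = -T[i]
        else:
            b[i] += T[0] * p_left
        if i < n - 1:
            upper[i] = -T[i + 1]
        else:
            b[i] += T[n] * p_right
    # Thomas algorithm (tridiagonal forward elimination + back substitution)
    for i in range(1, n):
        w = lower[i] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        b[i] -= w * b[i - 1]
    p = [0.0] * n
    p[-1] = b[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        p[i] = (b[i] - upper[i] * p[i + 1]) / diag[i]
    return p

pressures = tpfa_pressure([10.0, 1.0, 10.0, 1.0])   # heterogeneous example
```

For constant permeability this reproduces the linear pressure profile exactly; the multiscale layer described in the abstract builds its basis functions out of many such local fine-scale solves.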
5. Asymptotic approximation method of force reconstruction: Proof of concept
Sanchez, J.; Benaroya, H.
2017-08-01
An important problem in engineering is the determination of the system input based on the system response. This type of problem is difficult to solve as it is often ill-defined, and produces inaccurate or non-unique results. Current reconstruction techniques typically involve the employment of optimization methods or additional constraints to regularize the problem, but these methods are not without their flaws as they may be sub-optimally applied and produce inadequate results. An alternative approach is developed that draws upon concepts from control systems theory, the equilibrium analysis of linear dynamical systems with time-dependent inputs, and asymptotic approximation analysis. This paper presents the theoretical development of the proposed method. A simple application of the method is presented to demonstrate the procedure. A more complex application to a continuous system is performed to demonstrate the applicability of the method.
6. Analytic approximations to the modon dispersion relation. [in oceanography
NASA Technical Reports Server (NTRS)
Boyd, J. P.
1981-01-01
Three explicit analytic approximations are given to the modon dispersion relation developed by Flierl et al. (1980) to describe Gulf Stream rings and related phenomena in the oceans and atmosphere. The solutions are in the form of k(q), and are developed in the form of a power series in q for small q, an inverse power series in 1/q for large q, and a two-point Pade approximant. The low order Pade approximant is shown to yield a solution for the dispersion relation with a maximum relative error for the lowest branch of the function equal to one in 700 in the q interval zero to infinity.
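The Padé idea used here — matching a truncated power series with a rational function to extend its range of validity — can be shown in miniature with a [1/1] approximant. The coefficient derivation below is the standard one; the example series, log(1+x), is our own choice rather than the modon dispersion relation.

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*x)/(1 + b1*x) matching the series
    c0 + c1*x + c2*x**2 through second order (requires c1 != 0)."""
    b1 = -c2 / c1
    return c0, c1 + c0 * b1, b1

# series of log(1+x): 0 + x - x^2/2 + ...
a0, a1, b1 = pade_1_1(0.0, 1.0, -0.5)      # gives x / (1 + x/2)

def pade_eval(x):
    return (a0 + a1 * x) / (1.0 + b1 * x)

def taylor_eval(x):
    return x - 0.5 * x * x

# at x = 1 the rational form stays much closer to log(2) than the
# truncated series it was built from
err_pade = abs(pade_eval(1.0) - math.log(2.0))
err_taylor = abs(taylor_eval(1.0) - math.log(2.0))
```

A two-point Padé approximant, as used in the abstract, additionally matches an expansion at a second point (here, at infinity), which is what keeps the maximum relative error uniformly small over the whole interval.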
7. Separable approximation method for two-body relativistic scattering
Tandy, P. C.; Thaler, R. M.
1988-03-01
A method for defining a separable approximation to a given interaction within a two-body relativistic equation, such as the Bethe-Salpeter equation, is presented. The rank-N separable representation given here permits exact reproduction of the T matrix on the mass shell and half off the mass shell at N selected bound state and/or continuum values of the invariant mass. The method employed is a four-space generalization of the separable representation developed for Schrödinger interactions by Ernst, Shakin, and Thaler, supplemented by procedures for dealing with the relativistic spin structure in the case of Dirac particles.
8. Advances in dual algorithms and convex approximation methods
NASA Technical Reports Server (NTRS)
Smaoui, H.; Fleury, C.; Schmit, L. A.
1988-01-01
A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.
9. The Caratheodory-Fejer Method for Real Rational Approximation,
DTIC Science & Technology
1981-10-01
M. H. Gutknecht; contract N01-75-C-1132; unclassified report STAN-NA-81-15; approved for public release, distribution unlimited. The Carathéodory-Fejér method for real rational approximation. Lloyd… angewandte Mathematik, Eidgenössische Technische Hochschule, 8092 Zürich, Switzerland. Abstract: A "Carathéodory-Fejér method" is presented for near-best
10. A Surface Approximation Method for Image and Video Correspondences.
PubMed
Huang, Jingwei; Wang, Bin; Wang, Wenping; Sen, Pradeep
2015-12-01
Although finding correspondences between similar images is an important problem in image processing, existing algorithms cannot find accurate and dense correspondences in images with significant changes in lighting/transformation or with non-rigid objects. This paper proposes a novel method for finding accurate and dense correspondences between images even in these difficult situations. Starting with the non-rigid dense correspondence algorithm [1] to generate an initial correspondence map, we propose a new geometric filter that uses cubic B-spline surfaces to approximate the correspondence mapping functions for shared objects in both images, thereby eliminating outliers and noise. We then propose an iterative algorithm which enlarges the region containing valid correspondences. Compared with existing methods, our method is more robust to significant changes in lighting, color, or viewpoint. Furthermore, we demonstrate how to extend our surface approximation method to video editing by first generating a reliable correspondence map between a given source frame and each frame of a video. The user can then edit the source frame, and the changes are automatically propagated through the entire video using the correspondence map. To evaluate our approach, we examine applications of unsupervised image recognition and video texture editing, and show that our algorithm produces better results than those from state-of-the-art approaches.
11. Visualizations for genetic assignment analyses using the saddlepoint approximation method.
PubMed
McMillan, L F; Fewster, R M
2017-09-01
We propose a method for visualizing genetic assignment data by characterizing the distribution of genetic profiles for each candidate source population. This method enhances the assignment method of Rannala and Mountain (1997) by calculating appropriate graph positions for individuals for which some genetic data are missing. An individual with missing data is positioned in the distributions of genetic profiles for a population according to its estimated quantile based on its available data. The quantiles of the genetic profile distribution for each population are calculated by approximating the cumulative distribution function (CDF) using the saddlepoint method, and then inverting the CDF to get the quantile function. The saddlepoint method also provides a way to visualize assignment results calculated using the leave-one-out procedure. This new method offers an advance upon assignment software such as geneclass2, which provides no visualization method, and is biologically more interpretable than the bar charts provided by the software structure. We show results from simulated data and apply the methods to microsatellite genotype data from ship rats (Rattus rattus) captured on the Great Barrier Island archipelago, New Zealand. The visualization method makes it straightforward to detect features of population structure and to judge the discriminative power of the genetic data for assigning individuals to source populations. © 2017, The International Biometric Society.
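The saddlepoint approximation to a CDF that this method relies on can be sketched for a case with a closed-form check: a sum of n unit exponentials, i.e. a Gamma(n, 1) variable, whose cumulant generating function is K(t) = -n log(1-t). The Lugannani-Rice form below is standard; the worked example is our illustration, not the authors' genetic-profile computation.

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def saddlepoint_cdf_gamma(n, x):
    """Lugannani-Rice saddlepoint approximation to P(S <= x) for S a sum of
    n Exp(1) variables.  CGF: K(t) = -n*log(1-t) for t < 1; the saddlepoint
    solves K'(t) = x.  Not valid exactly at the mean x = n."""
    t_hat = 1.0 - n / x                        # K'(t) = n/(1-t) = x
    K = -n * math.log(1.0 - t_hat)
    K2 = n / (1.0 - t_hat) ** 2                # K''(t_hat)
    w = math.copysign(math.sqrt(2.0 * (t_hat * x - K)), t_hat)
    u = t_hat * math.sqrt(K2)
    return Phi(w) + phi(w) * (1.0 / w - 1.0 / u)

def exact_cdf_gamma(n, x):
    """Closed form for integer shape: 1 - exp(-x) * sum_{k<n} x^k / k!."""
    s, term = 1.0, 1.0
    for k in range(1, n):
        term *= x / k
        s += term
    return 1.0 - math.exp(-x) * s

sp_cdf = saddlepoint_cdf_gamma(10, 12.0)
ex_cdf = exact_cdf_gamma(10, 12.0)
```

The approximation is typically accurate to a few decimal places even far into the tails, which is why inverting it gives usable quantiles for the genetic-profile distributions described above.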
12. Phase Transitions at Liquid Solid Interfaces: Pade Approximant for Adsorption Isotherms and Voltammograms
DTIC Science & Technology
1991-01-29
…recursion relation, and can be computed from the fugacity series in closed form. We apply this approximant to the underpotential deposition of metals on an electrode, and obtain voltammograms that show the sharp spikes seen in recent experiments. … the sudden formation of films at electrodes. It has been possible to perform structural analysis of underpotential deposits of metallic monolayers [4
13. Globbic approximation in low-resolution direct-methods phasing.
PubMed
Guo, D Y; Blessing, R H; Langs, D A
2000-09-01
Probabilistic direct-methods phasing theory, originally based on a uniform atomic distribution hypothesis, is shown to be adaptable to a non-uniform bulk-solvent-compensated globbic approximation for protein crystals at low resolution. The effective number n_g of non-H protein atoms per polyatomic glob increases with decreasing resolution; low-resolution phases depend on the positions of only N_g = N_a/n_g globs rather than N_a atoms. Test calculations were performed with measured structure-factor data and the refined structural parameters from a protein crystal with approximately 10 000 non-H protein atoms per molecule and approximately 60% solvent volume. Low-resolution data sets with d_min ranging from 15 to 5 Å gave n_g = a·d_min + b, with a = 1.0 Å⁻¹ and b = −1.9 for the test case. Results of tangent-formula phase-estimation trials emphasize that completeness of the low-resolution data is critically important for probabilistic phasing.
14. Finite amplitude method for the quasiparticle random-phase approximation
SciTech Connect
2011-07-15
We present the finite amplitude method (FAM), originally proposed in Ref. [17], for superfluid systems. A Hartree-Fock-Bogoliubov code may be transformed into a code of the quasiparticle random-phase approximation (QRPA) with simple modifications. This technique has advantages over conventional QRPA calculations, such as coding feasibility and computational cost. We perform the fully self-consistent linear-response calculation for the spherical neutron-rich nucleus ¹⁷⁴Sn, modifying the hfbrad code, to demonstrate the accuracy, feasibility, and usefulness of the FAM.
15. Proton Form Factor Measurements Using Polarization Method: Beyond Born Approximation
SciTech Connect
Pentchev, Lubomir
2008-10-13
Significant theoretical and experimental efforts have been made over the past 7 years aiming to explain the discrepancy between the proton form factor ratio data obtained at JLab using the polarization method and the previous Rosenbluth measurements. Preliminary results from the first high precision polarization experiment dedicated to study effects beyond Born approximation will be presented. The ratio of the transferred polarization components and, separately, the longitudinal polarization in ep elastic scattering have been measured at a fixed Q² of 2.5 GeV² over a wide kinematic range. The two quantities impose constraints on the real part of the ep elastic amplitudes.
16. Parabolic approximation method for the mode conversion-tunneling equation
SciTech Connect
Phillips, C.K.; Colestock, P.L.; Hwang, D.Q.; Swanson, D.G.
1987-07-01
The derivation of the wave equation which governs ICRF wave propagation, absorption, and mode conversion within the kinetic layer in tokamaks has been extended to include diffraction and focussing effects associated with the finite transverse dimensions of the incident wavefronts. The kinetic layer considered consists of a uniform density, uniform temperature slab model in which the equilibrium magnetic field is oriented in the z-direction and varies linearly in the x-direction. An equivalent dielectric tensor as well as a two-dimensional energy conservation equation are derived from the linearized Vlasov-Maxwell system of equations. The generalized form of the mode conversion-tunneling equation is then extracted from the Maxwell equations, using the parabolic approximation method in which transverse variations of the wave fields are assumed to be weak in comparison to the variations in the primary direction of propagation. Methods of solving the generalized wave equation are discussed. 16 refs.
17. Approximation method to compute domain related integrals in structural studies
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations; e.g. in strength of materials the bending moment may be computed at some discrete points using graphical integration of the shear-force diagram, which usually has a simple shape. Another example is in mathematics, where the area under a graph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of this work is to present our studies on the calculus of integrals over transverse-section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to carry out the calculi in structural studies. Thus, we define a Boolean algebra which operates with 'simple'-shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every 'simple' shape (-1 for the shapes to be subtracted). By 'simple' or 'basic' shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, with the corresponding calculus carried out using an algorithm. The 'basic' shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of 'basic' shapes included rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another 'basic' shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
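The signed-shape idea described in the abstract can be illustrated in a few lines. This is a hypothetical minimal sketch (the class and function names are mine, not the authors'): each 'basic' shape contributes its area with sign +1, or -1 for holes, and the composite section property is the signed sum.

```python
import math

# Signed-shape sketch: 'basic' shapes carry a sign (+1 for material, -1 for
# holes); a composite section property is the signed sum over its shapes.

class Rectangle:
    def __init__(self, width, height, sign=+1):
        self.width, self.height, self.sign = width, height, sign
    def area(self):
        return self.sign * self.width * self.height

class Ellipse:
    def __init__(self, a, b, sign=+1):
        self.a, self.b, self.sign = a, b, sign
    def area(self):
        return self.sign * math.pi * self.a * self.b

def composite_area(shapes):
    return sum(s.area() for s in shapes)

# A 4 x 2 rectangular section with an elliptical hole (semi-axes 1 and 0.5).
section = [Rectangle(4, 2), Ellipse(1, 0.5, sign=-1)]
hole_area = composite_area(section)   # 8 - pi/2
```

The same signed-sum pattern extends to first and second moments of area, which is what links the 'basic' shapes to the section stresses the abstract mentions.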
18. Approximate hard-sphere method for densely packed granular flows
Guttenberg, Nicholas
2011-05-01
The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics, but also be able to handle dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.
19. Approximate hard-sphere method for densely packed granular flows.
PubMed
Guttenberg, Nicholas
2011-05-01
The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics, but also be able to handle dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.
20. Hybrid functionals and GW approximation in the FLAPW method
Friedrich, Christoph; Betzinger, Markus; Schlipf, Martin; Blügel, Stefan; Schindlmayr, Arno
2012-07-01
We present recent advances in numerical implementations of hybrid functionals and the GW approximation within the full-potential linearized augmented-plane-wave (FLAPW) method. The former is an approximation for the exchange-correlation contribution to the total energy functional in density-functional theory, and the latter is an approximation for the electronic self-energy in the framework of many-body perturbation theory. All implementations employ the mixed product basis, which has evolved into a versatile basis for the products of wave functions, describing the incoming and outgoing states of an electron that is scattered by interacting with another electron. It can thus be used for representing the nonlocal potential in hybrid functionals as well as the screened interaction and related quantities in GW calculations. In particular, the six-dimensional space integrals of the Hamiltonian exchange matrix elements (and exchange self-energy) decompose into sums over vector-matrix-vector products, which can be evaluated easily. The correlation part of the GW self-energy, which contains a time or frequency dependence, is calculated on the imaginary frequency axis with a subsequent analytic continuation to the real axis or, alternatively, by a direct frequency convolution of the Green function G and the dynamically screened Coulomb interaction W along a contour integration path that avoids the poles of the Green function. Hybrid-functional and GW calculations are notoriously computationally expensive. We present a number of tricks that reduce the computational cost considerably, including the use of spatial and time-reversal symmetries, modifications of the mixed product basis with the aim to optimize it for the correlation self-energy and another modification that makes the Coulomb matrix sparse, analytic expansions of the interaction potentials around the point of divergence at k = 0, and a nested density and density-matrix convergence scheme for hybrid
1. Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.
2003-01-01
Ideally, computational modeling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nano-structured systems. A quantum approximate technique, the BFS method for alloys, which attempts to meet these demands, is introduced for the calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed-composition Co-Cu islands on a metallic Cu(111) substrate.
2. A stochastic approximation method for assigning values to calibrators.
PubMed
Schlain, B
1998-04-01
A new procedure is provided for transferring analyte concentration values from a reference material to production calibrators. This method is robust to calibration curve-fitting errors and can be accomplished using only one instrument and one set of reagents. An easily implemented stochastic approximation algorithm iteratively finds the appropriate analyte level of a standard prepared from a reference material that will yield the same average signal response as the new production calibrator. Alternatively, a production bulk calibrator material can be iteratively adjusted to give the same average signal response as some prespecified, fixed reference standard. In either case, the outputted value assignment of the production calibrator is the analyte concentration of the reference standard in the final iteration of the algorithm. Sample sizes are statistically determined as functions of known within-run signal response precisions and user-specified accuracy tolerances.
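The abstract does not spell out the algorithm, but the iterative matching it describes is in the spirit of classical Robbins-Monro stochastic approximation. A toy sketch under an assumed linear signal-response model; the response function, noise level, and step-size schedule are illustrative assumptions, not taken from the paper:

```python
import random

# Robbins-Monro sketch: adjust the reference-standard analyte level x so that
# its (noisy) signal response matches the production calibrator's observed
# mean response. The linear response model and noise level are hypothetical.

def signal_response(conc, rng):
    """Hypothetical assay response: linear in concentration plus noise."""
    return 2.0 * conc + 1.0 + rng.gauss(0.0, 0.05)

def assign_value(target_response, x0, n_iter=2000, rng=None):
    rng = rng or random.Random(0)
    x = x0
    for k in range(1, n_iter + 1):
        gain = 1.0 / k   # step sizes: sum a_k = inf, sum a_k^2 < inf
        x = x + gain * (target_response - signal_response(x, rng))
        x = max(x, 0.0)  # concentrations stay non-negative
    return x

# Production calibrator observed at mean response 11.0 -> true level (11-1)/2 = 5.0
estimate = assign_value(11.0, x0=1.0)
```

The 1/k gains satisfy the usual Robbins-Monro conditions, so the iterate settles at the concentration whose mean response equals the target, without ever fitting a calibration curve.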
3. Multivariate approximation methods and applications to geophysics and geodesy
NASA Technical Reports Server (NTRS)
Munteanu, M. J.
1979-01-01
This is the first report in a planned series treating a class of methods for approximating functions of one and several variables, and ways of applying them to geophysics and geodesy. The report is divided into three parts and is devoted to the mathematical theory and formulas. Various optimal ways of representing functions of one and several variables, and the associated error when information about the function (such as satellite data of different kinds) is available, are discussed. The framework chosen is Hilbert spaces. Experiments were performed on satellite altimeter data and on satellite-to-satellite tracking data.
5. Approximate Bayesian computation methods for daily spatiotemporal precipitation occurrence simulation
Olson, Branden; Kleiber, William
2017-04-01
Stochastic precipitation generators (SPGs) produce synthetic precipitation data and are frequently used to generate inputs for physical models throughout many scientific disciplines. Especially for large data sets, statistical parameter estimation is difficult due to the high dimensionality of the likelihood function. We propose techniques to estimate SPG parameters for spatiotemporal precipitation occurrence based on an emerging set of methods called Approximate Bayesian computation (ABC), which bypass the evaluation of a likelihood function. Our statistical model employs a thresholded Gaussian process that reduces to a probit regression at single sites. We identify appropriate ABC penalization metrics for our model parameters to produce simulations whose statistical characteristics closely resemble those of the observations. Spell length metrics are appropriate for single sites, while a variogram-based metric is proposed for spatial simulations. We present numerical case studies at sites in Colorado and Iowa where the estimated statistical model adequately reproduces local and domain statistics.
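As an illustration of the likelihood-free estimation idea, here is a toy ABC rejection sampler for a single-site Bernoulli occurrence model. The model, summary statistic, and tolerance are deliberately simplified stand-ins, not the paper's thresholded Gaussian process or its spell-length and variogram metrics:

```python
import random

# Toy ABC rejection sampling: draw a parameter from the prior, simulate data,
# and accept the draw if a summary statistic of the simulation is close enough
# to the observed summary. No likelihood evaluation is needed.

rng = random.Random(42)

def simulate(p, n_days):
    """Bernoulli wet/dry occurrence series with wet-day probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n_days)]

def summary(series):
    """Summary statistic: observed wet-day fraction."""
    return sum(series) / len(series)

# "Observed" record generated with a known wet-day probability of 0.3.
observed = simulate(0.3, 500)
s_obs = summary(observed)

accepted = []
for _ in range(5000):
    p = rng.random()   # uniform prior on [0, 1]
    if abs(summary(simulate(p, 500)) - s_obs) < 0.03:   # ABC tolerance
        accepted.append(p)

posterior_mean = sum(accepted) / len(accepted)   # close to 0.3
```

Shrinking the tolerance sharpens the approximate posterior at the cost of a lower acceptance rate, which is the basic trade-off in all ABC schemes.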
6. Introduction to Methods of Approximation in Physics and Astronomy
van Putten, Maurice H. P. M.
2017-04-01
Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. In realizing the full discovery potential of these multimessenger approaches, the latter increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical-physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform and examples of numerical integration of ordinary differential equations and some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify
7. A new approximate fast method of computing the scattering from multilayer rough surfaces based on the Kirchhoff approximation
Tian, Jiasheng; Tong, Jian; Shi, Jian; Gui, Liangqi
2017-02-01
In this paper a new approximate fast method of calculating the bistatic-scattering coefficients of a multilayer structure with random rough interfaces is presented, based on the Kirchhoff Approximation (KA) and the electromagnetic theory of stratified media. First, the electromagnetic scattering from a Gaussian rough metal or dielectric surface was calculated by the KA method and the method of moments (MOM), and the effectiveness of the KA method was confirmed and verified. Second, a new approximate fast method was presented to calculate electromagnetic scattering from a multilayer random rough surface, based on electromagnetic reflection from multilayer parallel surfaces and KA. The results calculated by the new method were in good agreement with those obtained by MOM, especially near the specular point. Finally, the new method and MOM were compared in terms of computing time, memory resources, and complexity. The comparison indicated that the new approximate method was about 30-150 times faster than MOM. The new approximate fast method avoids a large matrix inversion, greatly reducing computation time and memory use and thus improving computational efficiency. It is an effective fast approximate method for analyzing electromagnetic scattering from multilayer rough surfaces.
8. A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Pade expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421 published in the Communications of ACM by H. Kuki is the best program either for individual application or for the inclusion in subroutine libraries.
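One of the surveyed ingredients, Stirling's asymptotic series, is easy to sketch for complex arguments: the recurrence Gamma(z) = Gamma(z+1)/z pushes small arguments into the region where the truncated series is accurate. The shift threshold and number of series terms below are illustrative choices, not those of Algorithm 421:

```python
import cmath
import math

# Stirling's asymptotic series for log-Gamma of a complex argument, with the
# recurrence log Gamma(z) = log Gamma(z+1) - log z used to shift small
# arguments into the region where the truncated series is accurate.

def lgamma_complex(z, shift=8.0):
    """Approximate log Gamma(z) for Re(z) > 0."""
    correction = 0.0
    while z.real < shift:
        correction -= cmath.log(z)   # Gamma(z) = Gamma(z + 1) / z
        z = z + 1
    # Stirling's series with two correction terms.
    s = (z - 0.5) * cmath.log(z) - z + 0.5 * math.log(2.0 * math.pi)
    s = s + 1.0 / (12.0 * z) - 1.0 / (360.0 * z**3)
    return s + correction

# Agrees with the real-axis log-gamma to roughly 1e-8 for moderate arguments.
approx = lgamma_complex(complex(3.5, 0.0)).real
```

Truncating after the 1/(360 z³) term leaves an error of order 1/|z|⁵ at the shifted argument, which is why the shift threshold matters more than the number of retained terms.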
9. Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
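The direct-versus-reciprocal distinction can be made concrete with the textbook response sigma(A) = P/A (stress in an axially loaded member of area A), for which the first-order reciprocal expansion is exact while the direct expansion is not. The load and areas below are illustrative numbers, not from the paper:

```python
# Direct vs. reciprocal first-order Taylor expansions of sigma(A) = P / A
# about a design point A0. The reciprocal expansion (linear in 1/A)
# reproduces this response exactly; the direct expansion (linear in A) does not.

P = 1000.0   # applied load
A0 = 4.0     # design point (member area)

def sigma(A):
    return P / A

dsig_dA = -P / A0**2   # exact derivative at the design point

def direct_approx(A):
    # sigma(A0) + dsigma/dA * (A - A0)
    return sigma(A0) + dsig_dA * (A - A0)

def reciprocal_approx(A):
    # first-order expansion in the reciprocal variable 1/A
    return sigma(A0) + dsig_dA * (-A0**2) * (1.0 / A - 1.0 / A0)

A1 = 6.0
exact = sigma(A1)         # 166.67
d = direct_approx(A1)     # 125.0 (underestimates the stress)
r = reciprocal_approx(A1) # 166.67 (exact for this response)
```

This is the sense in which the direct and reciprocal expansions are members of one family: which member is more accurate depends on how the true response varies with the design variable.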
10. Communication: Improved pair approximations in local coupled-cluster methods
Schwilk, Max; Usvyat, Denis; Werner, Hans-Joachim
2015-03-01
In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.
11. A Binomial Approximation Method for the Ising Model
Streib, Noah; Streib, Amanda; Beichl, Isabel; Sullivan, Francis
2014-08-01
A large portion of the computation required for the partition function of the Ising model can be captured with a simple formula. In this work, we support this claim by defining an approximation to the partition function and other thermodynamic quantities of the Ising model that requires no algorithm at all. This approximation, which uses the high temperature expansion, is solely based on the binomial distribution, and performs very well at low temperatures. At high temperatures, we provide an alternative approximation, which also serves as a lower bound on the partition function and is trivial to compute. We provide theoretical evidence and the results of numerical experiments to support the strength of these approximations.
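As a ground-truth baseline for the quantity such approximations target, the partition function of a small 1D Ising ring can be enumerated by brute force and checked against the closed form Z = (2 cosh K)^N (1 + tanh^N K), which arises from the same high-temperature (tanh) expansion. This sketch is an exact reference computation, not the authors' binomial approximation itself:

```python
import itertools
import math

# Brute-force Ising partition function for a 1D ring of N spins with coupling
# K = J / (k_B T): Z = sum over spin configurations of exp(K * sum_i s_i s_{i+1}).

def partition_function_ring(N, K):
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        energy = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        Z += math.exp(K * energy)
    return Z

N, K = 4, 0.4
Z_exact = partition_function_ring(N, K)
# Transfer-matrix / high-temperature closed form for the ring.
Z_closed = (2 * math.cosh(K))**N * (1 + math.tanh(K)**N)
```

Enumeration is only feasible for tiny lattices (2^N configurations), which is exactly why closed-form approximations of the kind described above are valuable.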
12. Production-passage-time approximation: a new approximation method to accelerate the simulation process of enzymatic reactions.
PubMed
Kuwahara, Hiroyuki; Myers, Chris J
2008-09-01
Given the substantial computational requirements of stochastic simulation, approximation is essential for efficient analysis of any realistic biochemical system. This paper introduces a new approximation method to reduce the computational cost of stochastic simulations of an enzymatic reaction scheme which in biochemical systems often includes rapidly changing fast reactions with enzyme and enzyme-substrate complex molecules present in very small counts. Our new method removes the substrate dissociation reaction by approximating the passage time of the formation of each enzyme-substrate complex molecule which is destined to a production reaction. This approach skips the firings of unimportant yet expensive reaction events, resulting in a substantial acceleration in the stochastic simulations of enzymatic reactions. Additionally, since all the parameters used in our new approach can be derived by the Michaelis-Menten parameters which can actually be measured from experimental data, applications of this approximation can be practical even without having full knowledge of the underlying enzymatic reaction. Here, we apply this new method to various enzymatic reaction systems, resulting in a speedup of orders of magnitude in temporal behavior analysis without any significant loss in accuracy. Furthermore, we show that our new method can perform better than some of the best existing approximation methods for enzymatic reactions in terms of accuracy and efficiency.
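The reduced picture described above can be caricatured in a few lines: with substrate held effectively constant, firing production events directly at the Michaelis-Menten rate v = Vmax*S/(Km + S) turns production into a Poisson process. This is a simplified sketch of the idea (constant substrate, exponential waiting times), not the authors' production-passage-time algorithm:

```python
import random

# Reduced enzymatic production: skip the fast binding/unbinding events and
# fire production events directly at the Michaelis-Menten rate. With constant
# substrate this is a Poisson process with exponential inter-event times.

def simulate_production(Vmax, Km, S, t_end, rng):
    rate = Vmax * S / (Km + S)   # Michaelis-Menten production rate
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate)   # waiting time to the next production event
        if t > t_end:
            return count
        count += 1

rng = random.Random(1)
Vmax, Km, S, t_end = 10.0, 50.0, 100.0, 300.0
produced = simulate_production(Vmax, Km, S, t_end, rng)
expected = Vmax * S / (Km + S) * t_end   # 2000 events on average
```

Because every substrate binding/unbinding firing is skipped, each simulated event here replaces many events of the full stochastic scheme, which is the source of the reported speedup.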
13. Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
15. A method of approximating range size of small mammals
USGS Publications Warehouse
Stickel, L.F.
1965-01-01
In summary, trap success trends appear to provide a useful approximation to range size of easily trapped small mammals such as Peromyscus. The scale of measurement can be adjusted as desired. Further explorations of the usefulness of the plan should be made and modifications possibly developed before adoption.
16. Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
17. On Using a Fast Multipole Method-based Poisson Solver in an Approximate Projection Method
SciTech Connect
Williams, Sarah A.; Almgren, Ann S.; Puckett, E. Gerry
2006-03-28
Approximate projection methods are useful computational tools for solving the equations of time-dependent incompressible flow. In this report we will present a new discretization of the approximate projection in an approximate projection method. The discretizations of divergence and gradient will be identical to those in existing approximate projection methodology using cell-centered values of pressure; however, we will replace inversion of the five-point cell-centered discretization of the Laplacian operator by a Fast Multipole Method-based Poisson Solver (FMM-PS). We will show that the FMM-PS solver can be an accurate and robust component of an approximate projection method for constant-density, inviscid, incompressible flow problems. Computational examples exhibiting second-order accuracy for smooth problems will be shown. The FMM-PS solver will be found to be more robust than inversion of the standard five-point cell-centered discretization of the Laplacian for certain time-dependent problems that challenge the robustness of the approximate projection methodology.
18. Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
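A simplified sketch of the sampling idea, using linear rather than cubic-spline interpolation of the empirical quantile function (the plotting positions and function names are my own choices): sort the sample, interpolate Q(u), then draw new values by inverse-transform sampling.

```python
import bisect
import random

# Inverse-transform sampling from an interpolated empirical quantile function:
# sort the data, attach plotting positions u_i, and interpolate Q(u) linearly.

def make_quantile_function(sample):
    xs = sorted(sample)
    n = len(xs)
    us = [(i + 0.5) / n for i in range(n)]   # plotting positions
    def Q(u):
        if u <= us[0]:
            return xs[0]
        if u >= us[-1]:
            return xs[-1]
        j = bisect.bisect_right(us, u)
        w = (u - us[j - 1]) / (us[j] - us[j - 1])
        return xs[j - 1] + w * (xs[j] - xs[j - 1])
    return Q

rng = random.Random(7)
data = [rng.gauss(10.0, 2.0) for _ in range(5000)]
Q = make_quantile_function(data)

# New samples via u ~ Uniform(0, 1) mapped through Q.
resample = [Q(rng.random()) for _ in range(5000)]
mean_resample = sum(resample) / len(resample)
```

A cubic spline, as in the paper, smooths Q between the order statistics and so handles sparse tails better than the piecewise-linear stand-in used here.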
19. Approximation of the transport equation by a weighted particle method
SciTech Connect
Mas-Gallic, S.; Poupaud, F.
1988-08-01
We study a particle method for numerically solving a model equation for neutron transport. We present the method and develop the theoretical convergence analysis. We prove the stability and the convergence of the method in L^∞. Some computational test results are given.
20. Decentralized Bayesian search using approximate dynamic programming methods.
PubMed
Zhao, Yijia; Patek, Stephen D; Beling, Peter A
2008-08-01
We consider decentralized Bayesian search problems that involve a team of multiple autonomous agents searching for targets on a network of search points operating under the following constraints: 1) interagent communication is limited; 2) the agents do not have the opportunity to agree in advance on how to resolve equivalent but incompatible strategies; and 3) each agent lacks the ability to control or predict with certainty the actions of the other agents. We formulate the multiagent search-path-planning problem as a decentralized optimal control problem and introduce approximate dynamic heuristics that can be implemented in a decentralized fashion. After establishing some analytical properties of the heuristics, we present computational results for a search problem involving two agents on a 5 x 5 grid.
1. Effective moduli of particulate solids: Lubrication approximation method
Qi, F.; Phan-Thien, N.; X. J. Fan
To efficiently calculate the effective properties of a composite, which consists of rigid spherical inclusions not necessarily of the same sizes in a homogeneous isotropic elastic matrix, a method based on the lubrication forces between neighbouring particles has been developed. The method is used to evaluate the effective Lamé moduli and the Poisson's ratio of the composite, for the particles in random configurations and in cubic lattices. A good agreement with experimental results given by Smith (1975) for particles in random configurations is observed, and also the numerical results on the effective moduli agree well with the results given by Nunan & Keller (1984) for particles in cubic lattices.
2. An approximate method for determining of investment risk
Slavkova, Maria; Tzenova, Zlatina
2016-12-01
In this work a method for determining investment risk across all economic states is considered. It is connected to matrix games with two players. A definition of risk in a matrix game is introduced. Three properties are proven. An appropriate example is considered.
3. Approximate proximal point methods for convex programming problems
SciTech Connect
Eggermont, P.
1994-12-31
We study proximal point methods for the finite dimensional convex programming problem: minimize f(x) subject to x ∈ C, where f : dom f ⊆ R^n → R is a proper convex function and C ⊆ R^n is a closed convex set.
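A minimal proximal point sketch for this problem class, with the illustrative choices f(x) = |x| and C = [1, 3] (both convex): the proximal step for |x| is soft-thresholding, followed here by projection onto C.

```python
# Proximal point iteration for min |x| over C = [1, 3]: each step applies the
# proximal operator of lam*|x| (soft-thresholding) and then projects onto C.

def prox_abs(x, lam):
    """Proximal operator of lam*|x| (soft-thresholding)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def project(x, lo, hi):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def proximal_point(x0, lam=0.5, n_iter=50, lo=1.0, hi=3.0):
    x = x0
    for _ in range(n_iter):
        x = project(prox_abs(x, lam), lo, hi)
    return x

# min |x| over [1, 3] is attained at x = 1.
x_star = proximal_point(10.0)
```

Each iteration only shrinks the distance to the solution, which is the robustness property that makes proximal point methods attractive even when the subproblems are solved approximately, as in the paper's setting.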
4. SET: a pupil detection method using sinusoidal approximation
PubMed Central
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
5. SET: a pupil detection method using sinusoidal approximation.
PubMed
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as "SET") that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations ("Natural"); and images of less challenging indoor scenes ("CASIA-Iris-Thousand"). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library ("DLL"), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk).
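The sinusoidal-approximation idea can be illustrated by fitting r(theta) = a0 + a1*cos(theta) + b1*sin(theta) to candidate pupil-boundary points by least squares; over uniformly spaced angles the fit has the closed form below. This is a generic illustration of the fitting step only, not SET's actual segmentation pipeline:

```python
import math

# Least-squares fit of a first-harmonic sinusoid to boundary radii sampled at
# uniformly spaced angles. Over a uniform angular grid the basis functions are
# orthogonal, so the coefficients are discrete Fourier averages.

def fit_sinusoid(thetas, radii):
    n = len(thetas)
    a0 = sum(radii) / n
    a1 = 2.0 * sum(r * math.cos(t) for t, r in zip(thetas, radii)) / n
    b1 = 2.0 * sum(r * math.sin(t) for t, r in zip(thetas, radii)) / n
    return a0, a1, b1

# Boundary points of a circular "pupil" of radius 5 with slight ellipticity
# (the second-harmonic ripple is outside the fitted model).
thetas = [2 * math.pi * k / 360 for k in range(360)]
radii = [5.0 + 0.2 * math.cos(2 * t) for t in thetas]
a0, a1, b1 = fit_sinusoid(thetas, radii)   # a0 ~ 5, a1 ~ 0, b1 ~ 0
```

The fitted a0 recovers the mean pupil radius while a1 and b1 capture a center offset; higher harmonics, such as the ellipticity added above, are averaged out by the low-order model.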
6. Computation of atmospheric cooling rates by exact and approximate methods
NASA Technical Reports Server (NTRS)
Ridgway, William L.; HARSHVARDHAN; Arking, Albert
1991-01-01
Infrared fluxes and cooling rates for several standard model atmospheres, with and without water vapor, carbon dioxide, and ozone, have been calculated using a line-by-line method at 0.01/cm resolution. The sensitivity of the results to the vertical integration scheme and to the model for water vapor continuum absorption is shown. Comparison with similar calculations performed at NOAA/GFDL shows agreement to within 0.5 W/sq m in fluxes at various levels and 0.05 K/d in cooling rates. Comparison with a fast, parameterized radiation code used in climate models reveals a worst case difference, when all gases are included, of 3.7 W/sq m in flux; cooling rate differences are 0.1 K/d or less when integrated over a substantial layer with point differences as large as 0.3 K/d.
7. Lubrication approximation in completed double layer boundary element method
Nasseri, S.; Phan-Thien, N.; Fan, X.-J.
This paper reports on the results of the numerical simulation of the motion of solid spherical particles in shear Stokes flows. Using the completed double layer boundary element method (CDLBEM) via distributed computing under Parallel Virtual Machine (PVM), the effective viscosity of the suspension has been calculated for a finite number of spheres in a cubic array, or in a random configuration. In the simulation presented here, the short-range interactions via lubrication forces are also taken into account, via the range completer in the formulation, whenever the gap between two neighbouring particles is closer than a critical gap. The results for particles in a simple cubic array agree with the results of Nunan and Keller (1984) and the Stokesian Dynamics of Brady et al. (1988). To evaluate the lubrication forces between particles in a random configuration, a critical gap of 0.2 of the particle's radius is suggested and the results are tested against the experimental data of Thomas (1965) and the empirical equation of Krieger-Dougherty (Krieger, 1972). Finally, the quasi-steady trajectories are obtained for a time-varying configuration of 125 particles.
8. Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
9. Convergence of hausdorff approximation methods for the Edgeworth-Pareto hull of a compact set
Efremov, R. V.
2015-11-01
The Hausdorff methods comprise an important class of polyhedral approximation methods for convex compact bodies, since they have an optimal convergence rate and possess other useful properties. The concept of Hausdorff methods is extended to a problem arising in multicriteria optimization, namely, to the polyhedral approximation of the Edgeworth-Pareto hull (EPH) of a convex compact set. It is shown that the sequences of polyhedral sets generated by Hausdorff methods converge to the EPH to be approximated. It is shown that the Estimate Refinement method, which is most frequently used to approximate the EPH of convex compact sets, is a Hausdorff method and, hence, generates sequences of sets converging to the EPH.
10. A new method of imposing boundary conditions in pseudospectral approximations of hyperbolic equations
NASA Technical Reports Server (NTRS)
Funaro, D.; Gottlieb, D.
1988-01-01
A new method to impose boundary conditions for pseudospectral approximations to hyperbolic equations is suggested. This method involves the collocation of the equation at the boundary nodes as well as satisfying boundary conditions. Stability and convergence results are proven for the Chebyshev approximation of linear scalar hyperbolic equations. The eigenvalues of this method applied to parabolic equations are shown to be real and negative.
11. Hausdorff methods for approximating the convex Edgeworth-Pareto hull in integer problems with monotone objectives
Pospelov, A. I.
2016-08-01
Adaptive methods for the polyhedral approximation of the convex Edgeworth-Pareto hull in multiobjective monotone integer optimization problems are proposed and studied. For these methods, theoretical convergence rate estimates with respect to the number of vertices are obtained. The estimates coincide in order with those for filling and augmentation H-methods intended for the approximation of nonsmooth convex compact bodies.
12. On the interpretation of large gravimagnetic data by the modified method of S-approximations
Stepanova, I. E.; Raevskiy, D. N.; Shchepetilov, A. V.
2017-01-01
The modified method of S-approximations applied for processing large and superlarge gravity and magnetic prospecting data is considered. The modified S-approximations of the elements of gravitational field are obtained due to the efficient block methods for solving the system of linear algebraic equations (SLAEs) to which the geophysically meaningful problem is reduced. The results of the mathematical experiment are presented.
13. The complex variable boundary element method: Applications in determining approximative boundaries
USGS Publications Warehouse
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
14. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal trajectory of minimum-fuel-to-climb for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.
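As background to the second method, a Chebyshev expansion of a smooth profile can be sketched with NumPy's polynomial module; the climb-like curve below is purely illustrative and is not taken from the paper:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical smooth climb-like profile h(t) on t in [0, 1] (illustrative only)
t = np.linspace(0.0, 1.0, 200)
h = 1.0 - np.exp(-3.0 * t)

# Map t onto Chebyshev's natural domain [-1, 1] and fit a degree-10 expansion
x = 2.0 * t - 1.0
coef = C.chebfit(x, h, 10)
h_approx = C.chebval(x, coef)

max_err = np.max(np.abs(h - h_approx))
```

Because Chebyshev coefficients of smooth functions decay rapidly, a low-degree expansion already reproduces the profile to high accuracy, which is what makes such approximations attractive for direct trajectory optimization.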
15. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
16. Comparison of Finite Differences and WKB approximation Methods for PT symmetric complex potentials
Naceri, Leila; Chekkal, Meziane; Hammou, Amine B.
2016-10-01
We consider the one-dimensional Schrödinger eigenvalue problem on a finite domain (Sturm-Liouville problem) for several PT-symmetric complex potentials, studied by Bender and Jones using the WKB approximation method. We make a comparison between the solutions of these PT-symmetric complex potentials using both the finite difference method (FDM) and the WKB approximation method and show quantitative and qualitative agreement between the two methods.
17. The Subspace Projected Approximate Matrix (SPAM) modification of the Davidson method
SciTech Connect
Shepard, R.; Tilson, J.L.; Wagner, A.F.; Minkoff, M.
1997-12-31
A modification of the Davidson subspace expansion method, a Ritz approach, is proposed in which the expansion vectors are computed from a "cheap" approximating eigenvalue equation. This approximate eigenvalue equation is assembled using projection operators constructed from the subspace expansion vectors. The method may be implemented using an inner/outer iteration scheme, or it may be implemented by modifying the usual Davidson algorithm in such a way that exact and approximate matrix-vector product computations are interspersed. A multi-level algorithm is proposed in which several levels of approximate matrices are used.
18. Low-rank approximations with sparse factors II: Penalized methods with discrete Newton-like iterations
SciTech Connect
Zhang, Zhenyue; Zha, Hongyuan; Simon, Horst
2006-07-31
In this paper, we developed numerical algorithms for computing sparse low-rank approximations of matrices, and we also provided a detailed error analysis of the proposed algorithms together with some numerical experiments. The low-rank approximations are constructed in a certain factored form with the degree of sparsity of the factors controlled by some user-specified parameters. In this paper, we cast the sparse low-rank approximation problem in the framework of penalized optimization problems. We discuss various approximation schemes for the penalized optimization problem which are more amenable to numerical computations. We also include some analysis to show the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by our new methods with those obtained using the methods in the earlier paper and several other existing methods for computing sparse low-rank approximations. Numerical examples show that the penalized methods are more robust and produce approximations with factors which have fewer columns and are sparser.
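For context, the dense (non-sparse) baseline that such factored approximations are measured against is the truncated SVD, which by the Eckart-Young theorem is the best low-rank approximation in the Frobenius norm. A minimal sketch on an arbitrary random matrix (not the paper's penalized method):

```python
import numpy as np

def truncated_svd(A, rank):
    """Best rank-`rank` approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Scale the leading left singular vectors by the leading singular values
    return U[:, :rank] * s[:rank] @ Vt[:rank, :], s

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
rank = 5
A5, s = truncated_svd(A, rank)

# The Frobenius error equals the l2 norm of the discarded singular values
err = np.linalg.norm(A - A5, "fro")
tail = np.sqrt(np.sum(s[rank:] ** 2))
```

The sparse-factor methods of the paper trade some of this optimal error for factors with far fewer nonzeros.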
19. The Subspace Projected Approximate Matrix (SPAM) Modification of the Davidson Method
Shepard, Ron; Wagner, Albert F.; Tilson, Jeffrey L.; Minkoff, Michael
2001-09-01
A modification of the iterative matrix diagonalization method of Davidson is presented that is applicable to the symmetric eigenvalue problem. This method is based on subspace projections of a sequence of one or more approximate matrices. The purpose of these approximate matrices is to improve the efficiency of the solution of the desired eigenpairs by reducing the number of matrix-vector products that must be computed with the exact matrix. Several applications are presented. These are chosen to show the range of applicability of the method, the convergence behavior for a wide range of matrix types, and also the wide range of approaches that may be employed to generate approximate matrices.
20. Evaluation of Jacobian determinants by Monte Carlo methods - Application to the quasiclassical approximation in molecular scattering.
NASA Technical Reports Server (NTRS)
La Budde, R. A.
1972-01-01
Sampling techniques have been used previously to evaluate Jacobian determinants that occur in classical mechanical descriptions of molecular scattering. These determinants also occur in the quasiclassical approximation. A new technique is described which can be used to evaluate Jacobian determinants which occur in either description. This method is expected to be valuable in the study of reactive scattering using the quasiclassical approximation.
1. Accelerated over relaxation iterative method using triangle element approximation for solving 2D Helmholtz Equations
Akhir, M. K. M.; Sulaiman, J.
2017-09-01
Weighted iterative methods, particularly the Accelerated Over-Relaxation (AOR) method, are used to solve the linear system generated from the triangle finite element approximation equation in solving the 2D Helmholtz equation. The development of the AOR iterative method is also presented. Numerical experiments have been carried out and the results obtained confirm the superiority of the proposed iterative method.
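A minimal sketch of the AOR iteration itself (not the paper's finite-element system; the strictly diagonally dominant tridiagonal test matrix below is illustrative). With the splitting A = D - L - U, the iteration is (D - rL)x^(k+1) = [(1-w)D + (w-r)L + wU]x^(k) + wb, which reduces to SOR when r = w and to Gauss-Seidel when r = w = 1:

```python
import numpy as np

def aor_solve(A, b, omega=0.9, r=0.6, tol=1e-10, max_iter=500):
    """Accelerated Over-Relaxation for Ax = b using the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * L                      # lower triangular iteration matrix
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        rhs = ((1 - omega) * D + (omega - r) * L + omega * U) @ x + omega * b
        x_new = np.linalg.solve(M, rhs)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant test system (illustrative, not from the paper)
n = 20
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = aor_solve(A, b)
```

At a fixed point the splitting collapses back to Ax = b, so convergence of the sweep implies a solution of the original system.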
2. Approximate Method of Calculating Heating Rates at General Three-Dimensional Stagnation Points During Atmospheric Entry
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II
1982-01-01
An approximate method for calculating heating rates at general three dimensional stagnation points is presented. The application of the method for making stagnation point heating calculations during atmospheric entry is described. Comparisons with results from boundary layer calculations indicate that the method should provide an accurate method for engineering type design and analysis applications.
3. Extension of the weak-line approximation and application to correlated-k methods
SciTech Connect
Conley, A.J.; Collins, W.D.
2011-03-15
Global climate models require accurate and rapid computation of the radiative transfer through the atmosphere. Correlated-k methods are often used. One of the approximations used in correlated-k models is the weak-line approximation. We introduce an approximation T_g which reduces to the weak-line limit when optical depths are small, and captures the deviation from the weak-line limit as the extinction deviates from the weak-line limit. This approximation is constructed by matching the first two moments of the gamma distribution to the k-distribution of the transmission. We compare the errors of the weak-line approximation with T_g in the context of a water vapor spectrum. The extension T_g is more accurate and converges more rapidly than the weak-line approximation.
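A toy illustration of the moment-matching idea (the parameters and setup are assumptions for illustration, not the paper's construction): if the extinction coefficient across a band follows a gamma distribution, the band-averaged transmission has a closed form, via the gamma moment-generating function, that reduces to the weak-line limit at small optical depth:

```python
import numpy as np

# For extinction k ~ Gamma(shape=a, scale=s), the band-average transmission
# E[exp(-k * tau)] = (1 + s*tau)^(-a)  (gamma MGF), which tends to the
# weak-line limit exp(-<k>*tau) with <k> = a*s as tau -> 0.
# Illustrative parameters only.
a, s, tau = 2.0, 0.5, 1.0

rng = np.random.default_rng(1)
k = rng.gamma(a, s, size=1_000_000)
T_mc = np.mean(np.exp(-k * tau))        # Monte Carlo band transmission
T_closed = (1 + s * tau) ** (-a)        # gamma closed form
T_weak = np.exp(-a * s * tau)           # weak-line approximation
```

At this (moderate) optical depth the closed form and the weak-line value differ noticeably, which is the regime a T_g-style extension is meant to capture.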
4. Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
5. Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
6. Evaluation of the successive approximations method for acoustic streaming numerical simulations.
PubMed
Catarino, S O; Minas, G; Miranda, J M
2016-05-01
This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximation method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving the acoustic streaming problems, since it affects the global flow. By adequately calculating the initial condition for first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.
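The successive-approximations idea in its generic fixed-point (Picard) form can be sketched on a scalar ODE; this illustrates the iteration concept only, not the paper's second-order Navier-Stokes scheme:

```python
import numpy as np

# Picard iteration for x'(t) = x(t), x(0) = 1:
#   x_{n+1}(t) = 1 + integral_0^t x_n(s) ds, which converges to exp(t).
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
x = np.ones_like(t)                     # zeroth approximation x_0(t) = 1
for _ in range(25):
    # Cumulative trapezoid rule for the running integral
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (x[1:] + x[:-1]) * dt)))
    x = 1.0 + integral

max_err = np.max(np.abs(x - np.exp(t)))
```

Each pass feeds the previous approximation back through the integral operator, the same structural idea as solving the second-order streaming problem on top of a first-order acoustic solution.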
7. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review
Schnoerr, David; Sanguinetti, Guido; Grima, Ramon
2017-03-01
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
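The exact stochastic simulation the review refers to can be illustrated with Gillespie's algorithm on the simplest birth-death network (production 0 -> X at rate k_prod, degradation X -> 0 at rate k_deg per molecule); the rate values below are arbitrary:

```python
import numpy as np

def gillespie_birth_death(k_prod=10.0, k_deg=0.1, t_end=1000.0, seed=0):
    """Exact SSA for 0 -> X (rate k_prod) and X -> 0 (propensity k_deg * x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, 0
    times, states = [0.0], [0]
    while t < t_end:
        a1, a2 = k_prod, k_deg * x      # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)  # time to next reaction
        x += 1 if rng.random() < a1 / a0 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
# Time-weighted average copy number; the stationary mean is k_prod / k_deg = 100
dt = np.diff(times)
mean_x = np.sum(states[:-1] * dt) / times[-1]
```

For this network the stationary distribution is Poisson with mean k_prod / k_deg, so the long-run time average should sit near 100, fluctuations included.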
8. Padé approximation of the adiabatic electron contribution to the gyrokinetic quasi-neutrality equation in the ORB5 code
Lanti, E.; Dominski, J.; Brunner, S.; McMillan, B. F.; Villard, L.
2016-11-01
This work aims at completing the implementation of a solver for the quasi-neutrality equation using a Padé approximation in the global gyrokinetic code ORB5. Initially [Dominski, Ph.D. thesis, 2016], the Padé approximation was only implemented for the kinetic electron model. To enable runs with adiabatic or hybrid electron models while using a Padé approximation to the polarization response, the adiabatic response term of the quasi-neutrality equation must be consistently modified. It is shown that the Padé solver is in good agreement with the arbitrary wavelength solver of ORB5 [Dominski, Ph.D. thesis, 2016]. To perform this verification, the linear dispersion relation of an ITG-TEM transition is computed for both solvers and the linear growth rates and frequencies are compared.
9. Asymptotic properties of the estimate refinement method in polyhedral approximation of multidimensional balls
Kamenev, G. K.
2015-10-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is considered. In the approximation of convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. The properties of the method are examined as applied to the polyhedral approximation of a multidimensional ball. As vertices of approximating polytopes, the method is shown to generate a deep holes sequence on the surface of the ball. As a result, previously obtained combinatorial properties of convex hulls of the indicated sequences, namely, the convergence rates with respect to the number of faces of all dimensions and the optimal growth of the cardinality of the facial structure (of the norm of the f-vector) can be extended to such polytopes. The combinatorial properties of the approximating polytopes generated by the estimate refinement method are compared to the properties of polytopes with a facial structure of extremal cardinality. It is shown that the polytopes generated by the method are similar to stacked polytopes, on which the minimum number of faces of all dimensions is attained for a given number of vertices.
10. Gait Generation for a Small Biped Robot using Approximated Optimization Method
Nguyen, Tinh; Tao, Linh; Hasegawa, Hiroshi
2016-11-01
This paper proposes a novel approach for gait pattern generation of a small biped robot to enhance its walking behavior. The aim is to make the robot's gait more natural and more stable during walking. In this study, we use an approximated optimization method that applies the Differential Evolution algorithm (DE) to an objective function approximated by an Artificial Neural Network (ANN). In addition, we present a new humanlike foot structure with toes for the biped robot. To evaluate the method's performance, the robot was simulated with the multi-body dynamics simulation software Adams (MSC Software, USA). As a result, we confirmed that the biped robot with the proposed foot structure can walk naturally. The approximated optimization method based on the DE algorithm and an ANN is an effective approach to generating a gait pattern for the locomotion of the biped robot. This method is simpler than conventional methods using the Zero Moment Point (ZMP) criterion.
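A minimal sketch of the Differential Evolution core (the classic DE/rand/1/bin variant) used as the search engine; the toy quadratic objective stands in for the ANN surrogate, which is not reproduced here:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=0):
    """DE/rand/1/bin: scaled difference mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:             # greedy replacement
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)]

# Toy objective standing in for the ANN-approximated cost (minimum at (1, -2))
objective = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best = differential_evolution(objective, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

Because DE only needs objective evaluations (no gradients), pairing it with a cheap learned surrogate of an expensive simulation cost is a natural fit.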
11. Viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces
Chang, Shih-Sen
2006-11-01
By using viscosity approximation methods for a finite family of nonexpansive mappings in Banach spaces, some sufficient and necessary conditions for the iterative sequence to converging to a common fixed point are obtained. The results presented in the paper extend and improve some recent results in [H.K. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl. 298 (2004) 279-291; H.K. Xu, Remark on an iterative method for nonexpansive mappings, Comm. Appl. Nonlinear Anal. 10 (2003) 67-75; H.H. Bauschke, The approximation of fixed points of compositions of nonexpansive mappings in Banach spaces, J. Math. Anal. Appl. 202 (1996) 150-159; B. Halpern, Fixed points of nonexpansive maps, Bull. Amer. Math. Soc. 73 (1967) 957-961; J.S. Jung, Iterative approaches to common fixed points of nonexpansive mappings in Banach spaces, J. Math. Anal. Appl. 302 (2005) 509-520; P.L. Lions, Approximation de points fixes de contractions', C. R. Acad. Sci. Paris Ser. A 284 (1977) 1357-1359; A. Moudafi, Viscosity approximation methods for fixed point problems, J. Math. Anal. Appl. 241 (2000) 46-55; S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, J. Math. Anal. Appl. 75 (1980) 128-292; R. Wittmann, Approximation of fixed points of nonexpansive mappings, Arch. Math. 58 (1992) 486-491].
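The scheme under study has the generic form x_{n+1} = a_n f(x_n) + (1 - a_n) T(x_n), with T nonexpansive, f a contraction, and step sizes a_n -> 0 with divergent sum. A finite-dimensional toy instance (a planar rotation as T; all choices illustrative):

```python
import numpy as np

# T: planar rotation (nonexpansive, unique fixed point at the origin)
# f: a contraction; a_n = 1/(n+1) satisfies a_n -> 0 and sum a_n = infinity
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda v: R @ v
f = lambda v: 0.1 * v

x = np.array([1.0, 0.0])
for n in range(10000):
    a = 1.0 / (n + 1)
    x = a * f(x) + (1 - a) * T(x)       # viscosity approximation step
# x approaches the fixed point of T (here, the origin)
```

Plain Picard iteration of a rotation never converges (it just circles), while the vanishing viscosity term steers the iterates into the fixed point, which is the point of the method.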
12. Approximate Solution of Time-Fractional Advection-Dispersion Equation via Fractional Variational Iteration Method
PubMed Central
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given and the results indicate that the FVIM is of high accuracy, more efficient, and more convenient for solving time FADEs. PMID:24578662
13. Adjoined Piecewise Linear Approximations (APLAs) for Equating: Accuracy Evaluations of a Postsmoothing Equating Method
ERIC Educational Resources Information Center
Moses, Tim
2013-01-01
The purpose of this study was to evaluate the use of adjoined and piecewise linear approximations (APLAs) of raw equipercentile equating functions as a postsmoothing equating method. APLAs are less familiar than other postsmoothing equating methods (i.e., cubic splines), but their use has been described in historical equating practices of…
14. Numerical solution of 2D-vector tomography problem using the method of approximate inverse
SciTech Connect
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-10
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
17. Approximation methods for control of structural acoustics models with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.
1993-01-01
The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of the linear quadratic regulator (LQR) state space control formulation are discussed, and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
18. Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
20. Magnetic interface forward and inversion method based on Padé approximation
Zhang, Chong; Huang, Da-Nian; Zhang, Kai; Pu, Yi-Tao; Yu, Ping
2016-12-01
The magnetic interface forward and inversion method is realized using the Taylor series expansion to linearize the Fourier transform of the exponential function. With a large expansion step and an unbounded neighborhood, the Taylor series does not converge; therefore, this paper presents a magnetic interface forward and inversion method based on Padé approximation instead of the Taylor series expansion. Compared with the Taylor series, the Padé expansion converges more stably and approximates more accurately. Model tests show the validity of the magnetic forward modeling and inversion based on the Padé approximation proposed in the paper, and when this inversion method is applied to the measured data of the Matagami area in Canada, a stable and reasonable distribution of the underground interface is obtained.
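The accuracy advantage over a truncated Taylor series is easy to demonstrate on the exponential function, whose [2/2] Padé approximant has a simple closed form (a generic illustration, not the paper's magnetic-interface operator):

```python
import numpy as np

def pade_22_exp(x):
    """[2/2] Padé approximant of exp(x); matches the Taylor series through x^4."""
    num = 1 + x / 2 + x**2 / 12
    den = 1 - x / 2 + x**2 / 12
    return num / den

def taylor4_exp(x):
    """Degree-4 Taylor polynomial of exp(x), same number of matched terms."""
    return 1 + x + x**2 / 2 + x**3 / 6 + x**4 / 24

x = 1.0
pade_err = abs(pade_22_exp(x) - np.exp(x))
taylor_err = abs(taylor4_exp(x) - np.exp(x))
```

Both approximants match exp through fourth order at the origin, yet away from the origin the rational form tracks the function more closely, the same behavior exploited when replacing a Taylor linearization with a Padé one.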
1. The uniform asymptotic swallowtail approximation - Practical methods for oscillating integrals with four coalescing saddle points
NASA Technical Reports Server (NTRS)
Connor, J. N. L.; Curtis, P. R.; Farrelly, D.
1984-01-01
Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for that approximation is presented to the lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives. Isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytical function of its argument is considered, and two methods of solving this problem are described. The asymptotic evaluation of the butterfly canonical integral is addressed.
2. Simulation of mass transfer during osmotic dehydration of apple: a power law approximation method
Abbasi Souraki, B.; Tondro, H.; Ghavami, M.
2014-10-01
In this study, unsteady one-dimensional mass transfer during osmotic dehydration of apple was modeled using an approximate mathematical model. The mathematical model has been developed based on a power law profile approximation for moisture and solute concentrations in the spatial direction. The proposed model was validated by the experimental water loss and solute gain data, obtained from osmotic dehydration of infinite slab and cylindrical shape samples of apple in sucrose solutions (30, 40 and 50 % w/w), at different temperatures (30, 40 and 50 °C). The proposed model's predictions were also compared with those of the exact analytical model and of a parabolic approximation model. The values of mean relative errors with respect to the experimental data were estimated between 4.5 and 8.1 %, 6.5 and 10.2 %, and 15.0 and 19.1 %, for the exact analytical, power law and parabolic approximation methods, respectively. Although the parabolic approximation leads to simpler relations, the power law approximation method results in higher accuracy of average concentrations over the whole domain of dehydration time. Considering both simplicity and precision of the mathematical models, the power law model for short dehydration times and the simplified exact analytical model for long dehydration times could be used for explanation of the variations of the average water loss and solute gain in the whole domain of dimensionless times.
3. Laplace transform homotopy perturbation method for the approximation of variational problems.
PubMed
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As a case study we will solve four ordinary differential equations, and we will show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we will see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
4. Model reference adaptive control in fractional order systems using discrete-time approximation methods
2015-08-01
In this paper, model reference control of a fractional order system has been discussed. In order to control the fractional order plant, discrete-time approximation methods have been applied. Plant and reference model are discretized by Grünwald-Letnikov definition of the fractional order derivative using "Short Memory Principle". Unknown parameters of the fractional order system are appeared in the discrete time approximate model as combinations of parameters of the main system. The discrete time MRAC via RLS identification is modified to estimate the parameters and control the fractional order plant. Numerical results show the effectiveness of the proposed method of model reference adaptive control.
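The Grünwald-Letnikov discretization with the "Short Memory Principle" mentioned in this abstract can be sketched as follows; this is an illustrative implementation, not the paper's MRAC code, and the function names are our own.

```python
def gl_weights(alpha, n):
    # Recursive Grunwald-Letnikov weights:
    # w_0 = 1,  w_j = w_{j-1} * (1 - (alpha + 1)/j).
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(f, t, alpha, h, memory=None):
    # Approximate the alpha-order derivative of f at time t with step h.
    # 'memory' truncates the history sum (the short memory principle);
    # None keeps every term back to t = 0.
    n = int(t / h)
    if memory is not None:
        n = min(n, memory)
    w = gl_weights(alpha, n)
    return sum(w[j] * f(t - j * h) for j in range(n + 1)) / h**alpha

# Sanity check: for alpha = 1 the formula collapses to a backward
# difference, so the derivative of f(t) = t comes out as ~1.
val = gl_derivative(lambda t: t, 2.0, 1.0, 0.01)
print(val)  # ~1.0
```

For integer alpha the higher-order weights vanish exactly, which is a convenient check that the recursion is implemented correctly.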
5. An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method
SciTech Connect
Wilson, B.G.
1999-11-11
The Krieger-Li-Iafrate approximation can be expressed as the zeroth order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By pre-conditioning the iterate a first order correction can be obtained which recovers the bulk of quantal oscillations missing in the zeroth order approximation. A comparison of calculated total energies are given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.
6. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing]
NASA Technical Reports Server (NTRS)
Mier Muth, A. M.; Willsky, A. S.
1978-01-01
In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
7. Approximation methods for control of acoustic/structure models with piezoceramic actuators
NASA Technical Reports Server (NTRS)
Banks, H. T.; Fang, W.; Silcox, R. J.; Smith, R. C.
1991-01-01
The active control of acoustic pressure in a 2-D cavity with a flexible boundary (a beam) is considered. Specifically, this control is implemented via piezoceramic patches on the beam which produce pure bending moments. The incorporation of the feedback control in this manner leads to a system with an unbounded input term. Approximation methods in the context of linear quadratic regulator (LQR) state space control formulation are discussed and numerical results demonstrating the effectiveness of this approach in computing feedback controls for noise reduction are presented.
8. Adomian Decomposition Method for Approximating the Solutions of the Bidirectional Sawada-Kotera Equation
Lai, Xian-Jing; Cai, Xiao-Ou
2010-09-01
In this paper, the decomposition method is implemented for solving the bidirectional Sawada-Kotera (bSK) equation with two kinds of initial conditions. As a result, the Adomian polynomials have been calculated and the approximate and exact solutions of the bSK equation are obtained by means of Maple, such as solitary wave solutions, doubly-periodic solutions, two-soliton solutions. Moreover, we compare the approximate solution with the exact solution in a table and analyze the absolute error and the relative error. The results reported in this article provide further evidence of the usefulness of the Adomian decomposition method for obtaining solutions of nonlinear problems.
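As a hedged illustration of the decomposition iteration (our own sketch, not the authors' Maple computation), the scheme below applies Adomian decomposition to the simple nonlinear ODE y' = y^2, y(0) = 1. The Adomian polynomials of N(y) = y^2 are A_n = sum over i+j=n of y_i * y_j, and the partial sums reproduce the geometric series of the exact solution 1/(1 - t).

```python
def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists [c0, c1, ...].
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_int(p):
    # Antiderivative with integral from 0 to t.
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def poly_eval(p, t):
    return sum(c * t**k for k, c in enumerate(p))

def adomian_y_squared(y0, terms):
    # Solve y' = y^2, y(0) = y0 by Adomian decomposition:
    # y_{n+1}(t) = integral_0^t A_n dt, with A_n = sum_{i+j=n} y_i * y_j.
    ys = [[float(y0)]]          # y_0 is the constant initial condition
    for n in range(terms - 1):
        An = [0.0]
        for i in range(n + 1):
            prod = poly_mul(ys[i], ys[n - i])
            An = [a + b for a, b in
                  zip(An + [0.0] * (len(prod) - len(An)),
                      prod + [0.0] * (len(An) - len(prod)))]
        ys.append(poly_int(An))
    return ys

ys = adomian_y_squared(1.0, 8)
approx = sum(poly_eval(y, 0.5) for y in ys)
print(approx)  # partial sums approach the exact solution 1/(1 - 0.5) = 2
```

Here each y_n comes out as the monomial t^n, so the truncated series is the familiar geometric series, which makes the convergence of the decomposition easy to verify by hand.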
9. Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
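The derivative matrix construction described in this abstract can be sketched for an arbitrary set of collocation points using the standard Lagrange-basis formula D[i][j] = l'_j(x_i); this is a generic illustration, not the authors' code.

```python
def diff_matrix(x):
    # Differentiation matrix for polynomial interpolation through the
    # collocation points x: applying D to samples of f approximates f'
    # at the same points.
    n = len(x)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                D[i][i] = sum(1.0 / (x[i] - x[k]) for k in range(n) if k != i)
            else:
                num = 1.0
                for k in range(n):
                    if k != i and k != j:
                        num *= (x[i] - x[k]) / (x[j] - x[k])
                D[i][j] = num / (x[j] - x[i])
    return D

def apply(D, f):
    # The "matrix times vector multiply" mentioned in the abstract.
    return [sum(Dij * fj for Dij, fj in zip(row, f)) for row in D]

x = [0.0, 0.5, 1.0]
df = apply(diff_matrix(x), [xi**2 for xi in x])
print(df)  # a quadratic is differentiated exactly: ~[0.0, 1.0, 2.0]
```

With n points the scheme differentiates polynomials up to degree n-1 exactly, which is the property the collocation methods in the abstract rely on.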
10. Rational approximations from power series of vector-valued meromorphic functions
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C yields C(sup N), which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Pade approximations. In particular, we proved a Koenig type theorem concerning their poles and a de Montessus type theorem concerning their uniform convergence. We showed how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and the accompanying theoretical results to functions defined in arbitrary linear spaces were also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploited the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices which may or may not be diagonalizable, and are very closely related with known Krylov subspace methods.
11. Improved Parker's method for topographic models using Chebyshev series and low rank approximation
Wu, Leyuan; Lin, Qiang
2017-03-01
We present a new method to improve the convergence of the well-known Parker's formula for the modelling of gravity and magnetic fields caused by sources with complex topography. In the original Parker's formula, two approximations are made, which may cause considerable numerical errors and instabilities: 1) the approximation of the forward and inverse continuous Fourier transforms using their discrete counterparts, the forward and inverse Fast Fourier Transform (FFT) algorithms; 2) the approximation of the exponential function with its Taylor series expansion. In a previous paper of ours, we have made an effort addressing the first problem by applying the Gauss-FFT method instead of the standard FFT algorithm. The new Gauss-FFT based method shows improved numerical efficiency and agrees well with space-domain analytical or hybrid analytical-numerical algorithms. However, even under the simplifying assumption of a calculation surface being a level plane above all topographic sources, the method may still fail or become inaccurate under certain circumstances. When the peaks of the topography approach the observation surface too closely, the number of terms of the Taylor series expansion needed to reach a suitable precision becomes large and slows the calculation. We show in this paper that this problem is caused by the second approximation mentioned above, and it is due to the convergence property of the Taylor series expansion that the algorithm becomes inaccurate for certain topographic models with large amplitudes. Based on this observation, we present a modified Parker's method using low rank approximation (LRA) of the exponential function in virtue of the Chebfun software system. In this way, the optimal rate of convergence is achieved. Some pre-computation is needed but will not cause significant computational overheads. Synthetic and real model tests show that the method now works well for almost any practical topographic model, provided that the assumption
12. Improved Parker's method for topographic models using Chebyshev series and low rank approximation
Wu, Leyuan; Lin, Qiang
2017-05-01
We present a new method to improve the convergence of the well-known Parker's formula for the modelling of gravity and magnetic fields caused by sources with complex topography. In the original Parker's formula, two approximations are made, which may cause considerable numerical errors and instabilities: (1) the approximation of the forward and inverse continuous Fourier transforms using their discrete counterparts, the forward and inverse Fast Fourier Transform (FFT) algorithms; (2) the approximation of the exponential function with its Taylor series expansion. In a previous paper of ours, we have made an effort addressing the first problem by applying the Gauss-FFT method instead of the standard FFT algorithm. The new Gauss-FFT based method shows improved numerical efficiency and agrees well with space-domain analytical or hybrid analytical-numerical algorithms. However, even under the simplifying assumption of a calculation surface being a level plane above all topographic sources, the method may still fail or become inaccurate under certain circumstances. When the peaks of the topography approach the observation surface too closely, the number of terms of the Taylor series expansion needed to reach a suitable precision becomes large and slows the calculation. We show in this paper that this problem is caused by the second approximation mentioned above, and it is due to the convergence property of the Taylor series expansion that the algorithm becomes inaccurate for certain topographic models with large amplitudes. Based on this observation, we present a modified Parker's method using low rank approximation of the exponential function in virtue of the Chebfun software system. In this way, the optimal rate of convergence is achieved. Some pre-computation is needed but will not cause significant computational overheads. Synthetic and real model tests show that the method now works well for almost any practical topographic model, provided that the assumption, that
13. An approximate method for solution to variable moment of inertia problems
NASA Technical Reports Server (NTRS)
Beans, E. W.
1981-01-01
An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant moment of inertia problem. The integrated average of the moment of inertia is determined. The cycle time of the equivalent system was found to match that of the original if the rotating speed is 4 times greater than the system's minimum natural frequency.
14. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics
Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.
2017-07-01
The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity with the total number of the sought parameters n × 10^3 of the medium. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The work of the method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
15. Comparing methods for the approximation of rainfall fields in environmental applications
Patané, G.; Cerri, A.; Skytt, V.; Pittaluga, S.; Biasotti, S.; Sobrero, D.; Dokken, T.; Spagnuolo, M.
2017-05-01
Digital environmental data are becoming commonplace and the amount of information they provide is complex to process, due to the size, variety, and dynamic nature of the data captured by sensing devices. The paper discusses an evaluation framework for comparing methods to approximate observed rain data, in real conditions of sparsity of the observations. The novelty brought by this experimental study stands in the geographical area and heterogeneity of the data used for evaluation, aspects which challenge all approximation methods. The Liguria region, located in the north-west of Italy, is a complex area for the orography and the closeness to the sea, which cause complex hydro-meteorological events. The observed rain data are highly heterogeneous: two data sets come from measured rain gathered from two different rain gauge networks, with different characteristics and spatial distributions over the Liguria region; the third data set comes from weather radar, with a more regular coverage of the same region but a different veracity. Finally, another novelty of the paper is brought by the proposal of an application-oriented perspective on the comparison. The approximation models the rain field, whose maxima and their evolution are essential for an effective monitoring of meteorological events. Therefore, we adapt a storm tracking technique to the analysis of the displacement of maxima computed by the different methods, used as a dissimilarity measure among the approximation methods analyzed.
16. Nonlinear programming extensions to rational function approximation methods for unsteady aerodynamic forces
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from different approaches are described and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
17. An Iterative Pixel-Level Image Matching Method for Mars Mapping Using Approximate Orthophotos
Geng, X.; Xu, Q.; Lan, C. Z.; Xing, S.
2017-07-01
Mars mapping is essential to the scientific research of the red planet. The special terrain characteristics of the Martian surface can be used to develop a targeted image matching method. In this paper, in order to generate high resolution Mars DEM, a pixel-level image matching method for Mars orbital pushbroom images is proposed. The main strategies of our method include: (1) image matching on approximate orthophotos; (2) estimating approximate values of conjugate points by using ground point coordinates of orthophotos; (3) hierarchical image matching; (4) generating DEM and approximate orthophotos at each pyramid level; (5) fast transformation from ground points to image points for pushbroom images. The derived DEM at each pyramid level is used as reference data for the generation of approximate orthophotos at the next pyramid level. With iterative processing, the generated DEM becomes more and more accurate and a very small search window is precise enough for the determination of conjugate points. The images acquired by the High Resolution Stereo Camera (HRSC) on European Mars Express were used to verify our method's feasibility. Experimental results demonstrate that accurate DEM data can be derived with an acceptable time cost by pixel-level image matching.
18. Sequential Experimentation: Comparing Stochastic Approximation Methods Which Find the "Right" Value of the Independent Variable.
ERIC Educational Resources Information Center
Hummel, Thomas J.; Johnston, Charles B.
This research investigates stochastic approximation procedures of the Robbins-Monro type. Following a brief introduction to sequential experimentation, attention is focused on formal methods for selecting successive values of a single independent variable. Empirical results obtained through computer simulation are used to compare several formal…
19. An analytical technique for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients which insures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth order numerator and second order denominator polynomials.
20. Approximate Solution Methods for Spectral Radiative Transfer in High Refractive Index Layers
NASA Technical Reports Server (NTRS)
Siegel, R.; Spuckler, C. M.
1994-01-01
Some ceramic materials for high temperature applications are partially transparent for radiative transfer. The refractive indices of these materials can be substantially greater than one which influences internal radiative emission and reflections. Heat transfer behavior of single and laminated layers has been obtained in the literature by numerical solutions of the radiative transfer equations coupled with heat conduction and heating at the boundaries by convection and radiation. Two-flux and diffusion methods are investigated here to obtain approximate solutions using a simpler formulation than required for exact numerical solutions. Isotropic scattering is included. The two-flux method for a single layer yields excellent results for gray and two band spectral calculations. The diffusion method yields a good approximation for spectral behavior in laminated multiple layers if the overall optical thickness is larger than about ten. A hybrid spectral model is developed using the two-flux method in the optically thin bands, and radiative diffusion in bands that are optically thick.
1. A numerical method for approximating antenna surfaces defined by discrete surface points
NASA Technical Reports Server (NTRS)
Lee, R. Q.; Acosta, R.
1985-01-01
A simple numerical method for the quadratic approximation of a discretely defined reflector surface is described. The numerical method was applied to interpolate the surface normal of a parabolic reflector surface from a grid of the nine surface points closest to the point of incidence. After computing the surface normals, the geometrical optics and the aperture integration method using the discrete Fast Fourier Transform (FFT) were applied to compute the radiation patterns for symmetric and offset antenna configurations. The computed patterns are compared to that of the analytic case and to the patterns generated from another numerical technique using the spline function approximation. In the paper, examples of computations are given. The accuracy of the numerical method is discussed.
2. A nonstationary geophysical inversion approach with an approximation error method for imaging fluid flow
Lehikoinen, A.; Finsterle, S.; Voutilainen, A.; Kowalsky, M.; Kaipio, J.
2006-12-01
We present a new methodology for imaging the evolution of electrically conductive fluids in porous media. The state estimation problem is formulated in terms of an evolution-observation model, and the estimates are obtained via Bayesian filtering. The approach is based on an extended Kalman filter algorithm and includes an approximation error method to model uncertainties in the evolution and observation models. The example we consider involves the imaging of time-varying distributions of water saturation in porous media using time-lapse electrical resistance tomography (ERT). The evolution model we employ is a simplified model for simulating flow through partially saturated porous media. The complete electrode model (with Archie's law relating saturations to electrical conductivity) is used as the observation model. We propose to account for approximation errors in the evolution and observation models by constructing a statistical model of the differences between the "accurate" and "approximate" representations of fluid flow, and by including this information in the calculation of the posterior probability density of the estimated system state. The proposed method provides improved estimates of water saturation distribution relative to traditional reconstruction schemes that rely on conventional stabilization methods (e.g., using a smoothness prior) and relative to the extended Kalman filter without the approximation error method incorporated. Finally, the approximation error method allows for the use of a simplified and computationally efficient evolution model in the state estimation scheme. This work was supported, in part, by the Finnish Funding Agency for Technology and Innovation (TEKES), projects 40285/05 and 40347/05, and by the U.S. Dept. of Energy under Contract No. DE-AC02- 05CH11231.
3. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations
Alam Khan, Najeeb; Razzaq, Oyoon Abdul
2016-03-01
In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative are utilized to convert the FBVDE into a simple computational problem by reducing it into a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.
4. Approximate method of free energy calculation for spin system with arbitrary connection matrix
Kryzhanovsky, Boris; Litinskii, Leonid
2015-01-01
The proposed method of the free energy calculation is based on the approximation of the energy distribution in the microcanonical ensemble by the Gaussian distribution. We hope that our approach will be effective for the systems with long-range interaction, where large coordination number q ensures the correctness of the central limit theorem application. However, the method provides good results also for systems with short-range interaction when the number q is not so large.
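A toy version of the Gaussian free-energy approximation (our own illustration, for non-interacting spins where the exact answer is known in closed form) replaces the energy distribution by a Gaussian with the exact infinite-temperature mean and variance, giving Z ≈ 2^N exp(beta^2 N h^2 / 2).

```python
import math

def free_energy_exact(beta, h, N):
    # N independent spins (s = +/-1) in field h:
    # Z = (2 cosh(beta*h))^N, so F = -ln(Z)/beta.
    return -N * math.log(2 * math.cosh(beta * h)) / beta

def free_energy_gaussian(beta, h, N):
    # Gaussian approximation of the energy distribution: E = -h * sum(s_i)
    # has mean 0 and variance N*h^2 over the 2^N equally weighted
    # configurations, hence Z ~ 2^N * exp(beta^2 * N * h^2 / 2).
    return -(N * math.log(2) + beta**2 * N * h**2 / 2) / beta

beta, h, N = 0.2, 1.0, 100
print(free_energy_exact(beta, h, N), free_energy_gaussian(beta, h, N))
```

For small beta*h the two agree to leading order (ln cosh(x) = x^2/2 - x^4/12 + ...), which mirrors the abstract's claim that the approximation works best when a central-limit argument applies.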
5. A method for solving stochastic equations by reduced order models and local approximations
SciTech Connect
Grigoriu, M.
2012-08-01
A method is proposed for solving equations with random entries, referred to as stochastic equations (SEs). The method is based on two recent developments. The first approximates the response surface giving the solution of a stochastic equation as a function of its random parameters by a finite set of hyperplanes tangent to it at expansion points selected by geometrical arguments. The second approximates the vector of random parameters in the definition of a stochastic equation by a simple random vector, referred to as stochastic reduced order model (SROM), and uses it to construct a SROM for the solution of this equation. The proposed method is a direct extension of these two methods. It uses SROMs to select expansion points, rather than selecting these points by geometrical considerations, and represents the solution by linear and/or higher order local approximations. The implementation and the performance of the method are illustrated by numerical examples involving random eigenvalue problems and stochastic algebraic/differential equations. The method is conceptually simple, non-intrusive, efficient relative to classical Monte Carlo simulation, accurate, and guaranteed to converge to the exact solution.
6. Linear decomposition method for approximating arbitrary magnetic field profiles by optimization of discrete electromagnet currents
SciTech Connect
Tejero, E. M.; Gatling, G.
2009-03-15
A method for approximating arbitrary axial magnetic field profiles for a given solenoidal electromagnet coil array is described. The method casts the individual contributions from each coil as a truncated orthonormal basis for the space within the array. This truncated basis allows for the linear decomposition of an arbitrary profile function, which returns the appropriate currents for each coil to best reproduce the desired profile. We present the mathematical details of the method along with a detailed example of its use. The results from the method are used in a simulation and compared with magnetic field measurements.
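The linear decomposition idea, each coil's field treated as a basis function and the currents found by least squares, can be sketched as follows. The on-axis coil model, the coil positions, and all names are illustrative assumptions, not the authors' apparatus.

```python
def coil_field(z, z0):
    # On-axis field of a single circular coil centered at z0
    # (unit radius, unit current), up to constant factors.
    return 1.0 / (1.0 + (z - z0)**2)**1.5

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_currents(coil_pos, zs, target):
    # Least-squares currents: minimize sum_z (sum_c I_c B_c(z) - target(z))^2
    # via the normal equations G I = d.
    basis = [[coil_field(z, z0) for z in zs] for z0 in coil_pos]
    G = [[sum(bi * bj for bi, bj in zip(basis[i], basis[j]))
          for j in range(len(coil_pos))] for i in range(len(coil_pos))]
    d = [sum(bi * t for bi, t in zip(basis[i], target))
         for i in range(len(coil_pos))]
    return solve_linear(G, d)

coil_pos = [-1.0, 0.0, 1.0]
zs = [i / 10.0 for i in range(-20, 21)]
# Target: the field the array itself produces with currents (1, 2, 1);
# since the target lies in the span of the basis, the fit recovers them.
target = [coil_field(z, -1.0) + 2 * coil_field(z, 0.0) + coil_field(z, 1.0)
          for z in zs]
currents = fit_currents(coil_pos, zs, target)
print(currents)  # ~[1.0, 2.0, 1.0]
```

For a target profile outside the span of the coil basis, the same normal-equations solve returns the best achievable currents in the least-squares sense, which is the behavior the abstract describes.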
7. An approximate viscous shock layer method for calculating the hypersonic flow over blunt-nosed bodies
NASA Technical Reports Server (NTRS)
Grantz, A. C.; Dejarnette, F. R.; Thompson, R. A.
1989-01-01
The approximate axisymmetric method presented for accurately calculating the surface and flowfield properties of fully viscous hypersonic flow over blunt-nosed bodies incorporates the turbulence model of Cebeci-Smith (1970) and the equilibrium air tables of Hansen (1959). The method is faster than the parabolized Navier-Stokes or viscous shock layer solvers that it could replace for preliminary design determinations. Surface heat transfer and pressure predictions for the present method are comparable with the more accurate viscous shock layer method as well as flight test and wind tunnel data. A starting solution is not required.
8. A novel analytical approximation technique for highly nonlinear oscillators based on the energy balance method
Hosen, Md. Alal; Chowdhury, M. S. H.; Ali, Mohammad Yeakub; Ismail, Ahmad Faris
In the present paper, a novel analytical approximation technique has been proposed based on the energy balance method (EBM) to obtain approximate periodic solutions for generalized highly nonlinear oscillators. The expressions of the natural frequency-amplitude relationship are obtained using a novel analytical way. The accuracy of the proposed method is investigated on three benchmark oscillatory problems, namely, the simple relativistic oscillator, the stretched elastic wire oscillator (with a mass attached to its midpoint) and the Duffing-relativistic oscillator. For an initial oscillation amplitude A0 = 100, the maximal relative errors of natural frequency found in three oscillators are 2.1637%, 0.0001% and 1.201%, respectively, which are much lower than the errors found using the existing methods. It is highly remarkable that an excellent accuracy of the approximate natural frequency has been found which is valid for the whole range of large values of oscillation amplitude as compared with the exact ones. Very simple solution procedure and high accuracy that is found in three benchmark problems reveal the novelty, reliability and wider applicability of the proposed analytical approximation technique.
9. Coupled-cluster method: A lattice-path-based subsystem approximation scheme for quantum lattice models
SciTech Connect
Bishop, R. F.; Li, P. H. Y.
2011-04-15
An approximation hierarchy, called the lattice-path-based subsystem (LPSUBm) approximation scheme, is described for the coupled-cluster method (CCM). It is applicable to systems defined on a regular spatial lattice. We then apply it to two well-studied prototypical (spin-(1/2) Heisenberg antiferromagnetic) spin-lattice models, namely, the XXZ and the XY models on the square lattice in two dimensions. Results are obtained in each case for the ground-state energy, the ground-state sublattice magnetization, and the quantum critical point. They are all in good agreement with those from such alternative methods as spin-wave theory, series expansions, quantum Monte Carlo methods, and the CCM using the alternative lattice-animal-based subsystem (LSUBm) and the distance-based subsystem (DSUBm) schemes. Each of the three CCM schemes (LSUBm, DSUBm, and LPSUBm) for use with systems defined on a regular spatial lattice is shown to have its own advantages in particular applications.
10. An in vitro comparison of detection methods for approximal carious lesions in primary molars.
PubMed
Chawla, N; Messer, L B; Adams, G G; Manton, D J
2012-01-01
11. Multiple tests based on a gaussian approximation of the unitary events method with delayed coincidence count.
PubMed
Tuleau-Malot, Christine; Rouis, Amel; Grammont, Franck; Reynaud-Bouret, Patricia
2014-07-01
The unitary events (UE) method is one of the most popular and efficient methods used over the past decade to detect patterns of coincident joint spike activity among simultaneously recorded neurons. The detection of coincidences is usually based on the binned coincidence count (Grün, 1996), which is known to be subject to loss in synchrony detection (Grün, Diesmann, Grammont, Riehle, & Aertsen, 1999). This defect has been corrected by the multiple shift coincidence count (Grün et al., 1999). The statistical properties of this count have not been further investigated until this work, the formula being more difficult to deal with than the original binned count. First, we propose a new notion of coincidence count, the delayed coincidence count, which is equal to the multiple shift coincidence count when discretized point processes are involved as models for the spike trains. Moreover, it generalizes this notion to nondiscretized point processes, allowing us to propose a new gaussian approximation of the count. Since unknown parameters are involved in the approximation, we perform a plug-in step, where unknown parameters are replaced by estimated ones, leading to a modification of the approximating distribution. Finally the method takes the multiplicity of the tests into account via a Benjamini and Hochberg approach (Benjamini & Hochberg, 1995), to guarantee a prescribed control of the false discovery rate. We compare our new method, MTGAUE (multiple tests based on a gaussian approximation of the unitary events), and the UE method proposed in Grün et al. (1999) over various simulations, showing that MTGAUE extends the validity of the previous method. In particular, MTGAUE is able to detect both profusion and lack of coincidences with respect to the independence case and is robust to changes in the underlying model. Furthermore MTGAUE is applied on real data.
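The Benjamini-Hochberg step-up procedure invoked in this abstract can be sketched in a few lines. This is the generic textbook procedure, not the MTGAUE implementation itself; the function name `benjamini_hochberg` is illustrative.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean mask of
    rejected hypotheses, controlling the false discovery rate at level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # find the largest k with p_(k) <= (k / m) * alpha
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # index of the largest qualifying p-value
        reject[order[: k + 1]] = True     # reject all hypotheses up to that rank
    return reject
```

Note the step-up character: every p-value below the largest qualifying threshold is rejected, even if it individually exceeds its own rank threshold.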
12. Evaluation of approximate methods for the prediction of noise shielding by airframe components
NASA Technical Reports Server (NTRS)
Ahtye, W. F.; Mcculley, G.
1980-01-01
An evaluation of some approximate methods for the prediction of shielding of monochromatic sound and broadband noise by aircraft components is reported. Anechoic-chamber measurements of the shielding of a point source by various simple geometric shapes were made and the measured values compared with those calculated by the superposition of asymptotic closed-form solutions for the shielding by a semi-infinite plane barrier. The shields used in the measurements consisted of rectangular plates, a circular cylinder, and a rectangular plate attached to the cylinder to simulate a wing-body combination. The normalized frequency, defined as a product of the acoustic wave number and either the plate width or cylinder diameter, ranged from 4.6 to 114. Microphone traverses in front of the rectangular plates and cylinders generally showed a series of diffraction bands that matched those predicted by the approximate methods, except for differences in the magnitudes of the attenuation minima which can be attributed to experimental inaccuracies. The shielding of wing-body combinations was predicted by modifications of the approximations used for rectangular and cylindrical shielding. Although the approximations failed to predict diffraction patterns in certain regions, they did predict the average level of wing-body shielding with an average deviation of less than 3 dB.
13. Association of evaluation methods of the effective permittivity of heterogeneous media on the basis of a generalized singular approximation
Kolesnikov, V. I.; Yakovlev, V. B.; Bardushkin, V. V.; Lavrov, I. V.; Sychev, A. P.; Yakovleva, E. N.
2013-09-01
Various methods for evaluation of the effective permittivity of heterogeneous media, namely, the effective medium approximation (Bruggeman's approximation), the Maxwell-Garnett approximation, Wiener's bounds, and the Hashin-Shtrikman variational bounds (for effective static characteristics) are combined on the basis of a generalized singular approximation.
14. Jacobi spectral collocation method for the approximate solution of multidimensional nonlinear Volterra integral equation.
PubMed
Wei, Yunxia; Chen, Yanping; Shi, Xiulian; Zhang, Yuanyuan
2016-01-01
We present in this paper the convergence properties of the Jacobi spectral collocation method when used to approximate the solution of a multidimensional nonlinear Volterra integral equation. The solution is sufficiently smooth provided that the source function and the kernel function are smooth. We choose the Jacobi-Gauss points associated with the multidimensional Jacobi weight function [Formula: see text] (d denotes the space dimension) as the collocation points. The error analysis in the [Formula: see text]-norm and [Formula: see text]-norm theoretically justifies the exponential convergence of the spectral collocation method in multidimensional space. We give two numerical examples to illustrate the validity of the proposed Jacobi spectral collocation method.
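As an illustration of the Gauss-type nodes used in such collocation schemes, the sketch below uses Gauss-Legendre nodes, the alpha = beta = 0 special case of the Jacobi-Gauss family referred to in the abstract; `gauss_quadrature` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def gauss_quadrature(f, n):
    """Approximate the integral of f over [-1, 1] using n Gauss-Legendre
    nodes and weights; exact for polynomials of degree up to 2n - 1."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights
    return float(np.dot(w, f(x)))
```

With only 3 nodes the rule already integrates any quintic exactly, which is the accuracy property spectral collocation methods exploit.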
15. An approximate method for calculating three-dimensional inviscid hypersonic flow fields
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Dejarnette, Fred R.
1990-01-01
An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification to Maslen's asymmetric method. The present method presents a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, paraboloid, and elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and profiles of pressure are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.
16. Update-based evolution control: A new fitness approximation method for evolutionary algorithms
Ma, Haiping; Fei, Minrui; Simon, Dan; Mo, Hongwei
2015-09-01
Evolutionary algorithms are robust optimization methods that have been used in many engineering applications. However, real-world fitness evaluations can be computationally expensive, so it may be necessary to estimate the fitness with an approximate model. This article reviews design and analysis of computer experiments (DACE) as an approximation method that combines a global polynomial with a local Gaussian model to estimate continuous fitness functions. The article incorporates DACE in various evolutionary algorithms, to test unconstrained and constrained benchmarks, both with and without fitness function evaluation noise. The article also introduces a new evolution control strategy called update-based control that estimates the fitness of certain individuals of each generation based on the exact fitness values of other individuals during that same generation. The results show that update-based evolution control outperforms other strategies on noise-free, noisy, constrained and unconstrained benchmarks. The results also show that update-based evolution control can compensate for fitness evaluation noise.
17. Effective cluster interactions using the generalized perturbation method in the atomic-sphere approximation
SciTech Connect
Singh, P.P.; Gonis, A. )
1993-03-15
We describe the generalized perturbation method in the atomic-sphere approximation (ASA) for calculating the effective cluster interactions. Based on our development of the Korringa-Kohn-Rostoker coherent-potential approximation in the ASA [Singh et al., Phys. Rev. B 44, 8578 (1991)], the present approach is the next step towards developing a first-principles method that can be easily applied to describe substitutionally disordered alloys based on simple lattice structures as well as complex lattice structures with low symmetry. To test the accuracy of the ASA results, we have calculated the effective pair interactions (EPI) up to fourth-nearest neighbors for the substitutionally disordered Pd0.5V0.5 and Pd0.75Rh0.25 alloys. Our calculated EPIs are in good agreement with the respective muffin-tin results.
18. Assessment of Tuning Methods for Enforcing Approximate Energy Linearity in Range-Separated Hybrid Functionals.
PubMed
Gledhill, Jonathan D; Peach, Michael J G; Tozer, David J
2013-10-08
A range of tuning methods, for enforcing approximate energy linearity through a system-by-system optimization of a range-separated hybrid functional, are assessed. For a series of atoms, the accuracy of the frontier orbital energies, ionization potentials, electron affinities, and orbital energy gaps is quantified, and particular attention is paid to the extent to which approximate energy linearity is actually achieved. The tuning methods can yield significantly improved orbital energies and orbital energy gaps, compared to those from conventional functionals. For systems with integer M electrons, optimal results are obtained using a tuning norm based on the highest occupied orbital energy of the M and M + 1 electron systems, with deviations of just 0.1-0.2 eV in these quantities, compared to exact values. However, detailed examination for the carbon atom illustrates a subtle cancellation between errors arising from nonlinearity and errors in the computed ionization potentials and electron affinities used in the tuning.
19. A fourth-order Runge-Kutta method based on BDF-type Chebyshev approximations
Ramos, Higinio; Vigo-Aguiar, Jesus
2007-07-01
In this paper we consider a new fourth-order method of BDF type for solving stiff initial-value problems, based on the interval approximation of the true solution by truncated Chebyshev series. It is shown that the method may be formulated in an equivalent way as a Runge-Kutta method having stage order four. The method thus obtained has good stability properties, including an unbounded stability domain and a large α-value for A(α)-stability. A strategy for changing the step size, based on a pair of methods in a way similar to the embedded pairs of Runge-Kutta schemes, is presented. The numerical examples reveal that this method is very promising when used for solving stiff initial-value problems.
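For context, a generic classical fourth-order Runge-Kutta step looks as follows. This is the standard explicit RK4 scheme, not the BDF-type Chebyshev method of the paper (which is implicit and tailored to stiff problems).

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# integrate y' = -y, y(0) = 1 over [0, 1]; exact solution is exp(-1)
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```

On mildly stiff problems explicit RK4 needs very small steps for stability, which is precisely the limitation that A(α)-stable implicit methods like the one in the paper address.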
20. Approximation of Integrals Via Monte Carlo Methods, With An Application to Calculating Radar Detection Probabilities
DTIC Science & Technology
2005-03-01
[Garbled PDF extract] Approximation of Integrals via Monte Carlo Methods, with an Application to Calculating Radar Detection Probabilities, Graham V. Weinberg and Ross … (report DSTO-TR-1692). The recoverable fragments describe research on synthetic aperture radar and radar detection using both software modelling and mathematical analysis, and an author who joined DSTO in 1990 and has worked on target radar cross section, digital signal processing, and inverse …
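The plain Monte Carlo estimator underlying integral approximations like those in this report can be sketched generically as follows; the function name `mc_integrate` is illustrative, not from the report.

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b]:
    average f at n uniform random points, then scale by the interval length."""
    rng = random.Random(seed)  # seeded for reproducibility
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n
```

The error decays like 1/sqrt(n) regardless of dimension, which is why Monte Carlo is attractive for high-dimensional detection-probability integrals.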
1. Approximation of acoustic waves by explicit Newmark's schemes and spectral element methods
Zampieri, Elena; Pavarino, Luca F.
2006-01-01
A numerical approximation of the acoustic wave equation is presented. The spatial discretization is based on conforming spectral elements, whereas we use finite difference Newmark's explicit integration schemes for the temporal discretization. A rigorous stability analysis is developed for the discretized problem, providing an upper bound for the time step Δt. We present several numerical results concerning the stability and convergence properties of the proposed numerical methods.
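A minimal sketch of an explicit Newmark update (beta = 0, gamma = 1/2), applied here to the scalar oscillator u'' = -omega^2 u rather than the full spectral-element wave problem of the paper; the function name is illustrative.

```python
import math

def newmark_explicit(omega, u0, v0, dt, steps):
    """Explicit Newmark scheme (beta = 0, gamma = 1/2) for the scalar
    oscillator u'' = -omega**2 * u; returns the final displacement."""
    u, v = u0, v0
    a = -omega**2 * u
    for _ in range(steps):
        u = u + dt * v + 0.5 * dt**2 * a   # displacement update
        a_new = -omega**2 * u              # acceleration at the new time level
        v = v + 0.5 * dt * (a + a_new)     # trapezoidal velocity update
        a = a_new
    return u
```

The scheme is conditionally stable (roughly dt < 2/omega for this scalar model), which is the kind of upper bound on Δt the paper's stability analysis makes rigorous for the spectral-element discretization.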
2. A method for the accurate and smooth approximation of standard thermodynamic functions
Coufal, O.
2013-01-01
A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language, which is part of this paper. Program summary: Program title: SmoothSTF. Catalogue identifier: AENH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3807. No. of bytes in distributed program, including test data, etc.: 131965. Distribution format: tar.gz. Programming language: C++. Computer: Any computer with gcc version 4.3.2 compiler. Operating system: Debian GNU Linux 6.0. The program can be run in operating systems in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html. RAM: 256 MB are sufficient for the table of standard thermodynamic functions with 500 lines. Classification: 4.9. Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF as expressed by the table of its values is for further application approximated by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval. Solution method: The approximation functions are
3. In vitro performance of methods of approximal caries detection in primary molars.
PubMed
Braga, Mariana Minatel; Morais, Caroline Carvalho; Nakama, Renata Cristina Satiko; Leamari, Victor Moreira; Siqueira, Walter Luiz; Mendes, Fausto Medeiros
2009-10-01
The aim was to compare the performance of different methods in detecting approximal caries lesions in primary molars ex vivo. One hundred thirty-one approximal surfaces were examined by 2 observers with visual inspection (VI) using the International Caries Detection and Assessment System, radiographic interpretation, and clinically using the Diagnodent pen (LFpen). To achieve a reference standard, surfaces were directly examined for the presence of white spots or cavitations, and lesion depth was determined after sectioning. The area under the receiver operating characteristic curve (A(z)), sensitivity, specificity, and accuracy were calculated, as well as the interexaminer reproducibility. Using the cavitation threshold, all methods presented similar sensitivities. Higher A(z) values were achieved with VI at the white spot threshold, and VI and LFpen had higher A(z) values at the cavitation threshold. VI presented higher accuracy and A(z) than the radiographic and LFpen methods at both the enamel and dentin depth thresholds. Higher reliability values were achieved with VI. VI performs better, but both the radiographic and LFpen methods also show good performance in detecting more advanced approximal caries lesions.
4. Low rank approximation methods for MR fingerprinting with large scale dictionaries.
PubMed
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2017-08-13
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 000:000-000, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
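The randomized SVD idea used for compressing large dictionaries can be sketched generically; this follows the standard random-projection recipe, not the authors' MRF-specific code, and the function name is illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, seed=0):
    """Randomized SVD sketch: project A onto a random subspace,
    orthonormalize, and take an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                         # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small                                        # lift back to full space
    return U[:, :rank], s[:rank], Vt[:rank]
```

Only the tall-skinny sketch `A @ Omega` and the small matrix `Q.T @ A` ever need to be held at once, which is where the memory savings for large dictionaries come from.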
5. Method of successive approximations in the theory of stimulated Raman scattering of a randomly modulated pump
Krochik, G. M.
1980-02-01
Stimulated Raman scattering of a randomly modulated pump is investigated by the method of successive approximations. This involves expanding solutions in terms of small parameters, which are ratios of the correlation scales of random effects to other characteristic dynamic scales of the problem. Systems of closed equations are obtained for the moments of the amplitudes of the Stokes and pump waves and of the molecular vibrations. These describe the dynamics of the process allowing for changes in the pump intensity and statistics due to a three-wave interaction. By analyzing equations in higher-order approximations, it is possible to establish the conditions of validity of the first (Markov) and second approximations. In particular, it is found that these are valid for pump intensities J_L both above and below the critical value J_cr near which the gain begins to increase rapidly and reproduction of the pump spectrum by the Stokes wave is initiated. Solutions are obtained for average intensities of the Stokes wave and molecular vibrations in the first approximation in a constant pump field. It is established that, for J_L ≳ J_cr, the Stokes wave undergoes rapid nonsteady-state amplification which is associated with an increase in the amplitude of the molecular vibrations. The results of the calculations show good agreement with known experimental data.
6. Physically weighted approximations of unsteady aerodynamic forces using the minimum-state method
NASA Technical Reports Server (NTRS)
1991-01-01
The Minimum-State Method for rational approximation of unsteady aerodynamic force coefficient matrices, modified to allow physical weighting of the tabulated aerodynamic data, is presented. The approximation formula and the associated time-domain, state-space, open-loop equations of motion are given, and the numerical procedure for calculating the approximation matrices, with weighted data and with various equality constraints, is described. Two data weighting options are presented. The first weighting normalizes the aerodynamic data to a maximum unit value of each aerodynamic coefficient. The second weighting is one in which each tabulated coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on the aeroelastic characteristics of the system. This weighting yields a better fit of the more important terms, at the expense of less important ones. The resulting approximation yields a relatively low number of aerodynamic lag states in the subsequent state-space model. The formulation forms the basis of the MIST computer program, which is written in FORTRAN for use on the MicroVAX computer and interfaces with NASA's Interaction of Structures, Aerodynamics and Controls (ISAC) computer program. The program structure, capabilities and interfaces are outlined in the appendices, and a numerical example which utilizes Rockwell's Active Flexible Wing (AFW) model is given and discussed.
7. Assessment of approximate computational methods for conical intersections and branching plane vectors in organic molecules
SciTech Connect
Nikiforov, Alexander; Gamez, Jose A.; Thiel, Walter; Huix-Rotllant, Miquel; Filatov, Michael
2014-09-28
Quantum-chemical computational methods are benchmarked for their ability to describe conical intersections in a series of organic molecules and models of biological chromophores. Reference results for the geometries, relative energies, and branching planes of conical intersections are obtained using ab initio multireference configuration interaction with single and double excitations (MRCISD). They are compared with the results from more approximate methods, namely, the state-interaction state-averaged restricted ensemble-referenced Kohn-Sham method, spin-flip time-dependent density functional theory, and a semiempirical MRCISD approach using an orthogonalization-corrected model. It is demonstrated that these approximate methods reproduce the ab initio reference data very well, with root-mean-square deviations in the optimized geometries of the order of 0.1 Å or less and with reasonable agreement in the computed relative energies. A detailed analysis of the branching plane vectors shows that all currently applied methods yield similar nuclear displacements for escaping the strong non-adiabatic coupling region near the conical intersections. Our comparisons support the use of the tested quantum-chemical methods for modeling the photochemistry of large organic and biological systems.
8. Assessment of approximate computational methods for conical intersections and branching plane vectors in organic molecules
Nikiforov, Alexander; Gamez, Jose A.; Thiel, Walter; Huix-Rotllant, Miquel; Filatov, Michael
2014-09-01
Quantum-chemical computational methods are benchmarked for their ability to describe conical intersections in a series of organic molecules and models of biological chromophores. Reference results for the geometries, relative energies, and branching planes of conical intersections are obtained using ab initio multireference configuration interaction with single and double excitations (MRCISD). They are compared with the results from more approximate methods, namely, the state-interaction state-averaged restricted ensemble-referenced Kohn-Sham method, spin-flip time-dependent density functional theory, and a semiempirical MRCISD approach using an orthogonalization-corrected model. It is demonstrated that these approximate methods reproduce the ab initio reference data very well, with root-mean-square deviations in the optimized geometries of the order of 0.1 Å or less and with reasonable agreement in the computed relative energies. A detailed analysis of the branching plane vectors shows that all currently applied methods yield similar nuclear displacements for escaping the strong non-adiabatic coupling region near the conical intersections. Our comparisons support the use of the tested quantum-chemical methods for modeling the photochemistry of large organic and biological systems.
9. An algorithm for maximum likelihood estimation using an efficient method for approximating sensitivities
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1984-01-01
An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.
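The finite-difference baseline that MNRES is compared against can be sketched as a central-difference Jacobian; this is a generic implementation, not the paper's surface-fitting method, and the function name is illustrative.

```python
import numpy as np

def fd_jacobian(f, x, eps=1e-6):
    """Central finite-difference approximation of the Jacobian of f at x.
    Requires two evaluations of f per parameter, which is the cost that
    surface-fitting sensitivity methods like MNRES aim to avoid."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        J[:, j] = (np.asarray(f(x + e)) - np.asarray(f(x - e))) / (2 * eps)
    return J
```

For a dynamic system each evaluation of f is a full simulation, so the 2n evaluations per iteration dominate the cost of Newton-Raphson-type ML estimation.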
10. S-curve networks and an approximate method for estimating degree distributions of complex networks
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Using statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model based on the S curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, that is, bulk growth and a finite growth limit, it proposes a finite network model with bulk growth, said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
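The S-curve (logistic) growth law at the heart of such forecasting models can be written in a minimal sketch; the parameter names K (carrying capacity), r (growth rate) and t0 (midpoint) are the usual textbook ones, assumed here rather than taken from the paper.

```python
import math

def logistic(t, K, r, t0):
    """Logistic (S-curve) growth: value K / (1 + exp(-r * (t - t0))).
    Grows nearly exponentially for t << t0 and saturates at K for t >> t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))
```

Fitting K, r and t0 to observed counts (e.g. allocated IPv4 addresses per year) gives both the forecast trend and the finite growth limit K used by the network model.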
11. A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation
Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth
2015-04-01
We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC-method is introduced in two steps: Firstly, introducing QC-interpolation while accounting for the exact summation of all the bond-energies, we compute goal-oriented error estimators in a straight-forward fashion based on the pertinent adjoint (dual) problem. Secondly, for large QC-elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature using cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the Carbon-Carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: The Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.
12. An Extension of a Nonstationary Inversion Method with Approximation Error Analysis Applied to Hydrological Process Monitoring
Lehikoinen, A.; Huttunen, J. M.; Finsterle, S.; Kowalsky, M. B.; Kaipio, J. P.
2007-05-01
We extend the previously presented methodology for imaging the evolution of electrically conductive fluids in porous media. In that method, the nonstationary inversion problem was solved using Bayesian filtering. The method was demonstrated using a synthetically generated test case where the monitored target is a time-varying water plume in an unsaturated porous medium, and the imaging modality was electrical resistance tomography (ERT). The inverse problem was formulated as a state estimation problem, which is based on observation-evolution models. As an observation model for ERT, the complete electrode model was used, and for time-varying unsaturated flow, the Richards equation was used as an evolution model. Although the "true" evolution of water flow was simulated using a heterogeneous permeability field, in the inversion step the permeability was assumed to be homogeneous. This assumption leads to approximation errors that have been taken into account by constructing a statistical model between the different realizations of the accurate and the approximate fluid flow models. This statistical model was constructed using an ensemble of samples from the evolution model in a way that the construction can be carried out prior to taking observations. However, the statistics of the approximation errors actually depend on the observations (through the state). In this work we extend the previously presented method so that the statistics of the approximation error are adjusted based on the observations. The basic idea of the extension is to gather those samples from the ensemble which at the current time best represent the observed state. We then determine the statistics of the approximation error based on these collated samples. The extension of the methodology provides improved estimates of water saturation distributions compared to the previously presented approaches. The proposed methodology may be extended for imaging and estimating parameters of dynamical processes
13. Subtraction method in the second random-phase approximation: First applications with a Skyrme energy functional
Gambacurta, D.; Grasso, M.; Engel, J.
2015-09-01
We make use of a subtraction procedure, introduced to overcome double-counting problems in beyond-mean-field theories, in the second random-phase-approximation (SRPA) for the first time. This procedure guarantees the stability of the SRPA (so that all excitation energies are real). We show that the method fits perfectly into nuclear density-functional theory. We illustrate applications to the monopole and quadrupole response and to low-lying 0+ and 2+ states in the nucleus 16O . We show that the subtraction procedure leads to (i) results that are weakly cutoff dependent and (ii) a considerable reduction of the SRPA downwards shift with respect to the random-phase approximation (RPA) spectra (systematically found in all previous applications). This implementation of the SRPA model will allow a reliable analysis of the effects of two particle-two hole configurations (2p2h) on the excitation spectra of medium-mass and heavy nuclei.
14. Synergistic development of differential approximants and the finite lattice method in lattice statistics
Enting, I. G.
2017-04-01
Several decades of parallel developments in the calculation and analysis of series expansions for lattice statistics have led to many new insights into critical phenomena. These studies have centered on the use of the finite lattice method for series expansions in lattice statistics and the use of differential approximants in analysing such series. One of these strands of research ultimately led to the result that a number of unsolved lattice statistics problems cannot be expressed as D-finite functions. Somewhat ironically, given the power and success of differential approximants in analysing series, neither the assumed functional form nor any finite generalisation thereof can fit such cases exactly. In honour of the 70th birthday of Professor A. J. Guttmann.
15. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time-level, a domain decomposition method based on an iteration-by-subdomain procedure was introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
16. Approximate method for calculating heating rates on three-dimensional vehicles
Hamilton, H. Harris; Greene, Francis A.; Dejarnette, F. R.
1994-05-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary-layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. The method is validated by comparing with experimental heating data and with thin-layer Navier-Stokes calculations on the shuttle orbiter at both wind-tunnel and flight conditions and with thin-layer Navier-Stokes calculations on the HL-20 at wind-tunnel conditions.
17. Approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. Harris; Greene, Francis A.; Dejarnette, F. R.
1994-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary-layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. The method is validated by comparing with experimental heating data and with thin-layer Navier-Stokes calculations on the shuttle orbiter at both wind-tunnel and flight conditions and with thin-layer Navier-Stokes calculations on the HL-20 at wind-tunnel conditions.
19. Chemical physics without the Born-Oppenheimer approximation: The molecular coupled-cluster method
Monkhorst, Hendrik J.
1987-08-01
The Born-Oppenheimer (BO) and Born-Huang (BH) treatments of molecular eigenstates are reexamined. It is argued that in application of the BO approximation to nonrigid molecules and chemical dynamics involving single potential-energy surfaces (PES's), errors on the order of tens of percents can easily occur in many computed properties. Introduction of a BH expansion (in BO states) will always lead to poor convergence when the BO approximation fails; its diagonal (or adiabatic) approximation will not change this situation. The main problem in the above applications is the absence of well-developed, well-separated minima in the PES (or no minima at all). Inspired by a non-BO view of a molecule by Essén [Int. J. Quantum Chem. 12, 721 (1977)], a molecular coupled-cluster (MCC) method is formulated. An Essén molecule consists of neutral subunits ("atoms"), weakly interacting ("bonds") in some spatial arrangement ("structure"). The quasiseparation in collective and individual motions within the molecule comes about by virtue of the virial theorem, not the smallness of the electron-to-nuclear mass ratio. The MCC method not only should converge well in the cluster sizes, but it also is capable of describing electronic shell and molecular geometric structures. It can be viewed as the workable formalism for Essén's physical picture of a molecule. The time-independent and time-dependent versions are described. The latter one is useful for scattering, chemical dynamics, laser chemistry, half-collisions, and any other phenomena that can be described as the time evolution of many-particle wave packets. Close relationship to time-dependent Hartree-Fock theory exists. A few implementational aspects are discussed, such as symmetry, conservation laws, approximations, numerical techniques, as well as a possible relation with a non-BO PES. Appendixes contain mathematical details.
20. Approximate-model based estimation method for dynamic response of forging processes
Lei, Jie; Lu, Xinjiang; Li, Yibo; Huang, Minghui; Zou, Wei
2015-03-01
Many high-quality forging productions require the large-sized hydraulic press machine (HPM) to have a desirable dynamic response. Because the forging process is complex at low velocity, its response is difficult to estimate, which often makes the desirable low-velocity forging condition difficult to obtain. So far, little work has addressed estimating the dynamic response of the forging process under low velocity. In this paper, an approximate-model based estimation method is proposed to estimate the dynamic response of the forging process under low velocity. First, an approximate model is developed to represent the forging process of this complex HPM around the low-velocity working point. While preserving modeling performance, the model greatly eases the complexity of the subsequent estimation of the dynamic response because it has a good linear structure. On this basis, the dynamic response is estimated and the conditions for stability, vibration, and creep are derived according to the solution of the velocity. All these analytical results are further verified by both simulations and experiment. In the simulation verification for modeling, the original movement model and the derived approximate model always have the same dynamic responses with very small approximation error. The simulations and experiment finally demonstrate and test the effectiveness of the derived conditions for stability, vibration, and creep; these conditions will benefit both the prediction of the dynamic response of the forging process and the design of the controller for high-quality forging. The proposed method is an effective solution to achieve the desirable low-velocity forging condition.
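The core idea above, replacing complex nonlinear dynamics with a linear model valid near a working point, can be sketched generically. The sketch below is illustrative only: the toy dynamics and all names are assumptions, not the paper's HPM model. It builds a tangent-line approximation of a 1-D nonlinear system by central finite differences.

```python
def linearize(f, x0, h=1e-6):
    """Return (offset, slope) of the tangent-line model of f at working point x0:
    f(x) ~ offset + slope * (x - x0), valid only near x0."""
    slope = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    return f(x0), slope

def nonlinear_rhs(x):
    # toy stand-in for press dynamics: linear plus cubic damping term
    return -0.5 * x - 0.1 * x ** 3

# linear model around the low-velocity working point x0 = 0.2
offset, slope = linearize(nonlinear_rhs, x0=0.2)
```

The resulting pair (offset, slope) defines the local linear model whose response is then easy to analyze in closed form, which is the benefit the abstract describes.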
1. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations.
PubMed
Wu, Fuke; Tian, Tianhai; Rawlings, James B; Yin, George
2016-05-07
The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766-1793 (1996); ibid. 56, 1794-1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
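As a minimal illustration of the CLE side of this approach (not the authors' averaging method), the sketch below integrates the chemical Langevin equation for a simple birth-death process with Euler-Maruyama; the rate constants, step size, and seed are arbitrary assumptions.

```python
import math
import random

def cle_birth_death(k_birth=50.0, k_death=1.0, x0=10.0, dt=1e-3,
                    steps=20000, seed=1):
    """Euler-Maruyama integration of the CLE for a birth-death process:
    dX = (kb - kd*X) dt + sqrt(kb) dW1 - sqrt(kd*X) dW2."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        drift = k_birth - k_death * x
        noise = (math.sqrt(k_birth) * rng.gauss(0, 1)
                 - math.sqrt(k_death * max(x, 0.0)) * rng.gauss(0, 1))
        x += drift * dt + noise * math.sqrt(dt)
        x = max(x, 0.0)  # copy numbers cannot go negative
    return x

x_final = cle_birth_death()
# the stationary mean is k_birth / k_death = 50; a single sample fluctuates
# around it with standard deviation of roughly sqrt(50)
```

Each step costs one drift evaluation and two Gaussian draws regardless of copy number, which is why the CLE scales better than the exact SSA for large molecule counts.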
3. Validity of approximate methods in molecular scattering. III - Effective potential and coupled states approximations for differential and gas kinetic cross sections
NASA Technical Reports Server (NTRS)
Monchick, L.; Green, S.
1977-01-01
Two dimensionality-reducing approximations, the j_z-conserving coupled states (sometimes called the centrifugal decoupling) method and the effective potential method, were applied to collision calculations of He with CO and with HCl. The coupled states method was found to be sensitive to the interpretation of the centrifugal angular momentum quantum number in the body-fixed frame, but the choice leading to the original McGuire-Kouri expression for the scattering amplitude - and to the simplest formulas - proved to be quite successful in reproducing differential and gas kinetic cross sections. The computationally cheaper effective potential method was much less accurate.
4. MIST - MINIMUM-STATE METHOD FOR RATIONAL APPROXIMATION OF UNSTEADY AERODYNAMIC FORCE COEFFICIENT MATRICES
NASA Technical Reports Server (NTRS)
Karpel, M.
1994-01-01
Various control analysis, design, and simulation techniques of aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first-order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program, MIST, accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectively constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient. The second allows weighting the
5. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method
SciTech Connect
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complication increases especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and a real molecule (1D63) which is double-helix shaped and composed of hundreds of atoms.
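The ingredient behind Recipe 1, falling back to a finite-difference second derivative and using a one-sided stencil when interior neighbors exist only on one side, can be illustrated with textbook stencils. These are standard formulas, not the paper's exact scheme.

```python
def d2_central(f, x, h=1e-4):
    """Central-difference approximation of f''(x); second-order accurate,
    usable when neighbors on both sides of x are interior points."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h ** 2

def d2_one_sided(f, x, h=1e-4):
    """One-sided (forward) approximation of f''(x), usable when only points
    on one side of x are interior - the situation at 'exceptional points'.
    This four-point stencil is also second-order accurate."""
    return (2.0 * f(x) - 5.0 * f(x + h)
            + 4.0 * f(x + 2 * h) - f(x + 3 * h)) / h ** 2

curv = d2_central(lambda x: x ** 3, 1.0)      # f'' of x^3 at x=1 is 6
curv_fallback = d2_one_sided(lambda x: 3.0 * x ** 2, 0.5)  # f'' is 6 everywhere
```

Both stencils reproduce exact second derivatives for low-degree polynomials, which is the minimum needed to keep a first-order-consistent correction at exceptional points.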
6. Dynamic load identification for stochastic structures based on Gegenbauer polynomial approximation and regularization method
Liu, Jie; Sun, Xingsheng; Han, Xu; Jiang, Chao; Yu, Dejie
2015-05-01
7. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.
PubMed
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
8. All-electron self-consistent GW approximation based on full-potential LMTO method
Faleev, Sergey; van Schilfgaarde, Mark; Kotani, Takao
2003-03-01
We present a new all-electron self-consistent implementation of the GW approximation based on the full-potential LMTO method. The dynamically screened Coulomb interaction W is expanded in a mixed basis which consists of two contributions: local atom-centered functions confined to muffin-tin spheres, and plane waves with the overlap to the local functions projected out. The former can include any of the core states; thus the core and valence states can be treated on an equal footing. The self-consistency is achieved by the following iteration cycle: using eigenfunctions of the LDA Hamiltonian with an added self-energy term, the next-iteration self-energy is calculated in the GW approximation. The non-local and energy-dependent self-energy term is then added to the LDA Hamiltonian, and the next-iteration wave functions and energies are obtained by diagonalization. The CPU time of otherwise numerically prohibitive self-consistent GW simulations has been reduced by an order of magnitude by utilizing the dispersion relations for the polarization operator. The results obtained for the band gaps of Si and MnO are in good agreement with the experimental values, noticeably better than the results obtained in the non-self-consistent GW and LDA approximations.
9. Efficient time-sampling method in Coulomb-corrected strong-field approximation.
PubMed
Xiao, Xiang-Ru; Wang, Mu-Xue; Xiong, Wei-Hao; Peng, Liang-You
2016-11-01
One of the main goals of strong-field physics is to understand the complex structures formed in the momentum plane of the photoelectron. For this purpose, different semiclassical methods have been developed to seek an intuitive picture of the underlying mechanism. The most popular ones are the quantum trajectory Monte Carlo (QTMC) method and the Coulomb-corrected strong-field approximation (CCSFA), both of which take the classical action into consideration and can describe the interference effect. The CCSFA is more widely applicable in a large range of laser parameters due to its nonadiabatic nature in treating the initial tunneling dynamics. However, the CCSFA is much more time consuming than the QTMC method because of the numerical solution to the saddle-point equations. In the present work, we present a time-sampling method to overcome this disadvantage. Our method is as efficient as the fast QTMC method and as accurate as the original treatment in CCSFA. The performance of our method is verified by comparing the results of these methods with that of the exact solution to the time-dependent Schrödinger equation.
11. Simplified method for including spatial correlations in mean-field approximations
Markham, Deborah C.; Simpson, Matthew J.; Baker, Ruth E.
2013-06-01
Biological systems involving proliferation, migration, and death are observed across all scales. For example, they govern cellular processes such as wound healing, as well as the population dynamics of groups of organisms. In this paper, we provide a simplified method for correcting mean-field approximations of volume-excluding birth-death-movement processes on a regular lattice. An initially uniform distribution of agents on the lattice may give rise to spatial heterogeneity, depending on the relative rates of proliferation, migration, and death. Many frameworks chosen to model these systems neglect spatial correlations, which can lead to inaccurate predictions of their behavior. For example, the logistic model is frequently chosen, which is the mean-field approximation in this case. This mean-field description can be corrected by including a system of ordinary differential equations for pairwise correlations between lattice site occupancies at various lattice distances. In this work we discuss difficulties with this method and provide a simplification in the form of a partial differential equation description for the evolution of pairwise spatial correlations over time. We test our simplified model against the more complex corrected mean-field model, finding excellent agreement. We show how our model successfully predicts system behavior in regions where the mean-field approximation shows large discrepancies. Additionally, we investigate regions of parameter space where migration is reduced relative to proliferation, which has not been examined in detail before, and find our method successful at correcting the deviations observed in the mean-field model in these parameter regimes.
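The mean-field (logistic) baseline that the paper corrects can be sketched directly. The sketch below integrates the logistic birth-death ODE with forward Euler; the rates, initial density, and step size are illustrative assumptions, and the correction terms for spatial correlations are deliberately omitted.

```python
def logistic_mean_field(c0=0.05, prolif=1.0, death=0.2, dt=0.01, t_end=30.0):
    """Forward-Euler integration of the logistic (mean-field) model
    dc/dt = prolif * c * (1 - c) - death * c,
    which ignores pairwise spatial correlations between lattice sites."""
    c, t = c0, 0.0
    while t < t_end:
        c += dt * (prolif * c * (1.0 - c) - death * c)
        t += dt
    return c

density = logistic_mean_field()
# mean-field steady state: c* = 1 - death/prolif = 0.8
```

When spatial clustering develops, the true lattice occupancy deviates from this curve; the paper's contribution is a cheap PDE for the pair correlations that corrects exactly this discrepancy.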
12. Photovoltaic Generation Data Cleaning Method Based on Approximately Periodic Time Series
Zhang, J.; Zhang, Sh; Liang, J.; Tian, B.; Hou, Z.; Liu, B. Zh
2017-05-01
Data cleaning of photovoltaic (PV) power generation is an important step during data preprocessing for further utilization, such as PV power generation forecasting. The PV power generation data can be treated as a time series. An improved data cleaning method based on approximately periodic time series is proposed. First, the abnormal data in the PV time series are classified into three types of outliers. Then these three types of outliers are quantified based on the physical characteristics of PV power generation, and effective cleaning implementations are described for each, considering the rated capacity of the PV station and the period of the PV time series. Finally, the data cleaning method is tested on PV generation data from a real power grid. The results show that this data cleaning method can effectively improve PV data quality and provide an effective support tool for the further application of PV data.
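A minimal sketch of this kind of rule-based PV cleaning, assuming three illustrative outlier rules (negative readings, readings above rated capacity, nonzero readings at night); the paper's exact classification and repair rules may differ, and all names here are assumptions.

```python
def clean_pv_series(power, capacity, daylight):
    """Flag three illustrative outlier types in a PV generation series:
    (1) negative readings, (2) readings above rated capacity,
    (3) nonzero readings outside daylight hours.
    Flagged points are set to None for later interpolation."""
    cleaned, flags = [], 0
    for p, is_day in zip(power, daylight):
        bad = (p < 0.0) or (p > capacity) or (not is_day and p > 1e-6)
        cleaned.append(None if bad else p)
        flags += bad
    return cleaned, flags

series = [0.0, -0.3, 2.0, 6.1, 5.0, 0.4, 0.0]   # hourly power, MW
day =    [False, False, True, True, True, True, False]
cleaned, n_bad = clean_pv_series(series, capacity=6.0, daylight=day)
```

A real pipeline would follow the flagging step with gap filling, e.g. interpolating from the corresponding hours of neighboring days, which is where the approximate periodicity of PV output is exploited.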
13. Practical approximation method for firing-rate models of coupled neural networks with correlated inputs
Barreiro, Andrea K.; Ly, Cheng
2017-08-01
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
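A toy version of solving such rate equations for their steady state: the paper solves a system of transcendental equations for a noisy coupled network, while the sketch below runs a damped fixed-point iteration on a deterministic two-population model of the Wilson-Cowan type with logistic gain. All parameters are made up for illustration.

```python
import math

def sigmoid(x):
    """Logistic gain function used as the firing-rate nonlinearity."""
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan_fixed_point(w_ee=1.2, w_ei=2.0, w_ie=1.5, w_ii=0.5,
                             i_e=1.0, i_i=0.3, iters=500, relax=0.2):
    """Damped fixed-point iteration for the steady state of
    rE = f(w_ee*rE - w_ei*rI + i_e),  rI = f(w_ie*rE - w_ii*rI + i_i)."""
    rE = rI = 0.1
    for _ in range(iters):
        rE += relax * (sigmoid(w_ee * rE - w_ei * rI + i_e) - rE)
        rI += relax * (sigmoid(w_ie * rE - w_ii * rI + i_i) - rI)
    return rE, rI

rE, rI = wilson_cowan_fixed_point()
```

For these weights the iteration is a contraction, so it converges to the unique steady state; the appeal, as in the paper, is that one such solve is far cheaper than Monte Carlo simulation of the corresponding stochastic differential equations.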
14. Sensitivity and Approximation of Coupled Fluid-Structure Equations by Virtual Control Method
SciTech Connect
Murea, Cornel Marius; Vazquez, Carlos
2005-08-15
The formulation of a particular fluid-structure interaction as an optimal control problem is the departure point of this work. The control is the vertical component of the force acting on the interface, and the observation is the vertical component of the velocity of the fluid on the interface. This approach permits us to solve the coupled fluid-structure problem by partitioned procedures. The analytic expression for the gradient of the cost function is obtained in order to devise accurate numerical methods for the minimization problem. Numerical results arising from blood flow in arteries are presented. To solve the optimal control problem numerically, we use a quasi-Newton method which employs the analytic gradient of the cost function; the approximation of the inverse Hessian is updated by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme. This algorithm is faster than fixed point with relaxation or block Newton methods.
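The quasi-Newton idea used here can be shown in its simplest one-dimensional form: the secant method applied to the gradient, which replaces the Hessian with successive gradient differences, the same principle BFGS applies in higher dimensions. The cost function below is a toy quadratic, not the fluid-structure functional.

```python
def secant_minimize(grad, x0, x1, tol=1e-10, max_iter=100):
    """1-D quasi-Newton (secant) minimization: find a zero of the gradient,
    approximating the second derivative from successive gradient values."""
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        if abs(g1 - g0) < 1e-15:
            break  # flat gradient difference: cannot form a secant step
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        x0, g0, x1, g1 = x1, g1, x2, grad(x2)
        if abs(g1) < tol:
            break
    return x1

# toy cost J(x) = (x - 3)^2 + 1 with analytic gradient 2*(x - 3)
x_star = secant_minimize(lambda x: 2.0 * (x - 3.0), 0.0, 1.0)
```

Like BFGS, the method needs only gradient evaluations, which matches the setting of the paper where the analytic gradient of the cost function is available but the Hessian is not.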
15. An approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II; Greene, Francis A.; Dejarnette, Fred R.
1993-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. It is validated by comparing with experimental heating data and with Navier-Stokes calculations on the Shuttle orbiter at both wind tunnel and flight conditions and with Navier-Stokes calculations on the HL-20 at wind tunnel conditions.
16. Car-Parrinello treatment for an approximate density-functional theory method
Rapacioli, Mathias; Barthel, Robert; Heine, Thomas; Seifert, Gotthard
2007-03-01
The authors formulate a Car-Parrinello treatment for the density-functional-based tight-binding method with and without self-consistent charge corrections. This method avoids the numerical solution of the secular equations, the principal drawback for large systems if the linear combination of atomic orbital ansatz is used. The formalism is applicable to finite systems and for supercells using periodic boundary conditions within the Γ-point approximation. They show that the methodology allows the application of modern computational techniques such as sparse matrix storage and massive parallelization in a straightforward way. All present bottlenecks concerning computer time and consumption of memory and memory bandwidth can be removed. They illustrate the performance of the method by direct comparison with Born-Oppenheimer molecular dynamics calculations. Water molecules, benzene, the C60 fullerene, and liquid water have been selected as benchmark systems.
17. Numerical approximation of Lévy-Feller fractional diffusion equation via Chebyshev-Legendre collocation method
Sweilam, N. H.; Abou Hasan, M. M.
2016-08-01
This paper reports a new spectral algorithm for obtaining an approximate solution of the Lévy-Feller diffusion equation, based on Legendre polynomials and Chebyshev collocation points. The Lévy-Feller diffusion equation is obtained from the standard diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative. A new formula expressing explicitly any fractional-order derivative, in the sense of the Riesz-Feller operator, of Legendre polynomials of any degree in terms of Jacobi polynomials is proved. Moreover, the Chebyshev-Legendre collocation method, together with the implicit Euler method, is used to reduce these types of differential equations to a system of algebraic equations that can be solved numerically. Numerical results with comparisons are given to confirm the reliability of the proposed method for the Lévy-Feller diffusion equation.
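The two building blocks named in the title, Legendre polynomials and Chebyshev collocation points, can be generated with standard formulas; this is generic spectral-method machinery, not the paper's full fractional solver.

```python
import math

def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) by the three-term recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def chebyshev_points(n):
    """Chebyshev-Gauss-Lobatto collocation points x_j = cos(pi*j/n), j = 0..n."""
    return [math.cos(math.pi * j / n) for j in range(n + 1)]

nodes = chebyshev_points(4)
vals = [legendre(2, x) for x in nodes]   # P_2(x) = (3x^2 - 1)/2 at the nodes
```

A collocation scheme like the paper's would expand the solution in Legendre polynomials and enforce the (fractional) differential equation at exactly these Chebyshev nodes, turning it into an algebraic system.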
19. Car-Parrinello treatment for an approximate density-functional theory method.
PubMed
Rapacioli, Mathias; Barthel, Robert; Heine, Thomas; Seifert, Gotthard
2007-03-28
The authors formulate a Car-Parrinello treatment for the density-functional-based tight-binding method with and without self-consistent charge corrections. This method avoids the numerical solution of the secular equations, the principal drawback for large systems if the linear combination of atomic orbital ansatz is used. The formalism is applicable to finite systems and for supercells using periodic boundary conditions within the Gamma-point approximation. They show that the methodology allows the application of modern computational techniques such as sparse matrix storage and massive parallelization in a straightforward way. All present bottlenecks concerning computer time and consumption of memory and memory bandwidth can be removed. They illustrate the performance of the method by direct comparison with Born-Oppenheimer molecular dynamics calculations. Water molecules, benzene, the C(60) fullerene, and liquid water have been selected as benchmark systems.
20. An approximate factorization method for inverse medium scattering with unknown buried objects
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2017-03-01
This paper is concerned with the inverse problem of scattering of time-harmonic acoustic waves by an inhomogeneous medium with different kinds of unknown buried objects inside. By constructing a sequence of operators which are small perturbations of the far-field operator in a suitable way, we prove that each operator in this sequence has a factorization satisfying the Range Identity. We then develop an approximate factorization method for recovering the support of the inhomogeneous medium from the far-field data. Finally, numerical examples are provided to illustrate the practicability of the inversion algorithm.
1. A stochastic approximation algorithm with Markov chain Monte-carlo method for incomplete data estimation problems.
PubMed
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte-Carlo method. Applying the theory on adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
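The stochastic-approximation principle the procedure builds on can be illustrated with a toy Robbins-Monro recursion (a hypothetical example, not the authors' algorithm): solve E[H(theta, X)] = 0 with H(theta, x) = x - theta, whose root is E[X].

```python
import random

def robbins_monro(draw, theta0=0.0, n_iter=20000):
    """Robbins-Monro stochastic approximation for E[H(theta, X)] = 0.

    Here H(theta, x) = x - theta, so the recursion converges to E[X].
    The gain a_k = 1/k satisfies the usual conditions sum a_k = inf and
    sum a_k^2 < inf required for convergence."""
    theta = theta0
    for k in range(1, n_iter + 1):
        x = draw()                      # one noisy observation per step
        theta += (x - theta) / k        # theta_{k+1} = theta_k + a_k * H
    return theta

rng = random.Random(42)
estimate = robbins_monro(lambda: rng.gauss(3.0, 1.0))
```

In the paper's setting the noisy observation of H comes from a Markov chain Monte-Carlo draw of the missing data rather than an i.i.d. sample, but the recursion has the same shape.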
2. Approximate direct reduction method: infinite series reductions to the perturbed mKdV equation
Jiao, Xiao-Yu; Lou, Sen-Yue
2009-09-01
The approximate direct reduction method is applied to the perturbed mKdV equation with weak fourth order dispersion and weak dissipation. The similarity reduction solutions of different orders conform to formal coherence, accounting for infinite series reduction solutions to the original equation and general formulas of similarity reduction equations. Painlevé II type equations, hyperbolic secant and Jacobi elliptic function solutions are obtained for zero-order similarity reduction equations. Higher order similarity reduction equations are linear variable coefficient ordinary differential equations.
3. An Approximate Method for Analysis of Solitary Waves in Nonlinear Elastic Materials
Rushchitsky, J. J.; Yurchuk, V. N.
2016-05-01
Two types of solitary elastic waves are considered: a longitudinal plane displacement wave (longitudinal displacements along the abscissa axis of a Cartesian coordinate system) and a radial cylindrical displacement wave (displacements in the radial direction of a cylindrical coordinate system). The basic innovation is the use of nonlinear wave equations similar in form to describe these waves and the use of the same approximate method to analyze both equations. The distortion of the wave profile, described by Whittaker (plane wave) or Macdonald (cylindrical wave) functions, is analyzed theoretically.
4. Approximate method for calculating free vibrations of a large-wind-turbine tower structure
NASA Technical Reports Server (NTRS)
Das, S. C.; Linscott, B. S.
1977-01-01
A set of ordinary differential equations was derived for a simplified structural-dynamic lumped-mass model of a typical large-wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and with predictions from more detailed analyses.
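Dunkerley's equation combines the natural frequencies of the component subsystems into a quick, conservative estimate of the fundamental frequency of the assembly; a minimal sketch:

```python
def dunkerley_frequency(subsystem_freqs):
    """Dunkerley's estimate of the fundamental natural frequency:
    1/f^2 = sum_i 1/f_i^2, where f_i is the natural frequency of the
    structure with only the i-th lumped mass present. The estimate is a
    lower bound on the true fundamental frequency."""
    return sum(1.0 / f**2 for f in subsystem_freqs) ** -0.5

# A single subsystem is returned unchanged; two subsystems at 3 Hz and
# 4 Hz combine to 12/5 = 2.4 Hz (since 1/9 + 1/16 = 25/144).
f_single = dunkerley_frequency([10.0])
f_pair = dunkerley_frequency([3.0, 4.0])
```

The frequency values here are hypothetical; the formula itself is the standard Dunkerley approximation referenced in the abstract.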
5. Perturbative approximation to hybrid equation of motion coupled cluster/effective fragment potential method
SciTech Connect
Ghosh, Debashree
2014-03-07
Hybrid quantum mechanics/molecular mechanics (QM/MM) methods provide an attractive way to closely retain the accuracy of the QM method with the favorable computational scaling of the MM method. Therefore, it is not surprising that QM/MM methods are being increasingly used for large chemical/biological systems. Hybrid equation-of-motion coupled-cluster singles and doubles/effective fragment potential (EOM-CCSD/EFP) methods have been developed over the last few years to understand the effect of solvents and other condensed phases on the electronic spectra of chromophores. However, the computational cost of this approach is still dominated by the steep scaling of the EOM-CCSD method. In this work, we propose and implement perturbative approximations to the EOM-CCSD method in this hybrid scheme to reduce the cost of EOM-CCSD/EFP. The timings and accuracy of this hybrid approach are tested for the calculation of ionization energies, excitation energies, and electron affinities of microsolvated nucleic acid bases (thymine and cytosine), phenol, and phenolate.
6. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
PubMed
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf.
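In the single-variant case, the multivariate-normal likelihood on marginal statistics that CAVIARBF uses reduces to a Wakefield-style approximate Bayes factor; the sketch below is an illustrative derivation of that one-SNP case (the prior effect variance `W` is a hypothetical choice, and the value is oriented so that larger means more evidence for association).

```python
import math

def approx_bayes_factor(beta_hat, se, W=0.04):
    """Single-SNP approximate Bayes factor from marginal summary statistics.

    Under H0, beta_hat ~ N(0, V); under H1, beta_hat ~ N(0, V + W) with
    prior effect variance W. The ratio of these densities gives
    BF = sqrt(V/(V+W)) * exp(z^2/2 * W/(V+W)), with z = beta_hat/se.
    Values above 1 favour association over the null."""
    V = se ** 2
    z = beta_hat / se
    return math.sqrt(V / (V + W)) * math.exp(0.5 * z * z * W / (V + W))

bf_null = approx_bayes_factor(0.0, 0.1)    # z = 0: evidence leans to the null
bf_assoc = approx_bayes_factor(0.5, 0.1)   # z = 5: strong support
```

CAVIARBF extends this idea to sets of possibly causal variants by replacing the scalar normal densities with multivariate normals whose covariance is the SNP correlation matrix.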
7. A probability density function discretization and approximation method for the dynamic load identification of stochastic structures
Liu, Jie; Sun, Xingsheng; Li, Kun; Jiang, Chao; Han, Xu
2015-11-01
Aiming at structures containing random parameters with multi-peak probability density functions (PDFs) or great variable coefficients, an analytical method of probability density function discretization and approximation (PDFDA) is proposed for dynamic load identification. Dynamic loads are expressed as the functions of time and random parameters in time domain and the forward model is established through the discretized convolution integral of loads and the corresponding unit-pulse response functions. The PDF of each random parameter is discretized into several subintervals and in each subinterval the original PDF curve is approximated via uniform distribution PDF with equal probability value. Then the joint distribution model is built and hence the equivalent deterministic equations are solved to identify unknown loads. Inverse analysis is operated separately at each variable in the joint distribution model through regularization because of noise-contaminated measured responses. In order to assess the accuracy of identified results, PDF curves and statistical properties of loads are achieved based on the specially assumed distributions of identified loads. Numerical simulations demonstrate the efficiency and superiority of the presented method.
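The discretization step described, splitting a PDF into subintervals of equal probability and approximating each by a uniform density, can be sketched for a tabulated PDF (a hypothetical helper, not the authors' code):

```python
import numpy as np

def equal_probability_bins(x, pdf, n_bins):
    """Split a tabulated PDF into n_bins subintervals of equal probability
    (1/n_bins each) and return the bin edges together with the uniform
    density height used inside each bin."""
    dx = np.diff(x)
    # trapezoid-rule CDF, renormalized to absorb quadrature error
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * dx)])
    cdf /= cdf[-1]
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    edges = np.interp(quantiles, cdf, x)       # equal-probability edges
    heights = (1.0 / n_bins) / np.diff(edges)  # uniform density per bin
    return edges, heights

x = np.linspace(-5.0, 5.0, 2001)
pdf = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)   # standard normal
edges, heights = equal_probability_bins(x, pdf, 10)
```

By construction every bin carries probability 1/n_bins, so the piecewise-uniform approximation integrates to one and the joint distribution model can be built by enumerating bin combinations across the random parameters.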
8. Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel
SciTech Connect
Blair, J.; Machorro, E.; Luttman, A.
2013-03-01
The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.
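A local polynomial approximation estimate of the kind analyzed can be sketched as a windowed least-squares fit evaluated at the window center (illustrative only; the Peano-kernel bias bounds and figures of merit from the abstract are not reproduced here):

```python
import numpy as np

def lpa_estimate(x, y, x0, half_width, degree):
    """Local polynomial approximation: fit a polynomial of the given degree
    by least squares to the samples with |x - x0| <= half_width and return
    the fitted value at x0. The bias comes from the part of the underlying
    function that the polynomial cannot represent over the window; the
    variance comes from the noise in the samples inside it."""
    mask = np.abs(x - x0) <= half_width
    coeffs = np.polyfit(x[mask] - x0, y[mask], degree)
    return np.polyval(coeffs, 0.0)

# A degree-2 fit reproduces a noiseless quadratic exactly (zero bias);
# for higher-order signals the bias grows with the window length, which
# is the tradeoff the Peano-kernel analysis quantifies.
x = np.linspace(0.0, 1.0, 101)
y = 1.0 + 2.0 * x + 3.0 * x**2
est = lpa_estimate(x, y, x0=0.5, half_width=0.1, degree=2)
```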
9. Approximate method for solving relaxation problems in terms of material's damageability under creep
SciTech Connect
Nikitenko, A.F.; Sukhorukov, I.V.
1995-03-01
The technology of thermoforming under creep and superplasticity conditions is finding increasing application in machine building for producing articles of a preset shape. After a part is made there are residual stresses in it, which lead to its warping. To remove residual stresses, moulded articles are usually exposed to thermal fixation, i.e., the part is held in a compressed state at a certain temperature. Thermal fixation is simply the process of residual stress relaxation, followed by accumulation of total creep in the material. Therefore the necessity to develop engineering methods for calculating the time of thermal fixation and the relaxation of residual stresses to a safe level, not resulting in warping, becomes evident. The authors present an approximate method for calculating the stress-strain state of a body during relaxation. They use a system of equations which describes a material's creep while simultaneously taking into account the accumulation of damage in it.
10. Adaptive approximation method for joint parameter estimation and identical synchronization of chaotic systems.
PubMed
Mariño, Inés P; Míguez, Joaquín
2005-11-01
We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods.
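The idea of estimating a parameter from a scalar time series by driving a secondary system into synchronization can be sketched for the Lorenz system. This toy version (not the authors' recursive algorithm) uses a grid search over the unknown parameter sigma and a Pecora-Carroll x-drive, which is known to synchronize the (y, z) subsystem:

```python
import numpy as np

def lorenz_x_series(sigma, n_steps, dt=0.005, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system; return the scalar x time series."""
    s = np.array([1.0, 1.0, 1.0])
    xs = np.empty(n_steps)
    for k in range(n_steps):
        xs[k] = s[0]
        x, y, z = s
        s = s + dt * np.array([sigma * (y - x),
                               x * (rho - z) - y,
                               x * y - beta * z])
    return xs

def sync_cost(sigma_guess, x_obs, dt=0.005, rho=28.0, beta=8.0 / 3.0,
              transient=2000):
    """Drive the secondary (y, z) subsystem with the observed x and score
    sigma_guess by the one-step prediction error of the observed series.
    After synchronization the error is proportional to (guess - true)^2."""
    y, z = 0.0, 0.0
    cost = 0.0
    for k in range(len(x_obs) - 1):
        x = x_obs[k]
        x_pred = x + dt * sigma_guess * (y - x)
        if k >= transient:               # skip the synchronization transient
            cost += (x_pred - x_obs[k + 1]) ** 2
        y, z = (y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))
    return cost

x_obs = lorenz_x_series(sigma=10.0, n_steps=4000)   # "unknown" sigma = 10
grid = np.arange(8.0, 12.01, 0.5)
best_sigma = min(grid, key=lambda s: sync_cost(s, x_obs))
```

The paper replaces the grid search with a recursive minimization of the cost, which is what makes the scheme fast; the key mechanism, synchronization driven only by the scalar observation, is the same.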
11. Influence of the discomfort reported by children on the performance of approximal caries detection methods.
PubMed
Novaes, T F; Matos, R; Raggio, D P; Imparato, J C P; Braga, M M; Mendes, F M
2010-01-01
This in vivo study aimed to evaluate the performance of methods of approximal caries detection in primary molars and to assess the influence of the discomfort caused by these methods on their performance. Two examiners evaluated 76 children (4-12 years old) using visual inspection (ICDAS), radiography and a laser fluorescence device (DIAGNOdent pen, LFpen). The reference standard was visual inspection after temporary separation with orthodontic rubbers. Surfaces were classified as sound, noncavitated (NC) or cavitated (Cav), and performance was assessed at both NC and Cav thresholds. Wong-Baker faces scale was employed to assess the discomfort. Multilevel analysis was performed to verify the influence of discomfort on performance, considering the number of false-positives and false-negatives as outcome. At NC threshold, visual inspection achieved better performance (sensitivities and accuracies around 0.67) than other methods (sensitivities around 0.25 and accuracies around 0.35). At Cav threshold, visual inspection presented lower sensitivity (0.23 and 0.19), and LFpen (0.52 and 0.42) and radiography (0.52) presented similar sensitivities. Concerning the influence of the discomfort, at NC threshold, when discomfort was present, the number of false-negative results was lower with LFpen and the number of false-positive results was higher with visual inspection. At Cav threshold, the number of false-positive results was higher with LFpen. In conclusion, radiography and LFpen achieved similar performance in detecting approximal caries lesions in primary teeth and the discomfort caused by visual inspection and LFpen can influence the performance of these methods, since a higher number of false-positive or false-negative results occurred in children who reported discomfort. Copyright © 2010 S. Karger AG, Basel.
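The performance figures quoted (sensitivity, accuracy) come from the counts of true and false positives and negatives at each threshold; a minimal helper, with hypothetical counts for illustration:

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) from the four counts of
    a 2x2 diagnostic table. Sensitivity falls as false negatives accumulate;
    specificity falls as false positives accumulate."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 20 carious and 80 sound surfaces.
sens, spec, acc = diagnostic_performance(tp=13, fp=8, tn=72, fn=7)
```

This makes the study's multilevel analysis concrete: modeling the false-positive and false-negative counts as outcomes is equivalent to modeling how discomfort shifts these four cells.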
12. Replace-approximation method for ambiguous solutions in factor analysis of ultrasonic hepatic perfusion
Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu
2010-03-01
Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and has recently been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new replace-approximation method based on apex-seeking for ambiguous FADS solutions. Due to a partial overlap of different structures, factor curves are assumed to be approximately replaceable by curves existing in the medical image sequences. Therefore, how to find optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts to seek apexes in the one-dimensional space onto which the original high-dimensional data are mapped. By finding two stable apexes in the one-dimensional space, the method can ascertain the third one. The process can be continued until all structures are found. This technique was tested on two phantoms of blood perfusion and compared to two variants of the apex-seeking method. The results showed that the technique outperformed the two variants in comparisons of region-of-interest measurements from the phantom data. It can be applied to the estimation of TICs derived from CEUS images and to the separation of different physiological regions in hepatic perfusion.
13. Delving Into Dissipative Quantum Dynamics: From Approximate to Numerically Exact Approaches
Chen, Hsing-Ta
In this thesis, I explore dissipative quantum dynamics of several prototypical model systems via various approaches, ranging from approximate to numerically exact schemes. In particular, in the realm of the approximate I explore the accuracy of Pade-resummed master equations and the fewest switches surface hopping (FSSH) algorithm for the spin-boson model, and non-crossing approximations (NCA) for the Anderson-Holstein model. Next, I develop new and exact Monte Carlo approaches and test them on the spin-boson model. I propose well-defined criteria for assessing the accuracy of Pade-resummed quantum master equations, which correctly demarcate the regions of parameter space where the Pade approximation is reliable. I continue the investigation of spin-boson dynamics by benchmark comparisons of the semiclassical FSSH algorithm to exact dynamics over a wide range of parameters. Despite small deviations from golden-rule scaling in the Marcus regime, standard surface hopping algorithm is found to be accurate over a large portion of parameter space. The inclusion of decoherence corrections via the augmented FSSH algorithm improves the accuracy of dynamical behavior compared to exact simulations, but the effects are generally not dramatic for the cases I consider. Next, I introduce new methods for numerically exact real-time simulation based on real-time diagrammatic Quantum Monte Carlo (dQMC) and the inchworm algorithm. These methods optimally recycle Monte Carlo information from earlier times to greatly suppress the dynamical sign problem. In the context of the spin-boson model, I formulate the inchworm expansion in two distinct ways: the first with respect to an expansion in the system-bath coupling and the second as an expansion in the diabatic coupling. In addition, a cumulant version of the inchworm Monte Carlo method is motivated by the latter expansion, which allows for further suppression of the growth of the sign error. I provide a comprehensive comparison of the
14. Using Chebyshev polynomials and approximate inverse triangular factorizations for preconditioning the conjugate gradient method
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
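The preconditioned CG iteration being accelerated can be sketched with the simplest approximate inverse, a diagonal (Jacobi) preconditioner, standing in for the triangular-factor approximate inverse described above; the iteration indeed uses only matrix-vector products and elementary vector operations.

```python
import numpy as np

def pcg(A, b, m_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient method for SPD A. m_inv(r)
    applies an approximate inverse of A to the residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system; Jacobi preconditioner M^{-1} = diag(A)^{-1}.
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)
b = rng.standard_normal(50)
x = pcg(A, b, lambda r: r / np.diag(A))
```

Replacing the `lambda` with an application of sparse triangular factors (or a Chebyshev polynomial in A, as in the paper) changes only `m_inv`; the CG skeleton is untouched.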
15. Ising model for neural data: Model quality and approximate methods for extracting functional connectivity
Roudi, Yasser; Tyrcha, Joanna; Hertz, John
2009-05-01
We study pairwise Ising models for describing the statistics of multineuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods—inversion of the Thouless-Anderson-Palmer equations and an approximation proposed by Sessak and Monasson—are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate their magnitude. This effect is described qualitatively by infinite-range spin-glass theory for the normal phase. We also show that a globally correlated input to the neurons in the network leads to a small increase in the average coupling. However, the pair-to-pair variation in the couplings is much larger than this and reflects intrinsic properties of the network. Finally, we study the quality of these models by comparing their entropies with that of the data. We find that they perform well for small subsets of the neurons in the network, but the fit quality starts to deteriorate as the subset size grows, signaling the need to include higher-order correlations to describe the statistics of large networks.
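The simplest of the approximate inversions in this family, naive mean-field (one rung below the TAP inversion mentioned above), reads the couplings off the inverse correlation matrix; a toy sketch on a 3-spin model sampled exactly, with hypothetical coupling values:

```python
import numpy as np
from itertools import product

def nmf_couplings(spins):
    """Naive mean-field inversion: J_ij ~ -(C^{-1})_ij for i != j, where C
    is the connected correlation matrix of the +/-1 spin samples."""
    J = -np.linalg.inv(np.cov(spins, rowvar=False))
    np.fill_diagonal(J, 0.0)
    return J

# Exact sampling from a 3-spin Ising chain with known couplings
# J12 = 0.3, J23 = -0.3, J13 = 0 (energy E = -sum_{i<j} J_ij s_i s_j).
states = np.array(list(product([-1, 1], repeat=3)), dtype=float)
energies = -(0.3 * states[:, 0] * states[:, 1]
             - 0.3 * states[:, 1] * states[:, 2])
probs = np.exp(-energies)
probs /= probs.sum()
rng = np.random.default_rng(1)
samples = states[rng.choice(len(states), size=20000, p=probs)]
J_est = nmf_couplings(samples)
```

For weak couplings the recovered signs and the absent 1-3 link come out correctly; the TAP and Sessak-Monasson corrections studied in the paper reduce the remaining systematic bias in the magnitudes.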
16. Approximate Methods for Analyzing and Controlling Axisymmetric Instabilities of Elongated Tokamak Plasmas.
Frantz, Eric Randall
Elongation and shaping of the tokamak plasma cross-section can allow increased beta and other favorable improvements. As the cross-section is made non-circular, however, the plasma can become unstable against axisymmetric motions, the predominant one being a nearly uniform displacement in the direction of elongation. Without additional stabilizing mechanisms, this instability has growth rates typically ~10^6 s^-1. With passive and active feedback from external conductors, the plasma can be significantly slowed down and controlled. In this work, a mathematical formalism for analyzing the vertical instability is developed in which the external conductors are treated (or broken up) as discrete coils. The circuit equations for the plasma-induced currents can be included within the same mathematical framework. The plasma equation of motion and the circuit equations are combined and manipulated into a diagonalized form that can be graphically analyzed to determine the growth rate. An effective mode approximation (EMA) to the dispersion relation is introduced to simplify and approximate the growth rate of the more exact case. Controller voltage equations for active feedback are generalized to include position and velocity feedback and time delay. A position cut-off displacement is added to model finite spatial resolution of the position detectors or a dead-band voltage level. Stability criteria are studied for EMA and the more exact case. The time-dependent responses for plasma position, controller voltages, and currents are determined from the Laplace transformations. Slow responses are separated from the fast ones (dependent on plasma inertia) using a typical tokamak ordering approximation. The methods developed are applied in numerous examples for the machine geometry and plasma of TNS, an inside-D configuration plasma resembling JET, INTOR, or FED.
17. An efficient method for transfer cross coefficient approximation in model based optical proximity correction
Sabatier, Romuald; Fossati, Caroline; Bourennane, Salah; Di Giacomo, Antonio
2008-10-01
Model Based Optical Proximity Correction (MBOPC) has for a decade been a widely used technique that makes it possible to achieve resolutions on silicon layouts smaller than the wavelength used in commercially available photolithography tools. This is an important point, because mask dimensions are continuously shrinking. For current masks, several billion segments have to be moved, and several iterations are needed to reach convergence. Therefore, fast and accurate algorithms are mandatory to perform OPC on a mask in a reasonably short time for industrial purposes. As imaging with an optical lithography system is similar to microscopy, the theory used in MBOPC is drawn from the work originally conducted for the theory of microscopy. Fourier optics was first developed by Abbe to describe the image formed by a microscope and is often referred to as the Abbe formulation. This is one of the best methods for optimizing illumination and is used in most of the commercially available lithography simulation packages. The Hopkins method, developed later in 1951, is the best method for mask optimization. Consequently, the Hopkins formulation, widely used for partially coherent illumination, and thus for lithography, is present in most of the commercially available OPC tools. This formulation has the advantage of a four-way transmission function independent of the mask layout. The values of this function, called Transfer Cross Coefficients (TCC), describe the illumination and projection pupils. Commonly used algorithms involving the TCC of the Hopkins formulation to compute aerial images during MBOPC treatment are based on TCC decomposition into its eigenvectors using matricization and the well-known Singular Value Decomposition (SVD) tool. These techniques, which use numerical approximation and empirical determination of the number of eigenvectors taken into account, may not match reality and can lead to information loss. They also remain highly runtime-consuming. We propose an
18. A force evaluation free method to N-body problems: Binary interaction approximation
Oikawa, S.
2016-03-01
We recently proposed the binary interaction approximation (BIA) to N-body problems, which, in principle, excludes the interparticle force evaluation if the exact solutions are known for the corresponding two-body problems, such as the Coulombic and gravitational interactions. In this article, a detailed introduction to the BIA is given, including the error analysis yielding expressions for the approximation error in the total angular momentum and the total energy of the entire system. It is shown that, although the energy conservation of the BIA scheme is worse than that of the 4th-order Hermite integrator (HMT4) for similar elapsed (wall-clock) times, the individual errors in position and in velocity are much better than HMT4. An energy error correction scheme for the BIA is also introduced that does not deteriorate the individual errors in position and in velocity. It is suggested that the BIA scheme is applicable to the tree method, the particle-mesh (PM), and the particle-particle-particle-mesh (PPPM) schemes simply by replacing the force evaluation and the conventional time integrator with the BIA scheme.
19. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals
Werner, Hans-Joachim
2016-11-01
The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.
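The dipole-dipole term discussed is the leading multipole approximation to the interaction of two well-separated neutral charge distributions; the sketch below (an electrostatics illustration in arbitrary Gaussian-style units, not the LMP2 pair-energy expression itself) compares it with the exact four-charge energy and shows the error vanishing with separation, which is why the approximation only fails for pairs that are not distant enough.

```python
import numpy as np

def exact_dipole_pair_energy(p, d, R):
    """Exact Coulomb energy of two physical dipoles of moment p = q*d,
    both aligned along z and separated by R along z (collinear geometry)."""
    q = p / d
    z1 = np.array([d / 2.0, -d / 2.0])   # charge positions around center 1
    q1 = np.array([q, -q])               # same charge pattern at center 2
    z2 = R + z1
    return sum(q1[i] * q1[j] / abs(z1[i] - z2[j])
               for i in range(2) for j in range(2))

def dipole_dipole_energy(p, R):
    """Leading multipole term for collinear parallel dipoles: E = -2 p^2 / R^3."""
    return -2.0 * p**2 / R**3

p, d = 1.0, 0.1
err = [abs(exact_dipole_pair_energy(p, d, R) - dipole_dipole_energy(p, R))
       for R in (2.0, 4.0, 8.0)]
```

The error of the truncated expansion falls off with additional powers of d/R, mirroring how including higher multipole orders (as the paper recommends) systematically tightens the distant-pair energies.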
20. Communication: Multipole approximations of distant pair energies in local correlation methods with pair natural orbitals.
PubMed
Werner, Hans-Joachim
2016-11-28
The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.
1. A novel method of automated skull registration for forensic facial approximation.
PubMed
Turner, W D; Brown, R E B; Kelliher, T P; Tu, P H; Taister, M A; Miller, K W P
2005-11-25
Modern forensic facial reconstruction techniques are based on an understanding of skeletal variation and tissue depths. These techniques rely upon a skilled practitioner interpreting limited data. To (i) increase the amount of data available and (ii) lessen the subjective interpretation, we use medical imaging and statistical techniques. We introduce a software tool, reality enhancement/facial approximation by computational estimation (RE/FACE) for computer-based forensic facial reconstruction. The tool applies innovative computer-based techniques to a database of human head computed tomography (CT) scans in order to derive a statistical approximation of the soft tissue structure of a questioned skull. A core component of this tool is an algorithm for removing the variation in facial structure due to skeletal variation. This method uses models derived from the CT scans and does not require manual measurement or placement of landmarks. It does not require tissue-depth tables, can be tailored to specific racial categories by adding CT scans, and removes much of the subjectivity of manual reconstructions.
2. Model-independent mean-field theory as a local method for approximate propagation of information.
PubMed
Haft, M; Hofmann, R; Tresp, V
1999-02-01
We present a systematic approach to mean-field theory (MFT) in a general probabilistic setting without assuming a particular model. The mean-field equations derived here may serve as a local, and thus very simple, method for approximate inference in probabilistic models such as Boltzmann machines or Bayesian networks. Our approach is 'model-independent' in the sense that we do not assume a particular type of dependences; in a Bayesian network, for example, we allow arbitrary tables to specify conditional dependences. In general, there are multiple solutions to the mean-field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean-field solutions. Simple approximate expressions for the mixture weights are given. The general formalism derived so far is evaluated for the special case of Bayesian networks. The benefits of taking into account multiple solutions are demonstrated by using MFT for inference in a small and in a very large Bayesian network. The results are compared with the exact results.
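For a Boltzmann machine, the mean-field equations referred to are the self-consistency conditions m_i = tanh(h_i + sum_j J_ij m_j); a damped fixed-point sketch (hypothetical couplings), which also illustrates why multiple solutions can arise from different initializations:

```python
import math

def mean_field_magnetizations(J, h, damping=0.5, n_sweeps=500, tol=1e-10):
    """Solve the mean-field equations m_i = tanh(h_i + sum_j J_ij m_j)
    by damped fixed-point iteration. When the equations admit several
    solutions, different initializations may reach different ones, which
    is what motivates the weighted mixture of solutions in the text."""
    n = len(h)
    m = [0.0] * n
    for _ in range(n_sweeps):
        m_new = [math.tanh(h[i] + sum(J[i][j] * m[j] for j in range(n)))
                 for i in range(n)]
        delta = max(abs(a - b) for a, b in zip(m_new, m))
        m = [damping * a + (1.0 - damping) * b for a, b in zip(m_new, m)]
        if delta < tol:
            break
    return m

J = [[0.0, 0.2], [0.2, 0.0]]
h = [0.5, -0.1]
m = mean_field_magnetizations(J, h)
```

The returned `m` satisfies the self-consistency conditions to the iteration tolerance; each m_i is the approximate marginal expectation of the corresponding +/-1 unit.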
3. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
PubMed Central
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergences times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
4. Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to studying the evolution of high-apogee twelve-hour orbits of artificial Earth satellites. We describe the parameters of the satellite motion model, in which the principal gravitational perturbations of the Moon and Sun, the nonsphericity of the Earth, and light-pressure perturbations are approximately taken into account. To solve the system of averaged equations describing the evolution of the satellite's orbit parameters, we use both numerical and analytic methods. To select initial parameters of the twelve-hour orbit, we assume that the satellite's ground track is stable. Results obtained by the analytic method and by numerical integration of the evolution system are compared. For intervals of several years, we obtain estimates of the oscillation periods and amplitudes of the orbital elements. To verify the results and estimate the precision of the method, we numerically integrate the rigorous (not averaged) equations of motion of the satellite, which account for the forces acting on it substantially more completely and precisely. The described method is not limited to the orbit evolution of Earth satellites; it can also be applied to satellite orbits around other planets of the Solar system, should such a research problem arise and this special class of resonance orbits be used.
5. Low scaling random-phase approximation electron correlation method including exchange interactions using localised orbitals
Heßelmann, Andreas
2017-05-01
A random-phase approximation electron correlation method including exchange interactions has been developed which reduces the scaling behaviour of the standard approach by two to four orders of magnitude, effectively leading to a linear scaling performance if the local structures of the underlying quantities are fully exploited in the calculations. This has been achieved by a transformation of the integrals and amplitudes from the canonical orbital basis into a local orbital basis and a subsequent dyadic screening approach. The performance of the method is demonstrated for a range of tripeptide molecules as well as for two conformers of the polyglycine molecule using up to 40 glycine units. While a reasonable agreement with the corresponding canonical method is obtained if long-range Coulomb interactions are not screened by the local method, a significant improvement in the performance is achieved for larger systems beyond 20 glycine units. Furthermore, the control of the Coulomb screening threshold allows for a quantification of intramolecular dispersion interactions, as will be exemplified for the polyglycine conformers as well as a highly branched hexaphenylethane derivative which is stabilised by steric crowding effects.
6. A new embedded-atom method approach based on the pth moment approximation
Wang, Kun; Zhu, Wenjun; Xiao, Shifang; Chen, Jun; Hu, Wangyu
2016-12-01
Large scale atomistic simulations with suitable interatomic potentials are widely employed by scientists or engineers of different areas. The quick generation of high-quality interatomic potentials is urgently needed. This largely relies on the developments of potential construction methods and algorithms in this area. Numerous interatomic potential models have been proposed and parameterized with various methods, such as the analytic method, the force-matching approach and the multi-object optimization method, in order to make the potentials more transferable. Without apparently lowering the precision for describing the target system, potentials with fewer fitting parameters (FPs) are somewhat more physically reasonable. Thus, studying methods to reduce the FP number is helpful in understanding the underlying physics of simulated systems and improving the precision of potential models. In this work, we propose an embedded-atom method (EAM) potential model consisting of a new many-body term based on the pth moment approximation to the tight binding theory and the general transformation invariance of EAM potentials, and an energy modification term represented by pairwise interactions. The pairwise interactions are evaluated by an analytic-numerical scheme without the need to know their functional forms a priori. By constructing three potentials of aluminum and comparing them with a commonly used EAM potential model, several notable results are obtained. First, without losing the precision of potentials, our potential of aluminum has fewer potential parameters and a smaller cutoff distance when compared with some commonly used potentials of aluminum. This is because several physical quantities, usually serving as target quantities to match in other potentials, seem to be uniquely dependent on quantities contained in our basic reference database within the new potential model. Second, a key empirical parameter in the embedding term of the commonly used EAM model is
7. A new embedded-atom method approach based on the pth moment approximation.
PubMed
Wang, Kun; Zhu, Wenjun; Xiao, Shifang; Chen, Jun; Hu, Wangyu
2016-12-21
Large scale atomistic simulations with suitable interatomic potentials are widely employed by scientists or engineers of different areas. The quick generation of high-quality interatomic potentials is urgently needed. This largely relies on the developments of potential construction methods and algorithms in this area. Numerous interatomic potential models have been proposed and parameterized with various methods, such as the analytic method, the force-matching approach and the multi-object optimization method, in order to make the potentials more transferable. Without apparently lowering the precision for describing the target system, potentials with fewer fitting parameters (FPs) are somewhat more physically reasonable. Thus, studying methods to reduce the FP number is helpful in understanding the underlying physics of simulated systems and improving the precision of potential models. In this work, we propose an embedded-atom method (EAM) potential model consisting of a new many-body term based on the pth moment approximation to the tight binding theory and the general transformation invariance of EAM potentials, and an energy modification term represented by pairwise interactions. The pairwise interactions are evaluated by an analytic-numerical scheme without the need to know their functional forms a priori. By constructing three potentials of aluminum and comparing them with a commonly used EAM potential model, several notable results are obtained. First, without losing the precision of potentials, our potential of aluminum has fewer potential parameters and a smaller cutoff distance when compared with some commonly used potentials of aluminum. This is because several physical quantities, usually serving as target quantities to match in other potentials, seem to be uniquely dependent on quantities contained in our basic reference database within the new potential model. Second, a key empirical parameter in the embedding term of the commonly used EAM model is
8. Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations {S_{N,M}} to a periodic function f which uses the ideas of Padé, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S_{N,M} is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_{N,M} agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge pointwise to (f(x⁺) + f(x⁻))/2 more rapidly (in some cases by a factor of 1/k^{2M}) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.
9. An approximate method for analyzing transient condensation on spray in HYLIFE-II
SciTech Connect
Bai, R.Y.; Schrock, V.E. (Dept. of Nuclear Engineering)
1990-01-01
The HYLIFE-II conceptual design calls for analysis of highly transient condensation on droplets to achieve a rapidly decaying pressure field. Drops exposed to the required transient vapor pressure field are first heated by condensation but later begin to reevaporate after the vapor temperature falls below the drop surface temperature. An approximate method of analysis has been developed based on the assumption that the thermal resistance is concentrated in the liquid. The time dependent boundary condition is treated via the Duhamel integral for the pure conduction model. The resulting Nusselt number is enhanced to account for convection within the drop and then used to predict the drop mean temperature history. Many histories are considered to determine the spray rate necessary to achieve the required complete condensation.
10. Relaxation and approximate factorization methods for the unsteady full potential equation
NASA Technical Reports Server (NTRS)
Shankar, V.; Ide, H.; Gorski, J.
1984-01-01
The unsteady form of the full potential equation is solved in conservation form, using implicit methods based on approximate factorization and relaxation schemes. A local time linearization for density is introduced to enable solution to the equation in terms of phi, the velocity potential. A novel flux-biasing technique is applied to generate proper forms of the artificial viscosity, to treat hyperbolic regions with shocks and sonic lines present. The wake is properly modeled by accounting not only for jumps in phi, but also for jumps in higher derivatives of phi obtained from requirements of density continuity. The far field is modeled using the Riemann invariants to simulate nonreflecting boundary conditions. Results are presented for flows over airfoils, cylinders, and spheres. Comparisons are made with available Euler and full potential results.
11. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
SciTech Connect
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant sub-steps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
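The substep structure is independent of the particular rational approximation. The sketch below uses a low-order diagonal Padé (Cayley) approximant as a hedged stand-in for the CRAM approximant, applied to a two-nuclide decay chain with a known exact solution; the structural point it illustrates is that the matrix to be factorized depends only on the substep length, so the factorization is formed once and reused on every substep:

```python
import numpy as np

def substep_rational_expm(A, n0, t, substeps):
    """Advance dn/dt = A n with a rational approximation of expm(A*h)
    applied over equidistant substeps. A (1,1) diagonal Pade (Cayley)
    approximant stands in here for the much higher-order CRAM one; the
    matrix operator depends only on the substep length h, so it is
    formed once (a dense inverse here, an LU decomposition in a real
    sparse depletion solver) and reused on all substeps."""
    h = t / substeps
    I = np.eye(A.shape[0])
    step = np.linalg.inv(I - 0.5 * h * A) @ (I + 0.5 * h * A)  # formed once
    n = n0.copy()
    for _ in range(substeps):
        n = step @ n
    return n

# two-nuclide decay chain n1 -> n2 -> (removed), with known exact solution
l1, l2 = 1.0, 0.3
A = np.array([[-l1, 0.0],
              [ l1, -l2]])
n0 = np.array([1.0, 0.0])
t = 2.0
exact = np.array([
    np.exp(-l1 * t),
    l1 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t)),
])

err_1 = np.max(np.abs(substep_rational_expm(A, n0, t, 1) - exact))
err_16 = np.max(np.abs(substep_rational_expm(A, n0, t, 16) - exact))
```

With sixteen identical substeps the end-of-step error drops by orders of magnitude relative to a single step, at the cost of sixteen reuses of the same factorization.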
12. Atomistic modelling of nanostructures via the Bozzolo Ferrante Smith quantum approximate method
Bozzolo, Guillermo; Garcés, Jorge E.; Noebe, Ronald D.; Farías, Daniel
2003-09-01
Ideally, computational modelling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nanostructured systems. A quantum approximate technique, the Bozzolo-Ferrante-Smith method for alloys, which attempts to meet these demands, is introduced for calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multiphase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed composition Co-Cu islands on a metallic Cu(111) substrate.
13. Approximation methods of European option pricing in multiscale stochastic volatility model
Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei
2017-01-01
In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions have suggested that the constant volatility assumption might not be realistic. General stochastic volatility models, e.g. the Heston model, GARCH model and SABR volatility model, in which the variance/volatility itself typically follows a mean-reverting stochastic process, have been shown to be superior in terms of capturing the empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider one modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate respectively. We consider the European option pricing problem under one type of the multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty in this paper is an approximate analytical solution using an asymptotic expansion method, which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition we propose an approximate numerical solution using Monte Carlo simulation. For completeness and for comparison we also implement the semi-analytical solution by Chiarella and Ziveyi [11] using the method of characteristics, Fourier and bivariate Laplace transforms.
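The Monte Carlo route for a two-factor model of this kind can be sketched briefly: the variance is the sum of two independent CIR-type factors with a fast and a slow mean-reversion rate, simulated with a full-truncation Euler scheme. All parameter values below are illustrative and uncalibrated, and this is a generic sketch, not the authors' asymptotic expansion or their specific model:

```python
import numpy as np

def mc_call_two_factor(S0, K, r, T, v1_0, v2_0, kappa, theta, xi,
                       n_paths=20000, n_steps=200, seed=1):
    """Monte Carlo European call price under a toy two-factor stochastic
    volatility model: variance v = v1 + v2, each factor CIR mean-reverting,
    kappa = (fast, slow) reversion rates. The factors are independent of
    the asset noise. Full-truncation Euler scheme throughout."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logS = np.full(n_paths, np.log(S0))
    v = [np.full(n_paths, v1_0), np.full(n_paths, v2_0)]
    for _ in range(n_steps):
        vt = np.maximum(v[0], 0.0) + np.maximum(v[1], 0.0)   # total variance
        z = rng.standard_normal(n_paths)
        logS += (r - 0.5 * vt) * dt + np.sqrt(vt * dt) * z
        for i in range(2):
            vi = np.maximum(v[i], 0.0)                        # full truncation
            zi = rng.standard_normal(n_paths)
            v[i] = v[i] + kappa[i] * (theta[i] - vi) * dt + xi[i] * np.sqrt(vi * dt) * zi
    payoff = np.maximum(np.exp(logS) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

price = mc_call_two_factor(
    S0=100.0, K=100.0, r=0.02, T=1.0,
    v1_0=0.02, v2_0=0.02,
    kappa=(5.0, 0.5),            # fast and slow mean-reversion rates
    theta=(0.02, 0.02), xi=(0.3, 0.1),
)
```

With total variance near 0.04 (about 20% volatility), the at-the-money price should land in the vicinity of the corresponding Black-Scholes value of roughly 9.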
14. Diffusion approximation-based simulation of stochastic ion channels: which method to use?
PubMed Central
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
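For a single population of two-state channels C ⇌ O, both simulation routes compared above can be sketched in a few lines. The rates, channel count and integration step below are illustrative, and the Langevin variant simply clips the state variable to [0, 1] rather than using any of the cited reflection or bounding schemes:

```python
import numpy as np

# Two-state channel C <-> O with opening rate alpha and closing rate beta.
# Steady-state open probability is alpha / (alpha + beta); both the exact
# Markov-chain (Gillespie) simulation and the Langevin diffusion
# approximation should reproduce it. All parameters are illustrative.
alpha, beta, N = 40.0, 60.0, 500
p_inf = alpha / (alpha + beta)
rng = np.random.default_rng(2)

def gillespie_mean_open(t_end=5.0):
    """Time-averaged open fraction from the exact Markov-chain simulation."""
    t, n_open, area = 0.0, int(N * p_inf), 0.0
    while t < t_end:
        rate = alpha * (N - n_open) + beta * n_open
        dt = rng.exponential(1.0 / rate)
        area += n_open * min(dt, t_end - t)
        t += dt
        if rng.random() < alpha * (N - n_open) / rate:
            n_open += 1
        else:
            n_open -= 1
    return area / (t_end * N)

def langevin_mean_open(t_end=5.0, dt=1e-3):
    """Time-averaged open fraction from an Euler-Maruyama diffusion
    approximation, with naive clipping of the state to [0, 1]."""
    x, acc = p_inf, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        drift = alpha * (1.0 - x) - beta * x
        diff = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
        x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)          # bound the state variable
        acc += x
    return acc / steps

mc_est = gillespie_mean_open()
da_est = langevin_mean_open()
```

Both estimates should agree with p_inf = 0.4 to well within the sampling noise, while the Gillespie loop executes far more iterations per unit time, which is the cost the diffusion approximation avoids at high channel counts.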
15. A linearly approximated iterative Gaussian decomposition method for waveform LiDAR processing
Mountrakis, Giorgos; Li, Yuguang
2017-07-01
Full-waveform LiDAR (FWL) decomposition results often act as the basis for key LiDAR-derived products, for example canopy height, biomass and carbon pool estimation, leaf area index calculation and under canopy detection. To date, the prevailing method for FWL product creation is the Gaussian Decomposition (GD) based on a non-linear Levenberg-Marquardt (LM) optimization for Gaussian node parameter estimation. GD follows a 'greedy' approach that may leave weak nodes undetected, merge multiple nodes into one or separate a noisy single node into multiple ones. In this manuscript, we propose an alternative decomposition method called Linearly Approximated Iterative Gaussian Decomposition (LAIGD method). The novelty of the LAIGD method is that it follows a multi-step 'slow-and-steady' iterative structure, where new Gaussian nodes are quickly discovered and adjusted using a linear fitting technique before they are forwarded for a non-linear optimization. Two experiments were conducted, one using real full-waveform data from NASA's land, vegetation, and ice sensor (LVIS) and another using synthetic data containing different numbers of nodes and degrees of overlap to assess performance in variable signal complexity. LVIS data revealed considerable improvements in RMSE (44.8% lower), RSE (56.3% lower) and rRMSE (74.3% lower) values compared to the benchmark GD method. These results were further confirmed with the synthetic data. Furthermore, the proposed multi-step method reduces execution times in half, an important consideration as there are plans for global coverage with the upcoming Global Ecosystem Dynamics Investigation LiDAR sensor on the International Space Station.
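The key observation behind a linearized Gaussian fit is that the logarithm of a Gaussian is a parabola, so node parameters can be recovered by linear least squares. The sketch below iteratively peels nodes off a synthetic two-return waveform in that spirit; it illustrates the idea only and is not the published LAIGD algorithm (the window size and stopping floor are ad hoc, and the final non-linear refinement stage is omitted):

```python
import numpy as np

def fit_gaussian_linear(t, y):
    """Estimate (amplitude, center, width) of one Gaussian from samples by
    fitting a parabola to log(y): log(A exp(-(t-mu)^2 / (2 s^2))) is
    quadratic in t, so the fit is plain linear least squares."""
    c2, c1, c0 = np.polyfit(t, np.log(y), 2)
    s = np.sqrt(-1.0 / (2.0 * c2))
    mu = c1 * s * s
    A = np.exp(c0 + mu * mu / (2.0 * s * s))
    return A, mu, s

def iterative_decompose(t, w, max_nodes=5, window=6, floor=0.05):
    """Iteratively peel Gaussian nodes off a waveform: locate the residual's
    maximum, fit a node there by the linearized fit, subtract, repeat."""
    resid = w.copy()
    nodes = []
    for _ in range(max_nodes):
        k = int(np.argmax(resid))
        if resid[k] < floor * w.max():
            break
        lo, hi = max(k - window, 0), min(k + window + 1, len(t))
        seg = np.clip(resid[lo:hi], 1e-12, None)   # keep log() defined
        A, mu, s = fit_gaussian_linear(t[lo:hi], seg)
        nodes.append((A, mu, s))
        resid = resid - A * np.exp(-(t - mu) ** 2 / (2.0 * s * s))
    return nodes

# synthetic two-return waveform
t = np.linspace(0.0, 100.0, 401)
w = (1.0 * np.exp(-(t - 40.0) ** 2 / (2.0 * 3.0 ** 2))
     + 0.6 * np.exp(-(t - 60.0) ** 2 / (2.0 * 4.0 ** 2)))
nodes = iterative_decompose(t, w)
centers = sorted(mu for _, mu, _ in nodes)
```

On this noise-free waveform the two node centers are recovered essentially exactly; real waveforms would need the subsequent non-linear optimization the abstract describes.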
16. Thermodynamic potential of the periodic Anderson model with the X-boson method: chain approximation
Franco, R.; Figueira, M. S.; Foglio, M. E.
2002-05-01
The periodic Anderson model (PAM) in the U→∞ limit has been studied in a previous work employing the cumulant expansion with the hybridization as perturbation (Figueira et al., Phys. Rev. B 50 (1994) 17 933). When the total number of electrons Nt is calculated as a function of the chemical potential μ in the “chain approximation” (CHA), there are three values of the chemical potential μ for each Nt in a small interval of Nt at low T (Physica A 208 (1994) 279). We have recently introduced the “X-boson” method, inspired in the slave boson technique of Coleman, that solves the problem of nonconservation of probability (completeness) in the CHA as well as removing the spurious phase transitions that appear with the slave boson method in the mean field approximation. In the present paper, we show that the X-boson method solves also the problem of the multiple roots of Nt( μ) that appear in the CHA.
17. Applying the approximation method PAINT and the interactive method NIMBUS to the multiobjective optimization of operating a wastewater treatment plant
Hartikainen, Markus E.; Ojalehto, Vesa; Sahlstedt, Kristian
2015-03-01
Using an interactive multiobjective optimization method called NIMBUS and an approximation method called PAINT, preferable solutions to a five-objective problem of operating a wastewater treatment plant are found. The decision maker giving preference information is an expert in wastewater treatment plant design at the engineering company Pöyry Finland Ltd. The wastewater treatment problem is computationally expensive and requires running a simulator to evaluate the values of the objective functions. This often leads to problems with interactive methods as the decision maker may get frustrated while waiting for new solutions to be computed. Thus, a newly developed PAINT method is used to speed up the iterations of the NIMBUS method. The PAINT method interpolates between a given set of Pareto optimal outcomes and constructs a computationally inexpensive mixed integer linear surrogate problem for the original wastewater treatment problem. With the mixed integer surrogate problem, the time required from the decision maker is comparatively short. In addition, a new IND-NIMBUS® PAINT module is developed to allow the smooth interoperability of the NIMBUS method and the PAINT method.
18. Adding method of delta-four-stream spherical harmonic expansion approximation for infrared radiative transfer parameterization
Wu, Kun; Zhang, Feng; Min, Jinzhong; Yu, Qiu-Run; Wang, Xin-Yue; Ma, Leiming
2016-09-01
The adding method, which can calculate infrared radiative transfer (IRT) in an inhomogeneous atmosphere with multiple layers, has been applied to the δ-four-stream discrete-ordinates method (DOM). This scheme is referred to as δ-4DDA. However, the adding method has so far not been applied to the δ-four-stream spherical harmonic expansion approximation (SHM) for solving infrared radiative transfer through multiple layers. In this paper, the adding method for δ-four-stream SHM (δ-4SDA) is derived and its accuracy is evaluated. The result of δ-4SDA in an idealized medium with homogeneous optical properties is significantly more accurate than that of the adding method for δ-two-stream DOM (δ-2DDA). The relative errors of δ-2DDA can be over 15% at thin optical depths for downward emissivity, while the errors of δ-4SDA are bounded by 2%. However, the result of δ-4SDA is slightly less accurate than that of δ-4DDA. In a radiation model with a realistic atmospheric profile including gaseous transmission, the heating-rate accuracy of δ-4SDA is significantly superior to that of δ-2DDA, especially for the cloudy sky. The heating-rate accuracy of δ-4SDA is slightly lower than that of δ-4DDA under water cloud conditions, while it is superior to that of δ-4DDA in ice cloud cases. Besides, the computational efficiency of δ-4SDA is higher than that of δ-4DDA.
19. Improved locality-sensitive hashing method for the approximate nearest neighbor problem
Lu, Ying-Hua; Ma, Ting-Huai; Zhong, Shui-Ming; Cao, Jie; Wang, Xin; Abdullah, Al-Dhelaan
2014-08-01
In recent years, the nearest neighbor search (NNS) problem has been widely used in various interesting applications. Locality-sensitive hashing (LSH), a popular algorithm for the approximate nearest neighbor problem, is proved to be an efficient method to solve the NNS problem in high-dimensional and large-scale databases. Based on the scheme of p-stable LSH, this paper introduces a novel improved algorithm called randomness-based locality-sensitive hashing (RLSH). Our proposed algorithm modifies the query strategy: it randomly selects a certain hash table into which to project the query point, instead of mapping the query point into all hash tables during the nearest neighbor query, and reconstructs the candidate points for finding the nearest neighbors. This improved strategy ensures that RLSH spends less time searching for the nearest neighbors than the p-stable LSH algorithm while keeping a high recall. Besides, this strategy is shown to promote the diversity of the candidate points even with fewer hash tables. Experiments are executed on a synthetic dataset and an open dataset. The results show that our method requires less time and less space than p-stable LSH while achieving the same recall.
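A minimal sketch of the scheme: Gaussian (2-stable) projections quantized with bucket width w form each table key, and a query probes one randomly chosen table first, falling back to the remaining tables only when the bucket is empty. The table count, bit count and w below are illustrative, and the candidate-reconstruction step of RLSH is simplified away:

```python
import numpy as np

class PStableLSH:
    """p-stable (Gaussian, p=2) LSH. Each bit is h(v) = floor((a.v + b)/w);
    n_bits hashes are concatenated into one table key. Queries probe a
    single randomly chosen table first (the RLSH-style strategy), falling
    back to the remaining tables only if the bucket is empty."""

    def __init__(self, dim, n_tables=10, n_bits=4, w=2.0, seed=3):
        self.rng = np.random.default_rng(seed)
        self.w = w
        self.a = self.rng.standard_normal((n_tables, n_bits, dim))
        self.b = self.rng.uniform(0.0, w, size=(n_tables, n_bits))
        self.tables = [{} for _ in range(n_tables)]
        self.data = None

    def _key(self, t, v):
        return tuple(np.floor((self.a[t] @ v + self.b[t]) / self.w).astype(int))

    def index(self, X):
        self.data = np.asarray(X)
        for t, table in enumerate(self.tables):
            for i, v in enumerate(self.data):
                table.setdefault(self._key(t, v), []).append(i)

    def query(self, q):
        for t in self.rng.permutation(len(self.tables)):
            cand = self.tables[t].get(self._key(t, q), [])
            if cand:                       # usually the first table suffices
                d = np.linalg.norm(self.data[cand] - q, axis=1)
                return cand[int(np.argmin(d))]
        return None

rng = np.random.default_rng(4)
dim = 8
X = np.vstack([rng.normal(0.0, 0.1, (50, dim)),     # cluster A near the origin
               rng.normal(10.0, 0.1, (50, dim))])   # cluster B far away
lsh = PStableLSH(dim)
lsh.index(X)
nn = lsh.query(np.zeros(dim))
```

Because only one bucket of one table is scanned in the common case, query cost stays far below probing all tables, which is the saving the abstract measures.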
20. Conjugate gradient and approximate Newton methods for an optimal probablilistic neural network for food color classification
Chtioui, Younes; Panigrahi, Suranjan; Marsh, Ronald A.
1998-11-01
The probabilistic neural network (PNN) is based on the estimation of probability density functions. The estimation of these density functions uses smoothing parameters that represent the width of the activation functions. A two-step numerical procedure is developed for the optimization of the smoothing parameters of the PNN: a rough optimization by the conjugate gradient method and a fine optimization by the approximate Newton method. The aim is to compare the classification performance of the improved PNN and the standard back-propagation neural network (BPNN). Comparisons are performed on a food quality problem: french fry classification into three different color classes (light, normal, and dark). The optimized PNN correctly classifies 96.19% of the test data, whereas the BPNN classifies only 93.27% of the same data. Moreover, the PNN is more stable than the BPNN with regard to random initialization. The optimized PNN requires 1464 s for training compared to only 71 s required by the BPNN.
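The PNN itself is short to state: each class score is a Parzen-window density estimate whose kernel width is the smoothing parameter being optimized above. A sketch with a fixed, hand-picked σ on synthetic three-class 'color' data (the two-step conjugate-gradient/Newton optimization of σ is not reproduced here):

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma):
    """Probabilistic neural network: the score of each class is a
    Parzen-window estimate of the class-conditional density with Gaussian
    kernels of width sigma (the smoothing parameter)."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1))
    return classes[np.argmax(np.stack(scores), axis=0)]

rng = np.random.default_rng(5)
# three well-separated classes in a 2-D feature space (illustrative stand-in
# for the light/normal/dark color features)
means = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
X = np.vstack([rng.normal(m, 0.5, (60, 2)) for m in means])
y = np.repeat([0, 1, 2], 60)
idx = rng.permutation(180)
train, test = idx[:120], idx[120:]
pred = pnn_classify(X[train], y[train], X[test], sigma=0.5)
accuracy = (pred == y[test]).mean()
```

A badly chosen σ (much larger or smaller than the class spread) visibly degrades `accuracy`, which is why tuning it numerically pays off.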
1. A new heuristic method for approximating the number of local minima in partial RNA energy landscapes.
PubMed
Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen
2016-02-01
The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. Our heuristic method produces for the best approximations on average a deviation below 3.0% from the true number of local minima.
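The general idea of estimating the number of local minima from repeated descents can be shown on a toy landscape. The code below uses a Chao1-style capture-recapture correction as an illustrative stand-in; it is not the Granier-Kallel estimator, the paper's improved heuristic, or an RNA landscape, only the sampling-and-descent skeleton they share:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)

# toy landscape: i.i.d. random energies on a ring of n states; the
# neighbours of state i are i-1 and i+1
n = 1000
E = rng.random(n)

def descend(i):
    """Greedy descent to a local minimum of E on the ring."""
    while True:
        j = min((i - 1) % n, (i + 1) % n, key=lambda k: E[k])
        if E[j] >= E[i]:
            return i
        i = j

true_minima = sum(1 for i in range(n)
                  if E[i] < E[(i - 1) % n] and E[i] < E[(i + 1) % n])

# sample random starting states and record which minimum each descent reaches
samples = Counter(descend(int(rng.integers(n))) for _ in range(400))
s_obs = len(samples)                                   # distinct minima seen
f1 = sum(1 for c in samples.values() if c == 1)        # seen exactly once
f2 = sum(1 for c in samples.values() if c == 2)        # seen exactly twice
# Chao1-style capture-recapture correction for minima not yet sampled
estimate = s_obs + (f1 * f1 / (2.0 * f2) if f2 > 0 else f1 * (f1 - 1) / 2.0)
```

The raw count `s_obs` always undershoots the true number of basins; the correction uses the singleton and doubleton counts to extrapolate how many basins remain unseen.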
Nonadaptive methods for polyhedral approximation of the Edgeworth-Pareto hull using suboptimal coverings on the direction sphere
Lotov, A. V.; Maiskaya, T. S.
2012-01-01
For multicriteria convex optimization problems, new nonadaptive methods are proposed for polyhedral approximation of the multidimensional Edgeworth-Pareto hull (EPH), which is a maximal set having the same Pareto frontier as the set of feasible criteria vectors. The methods are based on evaluating the support function of the EPH for a collection of directions generated by a suboptimal covering on the unit sphere. Such directions are constructed in advance by applying an asymptotically effective adaptive method for the polyhedral approximation of convex compact bodies, namely, by the estimate refinement method. Due to the a priori definition of the directions, the proposed EPH approximation procedure can easily be implemented with parallel computations. Moreover, the use of nonadaptive methods considerably simplifies the organization of EPH approximation on the Internet. Experiments with an applied problem (from 3 to 5 criteria) showed that the methods are fairly similar in characteristics to adaptive methods. Therefore, they can be used in parallel computations and on the Internet.
3. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable, integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
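Exponential fitting of the trapezoidal rule can be illustrated on a linear stiff test problem: the θ-weight of a one-step θ-scheme is chosen so that the step map reproduces e^{hλ} exactly, which removes the slowly damped oscillation the standard trapezoidal rule produces on a stiff transient. The test problem and step size below are illustrative, not from the report:

```python
import numpy as np

lam = -1000.0                  # stiffness
h = 0.1
z = h * lam

def theta_fitted(z):
    """Weight that makes the one-step theta scheme reproduce exp(z)
    exactly (exponential fitting); theta -> 1/2 (trapezoidal rule)
    as z -> 0 and theta -> 1 (backward Euler) as z -> -infinity."""
    return (np.exp(z) - 1.0 - z) / (z * (np.exp(z) - 1.0))

def integrate(theta, y0, t_end=1.0):
    """Theta scheme for the stiff linear test problem
    y' = lam*(y - sin t) + cos t, exact solution sin t + exp(lam*t)*y0."""
    steps = int(round(t_end / h))
    y, t = y0, 0.0
    for _ in range(steps):
        t1 = t + h
        g0 = -lam * np.sin(t) + np.cos(t)
        g1 = -lam * np.sin(t1) + np.cos(t1)
        y = (y * (1.0 + (1.0 - theta) * h * lam)
             + h * (theta * g1 + (1.0 - theta) * g0)) / (1.0 - theta * h * lam)
        t = t1
    return y

exact = np.sin(1.0)            # the transient exp(lam*t) is ~0 at t = 1
err_trap = abs(integrate(0.5, y0=1.0) - exact)
err_fit = abs(integrate(theta_fitted(z), y0=1.0) - exact)
```

The standard trapezoidal rule damps the stiff transient only by a factor close to 1 per step, so its error at t = 1 is dominated by the surviving oscillation, while the fitted scheme annihilates the transient in one step.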
5. Path integral molecular dynamics method based on a pair density matrix approximation: An algorithm for distinguishable and identical particle systems
Miura, Shinichi; Okazaki, Susumu
2001-09-01
In this paper, the path integral molecular dynamics (PIMD) method has been extended to employ an efficient approximation of the path action referred to as the pair density matrix approximation. Configurations of the isomorphic classical systems were dynamically sampled by introducing fictitious momenta as in the PIMD based on the standard primitive approximation. The indistinguishability of the particles was handled by a pseudopotential of particle permutation that is an extension of our previous one [J. Chem. Phys. 112, 10 116 (2000)]. As a test of our methodology for Boltzmann statistics, calculations have been performed for liquid helium-4 at 4 K. We found that the PIMD with the pair density matrix approximation dramatically reduced the computational cost to obtain the structural as well as dynamical (using the centroid molecular dynamics approximation) properties at the same level of accuracy as that with the primitive approximation. With respect to the identical particles, we performed the calculation of a bosonic triatomic cluster. Unlike the primitive approximation, the pseudopotential scheme based on the pair density matrix approximation described well the bosonic correlation among the interacting atoms. Convergence with a small number of discretization of the path achieved by this approximation enables us to construct a method of avoiding the problem of the vanishing pseudopotential encountered in the calculations by the primitive approximation.
6. Simple finite element methods for approximating predator-prey dynamics in two dimensions using MATLAB.
PubMed
Garvie, Marcus R; Burkardt, John; Morgan, Jeff
2015-03-01
We describe simple finite element schemes for approximating spatially extended predator-prey dynamics with the Holling type II functional response and logistic growth of the prey. The finite element schemes generalize 'Scheme 1' in the paper by Garvie (Bull Math Biol 69(3):931-956, 2007). We present user-friendly, open-source MATLAB code for implementing the finite element methods on arbitrary-shaped two-dimensional domains with Dirichlet, Neumann, Robin, mixed Robin-Neumann, mixed Dirichlet-Neumann, and Periodic boundary conditions. Users can download, edit, and run the codes from http://www.uoguelph.ca/~mgarvie/ . In addition to discussing the well-posedness of the model equations, the results of numerical experiments are presented and demonstrate the crucial role that habitat shape, initial data, and the boundary conditions play in determining the spatiotemporal dynamics of predator-prey interactions. As most previous works on this problem have focussed on square domains with standard boundary conditions, our paper makes a significant contribution to the area.
7. A fault diagnosis system for PV power station based on global partitioned gradually approximation method
Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.
2016-08-01
As solar photovoltaic (PV) power is applied extensively, more attention is paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults of PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used for calculating the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checking, statistics, real-time prediction, and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and that the system works well, confirming the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
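The block-mean thresholding idea can be sketched as follows (a minimal illustration; the name `fault_blocks`, the ratio-based weight factor, and the threshold value are assumptions, since the paper's exact formula is not given here):

```python
import numpy as np

def fault_blocks(currents, threshold=0.8):
    # currents: 16x16 array of per-block mean currents.
    # A block is flagged when its fault weight factor, taken here as the
    # ratio of its mean current to the array-wide mean, drops below the
    # threshold. Shading compensation is omitted in this sketch.
    weight = currents / currents.mean()
    return np.argwhere(weight < threshold)

arr = np.full((16, 16), 5.0)
arr[3, 7] = 2.0               # simulated faulty block
faults = fault_blocks(arr)    # locates block (3, 7)
```

A real deployment would also track the factor over time to separate transient shading from persistent panel faults.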
8. The optical properties of tropospheric soot aggregates determined with the DDA (Discrete Dipole Approximation) method
Skorupski, Krzysztof
2015-06-01
Black carbon particles interact with organic and inorganic matter soon after emission. The primary goal of this work was to estimate the accuracy of the DDA method in determining the optical properties of such composites. For the light-scattering simulations the ADDA code was selected, and the superposition T-Matrix code by Mackowski was used as the reference algorithm. The first part of the study was to compare alternative models of a single primary particle. When only one material is considered, the largest averaged relative extinction error is associated with black carbon (δCext ≈ 2.8%). However, for inorganic and organic matter it is lowered to δCext ≈ 0.75%. There is no significant difference between spheres and ellipsoids with the same volume, and therefore both can be used interchangeably. The next step was to investigate aggregates composed of Np = 50 primary particles. When the coating is omitted, the averaged relative extinction error is δCext ≈ 2.6%. Otherwise, it can be lower than δCext < 0.2%.
9. On the enhancement of the approximation order of triangular Shepard method
Dell'Accio, Francesco; Di Tommaso, Filomena; Hormann, Kai
2016-10-01
Shepard's method is a well-known technique for interpolating large sets of scattered data. The classical Shepard operator reconstructs an unknown function as a normalized blend of the function values at the scattered points, using the inverse distances to the scattered points as weight functions. Based on the general idea of defining interpolants by convex combinations, Little suggested to extend the bivariate Shepard operator in two ways. On the one hand, he considers a triangulation of the scattered points and substitutes function values with linear polynomials which locally interpolate the given data at the vertices of each triangle. On the other hand, he modifies the classical point-based weight functions and defines instead a normalized blend of the locally interpolating polynomials with triangle-based weight functions which depend on the product of inverse distances to the three vertices of the corresponding triangle. The resulting triangular Shepard operator interpolates all data required for its definition and reproduces polynomials up to degree 1, whereas the classical Shepard operator reproduces only constants, and has quadratic approximation order. In this paper we discuss an improvement of the triangular Shepard operator.
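The classical point-based Shepard operator that the triangular variant improves upon can be sketched in a few lines (an illustrative implementation; the names and the power p = 2 are choices for this example, not taken from the paper):

```python
import numpy as np

def shepard(xy, f, q, p=2):
    # Classical Shepard interpolant: a convex combination of the data
    # values f_i with inverse-distance weights |q - x_i|^(-p).
    d = np.linalg.norm(xy - q, axis=1)
    if np.any(d == 0):                 # query coincides with a data point
        return float(f[np.argmin(d)])
    w = d ** (-p)
    return float(np.dot(w, f) / w.sum())

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 3.0, 5.0])
# Equidistant query: all weights equal, so the result is the plain mean.
z = shepard(pts, vals, np.array([0.5, 0.5]))
```

Because the weights blend raw function values, this operator reproduces only constants; replacing the values with locally interpolating linear polynomials and using triangle-based weights is exactly what lifts the reproduction degree to 1.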
10. Heats of Segregation of BCC Metals Using Ab Initio and Quantum Approximate Methods
NASA Technical Reports Server (NTRS)
Good, Brian; Chaka, Anne; Bozzolo, Guillermo
2003-01-01
Many multicomponent alloys exhibit surface segregation, in which the composition at or near a surface may be substantially different from that of the bulk. A number of phenomenological explanations for this tendency have been suggested, involving, among other things, differences among the components' surface energies, molar volumes, and heats of solution. From a theoretical standpoint, the complexity of the problem has precluded a simple, unified explanation, thus preventing the development of computational tools that would enable the identification of the driving mechanisms for segregation. In that context, we investigate the problem of surface segregation in a variety of bcc metal alloys by computing dilute-limit heats of segregation using both the quantum-approximate energy method of Bozzolo, Ferrante and Smith (BFS), and all-electron density functional theory. In addition, the composition dependence of the heats of segregation is investigated using a BFS-based Monte Carlo procedure, and, for selected cases of interest, density functional calculations. Results are discussed in the context of a simple picture that describes segregation behavior as the result of a competition between size mismatch and alloying effects.
11. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
DOE PAGES
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
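The reuse idea behind identical substeps can be sketched with a lower-order stand-in (a (1,1) Padé, Crank-Nicolson-like, rational approximation of the matrix exponential rather than CRAM itself; the function name and the two-nuclide chain are illustrative assumptions):

```python
import numpy as np

def depletion_step(A, n0, dt, substeps=64):
    # Advance dn/dt = A n over dt in `substeps` identical sub-steps.
    # The sub-step propagator is built (and factored) once and applied
    # repeatedly, mirroring how CRAM reuses its LU decompositions when
    # every substep has the same step size and reaction rates.
    h = dt / substeps
    I = np.eye(len(n0))
    P = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)  # built once
    n = n0
    for _ in range(substeps):
        n = P @ n                                          # reused
    return n

# Two-nuclide decay chain n1 -> n2 with decay constant 1.
A = np.array([[-1.0, 0.0], [1.0, 0.0]])
n = depletion_step(A, np.array([1.0, 0.0]), dt=1.0)
```

Since the column sums of A vanish, the sub-stepped rational propagator conserves the total nuclide number exactly, and shrinking the substep drives the end-of-step error down just as described above.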
12. A Simple, Approximate Method for Analysis of Kerr-Newman Black Hole Dynamics and Thermodynamics
Pankovic, V.; Ciganovic, S.; Glavatovic, R.
2009-06-01
In this work we present a simple approximate method for the analysis of the basic dynamical and thermodynamical characteristics of the Kerr-Newman black hole. Instead of the complete dynamics of the black hole self-interaction, we consider only the stable (stationary) dynamical situations, determined by the condition that the black hole (outer) horizon "circumference" holds an integer number of the reduced Compton wavelengths corresponding to the mass spectrum of a small quantum system (representing the quantum of the black hole self-interaction). Then, we show that the Kerr-Newman black hole entropy represents simply the ratio of the sum of the static part and rotation part of the black hole mass, on one hand, and the ground mass of the small quantum system, on the other. Also we show that the Kerr-Newman black hole temperature represents the negative value of the classical potential energy of gravitational interaction between a part of the black hole with reduced mass and a small quantum system in the ground-mass quantum state. Finally, we suggest a bosonic grand canonical distribution of the statistical ensemble of the given small quantum systems in thermodynamical equilibrium with the (macroscopic) black hole as thermal reservoir. We suggest that, practically, only the ground-mass quantum state is significantly degenerate, while all the other, excited mass quantum states are non-degenerate. The Kerr-Newman black hole entropy is practically equivalent to the ground-mass quantum state degeneration. The given statistical distribution admits a rough (qualitative) but simple modeling of the Hawking radiation of the black hole as well.
13. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so as to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched-for parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of ABC. In the presented algorithm, the input data are the online-arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to observable data should be found.
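The accept/reject core of ABC can be illustrated with a plain rejection sampler (the paper uses a sequential variant; the names, the toy source-strength model, and the tolerance here are assumptions for illustration):

```python
import random

def abc_rejection(observed, simulate, prior_sample, eps, n_draws=20000):
    # Minimal rejection-ABC: draw theta from the prior, simulate data,
    # and keep theta when the simulated summary lies within eps of the
    # observed one. The accepted draws approximate the posterior.
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(0)
# Toy problem: observed concentration 5.0; the "model" maps the source
# strength theta straight to a noisy concentration reading.
post = abc_rejection(5.0,
                     simulate=lambda t: t + random.gauss(0.0, 0.1),
                     prior_sample=lambda: random.uniform(0.0, 10.0),
                     eps=0.2)
mean_post = sum(post) / len(post)
```

Sequential ABC improves on this by shrinking eps over generations and proposing from the previous population, which is what makes the method practical for multi-parameter source reconstruction.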
14. Analysis of iterative methods for the viscous/inviscid coupled problem via a spectral element approximation
Xu, Chuanju; Lin, Yumin
2000-03-01
Based on a new global variational formulation, a spectral element approximation of the incompressible Navier-Stokes/Euler coupled problem gives rise to a global discrete saddle problem. The classical Uzawa algorithm decouples the original saddle problem into two positive definite symmetric systems. Iterative solutions of such systems are feasible and attractive for large problems. It is shown that, provided an appropriate preconditioner is chosen for the pressure system, nested conjugate gradient methods can be applied to obtain rapid convergence rates. Detailed numerical examples are given to demonstrate the quality of the preconditioner. Thanks to the rapid iterative convergence, the global Uzawa algorithm has an advantage over classical iteration-by-subdomain procedures. Furthermore, a generalization of the preconditioned iterative algorithm to flow simulation is carried out. Comparisons of computational complexity between the Navier-Stokes/Euler coupled solution and the full Navier-Stokes solution are made. It is shown that the gain obtained by using the Navier-Stokes/Euler coupled solution is generally considerable.
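The classical Uzawa decoupling mentioned above can be sketched on a tiny dense saddle system (a generic illustration with a fixed step length; the names and the unpreconditioned gradient update are assumptions, not the paper's preconditioned spectral element solver):

```python
import numpy as np

def uzawa(A, B, f, g, alpha=1.0, iters=200):
    # Classical Uzawa iteration for the saddle problem
    #   [A  B^T][u]   [f]
    #   [B  0  ][p] = [g]
    # Each sweep solves the SPD system A u = f - B^T p, then updates the
    # multiplier p by a gradient step on the constraint residual B u - g.
    p = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)
        p = p + alpha * (B @ u - g)
    return u, p

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD velocity block
B = np.array([[1.0, 1.0]])               # one scalar constraint
u, p = uzawa(A, B, f=np.array([1.0, 2.0]), g=np.array([0.5]))
```

In the paper's setting the inner solve is itself a conjugate gradient iteration, and the outer update is preconditioned so that the pressure system converges at a mesh-independent rate.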
15. Best approximation to the weighted slab method [for solution of the full equation for interstellar cosmic-ray nuclei propagation]
NASA Technical Reports Server (NTRS)
Jones, Frank C.
1991-01-01
The weighted-slab method is modified so that, although it is still not exact, it gives the 'best' approximation in a minimization sense when energy loss cannot be neglected. In this approximation the species-dependent energy-change term that operates on the path-length distribution is 'averaged' over the slab-model solution for that particular energy and path length.
16. An angularly refineable phase space finite element method with approximate sweeping procedure
SciTech Connect
Kophazi, J.; Lathouwers, D.
2013-07-01
An angularly refineable phase space finite element method is proposed to solve the neutron transport equation. The method combines the advantages of two recently published schemes. The angular domain is discretized into small patches and patch-wise discontinuous angular basis functions are restricted to these patches, i.e. there is no overlap between basis functions corresponding to different patches. This approach yields block diagonal Jacobians with small block size and retains the possibility for S_n-like approximate sweeping of the spatially discontinuous elements in order to provide efficient preconditioners for the solution procedure. On the other hand, the preservation of the full FEM framework (as opposed to collocation into a high-order S_n scheme) retains the possibility of the Galerkin interpolated connection between phase space elements at arbitrary levels of discretization. Since the basis vectors are not orthonormal, a generalization of the Riemann procedure is introduced to separate the incoming and outgoing contributions in case of unstructured meshes. However, due to the properties of the angular discretization, the Riemann procedure can be avoided at a large fraction of the faces and this fraction rapidly increases as the level of refinement increases, contributing to the computational efficiency. In this paper the properties of the discretization scheme are studied with uniform refinement using an iterative solver based on the S_2 sweep order of the spatial elements. The fourth order convergence of the scalar flux is shown as anticipated from earlier schemes and the rapidly decreasing fraction of required Riemann faces is illustrated.
17. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process
PubMed Central
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-01-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570
18. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.
PubMed
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
19. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process
SciTech Connect
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
20. Rapid and Robust Cross-Correlation-Based Seismic Phase Identification Using an Approximate Nearest Neighbor Method
Tibi, R.; Young, C. J.; Gonzales, A.; Ballard, S.; Encarnacao, A. V.
2016-12-01
The matched filtering technique involving the cross-correlation of a waveform of interest with archived signals from a template library has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive, and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this study, we introduce an Approximate Nearest Neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation without requiring a complex distributed computing system. Our method begins with a projection into a reduced dimensionality space based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors is accomplished by using randomized K-dimensional trees. We used the approach to search for matches to each of 2700 analyst-reviewed signal detections reported for May 2010 for the IMS station MKAR. The template library in this case consists of a dataset of more than 200,000 analyst-reviewed signal detections for the same station from 2002-2014 (excluding May 2010). Of these signal detections, 60% are teleseismic first P, and 15% regional phases (Pn, Pg, Sn, and Lg). The analyses performed on a standard desktop computer show that the proposed approach performs the search of the large template libraries about 20 times faster than the standard full linear search, while achieving recall rates greater than 80%, with the recall rate increasing for higher correlation values. To decide whether to confirm a match, we use a hybrid method involving a cluster approach for queries with two or more matches, and correlation score for single matches. Of the signal detections that passed our confirmation process, 52% were teleseismic first P, and 30% were regional phases.
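The projection step can be sketched as follows (a toy illustration; the helper names, the zero-lag correlation, and the brute-force ranking in place of randomized k-d trees are all assumptions made for brevity):

```python
import numpy as np

def ncc(a, b):
    # Zero-lag normalized cross-correlation of two equal-length traces.
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b))

def ann_search(templates, query, n_proj=8, seed=0):
    # Embed every template (and the query) as its vector of correlations
    # with a small random subset of the archive, then rank by distance in
    # that reduced space. A randomized k-d tree over the embeddings would
    # replace the brute-force argmin in a real implementation.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(templates), size=n_proj, replace=False)
    embed = lambda w: np.array([ncc(w, templates[i]) for i in idx])
    E = np.array([embed(t) for t in templates])
    d = np.linalg.norm(E - embed(query), axis=1)
    return int(np.argmin(d))

rng = np.random.default_rng(1)
lib = [rng.standard_normal(100) for _ in range(32)]
noisy = lib[17] + 0.05 * rng.standard_normal(100)
best = ann_search(lib, noisy)
```

The payoff is that each query needs only n_proj full correlations plus a cheap search in the reduced space, rather than one correlation per template.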
1. An approximation solution of a nonlinear equation with Riemann-Liouville's fractional derivatives by He's variational iteration method
Abbasbandy, S.
2007-10-01
In this article, an application of He's variational iteration method is proposed to approximate the solution of a nonlinear fractional differential equation with Riemann-Liouville's fractional derivatives. Also, the results are compared with those obtained by Adomian's decomposition method and truncated series method. The results reveal that the method is very effective and simple.
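For orientation, the generic correction functional behind the variational iteration method can be written as follows (the ordinary-derivative form only; the paper replaces the linear operator with one involving the Riemann-Liouville fractional derivative):

```latex
u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)\,\bigl[\mathcal{L}u_n(s) + \mathcal{N}\tilde{u}_n(s) - g(s)\bigr]\,ds
```

where \(\mathcal{L}\) and \(\mathcal{N}\) are the linear and nonlinear parts of the equation, \(g\) the source term, \(\lambda\) a Lagrange multiplier identified via variational theory, and \(\tilde{u}_n\) a restricted variation.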
2. High-Speed Decision Method of Combination of Risk-reducing Plans using Approximately Bounding on Branch and Bound
Imanara, Yuuki; Kawaratani, Keisuke; Samejima, Masaki; Akiyoshi, Masanori; Sasaki, Ryoichi
This paper addresses the problem of quickly deciding the combination of risk-reducing plans. The combinatorial problem is formulated as 0-1 integer programming, and Branch and Bound is used. However, the Simplex method executed within Branch and Bound takes considerable time. Our proposed method decides the optimal combination based on approximation algorithms, namely a greedy algorithm and single-constraint selection, in addition to the Simplex method. Only if bounding by the approximate algorithms leads to incorrect optimal solutions is the Simplex method executed to verify the bounding. As a result of evaluation experiments, the proposed method reduces the computational time by 71% in comparison with the existing method.
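A standard example of such an approximate bound is the fractional greedy bound for a single-constraint relaxation (a textbook sketch, not the paper's exact bounding rule; the names and data are illustrative):

```python
def greedy_upper_bound(values, costs, budget):
    # Fractional greedy bound used to prune 0-1 branch-and-bound nodes:
    # take items in value/cost order, allowing a fraction of the last
    # one. It never underestimates the best 0-1 objective, so a node
    # whose bound falls below the incumbent can be pruned safely.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / costs[i], reverse=True)
    total, remaining = 0.0, budget
    for i in order:
        take = min(1.0, remaining / costs[i])
        total += take * values[i]
        remaining -= take * costs[i]
        if remaining <= 0:
            break
    return total

# Three plans: the 0-1 optimum is 220, and the greedy bound is 240.
bound = greedy_upper_bound([60, 100, 120], [10, 20, 30], budget=50)
```

Because the bound is cheap, the expensive LP (Simplex) solve is only needed when the approximate bound alone cannot justify pruning, which is exactly the scheme described above.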
3. A novel window based method for approximating the Hausdorff in 3D range imagery.
SciTech Connect
Koch, Mark William
2004-10-01
Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
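The quantity being approximated can be pinned down with a brute-force reference (an O(|model| x |scene|) sketch; the function name and the voxelization remark are assumptions, as the paper's windowed linear-time scheme is not reproduced here):

```python
import numpy as np

def hausdorff_fraction(model, scene, tau):
    # Fraction of model points whose nearest scene point lies within tau.
    # Brute force; the windowed method obtains the same quantity in
    # linear time by binning the scene into a 3D occupancy grid.
    d = np.linalg.norm(model[:, None, :] - scene[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= tau))

scene = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
model = np.array([[0, 0, 0.05], [1, 0, 0.05], [5, 5, 5]], dtype=float)
frac = hausdorff_fraction(model, scene, tau=0.1)   # 2 of 3 points match
```

Thresholding this fraction, rather than the raw Hausdorff distance, is what gives the measure its tolerance to obscuration and clutter: outlier model points lower the fraction but do not dominate it.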
4. High-order-harmonic spectra from atoms in intense laser fields: Exact versus approximate methods
Pugliese, S. N.; Simonsen, A. S.; Førre, M.; Hansen, J. P.
2015-08-01
We compare harmonic spectra from hydrogen based on the numerical solution of the time-dependent Schrödinger equation and three approximate models: (i) the strong field approximation (SFA), (ii) the Coulomb-Volkov modified strong field approximation (CVA), and (iii) the strong field approximation with the stationary phase approximation applied to the momentum integrals (SPSFA). At laser intensities in the range of (1-3)×10^14 W/cm^2 we find good agreement when comparing the SFA and CVA with exact results. In general the CVA displays an overall better agreement with ab initio results, which reflects the role of the Coulomb field in the ionization as well as in the recombination process. Furthermore, it is found that the widely used SPSFA breaks down for low-order harmonic generation; i.e., the approximation turns out to be accurate only in the outer part of the harmonic plateau region as well as in the cutoff region. We trace this deficiency to the singularity of the SPSFA associated with short trajectories, i.e., short return times. When removing these, we obtain a version of the SPSFA which works rather well for the entire harmonic spectrum.
5. Rapid and Robust Cross-Correlation-Based Seismic Signal Identification Using an Approximate Nearest Neighbor Method
DOE PAGES
Tibi, Rigobert; Young, Christopher; Gonzales, Antonio; ...
2017-07-04
The matched filtering technique that uses the cross correlation of a waveform of interest with archived signals from a template library has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this paper, we introduce an approximate nearest neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation. Our method begins with a projection into a reduced dimensionality space, based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors for a query waveform is accomplished by iteratively comparing it with the neighbors of its immediate neighbors. We used the approach to search for matches to each of ~2300 analyst-reviewed signal detections reported in May 2010 for the International Monitoring System station MKAR. The template library in this case consists of a data set of more than 200,000 analyst-reviewed signal detections for the same station from February 2002 to July 2016 (excluding May 2010). Of these signal detections, 73% are teleseismic first P and 17% regional phases (Pn, Pg, Sn, and Lg). Finally, the analyses performed on a standard desktop computer show that the proposed ANN approach performs a search of the large template libraries about 25 times faster than the standard full linear search and achieves recall rates greater than 80%, with the recall rate increasing for higher correlation thresholds.
6. Stochastic approximation methods for fusion-rule estimation in multiple sensor systems
SciTech Connect
Rao, N.S.V.
1994-06-01
A system of N sensors S_1, S_2, …, S_N is considered; corresponding to an object with parameter x ∈ ℝ^d, sensor S_i yields output y^(i) ∈ ℝ^d according to an unknown probability distribution p_i(y^(i)|x). A training l-sample (x_1, y_1), (x_2, y_2), …, (x_l, y_l) is given, where y_i = (y_i^(1), y_i^(2), …, y_i^(N)) and y_i^(j) is the output of S_j in response to input x_i. The problem is to estimate a fusion rule f : ℝ^(Nd) → ℝ^d, based on the sample, such that the expected square error I(f) = ∫ [x − f(y^1, y^2, …, y^N)]^2 p(y^1, y^2, …, y^N | x) p(x) dy^1 dy^2 … dy^N dx is minimized over a family of fusion rules Λ based on the given l-sample. Let f_* ∈ Λ minimize I(f); f_* cannot be computed since the underlying probability distributions are unknown. Three stochastic approximation methods are presented to compute an estimate f̂ such that, under suitable conditions, for a sufficiently large sample, P[I(f̂) − I(f_*) > ε] < δ for arbitrarily specified ε > 0 and δ, 0 < δ < 1. The three methods are based on Robbins-Monro style algorithms, empirical risk minimization, and regression estimation algorithms.
7. Optimal approximation method to characterize the resource trade-off functions for media servers
Chang, Ray-I.
1999-08-01
We have proposed an algorithm to smooth the transmission of pre-recorded VBR media streams. It takes O(n) time; when n is large, this algorithm is not suitable for online resource management and admission control in media servers. To resolve this drawback, we have explored the optimal tradeoff among resources with an O(n log n) algorithm. Based on the pre-computed resource tradeoff function, the resource management and admission control procedure is as simple as table hashing. However, this approach requires O(n) space to store and maintain the resource tradeoff function. In this paper, given some extra resources, a linear-time algorithm is proposed to approximate the resource tradeoff function by piecewise line segments. We prove that the number of line segments in the obtained approximation function is minimized for the given extra resources. The proposed algorithm has been applied to approximate the bandwidth-buffer tradeoff function of the real-world Star Wars movie. When an extra 0.1 Mbps of bandwidth is given, the storage space required for the approximation function is over 2000 times smaller than that required for the original function. When an extra 10 KB buffer is given, the storage space for the approximation function is over 2200 times smaller than that required for the original function. The proposed algorithm is useful for resource management and admission control in real-world media servers.
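The piecewise-linear compression idea can be sketched with a simple greedy sweep (illustrative only; the paper proves a minimal segment count for its specific tradeoff-function variant, whereas this is the generic one-pass version with a tolerance standing in for the "extra resources"):

```python
def greedy_segments(points, tol):
    # Greedy sweep over (x, y) samples: extend the current segment while
    # every intermediate point stays within tol of the chord; otherwise
    # close the segment at the previous point and start a new one.
    segs, start = [], 0
    for end in range(2, len(points) + 1):
        x0, y0 = points[start]
        x1, y1 = points[end - 1]
        ok = all(
            abs(y - (y0 + (y1 - y0) * (x - x0) / (x1 - x0))) <= tol
            for x, y in points[start + 1:end - 1]
        )
        if not ok:
            segs.append((points[start], points[end - 2]))
            start = end - 2
    segs.append((points[start], points[-1]))
    return segs

pts = [(0, 0), (1, 1), (2, 2), (3, 1), (4, 0)]   # a tent-shaped function
segs = greedy_segments(pts, tol=0.01)            # two segments suffice
```

Storing only the segment endpoints instead of all n samples is precisely what yields the thousand-fold space reductions quoted above.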
8. A fast accurate approximation method with multigrid solver for two-dimensional fractional sub-diffusion equation
Lin, Xue-lei; Lu, Xin; Ng, Micheal K.; Sun, Hai-Wei
2016-10-01
A fast accurate approximation method with multigrid solver is proposed to solve a two-dimensional fractional sub-diffusion equation. Using the finite difference discretization of the fractional time derivative, a block lower triangular Toeplitz matrix is obtained where each main diagonal block contains a two-dimensional matrix for the Laplacian operator. Our idea is to make use of the block ɛ-circulant approximation via fast Fourier transforms, so that the resulting task is to solve a block diagonal system, where each diagonal block matrix is the sum of a complex scalar times the identity matrix and a Laplacian matrix. We show that the accuracy of the approximation scheme is O(ɛ). Because of the special diagonal block structure, we employ the multigrid method to solve the resulting linear systems. The convergence of the multigrid method is studied. Numerical examples are presented to illustrate the accuracy of the proposed approximation scheme and the efficiency of the proposed solver.
9. Elastic Critical Axial Force for the Torsional-Flexural Buckling of Thin-Walled Metal Members: An Approximate Method
Kováč, Michal
2015-03-01
Thin-walled centrically compressed members with non-symmetrical or mono-symmetrical cross-sections can buckle in a torsional-flexural buckling mode. Vlasov developed a system of governing differential equations for the stability of such member cases. Solving these coupled equations analytically is only possible in simple cases. Therefore, Goľdenvejzer introduced an approximate method for the solution of this system to calculate the critical axial force of torsional-flexural buckling. Moreover, this can also be used in cases of members with various boundary conditions in bending and torsion. This approximate method for the calculation of the critical force has been adopted into norms. Nowadays, we can also solve the governing differential equations by numerical methods, such as the finite element method (FEM). Therefore, in this paper, the results of the approximate method and the FEM were compared to each other, with the FEM considered the reference method. This comparison reveals the discrepancies of the approximate method. Attention was also paid to when and why the discrepancies occur. The approximate method can be used in practice by considering some simplifications, which ensure safe results.
10. A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators
PubMed Central
Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong
2014-01-01
Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are expanded to the interval-valued environment. Furthermore, the properties of this type of rough sets are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. Main properties of these operators are discussed under different interval-valued fuzzy binary relations, and the illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065
11. Approximation methods for inverse problems involving the vibration of beams with tip bodies
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Two cubic spline based approximation schemes for the estimation of structural parameters associated with the transverse vibration of flexible beams with tip appendages are outlined. The identification problem is formulated as a least squares fit to data subject to the system dynamics which are given by a hybrid system of coupled ordinary and partial differential equations. The first approximation scheme is based upon an abstract semigroup formulation of the state equation while a weak/variational form is the basis for the second. Cubic spline based subspaces together with a Rayleigh-Ritz-Galerkin approach were used to construct sequences of easily solved finite dimensional approximating identification problems. Convergence results are briefly discussed and a numerical example demonstrating the feasibility of the schemes and exhibiting their relative performance for purposes of comparison is provided.
12. The Investigation of Optimal Discrete Approximations for Real Time Flight Simulations
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Mcvey, E. S.; Cook, G.; Henderson, K. C.
1976-01-01
The results are presented of an investigation of discrete approximations for real time flight simulation. Major topics discussed include: (1) consideration of the particular problem of approximation of continuous autopilots by digital autopilots; (2) use of Bode plots and synthesis of transfer functions by asymptotic fits in a warped frequency domain; (3) an investigation of the various substitution formulas, including the effects of nonlinearities; (4) use of the Padé approximation to the solution of the matrix exponential arising from the discrete state equations; and (5) an analytical integration of the state equation using interpolated input.
13. The effect of Fisher information matrix approximation methods in population optimal design calculations.
PubMed
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
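The D-criterion mentioned above rewards designs that maximize det(FIM). For intuition, a hedged sketch for simple linear regression (a toy stand-in for the population pharmacometric models studied in the paper) shows why spread support points beat clustered ones:

```python
def fim_linear(xs, sigma2=1.0):
    """Fisher information matrix for y = a + b*x + eps, eps ~ N(0, sigma2):
    FIM = X^T X / sigma2, assembled here as a plain 2x2 list of lists."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return [[n / sigma2, sx / sigma2], [sx / sigma2, sxx / sigma2]]

def d_criterion(m):
    # determinant of a 2x2 matrix; D-optimal design maximizes this
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# two candidate four-sample designs on [0, 1]
clustered = [0.4, 0.5, 0.5, 0.6]
spread = [0.0, 0.0, 1.0, 1.0]  # duplicated endpoints: the D-optimal design for a line
```

Comparing `d_criterion(fim_linear(spread))` with `d_criterion(fim_linear(clustered))` reproduces, in miniature, the paper's observation that design quality depends strongly on how the FIM is built and where support points land.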
14. Development of approximate method to analyze the characteristics of latent heat thermal energy storage system
SciTech Connect
Saitoh, T.S.; Hoshi, Akira
1999-07-01
numerical methods (e.g. Saitoh and Kato, 1994). In addition, close-contact melting heat transfer characteristics including melt flow in the liquid film under inner wall temperature distribution were analyzed and simple approximate equations were already presented by Saitoh and Hoshi (1997). In this paper, the authors will propose an analytical solution on combined close-contact and natural convection melting in horizontal cylindrical and spherical capsules, which is useful for the practical capsule bed LHTES system.
15. Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
16. New results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
Warrick et al. (2007) proposed an approximate solution to the two-dimensional Richards equation, which can be used to estimate furrow infiltration based on soil physical properties. The equation computes infiltration as the sum of one-dimensional infiltration and a term labeled the edge effe...
17. Stochastic Approximation Methods for Latent Regression Item Response Models. Research Report. ETS RR-09-09
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2009-01-01
This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…
18. Existence and uniqueness results for neural network approximations.
PubMed
Williamson, R C; Helmke, U
1995-01-01
Some approximation-theoretic questions concerning a certain class of neural networks are considered. The networks considered are single-input, single-output, single-hidden-layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights, but with hidden-layer thresholds and output-layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Padé approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.
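The network class in this abstract (single input, no input weights, hidden thresholds, output weights) has the form f(x) = sum_i w_i * sigmoid(x + t_i). The sketch below fits such a network to a sine by plain gradient descent; the training setup is illustrative only, since the paper studies best-approximation theory rather than this fitting procedure:

```python
import math
import random

def net(x, w, t):
    # single-input network with no input weights: f(x) = sum_i w_i * sigmoid(x + t_i)
    return sum(wi / (1.0 + math.exp(-(x + ti))) for wi, ti in zip(w, t))

def mse(w, t, data):
    return sum((net(x, w, t) - y) ** 2 for x, y in data) / len(data)

random.seed(3)
data = [(k / 20.0, math.sin(math.pi * k / 20.0)) for k in range(21)]
w = [random.uniform(-1.0, 1.0) for _ in range(4)]
t = [random.uniform(-1.0, 1.0) for _ in range(4)]
lr = 0.2
loss_before = mse(w, t, data)
for _ in range(500):
    gw, gt = [0.0] * 4, [0.0] * 4
    for x, y in data:
        e = 2.0 * (net(x, w, t) - y) / len(data)
        for i in range(4):
            s = 1.0 / (1.0 + math.exp(-(x + t[i])))
            gw[i] += e * s                     # gradient w.r.t. output weight w_i
            gt[i] += e * w[i] * s * (1.0 - s)  # gradient w.r.t. threshold t_i
    w = [wi - lr * g for wi, g in zip(w, gw)]
    t = [ti - lr * g for ti, g in zip(t, gt)]
loss_after = mse(w, t, data)
```

Only the thresholds t_i and output weights w_i are trainable, matching the restricted parametrization the paper analyzes.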
19. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
20. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs
PubMed Central
Rosenbaum, Robert
2016-01-01
Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computationally efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public. PMID:27148036
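A hedged sketch of the kind of diffusion approximation involved: the subthreshold membrane voltage driven by noisy synaptic input reduces to an Ornstein-Uhlenbeck SDE, simulated here with Euler-Maruyama (parameter names and values are illustrative; this is not the paper's Fokker-Planck solver):

```python
import random

def simulate_ou(mu, sigma, tau, T=200.0, dt=0.01, seed=1):
    """Euler-Maruyama for dV = ((mu - V)/tau) dt + sigma*sqrt(2/tau) dW,
    the diffusion (OU) approximation of a leaky membrane voltage with
    stationary mean mu and stationary standard deviation sigma."""
    rng = random.Random(seed)
    v, vs = 0.0, []
    for _ in range(int(T / dt)):
        v += (mu - v) / tau * dt + sigma * (2.0 * dt / tau) ** 0.5 * rng.gauss(0.0, 1.0)
        vs.append(v)
    return vs

vs = simulate_ou(mu=1.0, sigma=0.5, tau=1.0)
mean_tail = sum(vs[len(vs) // 2:]) / (len(vs) // 2)
```

The Fokker-Planck approaches discussed in the abstract compute the statistics of this process directly, without the Monte Carlo sampling done here.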
2. Vibration suppression with approximate finite dimensional compensators for distributed systems: Computational methods and experimental results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Wang, Yun
1994-01-01
Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.
3. Rational approximations, software and test methods for sine and cosine integrals
MacLeod, Allan
1996-09-01
Rational approximations to the sine integral Si(x) and cosine integral Ci(x) are developed which give an accuracy of 20 significant figures. The robust construction of software for these functions is discussed, together with a test procedure for assessing the performance of such codes. Use of the tests discovers a major error in the netlib library FN codes for Si. Fortran versions of the codes and tests are available by electronic mail.
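The rational approximations themselves are not given in the abstract, but a test procedure of the kind described needs trusted reference values. A direct composite-Simpson quadrature of Si(x) = ∫₀ˣ sin(t)/t dt (an illustrative stand-in, not MacLeod's code) provides them:

```python
import math

def si(x, n=2000):
    """Approximate Si(x) = integral of sin(t)/t from 0 to x with composite
    Simpson's rule over n (even) subintervals; sin(t)/t -> 1 as t -> 0."""
    f = lambda t: math.sin(t) / t if t != 0.0 else 1.0
    h = x / n
    s = f(0.0) + f(x)
    s += 4.0 * sum(f((2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * k * h) for k in range(1, n // 2))
    return s * h / 3.0
```

Values such as Si(1) ≈ 0.94608307 and the Wilbraham-Gibbs constant Si(π) ≈ 1.85193705 serve as spot checks for any candidate rational approximation.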
4. Approximate method for calculating convective heat flux on the surface of bodies of simple geometric shapes
Kuzenov, V. V.; Ryzhkov, S. V.
2017-02-01
The paper formulates an engineering physical-mathematical model for the aerothermodynamics of a hypersonic flight vehicle (HFV) in laminar and turbulent boundary layers (the model is designed for an approximate estimate of the convective heat flow in the speed range M = 6-28 and the altitude range H = 20-80 km). 2D calculations of convective heat flows for bodies of simple geometric forms (individual elements of the HFV design) are presented.
5. Extended proton-neutron quasiparticle random-phase approximation in a boson expansion method
Civitarese, O.; Montani, F.; Reboiro, M.
1999-08-01
The proton-neutron quasiparticle random phase approximation (pn-QRPA) is extended to include next to leading order terms of the QRPA harmonic expansion. The procedure is tested for the case of a separable Hamiltonian in the SO(5) symmetry representation. The pn-QRPA equation of motion is solved by using a boson expansion technique adapted to the treatment of proton-neutron correlations. The resulting wave functions are used to calculate the matrix elements of double-Fermi transitions.
6. Heat capacity of liquid organic compounds: Experimental determination and method of group approximation of its temperature dependence
Vasil'Ev, I. A.; Treibsho, E. I.; Korkhov, A. D.; Petrov, V. M.; Orlova, N. G.; Balakina, M. M.
1981-06-01
The article describes the experimental method and presents results of the investigation of the heat capacity of liquid n-alcohols and esters. It examines the method of group approximation of the temperature dependence using the example of n-alkanes and n-alkenes.
7. Coherent diffractive imaging beyond the Fresnel approximation using a deterministic phase-retrieval method with an aperture-array filter.
PubMed
Nakajima, Nobuharu
2013-03-01
Previously, we have proposed a lensless coherent imaging using a nonholographic and noniterative phase-retrieval method that allows the reconstruction of a complex-valued object from a single diffraction intensity measured with an aperture-array filter. The proof-of-concept experiment of this method has been demonstrated under the Fresnel diffraction approximation. In applications to microscopy, however, the measurement of the diffraction intensity with high numerical aperture beyond the Fresnel approximation is required to obtain the object information at high spatial resolution. Thus we have also presented an extension procedure to apply the method to the cases beyond the Fresnel approximation by means of computer simulations. Here the effectiveness of the procedure is demonstrated by the experiments, in which the reconstruction with about 10 times the resolution of our previous experiment has been achieved and the object information in depth direction has been retrieved.
8. A correlation of thin lens approximation to thick lens design by using context based method in optics education
Farsakoglu, O. F.; Inal Atik, Ipek; Kocabas, Hikmet
2014-07-01
The effect of Coddington factors on aberration functions has been analysed using the thin lens approximation with optical glass parameters. The dependence of spherical aberration on the Coddington shape factor for various optical glasses in real lens design was discussed using exact ray tracing for optics education and training purposes. Thin lens approximation and thick lens design are generally taught using only the lecturing method, but thick lens design is closely related to real life. Hence, it is more appropriate to teach thin lens approximation and thick lens design with a real-life context based approach. Context based teaching can be effective in solving problems in which the subject is very difficult and seems irrelevant. There is also extensive evidence in optics education that students are generally unable to correctly apply the concepts of lens design to optical instruments currently used. Therefore, the outlines of real-life context based thick lens design lessons are proposed and explained in detail, considering the thin lens approximation.
9. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations
DOE PAGES
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
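For intuition, the simplest member of the exponential time differencing family (first-order ETD, not the paper's fourth-order Runge-Kutta variant) applied to a scalar model problem u' = λu + N(u) looks like this; it integrates the stiff linear part exactly:

```python
import math

def exp_euler(lam, N, u0, h, steps):
    """First-order exponential time differencing (ETD1) for u' = lam*u + N(u):
    u_{n+1} = e^{lam h} u_n + ((e^{lam h} - 1)/lam) * N(u_n)."""
    u = u0
    e = math.exp(lam * h)
    phi = (e - 1.0) / lam  # phi_1 function; the nonlinearity is held frozen over the step
    for _ in range(steps):
        u = e * u + phi * N(u)
    return u
```

Because the linear part is propagated by the exact exponential, setting N = 0 reproduces e^{λT} u0 to rounding error regardless of the step size, which is the property that makes ETD schemes attractive for stiff dispersive PDEs like the nonlinear Schrödinger equation.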
10. Integration of large chemical kinetic mechanisms via exponential methods with Krylov approximations to Jacobian matrix functions
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
11. An approximate reasoning-based method for screening high-level-waste tanks for flammable gas
SciTech Connect
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
2000-06-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
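Approximate reasoning represents linguistic variables with fuzzy sets. A minimal sketch (the membership shapes, variable names, and min-conjunction rule are illustrative assumptions, not the report's tank-screening model):

```python
def trapezoid_mf(x, a, b, c, d):
    """Trapezoidal membership function for a linguistic variable:
    0 outside [a, d], 1 on [b, c], linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# hypothetical linguistic variables for a screening judgment (names are mine)
retention = trapezoid_mf(0.7, 0.2, 0.5, 1.0, 1.1)  # "high gas retention"
release = trapezoid_mf(0.4, 0.1, 0.6, 1.0, 1.1)    # "episodic release likely"
concern = min(retention, release)                   # conjunctive (min) inference
```

Forward-chaining an AR model strings many such memberships and min/max combinations together, which is what lets quantitative and qualitative evidence be evaluated on a common footing.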
12. A simple approximate method for obtaining spanwise lift distributions over swept wings
NASA Technical Reports Server (NTRS)
Diederich, Franklin W
1948-01-01
It is shown how Schrenk's empirical method of estimating the lift distribution over straight wings can be adapted to swept wings by replacing the elliptical distribution with a new "ideal" distribution which varies with sweep. The application of the method is discussed in detail, and several comparisons are made to show the agreement of the proposed method with more rigorous ones. It is shown how first-order compressibility corrections applicable to subcritical speeds may be included in this method.
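Schrenk's construction averages the actual chord distribution with an elliptical distribution of equal planform area; a minimal sketch of the straight-wing version (the discretization and names are illustrative, not from the report):

```python
import math

def elliptical_chord(y, b, S):
    # ellipse with the same planform area S and span b as the actual wing
    return (4.0 * S / (math.pi * b)) * math.sqrt(max(0.0, 1.0 - (2.0 * y / b) ** 2))

def schrenk_loading(chord, b, n=400):
    """Schrenk's approximation: spanwise loading proportional to the mean of
    the actual chord c(y) and the equal-area elliptical chord."""
    ys = [-b / 2.0 + b * i / n for i in range(n + 1)]
    # planform area by the trapezoid rule
    S = sum((chord(ys[i]) + chord(ys[i + 1])) / 2.0 * (b / n) for i in range(n))
    load = [(chord(y) + elliptical_chord(y, b, S)) / 2.0 for y in ys]
    return ys, load, S

# rectangular wing: chord 1 m, span 10 m
ys, load, S = schrenk_loading(lambda y: 1.0, b=10.0)
total = sum((load[i] + load[i + 1]) / 2.0 * (10.0 / 400) for i in range(400))
```

Because both the actual and elliptical chords integrate to the same area, the averaged loading conserves total lift, and the swept-wing adaptation in the paper simply swaps the ellipse for a sweep-dependent "ideal" distribution.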
13. A Method to Approximate and Statistically Model the Shape of Triggered Landslides
Taylor, F. E.; Malamud, B. D.
2014-12-01
The planimetric shape of an individual landslide area is controlled by factors such as terrain morphology, material involved, and speed, with landslide shapes varying in total area (AL), type of shape, and length-to-width (L/W) ratio. Here, we abstract landslide shapes to ellipses and examine how the corresponding L/W ratios vary as a function of AL in two substantially complete triggered landslide inventories: (i) 11,111 landslides triggered by the 1994 (M = 6.7) Northridge Earthquake, USA; (ii) 9,594 landslides triggered by heavy rain during the 1998 Hurricane Mitch in Guatemala. For each landslide, an ellipse with area (AL) and perimeter (PL) equivalent to those of the original shape was created, and a non-dimensional value of the ellipse length-to-width (L/W) ratio was then calculated. Using Maximum Likelihood Estimation, the statistical distributions of landslide L/W ratio values were then considered for ten landslide area (AL in m2) categories: 0-99, 100-199, 200-399, 400-799, 800-1599, 1600-3199, 3200-6399, 6400-12,799, 12,800-25,600, and ≥25,600 m2. We find that for each of the landslide area categories considered separately, the probability density function p(L/W) as a function of L/W approximately follows a three-parameter inverse gamma distribution, which has a power-law decay for medium and large L/W values and an exponential rollover for small L/W values. The 'rollover' value where p(L/W) is at its maximum tends to increase with increasing AL category, from approximately L/W = 1.7 for landslides in the smallest AL category (0 < AL < 99 m2) to L/W = 7.5 for landslides in the largest AL category (AL ≥ 25,600 m2). Broadly, this suggests that as AL increases, L/W increases, i.e. as landslide areas increase, the probability of observing a more elongated shape increases. There is generally good agreement between the two inventories' statistical distributions in spite of differences in location, triggering mechanism and geology. This work will aid in
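The ellipse-abstraction step (find the ellipse with a landslide's area and perimeter, then read off L/W) can be sketched as a one-dimensional root-find on the axis ratio. Ramanujan's perimeter formula below is my assumption; the abstract does not state which perimeter expression was used:

```python
import math

def ramanujan_perimeter(a, b):
    """Ramanujan's first approximation to the perimeter of an ellipse."""
    return math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))

def ellipse_lw_ratio(area, perimeter, hi=1e6, iters=200):
    """Recover L/W = a/b of the ellipse with the given area (= pi*a*b) and
    perimeter by bisection; for fixed area the perimeter grows monotonically
    with elongation, so the root is unique."""
    def perim(r):
        b = math.sqrt(area / (math.pi * r))
        return ramanujan_perimeter(r * b, b)
    lo = 1.0
    if perim(lo) >= perimeter:
        return 1.0  # shape is as round as an equal-area circle
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if perim(mid) < perimeter:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# round-trip check on a known ellipse with a = 2, b = 1
a_true, b_true = 2.0, 1.0
r = ellipse_lw_ratio(math.pi * a_true * b_true, ramanujan_perimeter(a_true, b_true))
```

Applying this to every polygon in an inventory yields the L/W samples whose distribution the paper models with a three-parameter inverse gamma.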
14. Evidence of iridescence in TiO2 nanostructures: An approximation in plane wave expansion method
Quiroz, Heiddy P.; Barrera-Patiño, C. P.; Rey-González, R. R.; Dussan, A.
2016-11-01
Titanium dioxide nanotubes (TiO2 NTs) can be obtained by electrochemical anodization of titanium sheets. After the nanotubes are removed by mechanical stress, residual structures or traces can be observed on the surface of the titanium sheets. These traces show iridescent effects. In this paper we carry out both an experimental and a theoretical study of these interesting and novel optical properties. For the experimental analysis we use angle-resolved UV-vis spectroscopy, while in the theoretical study the photonic spectra are evaluated using numerical simulations in the frequency domain within the framework of the plane wave approximation. The iridescent effect is a strong property and independent of the sample. This behavior can be important for designing new materials or compounds for several applications such as the cosmetic industry, optoelectronic devices, photocatalysis, and sensors, among others.
15. Newton's method applied to finite-difference approximations for the steady-state compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bailey, Harry E.; Beam, Richard M.
1991-01-01
Finite-difference approximations for the steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
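The "freezing of the Jacobian matrices" evaluated in this work can be illustrated on a scalar root-finding problem (a sketch of the approximate-Newton idea, not the paper's Navier-Stokes solver): the derivative is computed once and reused, trading quadratic for linear convergence in exchange for a cheaper iteration.

```python
import math

def newton(f, fprime, x0, tol=1e-12, freeze=False, max_iter=100):
    """Newton iteration; with freeze=True the derivative (the 1-D 'Jacobian')
    is evaluated once at x0 and reused, i.e. an approximate Newton method."""
    x = x0
    d = fprime(x0)
    for k in range(max_iter):
        if not freeze:
            d = fprime(x)  # full Newton re-evaluates the Jacobian each step
        step = f(x) / d
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
root_full, n_full = newton(f, fp, 1.0)
root_frozen, n_frozen = newton(f, fp, 1.0, freeze=True)
```

Both variants reach the same root; the frozen version simply needs more (cheaper) iterations, which is the efficiency trade-off the abstract refers to.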
16. Integral approximants for functions of higher monodromic dimension
SciTech Connect
Baker, G.A. Jr.
1987-01-01
In addition to the description of multiform, locally analytic functions as covering a many-sheeted version of the complex plane, Riemann also introduced the notion of considering them as describing a space whose "monodromic" dimension is the number of linearly independent coverings by the monogenic analytic function at each point of the complex plane. I suggest that this latter concept is natural for integral approximants (a sub-class of Hermite-Padé approximants) and discuss results for both "horizontal" and "diagonal" sequences of approximants. Some theorems are now available in both cases and make clear that the natural domain of convergence of the horizontal sequences is a disk centered on the origin and that of the diagonal sequences is a suitably cut complex plane together with its identically cut pendant Riemann sheets. Some numerical examples have also been computed.
17. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods
SciTech Connect
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way for choosing the algorithm parameter as well as the algorithm convergence are indicated. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
18. An adaptive meshfree method for phase-field models of biomembranes. Part I: Approximation with maximum-entropy basis functions
Rosolen, A.; Peco, C.; Arroyo, M.
2013-09-01
We present an adaptive meshfree method to approximate phase-field models of biomembranes. In such models, the Helfrich curvature elastic energy, the surface area, and the enclosed volume of a vesicle are written as functionals of a continuous phase-field, which describes the interface in a smeared manner. Such functionals involve up to second-order spatial derivatives of the phase-field, leading to fourth-order Euler-Lagrange partial differential equations (PDE). The solutions develop sharp internal layers in the vicinity of the putative interface, and are nearly constant elsewhere. Thanks to the smoothness of the local maximum-entropy (max-ent) meshfree basis functions, we approximate numerically this high-order phase-field model with a direct Ritz-Galerkin method. The flexibility of the meshfree method allows us to easily adapt the grid to resolve the sharp features of the solutions. Thus, the proposed approach is more efficient than common tensor product methods (e.g. finite differences or spectral methods), and simpler than unstructured C0 finite element methods, applicable by reformulating the model as a system of second-order PDE. The proposed method, implemented here under the assumption of axisymmetry, allows us to show numerical evidence of convergence of the phase-field solutions to the sharp interface limit as the regularization parameter approaches zero. In a companion paper, we present a Lagrangian method based on the approximants analyzed here to study the dynamics of vesicles embedded in a viscous fluid.
19. Approximate methods to determine the modulation transfer function of a hololens system: a comparative study
Varela, Alberto J.; Calvo, Maria L.
1995-04-01
We present a comparative study between two experimental methods to determine the modulation transfer function (MTF) of a hololens system. The two hololenses were previously recorded and tested for filtering pseudocolor. In the first method we used the classical Foucault test. The second, alternative method is based on the digital image processing of a perfect edge under incoherent illumination. From the digitized intensity line profiles we obtain the MTF and cutoff frequency of the optical system according to the reciprocity between line spread function and MTF. Comments are made on the applicability and accuracy of these two methods.
20. Approximate method for calculating transonic flow about lifting wing-body configurations
NASA Technical Reports Server (NTRS)
Barnwell, R. W.
1976-01-01
The three-dimensional problem of transonic flow about lifting wing-body configurations is reduced to a two-variable computational problem with the method of matched asymptotic expansions. The computational problem is solved with the method of relaxation. The method accounts for leading-edge separation, the presence of shock waves, and the presence of solid, slotted, or porous tunnel walls. The Mach number range of the method extends from zero to the supersonic value at which the wing leading edge becomes sonic. A modified form of the transonic area rule which accounts for the effect of lift is developed. This effect is explained from simple physical considerations.
1. A summary of methods for approximating salt creep and disposal room closure in numerical models of multiphase flow
SciTech Connect
Freeze, G.A.; Larson, K.W.; Davies, P.B.
1995-10-01
Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior due to a theoretical basis for modeling salt deformation as a viscous process. It is a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation in the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.
2. A 3D finite element ALE method using an approximate Riemann solution
DOE PAGES
Chiravalle, V. P.; Morgan, N. R.
2016-08-09
Arbitrary Lagrangian–Eulerian finite volume methods that solve a multidimensional Riemann-like problem at the cell center in a staggered grid hydrodynamic (SGH) arrangement have been proposed. This research proposes a new 3D finite element arbitrary Lagrangian–Eulerian SGH method that incorporates a multidimensional Riemann-like problem. Here, two different Riemann jump relations are investigated. A new limiting method that greatly improves the accuracy of the SGH method on isentropic flows is investigated. A remap method that improves upon a well-known mesh relaxation and remapping technique in order to ensure total energy conservation during the remap is also presented. Numerical details and test problem results are presented.
5. A new analytic approximation method for the non-zero angular momentum states of the Hulthén potential
Dutt, Ranabir; Mukherji, Uma
1982-08-01
We propose a new approximation scheme to obtain analytic expressions for the bound-state energies and eigenfunctions for any arbitrary bound nl-state of the Hulthén potential. The predicted energies Enl are in excellent agreement with the perturbative results of Lai and Lin. The scope for an extension of the method to the continuum states is also discussed.
6. Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
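As a toy illustration of the accuracy orders discussed above (not the paper's actual stencils, whose weights are not reproduced here), one can compare the standard pointwise 1D Helmholtz stencil with a classical fourth-order Padé-type compact variant by applying both to an exact plane wave and watching how fast the residual shrinks under grid refinement:

```python
import numpy as np

def residual(h, k, compact):
    """Apply a 1D Helmholtz stencil to the exact plane wave exp(i k x)
    and return the magnitude of the leftover truncation term."""
    e = np.exp(1j * k * h)                # u_{j+1} = e * u_j, with u_j = 1
    lap = (e - 2.0 + 1.0 / e) / h**2      # discrete second derivative
    if compact:                            # fourth-order compact (Pade-type) mass weighting
        mass = (e + 10.0 + 1.0 / e) / 12.0
        return abs(lap + k**2 * mass)
    return abs(lap + k**2)                 # standard pointwise scheme

k = 2.0
r1, r2 = residual(0.10, k, False), residual(0.05, k, False)
c1, c2 = residual(0.10, k, True), residual(0.05, k, True)
# Halving h shrinks the residual by ~4x (second order) vs ~16x (fourth order).
```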
7. An approximate-reasoning-based method for screening high-level waste tanks for flammable gas
SciTech Connect
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-07-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at Hanford have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. AR models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. The authors performed a pilot study to investigate the utility of AR for flammable gas screening. They found that the effort to implement such a model was acceptable and that computational requirements were reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
8. High order filtering methods for approximating hyberbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. A filtering method which is developed uses simple central differencing of arbitrarily high order accuracy, except when a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy, but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems and a significant speed up of generally a factor of almost three over the full ENO method.
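A heavily simplified sketch of the filtering idea, under assumptions not in the paper: high-order central differencing is used everywhere except where a crude curvature test flags possible oscillations, and the fallback here is plain first-order one-sided differencing rather than the full ENO reconstruction.

```python
import numpy as np

def filtered_derivative(u, dx, tol):
    """Fourth-order central differences on a periodic grid, falling back
    to a first-order one-sided difference wherever a local oscillation
    indicator (the discrete second derivative) exceeds `tol`."""
    n = len(u)
    du = np.empty(n)
    for i in range(n):
        im2, im1 = (i - 2) % n, (i - 1) % n
        ip1, ip2 = (i + 1) % n, (i + 2) % n
        curv = abs(u[ip1] - 2.0 * u[i] + u[im1]) / dx**2
        if curv > tol:
            du[i] = (u[i] - u[im1]) / dx   # robust low-order fallback
        else:
            du[i] = (-u[ip2] + 8.0 * u[ip1] - 8.0 * u[im1] + u[im2]) / (12.0 * dx)
    return du

# Smooth data: the high-order branch is used at every point.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
du = filtered_derivative(np.sin(x), x[1] - x[0], tol=10.0)
```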
9. A formulation and numerical approach to molecular systems by the Green function method without the Born-Oppenheimer approximation
Shigeta, Yasuteru; Nagao, Hidemi; Nishikawa, Kiyoshi; Yamaguchi, Kizashi
1999-10-01
We have proposed a new numerical scheme for the non-Born-Oppenheimer density functional calculation based upon Green function techniques within the GW approximation for evaluating molecular properties in the full quantum mechanical treatment. We numerically calculate the physical properties of the individual motion in a hydrogen molecule and a muon molecule by means of this method and discuss the isotope effect on the properties in relation to correlation effects. It is concluded that the GW approximation works well not only for the calculation of the electronic state but also for that of the nuclear state.
10. An approximate-reasoning-based method for screening flammable gas tanks
SciTech Connect
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-03-01
High-level waste (HLW) produces flammable gases as a result of radiolysis and thermal decomposition of organics. Under certain conditions, these gases can accumulate within the waste for extended periods and then be released quickly into the dome space of the storage tank. As part of the effort to reduce the safety concerns associated with flammable gas in HLW tanks at Hanford, a flammable gas watch list (FGWL) has been established. Inclusion on the FGWL is based on criteria intended to measure the risk associated with the presence of flammable gas. It is important that all high-risk tanks be identified with high confidence so that they may be controlled. Conversely, to minimize operational complexity, the number of tanks on the watchlist should be reduced as near to the true number of flammable risk tanks as the current state of knowledge will support. This report presents an alternative to existing approaches for FGWL screening based on the theory of approximate reasoning (AR) (Zadeh 1976). The AR-based model emulates the inference process used by an expert when asked to make an evaluation. The FGWL model described here was exercised by performing two evaluations. (1) A complete tank evaluation where the entire algorithm is used. This was done for two tanks, U-106 and AW-104. U-106 is a single shell tank with large sludge and saltcake layers. AW-104 is a double shell tank with over one million gallons of supernate. Both of these tanks had failed the screening performed by Hodgson et al. (2) Partial evaluations using a submodule for the predictor likelihood for all of the tanks on the FGWL that had been flagged previously by Whitney (1995).
11. High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1991-01-01
The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
12. First-order mean-spherical approximation for interfacial phenomena: a unified method from bulk-phase equilibria study.
PubMed
Tang, Yiping
2005-11-22
The recently proposed first-order mean-spherical approximation (FMSA) [Y. Tang, J. Chem. Phys. 121, 10605 (2004)] for inhomogeneous fluids is extended to the study of interfacial phenomena. Computation is performed for the Lennard-Jones fluid, in which all phase equilibria properties and direct correlation function for density-functional theory are developed consistently and systematically from FMSA. Three functional methods, including fundamental measure theory for the repulsive force, local-density approximation, and square-gradient approximation, are applied in this interfacial investigation. Comparisons with the latest computer simulation data indicate that FMSA is satisfactory in predicting surface tension, density profile, as well as relevant phase equilibria. Furthermore, this work strongly suggests that FMSA is very capable of unifying homogeneous and inhomogeneous fluids, as well as those behaviors outside and inside the critical region within one framework.
13. A method to approximate maximum local SAR in multichannel transmit MR systems without transmit phase information.
PubMed
2017-08-01
To calculate local specific absorption rate (SAR) correctly, both the amplitude and phase of the signal in each transmit channel have to be known. In this work, we propose a method to derive a conservative upper bound for the local SAR, with a reasonable safety margin, without knowledge of the transmit phases of the channels. The proposed method uses virtual observation points (VOPs). Correction factors are calculated for each set of VOPs that prevent underestimation of local SAR when the VOPs are applied with the correct amplitudes but fixed phases. The proposed method proved to be superior to the worst-case calculation based on the maximum eigenvalue of the VOPs. The mean overestimation for six coil setups could be reduced, whereas no underestimation of the maximum local SAR occurred. In the best investigated case, the overestimation could be reduced from a factor of 3.3 to a factor of 1.7. The upper bound for the local SAR calculated with the proposed method allows a fast estimation of the local SAR based on power measurements in the transmit channels and facilitates SAR monitoring in systems that do not have the capability to monitor transmit phases. Magn Reson Med 78:805-811, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
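Two generic phase-free upper bounds are easy to state with a toy VOP matrix. The matrix and amplitudes below are random placeholders; the paper's contribution, not reproduced here, is a calibrated correction factor that tightens the max-eigenvalue worst case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VOP SAR matrix (Hermitian, positive semi-definite) for an
# 8-channel transmit array, and measured per-channel amplitudes.
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
Q = A.conj().T @ A                       # SAR matrix of one virtual observation point
amps = np.abs(rng.standard_normal(8))    # amplitudes known, phases unknown

# Two conservative bounds that need no phase information:
eig_bound = np.linalg.eigvalsh(Q)[-1] * np.sum(amps**2)   # max-eigenvalue worst case
abs_bound = amps @ np.abs(Q) @ amps                        # amplitude-only bound

# Exact local SAR for one (random) phase realization: u^H Q u.
u = amps * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 8))
sar = np.real(u.conj() @ Q @ u)
```

Both bounds dominate the true local SAR for every possible phase choice, which is what makes them usable for monitoring from power measurements alone.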
14. Iterative and direct methods employing distributed approximating functionals for the reconstruction of a potential energy surface from its sampled values
Szalay, Viktor
1999-11-01
The reconstruction of a function from knowing only its values on a finite set of grid points, that is, the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of sciences. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces by ab initio calculations which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the grid of sampling. The simplest of contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly by finding the minimal norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables, and of very different shape, are given. The examples demonstrate the robustness and high accuracy, as well as the caveats, of the proposed method. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames in determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics is now at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
15. Spectral approximation methods and error estimates for Caputo fractional derivative with applications to initial-value problems
Duan, Beiping; Zheng, Zhoushun; Cao, Wen
2016-08-01
In this paper, we revisit two spectral approximations, including truncated approximation and interpolation, for the Caputo fractional derivative. The two approaches have been studied to approximate the Riemann-Liouville (R-L) fractional derivative by Chen et al. and Zayernouri et al., respectively, in their most recent work. For the truncated approximation, the reconsideration partly arises from the difference between the fractional derivative in the R-L sense and the Caputo sense: the Caputo fractional derivative requires higher regularity of the unknown than the R-L version. Another reason for the reconsideration is that we distinguish the differential order of the unknown from the index of the Jacobi polynomials, which is not presented in the previous work. We also provide a way to choose the index when facing multi-order problems. By using a generalized Hardy's inequality, the gap between the weighted Sobolev space involving the Caputo fractional derivative and the classical weighted space is bridged; the optimal projection error is then derived in the non-uniformly Jacobi-weighted Sobolev space, and the maximum absolute error is presented as well. For the interpolation, an analysis of the interpolation error was not given in their work. In this paper we build the interpolation error in the non-uniformly Jacobi-weighted Sobolev space by constructing a fractional inverse inequality. Combined with a collocation method, the approximation technique is applied to solve fractional initial-value problems (FIVPs). Numerical examples are also provided to illustrate the effectiveness of this algorithm.
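For orientation, the Caputo derivative discussed above has the closed form D^α t^k = Γ(k+1)/Γ(k+1−α) · t^(k−α) on monomials, which a direct midpoint quadrature of its defining integral reproduces. This is a sanity check of the definition, not the spectral method of the paper:

```python
import numpy as np
from math import gamma

# Caputo derivative of f(t) = t^2 of order alpha in (0, 1):
#   D^alpha f(t) = 1/Gamma(1 - alpha) * integral_0^t f'(s) (t - s)^(-alpha) ds
alpha, t, N = 0.5, 1.0, 1_000_000
h = t / N
s = (np.arange(N) + 0.5) * h                  # midpoints handle the integrable
num = (2.0 * s * (t - s)**(-alpha)).sum() * h  # endpoint singularity at s = t
num /= gamma(1.0 - alpha)

# Closed form for monomials: Gamma(k+1)/Gamma(k+1-alpha) * t^(k-alpha), k = 2.
exact = gamma(3.0) / gamma(3.0 - alpha) * t**(2.0 - alpha)
```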
16. A Discontinuous Galerkin Method for Parabolic Problems with Modified hp-Finite Element Approximation Technique
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x - y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.
17. Window-based method for approximating the Hausdorff in three-dimensional range imagery
DOEpatents
Koch, Mark W.
2009-06-02
One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
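A brute-force baseline (not the patent's window-based method, which is designed to avoid exactly this cost) makes the definition concrete: the directed Hausdorff from A to B is the largest nearest-neighbor distance, and the Hausdorff distance is the maximum over the two directions.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Max over points a in A of the distance from a to its nearest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 3D point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Tiny 3D example: the two directed distances need not agree.
A = np.array([[0.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
```

The O(|A|·|B|) pairwise-distance matrix here is what makes naive computation impractical for scanning large databases of range images.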
18. Testing a Novel Method to Approximate Wood Specific Gravity of Trees
Treesearch
Michael C. Wiemann; G. Bruce. Williamson
2012-01-01
Wood specific gravity (SG) has long been used by foresters as an index for wood properties. More recently, SG has been widely used by ecologists as a plant functional trait and as a key variable in estimates of biomass. However, sampling wood to determine SG can be problematic; at present, the most common method is sampling with an increment borer to extract a bark-to-...
19. Spin-orbit coupling with approximate equation-of-motion coupled-cluster method for ionization potential and electron attachment
Cao, Zhanli; Wang, Fan; Yang, Mingli
2016-10-01
Various approximate approaches to calculate cluster amplitudes in equation-of-motion coupled-cluster (EOM-CC) approaches for ionization potentials (IP) and electron affinities (EA), with spin-orbit coupling (SOC) included in post self-consistent field (SCF) calculations, are proposed to reduce computational effort. Our results indicate that EOM-CC based on cluster amplitudes from the approximate method CCSD-1, where the singles equation is the same as that in CCSD and the doubles amplitudes are approximated with MP2, provides IPs and EAs in reasonable agreement with CCSD results when SOC is not present. It is an economical approach for calculating IPs and EAs and is not as sensitive to strong correlation as CC2. When SOC is included, the approximate method CCSD-3, where the same singles equation as that in SOC-CCSD is used and the doubles equation of scalar-relativistic CCSD is employed, gives rise to IPs and EAs that are in closest agreement with those of CCSD. However, SO splitting with EOM-CC from CC2 generally agrees best with that with CCSD, while that of CCSD-1 and CCSD-3 is less accurate. This indicates that a balanced treatment of SOC effects on both single and double excitation amplitudes is required to achieve reliable SO splitting.
20. Interpolation and Approximation Theory.
ERIC Educational Resources Information Center
Kaijser, Sten
1991-01-01
Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)
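Of the topics listed, Lagrange interpolation is the easiest to make concrete. A minimal sketch (the nodes and data below are illustrative, not from the cited material):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at the abscissa x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # i-th cardinal basis polynomial
        total += yi * basis
    return total

# Interpolating x^2 at three nodes reproduces it exactly
# (degree-2 data, degree-2 interpolant): value at 1.5 is 2.25.
value = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```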
1. Approximate Dirichlet Boundary Conditions in the Generalized Finite Element Method (PREPRINT)
DTIC Science & Technology
2006-02-01
works of Babuška [2, 3], Bramble and Nitsche [13], and Bramble and Schatz [15, 16], among others, for examples of how this approach works in prac...α! = α1! . . . αn!, is the Taylor polynomial of v at y of degree m and Φj ∈ C∞c (g̃−1(ωj)) is a function with integral 1. Then, by the Bramble ...Academic Press, New York, 1972. [13] J.H. Bramble , J.A. Nitsche, A Generalized Ritz–Least–Squares Method for Dirichlet Prob- lems, SIAM J. Numer
2. Approximate Methods for Obtaining the Complex Natural Electromagnetic Oscillations of an Object.
DTIC Science & Technology
1984-02-01
studying Prony’s method r for other scatterers and looking also for solutions to the problems inherent in the Prony process . E.M. Kennaugh suggested the...The search procedure is time consuming in machine computing. ILE 3. The search procedure cannot be used to process measured scattering data. 0. POLES...of the extracted poles as P. E. of real part = IReal part (Poleext.Poletrue)l , (3-11) SIPol I-oL etrue P. E. of imaginary part = IImag . part(Poleext
3. Approximate natural vibration analysis of rectangular plates with openings using assumed mode method
Cho, Dae Seung; Vladimir, Nikola; Choi, Tae MuK
2013-09-01
Natural vibration analysis of plates with openings of different shape represents an important issue in naval architecture and ocean engineering applications. In this paper, a procedure for vibration analysis of plates with openings and arbitrary edge constraints is presented. It is based on the assumed mode method, where natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived by using Lagrange's equations of motion. The presented solution represents an extension of a procedure for natural vibration analysis of rectangular plates without openings, which has been recently presented in the literature. The effect of an opening is taken into account in an intuitive way, i.e. by subtracting its energy from the total plate energy without opening. Illustrative numerical examples include dynamic analysis of rectangular plates with rectangular, elliptic, circular as well as oval openings with various plate thicknesses and different combinations of boundary conditions. The results are compared with those obtained by the finite element method (FEM) as well as those available in the relevant literature, and very good agreement is achieved.
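The eigenvalue problem at the heart of the assumed mode method has the generalized form K·v = ω²·M·v. A minimal numerical sketch with hypothetical 3-DOF mass and stiffness matrices (not taken from the paper):

```python
import numpy as np

# Hypothetical generalized mass and stiffness matrices, as would be
# assembled from Lagrange's equations of motion with a few assumed modes.
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Symmetrize K v = w^2 M v via M^{-1/2} (trivial here since M is diagonal),
# then solve the standard symmetric eigenproblem.
Minv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
w2, V = np.linalg.eigh(Minv_sqrt @ K @ Minv_sqrt)
freqs = np.sqrt(w2)        # natural angular frequencies, ascending
modes = Minv_sqrt @ V      # mode shapes in the original coordinates
```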
4. Optimization of parameters for semiempirical methods V: modification of NDDO approximations and application to 70 elements.
PubMed
Stewart, James J P
2007-12-01
Several modifications that have been made to the NDDO core-core interaction term and to the method of parameter optimization are described. These changes have resulted in a more complete parameter optimization, called PM6, which has, in turn, allowed 70 elements to be parameterized. The average unsigned error (AUE) between calculated and reference heats of formation for 4,492 species was 8.0 kcal mol(-1). For the subset of 1,373 compounds involving only the elements H, C, N, O, F, P, S, Cl, and Br, the PM6 AUE was 4.4 kcal mol(-1). The equivalent AUE for other methods were: RM1: 5.0, B3LYP 6-31G*: 5.2, PM5: 5.7, PM3: 6.3, HF 6-31G*: 7.4, and AM1: 10.0 kcal mol(-1). Several long-standing faults in AM1 and PM3 have been corrected and significant improvements have been made in the prediction of geometries.
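The accuracy statistic quoted above is simple arithmetic: the average unsigned error is the mean absolute difference between calculated and reference heats of formation. With made-up numbers:

```python
# Hypothetical calculated vs. reference heats of formation (kcal/mol).
calc = [12.1, -30.5, 55.0, 8.2]
ref = [10.0, -28.0, 57.5, 8.0]

# AUE = mean of |calculated - reference|; here (2.1+2.5+2.5+0.2)/4 = 1.825.
aue = sum(abs(c - r) for c, r in zip(calc, ref)) / len(calc)
```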
5. Numerical approximations for the molecular beam epitaxial growth model based on the invariant energy quadratization method
Yang, Xiaofeng; Zhao, Jia; Wang, Qi
2017-03-01
The Molecular Beam Epitaxial model is derived from the variation of a free energy, that consists of either a fourth order Ginzburg-Landau double well potential or a nonlinear logarithmic potential in terms of the gradient of a height function. One challenge in solving the MBE model numerically is how to develop proper temporal discretization for the nonlinear terms in order to preserve energy stability at the time-discrete level. In this paper, we resolve this issue by developing a first and second order time-stepping scheme based on the "Invariant Energy Quadratization" (IEQ) method. The novelty is that all nonlinear terms are treated semi-explicitly, and the resulted semi-discrete equations form a linear system at each time step. Moreover, the linear operator is symmetric positive definite and thus can be solved efficiently. We then prove that all proposed schemes are unconditionally energy stable. The semi-discrete schemes are further discretized in space using finite difference methods and implemented on GPUs for high-performance computing. Various 2D and 3D numerical examples are presented to demonstrate stability and accuracy of the proposed schemes.
Cakmakci, Ozan
today are functions mapping two dimensional vectors to real numbers. The majority of optical designs to-date have relied on conic sections and polynomials as the functions of choice. The choice of conic sections is justified since conic sections are stigmatic surfaces under certain imaging geometries. The choice of polynomials from the point of view of surface description can be challenged. A polynomial surface description may link a designer's understanding of the wavefront aberrations and the surface description. The limitations of using multivariate polynomials are described by a theorem due to Mairhuber and Curtis from approximation theory. This thesis proposes and applies radial basis functions to represent free-form optical surfaces as an alternative to multivariate polynomials. We compare the polynomial descriptions to radial basis functions using the MTF criteria. The benefits of using radial basis functions for surface description are summarized in the context of specific head-worn displays. The benefits include, for example, the performance increase measured by the MTF, or the ability to increase the field of view or pupil size. Even though Zernike polynomials are a complete and orthogonal set of basis over the unit circle and they can be orthogonalized for rectangular or hexagonal pupils using Gram-Schmidt, taking practical considerations into account, such as optimization time and the maximum number of variables available in current raytrace codes, for the specific case of the single off-axis magnifier with a 3 mm pupil, 15 mm eye relief, 24 degree diagonal full field of view, we found the Gaussian radial basis functions to yield a 20% gain in the average MTF at 17 field points compared to a Zernike (using 66 terms) and an x-y polynomial up to and including 10th order. The linear combination of radial basis function representation is not limited to circular apertures. 
Visualization tools such as field map plots provided by nodal aberration theory have been
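A minimal sketch of Gaussian radial basis function interpolation of a sampled surface (the grid, sag function, and basis width below are hypothetical, not from the thesis):

```python
import numpy as np

def rbf_fit(centers, values, sigma):
    """Solve for weights w so that sum_j w_j * exp(-|x - c_j|^2 / (2 sigma^2))
    interpolates `values` at the given centers."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.linalg.solve(np.exp(-d2 / (2.0 * sigma**2)), values)

def rbf_eval(pts, centers, weights, sigma):
    """Evaluate the fitted Gaussian RBF expansion at arbitrary points."""
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2)) @ weights

# Hypothetical surface samples on a 5x5 grid over the unit square.
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
centers = np.column_stack([gx.ravel(), gy.ravel()])
values = centers[:, 0] ** 2 + centers[:, 1]
w = rbf_fit(centers, values, sigma=0.15)
recon = rbf_eval(centers, centers, w, sigma=0.15)
```

Unlike Zernike polynomials, nothing here ties the representation to a circular aperture, which is the flexibility the thesis exploits.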
8. Domain decomposition method for nonconforming finite element approximations of anisotropic elliptic problems on nonmatching grids
SciTech Connect
Maliassov, S.Y.
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and the mesh step size.
9. Approximate and Low Regularity Dirichlet Boundary Conditions in the Generalized Finite Element Method
DTIC Science & Technology
2006-07-31
10. Approximation of mechanical properties of sintered materials with discrete element method
Dosta, Maksym; Besler, Robert; Ziehdorn, Christian; Janßen, Rolf; Heinrich, Stefan
2017-06-01
The sintering process is a key step in ceramic processing, which has a strong influence on the quality of the final product. The final shape, microstructure and mechanical properties, e.g. density, heat conductivity, strength and hardness, depend on the sintering process. In order to characterize the mechanical properties of sintered materials, in this contribution we present a microscale modelling approach. This approach consists of three different stages: simulation of the sintering process, transition to the final structure and modelling of the mechanical behaviour of the sintered material with the discrete element method (DEM). To validate the proposed simulation approach and to investigate products with varied internal structures, alumina powder has been experimentally sintered at different temperatures. The comparison has shown that the simulation results are in very good agreement with experimental data and that the novel strategy can be effectively used for modelling of the sintering process.
11. The Determination of the Spectrum Energy on the model of DNA-protein interactions using WKB approximation method
Syahroni, Edy; Suparmi, A.; Cari, C.
2017-01-01
The energy-spectrum equation for the Killingbeck potential in a model of DNA-protein interactions was obtained using the WKB approximation method. The Killingbeck potential was substituted into the general equation of the WKB approximation method to determine the energy; the general equation requires the values of the classical turning points. For the general form of the Killingbeck potential, the turning-point condition becomes a cubic equation, and in this research only its real roots are taken. Mathematically, this requirement is satisfied when the discriminant D is less than or equal to zero: if D = 0 the cubic gives two turning-point values, and if D < 0 it gives three. Both cases are presented here to complete the general equation for the energy.
12. Approximate spin projected spin-unrestricted density functional theory method: Application to diradical character dependences of second hyperpolarizabilities
SciTech Connect
Nakano, Masayoshi; Minami, Takuya; Fukui, Hitoshi; Yoneda, Kyohei; Shigeta, Yasuteru; Kishi, Ryohei; Champagne, Benoît; Botek, Edith
2015-01-22
We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.
13. Improvement of the recursive projection method for linear iterative scheme stabilization based on an approximate eigenvalue problem
Renac, Florent
2011-06-01
An algorithm for stabilizing linear iterative schemes is developed in this study. The recursive projection method is applied in order to stabilize divergent numerical algorithms. A criterion for selecting the divergent subspace of the iteration matrix with an approximate eigenvalue problem is introduced. The performance of the present algorithm is investigated in terms of storage requirements and CPU costs and is compared to the original Krylov criterion. Theoretical results on the divergent subspace selection accuracy are established. The method is then applied to the resolution of the linear advection-diffusion equation and to a sensitivity analysis for a turbulent transonic flow in the context of aerodynamic shape optimization. Numerical experiments demonstrate better robustness and faster convergence properties of the stabilization algorithm with the new criterion based on the approximate eigenvalue problem. This criterion requires only slight additional operations and memory which vanish in the limit of large linear systems.
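The divergent-subspace criterion discussed above can be illustrated on a toy fixed-point iteration x ← Mx + b: an approximate dominant eigenpair of the iteration matrix identifies the unstable direction to project out. The 3×3 matrix below is a made-up example, not the advection-diffusion operator from the paper:

```python
import numpy as np

# Hypothetical symmetric iteration matrix with one divergent mode (eigenvalue 1.3)
Q = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))[0]
M = Q @ np.diag([1.3, 0.5, 0.1]) @ Q.T

# Approximate the dominant eigenpair by power iteration
v = np.ones(3) / np.sqrt(3.0)
for _ in range(200):
    w = M @ v
    lam = np.linalg.norm(w)   # estimate of the dominant |eigenvalue|
    v = w / lam

# Projecting the divergent direction out leaves a convergent iteration
P = np.eye(3) - np.outer(v, v)
rho_stabilized = max(abs(np.linalg.eigvals(P @ M @ P)))
```

Here the projected iteration matrix has spectral radius about 0.5, below one, so the stabilized scheme converges; the same idea underlies selecting the divergent subspace from an approximate eigenvalue problem.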
14. A comparison of methods to estimate organ doses in CT when utilizing approximations to the tube current modulation function
PubMed Central
Khatonabadi, Maryam; Zhang, Di; Mathieu, Kelsey; Kim, Hyun J.; Lu, Peiyun; Cody, Dianna; DeMarco, John J.; Cagnon, Chris H.; McNitt-Gray, Michael F.
2012-01-01
Purpose: Most methods to estimate patient dose from computed tomography (CT) exams have been developed based on fixed tube current scans. However, in current clinical practice, many CT exams are performed using tube current modulation (TCM). Detailed information about the TCM function is difficult to obtain and therefore not easily integrated into patient dose estimate methods. The purpose of this study was to investigate the accuracy of organ dose estimates obtained using methods that approximate the TCM function using more readily available data compared to estimates obtained using the detailed description of the TCM function. Methods: Twenty adult female models generated from actual patient thoracic CT exams and 20 pediatric female models generated from whole body PET/CT exams were obtained with IRB (Institutional Review Board) approval. Detailed TCM function for each patient was obtained from projection data. Monte Carlo based models of each scanner and patient model were developed that incorporated the detailed TCM function for each patient model. Lungs and glandular breast tissue were identified in each patient model so that organ doses could be estimated from simulations. Three sets of simulations were performed: one using the original detailed TCM function (x, y, and z modulations), one using an approximation to the TCM function (only the z-axis or longitudinal modulation extracted from the image data), and the third was a fixed tube current simulation using a single tube current value which was equal to the average tube current over the entire exam. Differences from the reference (detailed TCM) method were calculated based on organ dose estimates. Pearson's correlation coefficients were calculated between methods after testing for normality. Equivalence test was performed to compare the equivalence limit between each method (longitudinal approximated TCM and fixed tube current method) and the detailed TCM method. Minimum equivalence limit was reported for
15. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.
PubMed
Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
2011-12-01
In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.
16. On the implementation of the discrete ordinate method with small-angle approximation for a pseudo-spherical atmosphere
Efremenko, D.; Doicu, A.; Loyola, D.; Trautmann, T.
2012-04-01
Numerical problems appear when solving the radiative transfer equation for systems with strong anisotropic scattering. To avoid oscillations in the solution a large number of discrete ordinates is required. As a consequence, the computing time increases considerably with O(N^3), where N is the number of discrete ordinates. The performance can be improved partially by the delta-M method of Wiscombe [1], but this approach distorts the initial boundary problem and can lead to errors at small viewing angles. The efficiency of the discrete ordinate method with small-angle approximation for analyzing systems containing clouds and the coarse fraction of aerosol has been demonstrated by Budak and Korkin [2]. In this work we extend the plane-parallel version of the discrete ordinate method with small-angle approximation, as described in [2], to a pseudo-spherical atmosphere. The conventional pseudo-spherical technique relies on the separation of the total radiance into the direct solar beam and the diffuse radiance [3]; the direct solar radiance is treated in a spherical geometry, while the diffuse radiance is computed in a plane-parallel geometry. Taking into account that in the discrete ordinate method with small-angle approximation the radiance is separated into an 'anisotropic' and a smooth part, and that the direct solar beam is already included in the anisotropic part, we introduce a pseudo-spherical correction by subtracting the direct solar beam in a plane-parallel geometry and adding it in a pseudo-spherical geometry. In our simulations we considered a scenario which is typical for UV/VIS instruments like GOME-2: a spectral interval between 315 nm and 335 nm, and an inhomogeneous atmosphere containing a cloud layer with an asymmetry parameter of 0.9. The numerical results evidenced that the differences between the pseudo-spherical and the plane-parallel models are of about 10 % for an incident angle of 80 degrees, 1 % for 65 degrees and less than 0.3 % for 50
17. Using the method of discrete dipoles to approximate solutions of the problems of light scattering and absorption by particles
Asenchik, O. D.
2017-02-01
A method of approximate calculation of the interaction inverse matrix in the method of discrete dipoles is proposed. The knowledge of this matrix makes it possible to determine the optical response of a system to the action of an electromagnetic wave with an arbitrary shape, which can be represented as a combination of vector spherical wave functions. The number of calculation operations of the matrix in the proposed method is considerably smaller than in the case of its direct calculation. In the case of a change in the refractive index of scattering particles, two methods of approximate calculation of the interaction inverse matrix are also proposed. This makes it possible to calculate the optical response of systems with new characteristics without direct solving equations of a system with a large dimension. The accuracy of the methods is numerically determined for particles with spherical and cubic shapes. It is shown that the methods are computationally efficient and can be used to calculate the values of polarization vectors inside particles and extinction and absorption cross sections of systems.
18. Simulation of near-field plasmonic interactions with a local approximation order discontinuous Galerkin time-domain method
Viquerat, Jonathan; Lanteri, Stéphane
2016-01-01
During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-difference time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied in various application contexts, including those requiring the modelling of light/matter interactions on the nanoscale. Several recent works have demonstrated the viability of the DGTD method for nanophotonics. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.
19. Estimation of relative permeability curves using an improved Levenberg-Marquardt method with simultaneous perturbation Jacobian approximation
Zhou, Kang; Hou, Jian; Fu, Hongfei; Wei, Bei; Liu, Yongge
2017-01-01
Relative permeability controls the flow of multiphase fluids in porous media. The estimation of relative permeability is generally solved by the Levenberg-Marquardt method with finite difference Jacobian approximation (LM-FD). However, the method can hardly be used in large-scale reservoirs because of the unbearably huge computational cost. To eliminate this problem, the paper introduces the idea of simultaneous perturbation to simplify the generation of the Jacobian matrix needed in the Levenberg-Marquardt procedure and denotes the improved method as LM-SP. It is verified by numerical experiments and then applied to laboratory experiments and a real commercial oilfield. Numerical experiments indicate that LM-SP uses only 16.1% of the computational cost to obtain a similar estimation of relative permeability and prediction of production performance compared with LM-FD. Laboratory experiments also show that LM-SP achieves a 60.4% decrease in simulation cost and a 68.5% increase in estimation accuracy compared with the earlier published results. This is mainly because LM-FD needs 2n (n is the number of controlling knots) simulations to approximate the Jacobian in each iteration, while only 2 simulations are enough in basic LM-SP. The convergence rate and estimation accuracy of LM-SP can be improved by averaging several simultaneous perturbation Jacobian approximations, but the computational cost of each iteration may be increased. Considering estimation accuracy and computational cost, averaging two Jacobian approximations is recommended in this paper. As the number of unknown controlling knots increases from 7 to 15, the number of simulation runs saved by LM-SP over LM-FD increases from 114 to 1164. This indicates LM-SP is more suitable than LM-FD for multivariate problems. Field application further proves the applicability of LM-SP to large real-field as well as small laboratory problems.
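The cost argument in the abstract (2 simulations per iteration instead of 2n) comes from perturbing all parameters simultaneously with a random ±1 direction, in the spirit of SPSA. A minimal sketch with a toy linear residual (standing in for the reservoir simulator) might look like:

```python
import numpy as np

def sp_jacobian(resid, theta, delta=1e-4, n_avg=2, rng=None):
    """Simultaneous-perturbation Jacobian estimate: 2*n_avg residual
    evaluations regardless of the number of parameters n."""
    rng = rng if rng is not None else np.random.default_rng(0)
    m, n = resid(theta).size, theta.size
    J = np.zeros((m, n))
    for _ in range(n_avg):
        d = rng.choice([-1.0, 1.0], size=n)        # random +-1 direction
        df = (resid(theta + delta * d) - resid(theta - delta * d)) / (2.0 * delta)
        J += np.outer(df, 1.0 / d)                 # rank-one Jacobian estimate
    return J / n_avg

# Toy linear residual r(theta) = A @ theta, whose exact Jacobian is A
A = np.array([[2.0, 1.0], [0.0, 3.0], [1.0, -1.0]])
J = sp_jacobian(lambda t: A @ t, np.array([0.3, -0.2]), n_avg=200)
```

Averaging more perturbations (`n_avg`) trades extra simulations for a less noisy Jacobian estimate, which is exactly the trade-off the abstract discusses.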
20. On-site approximation for spin orbit coupling in linear combination of atomic orbitals density functional methods
Fernández-Seivane, L.; Oliveira, M. A.; Sanvito, S.; Ferrer, J.
2006-08-01
We propose a computational method that drastically simplifies the inclusion of the spin-orbit interaction in density functional theory when implemented over localized basis sets. Our method is based on a well-known procedure for obtaining pseudopotentials from atomic relativistic ab initio calculations and on an on-site approximation for the spin-orbit matrix elements. We have implemented the technique in the SIESTA (Soler J M et al 2002 J. Phys.: Condens. Matter 14 2745-79) code, and show that it provides accurate results for the overall band-structure and splittings of group IV and III-IV semiconductors as well as for 5d metals.
1. Approximation algorithms
PubMed Central
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
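A textbook instance of the performance guarantees described above is the maximal-matching 2-approximation for minimum vertex cover; this is a standard algorithm, not one taken from the article:

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of every edge not yet covered (a maximal matching).
    The cover found is at most twice the size of a minimum vertex cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Star graph: the optimum cover is {0} (size 1); the guarantee allows size 2
edges = [(0, i) for i in range(1, 6)]
cover = vertex_cover_2approx(edges)
```

The algorithm runs in linear time yet carries a provable worst-case bound, which is precisely the "quick but provably close to optimal" trade-off the abstract describes.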
2. Isoparametric fitting: A method for approximating full-field experimental data distributed on any shaped 3D domain
Bruno, Luigi
2016-12-01
With the present paper, the author proposes a fitting method for approximating experimental data retrieved from any full-field technique. Unlike most fitting procedures, the method works on data distributed on a surface of any shape, and the mathematical model is able to take into account both the 3D shape of the surface and the experimental quantity to be fitted. The paper reports all the mathematical steps necessary for applying the method, which was tested on two sets of experimental data obtained by an out-of-plane speckle interferometer working in two different conditions of noise. Experimental results showed the capability of the method to work in the presence of high levels of noise.
3. Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods
2013-03-01
In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to sharpen the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.
4. Approximate method for predicting the permanent set in a beam in vacuo and in water subject to a shock wave
NASA Technical Reports Server (NTRS)
Stiehl, A. L.; Haberman, R. C.; Cowles, J. H.
1988-01-01
An approximate method to compute the maximum deformation and permanent set of a beam subjected to shock wave loading in vacuo and in water was investigated. The method equates the maximum kinetic energy of the beam (and water) to the elastic-plastic work done by a static uniform load applied to the beam. Results for the water case indicate that the plastic deformation is controlled by the kinetic energy of the water. The simplified approach can result in significant savings in computer time, or it can expediently be used as a check of results from a more rigorous approach. The accuracy of the method is demonstrated by various examples of beams with simple-support and clamped-support boundary conditions.
5. Performance of stochastic Runge-Kutta Methods in approximating the solution of stochastic model in biological system
Amalina Nisa Ariffin, Noor; Rosli, Norhayati; Syahidatul Ayuni Mazlan, Mazma; Samsudin, Adam
2017-09-01
Recently, modelling biological systems using stochastic differential equations (SDEs) has become of interest among researchers. In SDEs, random fluctuations are taken into account, which makes finding the exact solution of SDEs more complex and contributes to the increasing number of studies focused on finding the best numerical approach to solve systems of SDEs. This paper examines the performance of the 4-stage stochastic Runge-Kutta (SRK4) and specific stochastic Runge-Kutta (SRKS) methods with order 1.5 in approximating the solution of a stochastic model in a biological system. A comparative study of the SRK4 and SRKS methods is presented in this paper. A non-linear biological model is used to examine the performance of both methods, and the results of the numerical experiments are discussed.
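The SRK4 and SRKS schemes themselves are not specified in the abstract, but the kind of numerical SDE approximation being compared can be illustrated with the simpler Euler-Maruyama baseline on geometric Brownian motion (a common test model, not the paper's biological system), checking weak accuracy against the known mean E[X_T] = x0·e^(mu·T):

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n_steps, n_paths, seed=0):
    """Baseline pathwise integrator for dX = mu*X dt + sigma*X dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x += mu * x * dt + sigma * x * dW
    return x

# Compare the Monte Carlo mean at T = 1 with the exact value exp(0.5)
xT = euler_maruyama(mu=0.5, sigma=0.2, x0=1.0, T=1.0, n_steps=500, n_paths=20000)
mean_est = xT.mean()
```

Higher-order stochastic Runge-Kutta schemes such as those compared in the paper aim to reach a given accuracy with far fewer time steps than this first-order baseline.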
6. Electron-nucleus cusp correction scheme for the relativistic zeroth-order regular approximation quantum Monte Carlo method.
PubMed
Nakatsuka, Yutaka; Nakajima, Takahito; Hirao, Kimihiko
2010-05-07
A cusp correction scheme for the relativistic zeroth-order regular approximation (ZORA) quantum Monte Carlo method is proposed by extending the nonrelativistic cusp correction scheme of Ma et al. [J. Chem. Phys. 122, 224322 (2005)]. In this scheme, molecular orbitals that appear in Slater-Jastrow type wave functions are replaced with the exponential-type correction functions within a correction radius. Analysis of the behavior of the ZORA local energy in electron-nucleus collisions reveals that the Kato's cusp condition is not applicable to the ZORA QMC method. The divergence of the electron-nucleus Coulomb potential term in the ZORA local energy is remedied by adding a new logarithmic correction term. This method is shown to be useful for improving the numerical stability of the ZORA-QMC calculations using both Gaussian and Slater basis functions.
7. Speed-up of the volumetric method of moments for the approximate RCS of large arbitrary-shaped dielectric targets
Moreno, Javier; Somolinos, Álvaro; Romero, Gustavo; González, Iván; Cátedra, Felipe
2017-08-01
A method for the rigorous computation of the electromagnetic scattering of large dielectric volumes is presented. One goal is to simplify the analysis of large dielectric targets with translational symmetries by taking advantage of their Toeplitz symmetry. The matrix-fill stage of the Method of Moments is then obtained efficiently because the number of coupling terms to compute is reduced. The Multilevel Fast Multipole Method is applied to solve the problem. Structured meshes are obtained efficiently to approximate the dielectric volumes. The regular mesh grid is achieved by using parallelepipeds whose centres have been identified as internal to the target. The ray casting algorithm is used to classify the parallelepiped centres. It may become a bottleneck when too many points are evaluated in volumes defined by parametric surfaces, so a hierarchical algorithm is proposed to minimize the number of evaluations. Measurements and analytical results are included for validation purposes.
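The ray-casting classification of parallelepiped centres mentioned above is, in a 2D cross-section, the classic crossing-number test; the square target below is a stand-in for a volume bounded by parametric surfaces:

```python
def point_inside(poly, p):
    """Crossing-number test: cast a ray in +x and count boundary crossings."""
    x, y = p
    inside = False
    for i in range(len(poly)):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
        if (y0 > y) != (y1 > y):                     # edge straddles the ray
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x_cross > x:
                inside = not inside
    return inside

# Classify centres of a regular grid against a unit-square target
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
centres = [(0.25 + 0.5 * i, 0.25 + 0.5 * j) for i in range(4) for j in range(4)]
internal = [c for c in centres if point_inside(square, c)]
```

Each classification is linear in the number of boundary elements, which is why evaluating many grid points against a finely described surface can become the bottleneck the abstract mentions.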
8. Partition resampling and extrapolation averaging: approximation methods for quantifying gene expression in large numbers of short oligonucleotide arrays.
PubMed
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
9. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.
USGS Publications Warehouse
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
10. FELIX-1.0: A finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation
SciTech Connect
Regnier, D.; Verriere, M.; Dubray, N.; Schunck, N.
2015-11-30
In this study, we describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
11. An application of forward-backward difference approximation method on the optimal control problem in the transmission of tuberculosis model
Rahmah, Z.; Subartini, B.; Djauhari, E.; Anggriani, N.; Supriatna, A. K.
2017-03-01
Tuberculosis (TB) is a disease caused by the bacterium Mycobacterium tuberculosis. The World Health Organization (WHO) recommends administering the Bacillus Calmette-Guerin (BCG) vaccine to toddlers aged two to three months to protect them from infection. This research explores numerical simulation with the forward-backward difference approximation method on a model of TB transmission that takes this vaccination program into account. The model considers five compartments of sub-populations, i.e. susceptible, vaccinated, exposed, infected, and recovered human sub-populations. We consider here the vaccination as a control variable. The results of the simulation showed that vaccination can indeed reduce the number of infected humans.
12. Fast modal method for crossed grating computation, combining finite formulation of Maxwell equations with polynomial approximated constitutive relations.
PubMed
Portier, Benjamin; Pardo, Fabrice; Bouchon, Patrick; Haïdar, Riad; Pelouard, Jean-Luc
2013-04-01
We present a modal method for the fast analysis of 2D-layered gratings. It combines exact discrete formulations of Maxwell equations in 2D space with polynomial approximations of the constitutive equations, and provides a sparse formulation of the eigenvalue equations. In specific cases, the use of sparse matrices allows us to calculate the electromagnetic response while solving only a small fraction of the eigenmodes. This significantly increases computational speed up to 100×, as shown on numerical examples of both dielectric and metallic subwavelength gratings.
13. Communication: On the consistency of approximate quantum dynamics simulation methods for vibrational spectra in the condensed phase
Rossi, Mariana; Liu, Hanchao; Paesani, Francesco; Bowman, Joel; Ceriotti, Michele
2014-11-01
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced down to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D2O doped with HOD and pure H2O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm-1. Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods let us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
14. Communication: On the consistency of approximate quantum dynamics simulation methods for vibrational spectra in the condensed phase.
PubMed
Rossi, Mariana; Liu, Hanchao; Paesani, Francesco; Bowman, Joel; Ceriotti, Michele
2014-11-14
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced down to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D2O doped with HOD and pure H2O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm(-1). Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods let us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
15. Communication: On the consistency of approximate quantum dynamics simulation methods for vibrational spectra in the condensed phase
SciTech Connect
Rossi, Mariana; Liu, Hanchao; Bowman, Joel; Paesani, Francesco; Ceriotti, Michele
2014-11-14
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced back to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D₂O doped with HOD and pure H₂O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm⁻¹. Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods lets us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
16. Factorized cumulant expansion approximation method for turbulence with reacting and mixing chemical elements of type A + B → Product
Meshram, M. C.
2013-07-01
The Lewis-Kraichnan space-time version of Hopf functional formalism is considered for the investigation of turbulence with reacting and mixing chemical elements of type A + B → Product. The equations of motion are written in Fourier space. We first define the characteristic functional (or the moments generating functional) for the joint probability distribution of the velocity vector of the flow field and the reactants’ concentration scalar fields and translate the equations of motion in terms of the differential equations for the characteristic functional. These differential equations for the characteristic functional are further written in terms of the second characteristic functional (or the cumulant generating functional). This helps us in obtaining the equations for various order cumulants. We note from these equations for cumulants the characteristic difficulty of the theory of turbulence that the (n + 1)th order cumulant C(n+1) occurs in the equation for the dynamics of nth order cumulant Cn. We use the factorized cumulant expansion approximation method for the present investigation. Under this approximation an arbitrary nth order cumulant Cn is expressed in terms of the lower-order cumulants C(2), C(3) and C(n-1) and thus we obtain a closed but untruncated system of equations for the cumulants. On using the factorized fourth-cumulant approximation method a closed set of equations for the reactants’ energy spectrum functions and the reactants’ energy transfer functions are derived. These equations are solved numerically and the similarity laws of the solutions are derived analytically. The statistical quantities such as the reactants’ energy, the reactants’ enstrophy, the reactants’ scale of segregations and so on are calculated numerically and the statistical laws of these quantities are discussed. Also, the scope of this tool for investigation of turbulent phenomena not covered in the present study is discussed.
17. Semi-implicit iterative methods for low Mach number turbulent reacting flows: Operator splitting versus approximate factorization
MacArt, Jonathan F.; Mueller, Michael E.
2016-12-01
Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
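The Strang splitting scheme compared in the abstract above can be sketched in miniature. The following is a hedged illustration, not the paper's solver: generic non-commuting 2×2 matrices stand in for the transport and reaction operators, and the observed error ratio when the step size is halved confirms the scheme's formal second-order accuracy.

```python
import numpy as np
from scipy.linalg import expm

def strang_step(A, B, u, h):
    # One Strang step: half step of A, full step of B, half step of A.
    return expm(0.5 * h * A) @ (expm(h * B) @ (expm(0.5 * h * A) @ u))

def integrate(A, B, u0, T, n):
    h = T / n
    u = u0.copy()
    for _ in range(n):
        u = strang_step(A, B, u, h)
    return u

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # hypothetical "transport" part
B = np.array([[-0.5, 0.0], [0.3, -1.0]])  # hypothetical "reaction" part
u0 = np.array([1.0, 0.0])
T = 1.0
exact = expm(T * (A + B)) @ u0

e1 = np.linalg.norm(integrate(A, B, u0, T, 40) - exact)
e2 = np.linalg.norm(integrate(A, B, u0, T, 80) - exact)
print(e1 / e2)  # close to 4 for a globally second-order scheme
```

In the paper's setting the sub-steps are a diffusion solve and a stiff chemistry integration rather than matrix exponentials, but the splitting structure is the same.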
18. The Extended Projector Method and the Trotter-Approximation Applied to the Study of Few Level Systems Coupled to Harmonic Oscillators.
Vanhimbeeck, Marc
In this thesis a technique is developed to determine the low-energy eigensolutions of an unspecified few-level system which is coupled both linearly and quadratically to a finite collection of harmonic oscillators. The method is based on the second-order symmetrized Trotter-Suzuki approximation for e^{λH}, with H standing for the Hamiltonian of the quantum mechanical system. Taking λ = -β (real), we use e^{-βH} as a projection operator which sorts out the low-energy eigenstates from the decomposition of an initially randomly constructed system state. Once the eigenstates are found, a second approximation on the time propagator e^{-itH} is applied in order to determine some relevant time-correlation functions for the systems under study. Next to a general formulation of the theory we also provide a study of some example systems. The coupled two-level system is shown to account phenomenologically for the anomalous isotope shift which was observed in the Raman spectrum of the tunneling Li⁺ defect in KCl. Furthermore, we examine the low-energy eigenvalues and Ham reduction factors for some of the cubic Jahn-Teller (JT) systems. The triplet systems T ⊗ τ₂ and T ⊗ ε are studied with a linear JT interaction, but for the E ⊗ ε doublet system a quadratic warping is included in the description. The results are in good agreement with the literature and confirm the applicability of the method.
19. New method for reducing the general formula for lattice specific heat to the Einstein and Nernst-Lindemann approximations
Irons, F. E.
2003-08-01
To reduce the general formula for lattice specific heat to Einstein's formula of 1907, one traditionally models the spectrum of lattice modes of vibration as a set of independent oscillators all of one frequency, ν₁. Not only is this a poor representation of a real solid, but no formula is provided for the frequency ν₁, which has to be determined empirically. We offer a new and more compelling method for reducing the general formula to Einstein's formula. The reduction involves a simple mathematical approximation, proceeds without any reference to independent oscillators all of one frequency, and leads to a formula for the characteristic frequency, ν₁, equal to the mean modal frequency. The mathematical approximation is valid at all but low temperatures, thereby providing insight into the failure of Einstein's formula at low temperatures. A simple extension of the new method leads to the Nernst-Lindemann formula for specific heat, proposed in 1911 on the basis of trial and error and currently without a sound theoretical basis. Empirical values (from the literature) of the frequencies that characterize the Einstein, the Nernst-Lindemann, and also the Debye formulae are all in support of the present theory.
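The Einstein formula discussed above is easy to evaluate. The sketch below computes the standard molar Einstein heat capacity, C_V = 3R x² eˣ/(eˣ − 1)², with x = θ_E/T; the Einstein temperature of 300 K is an arbitrary illustrative value. The two limits match the abstract's point: the Dulong-Petit value 3R is recovered at high temperature, while the low-temperature value is strongly suppressed (this is where the formula fails against experiment).

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def einstein_cv(T, theta_E):
    # Einstein model: all 3N modes share one frequency nu_1,
    # with theta_E = h * nu_1 / k_B.
    x = theta_E / T
    return 3 * R * x**2 * np.exp(x) / (np.exp(x) - 1) ** 2

theta = 300.0  # hypothetical Einstein temperature, K
print(einstein_cv(3000.0, theta) / (3 * R))  # near 1: Dulong-Petit limit
print(einstein_cv(30.0, theta) / (3 * R))    # strongly suppressed at low T
```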
20. Calculation of absorption spectra involving multiple excited states: approximate methods based on the mixed quantum classical Liouville equation.
PubMed
Bai, Shuming; Xie, Weiwei; Zhu, Lili; Shi, Qiang
2014-02-28
We investigate the calculation of absorption spectra based on the mixed quantum classical Liouville equation (MQCL) methods. It has been shown previously that, for a single excited state, the averaged classical dynamics approach to calculate the linear and nonlinear spectroscopy can be derived using the MQCL formalism. This work focuses on problems involving multiple coupled excited state surfaces, such as in molecular aggregates and in the cases of coupled electronic states. A new equation of motion to calculate the dipole-dipole correlation functions within the MQCL formalism is first presented. Two approximate methods are then proposed to solve the resulting equations of motion. The first approximation results in a mean field approach, where the nuclear dynamics is governed by averaged forces depending on the instantaneous electronic states. A modification to the mean field approach based on a first-order moment expansion is also proposed. Numerical examples, including calculation of the absorption spectra of Frenkel exciton models of molecular aggregates and of the pyrazine molecule, are presented.
1. Calculation of absorption spectra involving multiple excited states: Approximate methods based on the mixed quantum classical Liouville equation
SciTech Connect
Bai, Shuming; Xie, Weiwei; Zhu, Lili; Shi, Qiang
2014-02-28
We investigate the calculation of absorption spectra based on the mixed quantum classical Liouville equation (MQCL) methods. It has been shown previously that, for a single excited state, the averaged classical dynamics approach to calculate the linear and nonlinear spectroscopy can be derived using the MQCL formalism. This work focuses on problems involving multiple coupled excited state surfaces, such as in molecular aggregates and in the cases of coupled electronic states. A new equation of motion to calculate the dipole-dipole correlation functions within the MQCL formalism is first presented. Two approximate methods are then proposed to solve the resulting equations of motion. The first approximation results in a mean field approach, where the nuclear dynamics is governed by averaged forces depending on the instantaneous electronic states. A modification to the mean field approach based on a first-order moment expansion is also proposed. Numerical examples, including calculation of the absorption spectra of Frenkel exciton models of molecular aggregates and of the pyrazine molecule, are presented.
2. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation
Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia
2013-08-01
Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
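The abstract above solves one large complex linear system per Padé term with BiCGSTAB. A hedged sketch of that inner step follows, using SciPy's stock `bicgstab`; the matrix is a hypothetical diagonally dominant complex tridiagonal stand-in for one term of a complex Padé downward-continuation operator, not the actual migration system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

# Hypothetical stand-in for one complex Padé term: a diagonally dominant
# complex tridiagonal system (the complex shift mimics the stabilizing
# effect of the complex Padé expansion).
n = 200
rng = np.random.default_rng(0)
main = 4.0 + 1.0j + rng.random(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csr")
b = rng.random(n) + 1j * rng.random(n)

x, info = bicgstab(A, b)  # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

For the real migration operator the system is much larger and banded, which is where the choice between an iterative solver (BiCGSTAB) and a direct one (MUMPS) becomes the cost trade-off the paper measures.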
3. Application of vector-valued rational approximations to the matrix eigenvalue problem and connections with Krylov subspace methods
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: ℂ → ℂ^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N × N matrix that may be diagonalizable or not. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory suggests at the same time a new mode of usage for these Krylov subspace methods that were observed to possess computational advantages over their common mode of usage.
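For orientation, the baseline that the paper's vector-valued rational approximations generalize is the plain power method. A minimal sketch (a generic 2×2 symmetric matrix chosen purely for illustration):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    # Plain power iteration: repeatedly apply A and renormalize; the
    # iterates align with the dominant eigenvector, and the Rayleigh
    # quotient estimates the dominant eigenvalue.
    rng = np.random.default_rng(seed)
    x = rng.random(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ (A @ x)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = power_method(A)
print(lam)  # dominant eigenvalue (5 + sqrt(5))/2, about 3.618
```

The paper's generalizations recover several of the largest eigenvalues at once from the same sequence of power iterates, which the plain method above cannot do.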
4. On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L¹ error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
5. Higher-order numerical methods derived from three-point polynomial interpolation
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform mesh extension of the so-called compact or Padé difference techniques. A variety of fourth-order methods are described and this concept is extended to sixth order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes and results are shown for the diffusion equation.
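The compact (Padé) difference idea above couples neighboring derivative values through a tridiagonal system. A minimal sketch on a uniform periodic grid, using the classical fourth-order scheme (1/4)f'ᵢ₋₁ + f'ᵢ + (1/4)f'ᵢ₊₁ = (3/4h)(fᵢ₊₁ − fᵢ₋₁); the dense solve is for clarity only, a production code would solve the (cyclic) tridiagonal system directly.

```python
import numpy as np

def compact_derivative_periodic(f, h):
    # Fourth-order compact (Padé) scheme on a periodic grid:
    #   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1})
    n = len(f)
    M = np.eye(n)
    for i in range(n):
        M[i, (i - 1) % n] = 0.25
        M[i, (i + 1) % n] = 0.25
    rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
    return np.linalg.solve(M, rhs)

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]
err = np.max(np.abs(compact_derivative_periodic(np.sin(x), h) - np.cos(x)))
print(err)  # fourth-order accurate: already tiny at n = 64
```

The same three-point stencil achieves fourth-order accuracy where an explicit scheme of that width is only second order, which is the appeal of the Padé formulation.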
6. Analysis of design for Hartmann-Shack measurements under usage of Fourier-iteration and Zernike approximation wavefront reconstruction methods
Kabardiadi, Alexander; Greiner, Andreas; Assmann, Heiko; Baselt, Tobias; Hartmann, Peter
2016-03-01
The measurement of a wavefront is a powerful tool for characterizing optical systems. The most commonly used wavefront measurement technique is the method of local-light aberrometry. The conventional implementation of this measurement principle is the Hartmann-Shack wavefront sensor. This method returns a matrix of spatially resolved gradients of the wavefront. The last and crucial step of the wavefront analysis, however, is the reconstruction of the wavefront from the measured data. Questions of measurement preparation and design are of equal interest. The work presented here describes a comparison between a Fourier-iteration algorithm and the Zernike approximation method for wavefront reconstruction in relation to the measurement design. In the context of this work, the term "design of the measurement" refers to the number and relative positions of the measurement points. The behavior of the wavefront reconstruction methods was analyzed using Monte-Carlo simulations. The optimum point distribution was found, and a validation parameter to describe the impact of measurement errors on the analysis results was determined. Based on this parameter, a Monte-Carlo-based simulation was realized to design the experiment with the highest accuracy. The technique of white-noise injection was implemented in the reconstruction routine and the propagation of errors was analyzed. The presented comparison technique was applied to determine the optimum measurement positions over the beam's surface.
7. Condensing complex atmospheric chemistry mechanisms. 1: The direct constrained approximate lumping (DCAL) method applied to alkane photochemistry
SciTech Connect
Wang, S.W.; Georgopoulos, P.G.; Li, G.; Rabitz, H.
1998-07-01
Atmospheric chemistry mechanisms are the most computationally intensive components of photochemical air quality simulation models (PAQSMs). The development of a photochemical mechanism that accurately describes atmospheric chemistry while being computationally efficient for use in PAQSMs is a difficult undertaking that has traditionally been pursued through semiempirical (diagnostic) lumping approaches. The limitations of these diagnostic approaches are often associated with inaccuracies due to the fact that the lumped mechanisms have typically been optimized to fit the concentration profile of a specific species. Formal mathematical methods for model reduction have the potential (demonstrated through past applications in other areas) to provide very effective solutions to the need for computational efficiency combined with accuracy. Such methods, which can be used to condense a chemical mechanism, include kinetic lumping and domain separation. An application of the kinetic lumping method, using the direct constrained approximate lumping (DCAL) approach, to the atmospheric photochemistry of alkanes is presented in this work. It is shown that the lumped mechanism generated through the application of the DCAL method has the potential to overcome the limitations of existing semiempirical approaches, especially in relation to the consistent and accurate calculation of the time-concentration profiles of multiple species.
8. The narrow pulse approximation and long length scale determination in xenon gas diffusion NMR studies of model porous media
NASA Technical Reports Server (NTRS)
Mair, R. W.; Sen, P. N.; Hurlimann, M. D.; Patz, S.; Cory, D. G.; Walsworth, R. L.
2002-01-01
9. Approximate solution of Schrödinger equation in D-dimensions for Scarf trigonometry potential using Nikiforov-Uvarov method
Deta, U. A.; Suparmi, Cari
2013-09-01
The approximate analytical solutions of the Schrödinger equation in D dimensions for the Scarf trigonometric potential were investigated using the Nikiforov-Uvarov method. The bound-state energies are given in closed form, and the corresponding wave functions for arbitrary l-states in D dimensions are formulated in terms of generalized Jacobi polynomials. Examples of the bound-state energies and wave functions in 3, 4, and 5 dimensions are presented for the ground state through the second excited state. Increasing the number of dimensions increases the bound-state energy and the amplitude of the wave function of this potential. The presence of the Scarf trigonometric potential raises the energy spectrum.
10. The surface morphology analysis based on progressive approximation method using confocal three-dimensional micro X-ray fluorescence
Yi, Longtao; Sun, Tianxi; Wang, Kai; Qin, Min; Yang, Kui; Wang, Jinbang; Liu, Zhiguo
2016-08-01
Confocal three-dimensional micro X-ray fluorescence (3D MXRF) is an excellent surface analysis technology. For a confocal structure, only the X-rays from the confocal volume can be detected. Confocal 3D MXRF has been widely used for analysing elements, the distribution of elements and 3D image of some special samples. However, it has rarely been applied to analysing surface topography by surface scanning. In this paper, a confocal 3D MXRF technology based on polycapillary X-ray optics was proposed for determining surface topography. A corresponding surface adaptive algorithm based on a progressive approximation method was designed to obtain surface topography. The surface topography of the letter "R" on a coin of the People's Republic of China and a small pit on painted pottery were obtained. The surface topography of the "R" and the pit are clearly shown in the two figures. Compared with the method in our previous study, it exhibits a higher scanning efficiency. This approach could be used for two-dimensional (2D) elemental mapping or 3D elemental voxel mapping measurements as an auxiliary method. It also could be used for analysing elemental mapping while obtaining the surface topography of a sample in 2D elemental mapping measurement.
11. Bayesian methods for quantitative trait loci mapping based on model selection: approximate analysis using the Bayesian information criterion.
PubMed
Ball, R D
2001-11-01
We describe an approximate method for the analysis of quantitative trait loci (QTL) based on model selection from multiple regression models with trait values regressed on marker genotypes, using a modification of the easily calculated Bayesian information criterion to estimate the posterior probability of models with various subsets of markers as variables. The BIC-delta criterion, with the parameter delta increasing the penalty for additional variables in a model, is further modified to incorporate prior information, and missing values are handled by multiple imputation. Marginal probabilities for model sizes are calculated, and the posterior probability of nonzero model size is interpreted as the posterior probability of existence of a QTL linked to one or more markers. The method is demonstrated on analysis of associations between wood density and markers on two linkage groups in Pinus radiata. Selection bias, which is the bias that results from using the same data to both select the variables in a model and estimate the coefficients, is shown to be a problem for commonly used non-Bayesian methods for QTL mapping, which do not average over alternative possible models that are consistent with the data.
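The BIC-based model selection described above is straightforward to sketch. The following hedged illustration, using synthetic data rather than the Pinus radiata set, enumerates marker subsets, scores each regression by BIC = n log(RSS/n) + k log(n), and converts scores to approximate posterior weights via exp(-BIC/2); the effect size and marker count are arbitrary assumptions.

```python
import itertools
import numpy as np

def bic(y, X):
    # BIC = n log(RSS/n) + k log(n), with k counting fitted coefficients
    # (intercept included).
    n = len(y)
    cols = [np.ones(n)] + [X[:, j] for j in range(X.shape[1])]
    Xd = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return n * np.log(rss / n) + Xd.shape[1] * np.log(n)

rng = np.random.default_rng(1)
n, m = 200, 4
markers = rng.integers(0, 2, size=(n, m)).astype(float)
y = 2.0 * markers[:, 1] + rng.normal(size=n)  # only marker 1 affects the trait

scores = {s: bic(y, markers[:, list(s)])
          for r in range(m + 1) for s in itertools.combinations(range(m), r)}
best_bic = min(scores.values())
w = {s: np.exp(-(b - best_bic) / 2) for s, b in scores.items()}
total = sum(w.values())
best = max(w, key=w.get)
print(best, w[best] / total)  # the model containing marker 1 should dominate
```

The posterior probability of "nonzero model size" in the paper's sense would be one minus the weight of the empty subset here.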
12. Comparison of iterative methods and preconditioners for two-phase flow in porous media using exact and approximate Jacobians
Büsing, Henrik
2013-04-01
Two-phase flow in porous media occurs in various settings, such as the sequestration of CO2 in the subsurface, radioactive waste management, the flow of oil or gas in hydrocarbon reservoirs, or groundwater remediation. To model the sequestration of CO2, we consider a fully coupled formulation of the system of nonlinear, partial differential equations. For the solution of this system, we employ the Box method after Huber & Helmig (2000) for the space discretization and the fully implicit Euler method for the time discretization. After linearization with Newton's method, it remains to solve a linear system in every Newton step. We compare different iterative methods (BiCGStab, GMRES, AGMG, c.f., [Notay (2012)]) combined with different preconditioners (ILU0, ASM, Jacobi, and AMG as preconditioner) for the solution of these systems. The required Jacobians can be obtained elegantly with automatic differentiation (AD) [Griewank & Walther (2008)], a source code transformation providing exact derivatives. We compare the performance of the different iterative methods with their respective preconditioners for these linear systems. Furthermore, we analyze linear systems obtained by approximating the Jacobian with finite differences in terms of Newton steps per time step, steps of the iterative solvers and the overall solution time. Finally, we study the influence of heterogeneities in permeability and porosity on the performance of the iterative solvers and their robustness in this respect. References [Griewank & Walther(2008)] Griewank, A. & Walther, A., 2008. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, SIAM, Philadelphia, PA, 2nd edn. [Huber & Helmig(2000)] Huber, R. & Helmig, R., 2000. Node-centered finite volume discretizations for the numerical simulation of multiphase flow in heterogeneous porous media, Computational Geosciences, 4, 141-164. [Notay(2012)] Notay, Y., 2012. Aggregation-based algebraic multigrid for convection
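The Newton linearization step described above can be illustrated in miniature. The sketch below is a hedged toy, not the two-phase flow code: a finite-difference Jacobian (the cheap approximation the abstract compares against exact AD Jacobians) drives Newton's method on a small coupled nonlinear system standing in for the discretized equations.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    # Finite-difference Jacobian: easy to code but only approximate, unlike
    # the exact Jacobians that automatic differentiation provides.
    n = len(x)
    J = np.zeros((n, n))
    F0 = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - F0) / eps
    return J

def newton(F, x0, tol=1e-10, maxit=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(fd_jacobian(F, x), Fx)
    return x

# Hypothetical coupled nonlinear system (circle / hyperbola intersection).
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0])
root = newton(F, [2.0, 0.3])
print(root, np.linalg.norm(F(root)))
```

In the paper each Newton step instead yields a large sparse linear system, and the comparison is over which iterative solver and preconditioner handles that system most robustly.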
13. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-01
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
14. Frozen Gaussian approximation-based two-level methods for multi-frequency Schrödinger equation
Lorin, E.; Yang, X.
2016-10-01
In this paper, we develop two-level numerical methods for the time-dependent Schrödinger equation (TDSE) in the multi-frequency regime. This work is motivated by attosecond science (Corkum and Krausz, 2007), which refers to the interaction of short and intense laser pulses with quantum particles generating wide frequency spectrum light, and allowing for the coherent emission of attosecond pulses (1 attosecond = 10⁻¹⁸ s). The principle of the proposed methods consists in decomposing a wavefunction into a low/moderate frequency (quantum) contribution, and a high frequency contribution exhibiting a semi-classical behavior. Low/moderate frequencies are computed through the direct solution to the quantum TDSE on a coarse mesh, and the high frequency contribution is computed by frozen Gaussian approximation (Herman and Kluk, 1984). This paper is devoted to the derivation of consistent, accurate and efficient algorithms performing such a decomposition and the time evolution of the wavefunction in the multi-frequency regime. Numerical simulations are provided to illustrate the accuracy and efficiency of the derived algorithms.
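The "direct solution to the quantum TDSE on a coarse mesh" component above is commonly realized with a split-step Fourier propagator. A minimal sketch follows; the harmonic potential, Gaussian wavepacket, and atomic units are illustrative assumptions, and the frozen Gaussian high-frequency part of the paper's method is not reproduced here.

```python
import numpy as np

def split_step(psi, V, dx, dt, nsteps, hbar=1.0, m=1.0):
    # Standard split-step Fourier TDSE propagator: half step in the
    # potential, full kinetic step in Fourier space, half step back.
    n = len(psi)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    expV = np.exp(-0.5j * dt * V / hbar)
    expT = np.exp(-0.5j * dt * hbar * k**2 / m)  # = exp(-i dt hbar k^2 / 2m)
    for _ in range(nsteps):
        psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    return psi

n = 256
x = np.linspace(-20, 20, n, endpoint=False)
dx = x[1] - x[0]
psi0 = np.exp(-x**2) * np.exp(1j * 2 * x)    # moving Gaussian wavepacket
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)
V = 0.5 * x**2                               # harmonic trap (assumed)
psi = split_step(psi0, V, dx, 0.01, 100)
print(np.sum(np.abs(psi) ** 2) * dx)  # unitary evolution preserves the norm
```

Because every factor is a unit-modulus phase and the FFT pair is unitary, the norm is conserved to machine precision, one reason such grid propagators are a trusted reference for the low-frequency contribution.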
15. On the stability analysis of approximate factorization methods for 3D Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1993-01-01
The convergence characteristics of various approximate factorizations for the 3D Euler and Navier-Stokes equations are examined using the von-Neumann stability analysis method. Three upwind-difference based factorizations and several central-difference based factorizations are considered for the Euler equations. In the upwind factorizations both the flux-vector splitting methods of Steger and Warming and van Leer are considered. Analysis of the Navier-Stokes equations is performed only on the Beam and Warming central-difference scheme. The range of CFL numbers over which each factorization is stable is presented for one-, two-, and three-dimensional flow. Also presented for each factorization is the CFL number at which the maximum eigenvalue is minimized, for all Fourier components, as well as for the high frequency range only. The latter is useful for predicting the effectiveness of multigrid procedures with these schemes as smoothers. Further, local mode analysis is performed to test the suitability of using a uniform flow field in the stability analysis. Some inconsistencies in the results from previous analyses are resolved.
16. Mixing and transport in the Kelvin-Stuart cat eyes driven flow using the topological approximation method
Rodrigue, Stephen Michael
Transport rates for the Kelvin-Stuart Cat Eyes driven flow are calculated using the lobe transport theory of Rom-Kedar and Wiggins through application of the Topological Approximation Method (TAM) developed by Rom-Kedar. Numerical studies by Ottino (1989) and Tsega, Michaelides, and Eschenazi (2001) of the driven or perturbed flow indicated frequency dependence of the transport. One goal of the present research is to derive an analytical expression for the transport and to study its dependence upon the perturbation frequency ω. The Kelvin-Stuart Cat Eyes dynamical system consists of an infinite string of equivalent vortices exhibiting a 2π spatial periodicity in x, with an unperturbed streamfunction H(x, y) = ln(cosh y + A cos x) - ln(1 + A). The driven flow has perturbation terms of the form a sin(ωt) in both the x and y directions. Lobe dynamics transport theory states that transport occurs through the transfer of turnstile lobes, and that transport rates are equal to the area of the lobes transferred. Lobes may intersect, necessitating the calculation and removal of lobe intersection areas. The TAM requires the use of a Melnikov integral function, the zeroes of which locate the lobes, and a Whisker map (Chirikov 1979), which locates lobe intersection points. An analytical expression for the Melnikov integral function is derived for the Kelvin-Stuart Cat Eyes driven flow. Using the derived analytical Melnikov integral function, derived expressions for the periods of internal and external orbits as functions of H, and the Whisker map, the Topological Approximation Method is applied to the Kelvin-Stuart driven flow to calculate transport rates for a range of frequencies from ω = 1.21971 to ω = 3.27532 as the structure index L is varied from L = 2 to L = 10. Transport rates per iteration, and cumulative transport per iteration, are calculated for 100 iterations for both internal and external lobes. The transport rates exhibit strong frequency dependence in the frequency
17. Technical Note: Adaptation of an Animal-Model Method for Approximation of Reliabilities to a Sire-Maternal Grandsire Model
USDA-ARS's Scientific Manuscript database
The ACCF90 computer program, which approximates reliability for animal models, was modified to estimate reliabilities for sire-maternal grandsire (MGS) models. Accuracy of the approximation was tested on a calving-ease data set for 2,968 bulls for which the inverse of the coefficient matrix could be...
18. A handy approximate solution for a squeezing flow between two infinite plates by using of Laplace transform-homotopy perturbation method.
PubMed
Filobello-Nino, Uriel; Vazquez-Leal, Hector; Cervantes-Perez, Juan; Benhammouda, Brahim; Perez-Sesma, Agustin; Hernandez-Martinez, Luis; Jimenez-Fernandez, Victor Manuel; Herrera-May, Agustin Leobardo; Pereyra-Diaz, Domitilo; Marin-Hernandez, Antonio; Huerta Chua, Jesus
2014-01-01
This article proposes the Laplace Transform Homotopy Perturbation Method (LT-HPM) to find an approximate solution for the problem of an axisymmetric Newtonian fluid squeezed between two large parallel plates. A comparison of figures for the approximate and exact solutions shows that the proposed solutions, besides being handy, are highly accurate, and therefore that LT-HPM is extremely efficient.
19. Approximation of periodic functions in the classes H_q^Ω by linear methods
SciTech Connect
Pustovoitov, Nikolai N
2012-01-31
The following result is proved: if approximations in the norm of L_∞ (of H_1) of functions in the classes H_∞^Ω (in H_1^Ω, respectively) by some linear operators have the same order of magnitude as the best approximations, then the set of norms of these operators is unbounded. Also Bernstein's and the Jackson-Nikol'skii inequalities are proved for trigonometric polynomials with spectra in the sets Q(N) (in Γ(N, Ω)). Bibliography: 15 titles.
20. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-01
Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
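At the core of most stochastic approximation algorithms is the classic Robbins-Monro iteration. The sketch below is a hedged toy, not any algorithm from the survey: the regression function f(x) = x - 2, the noise level, and the 1/k step schedule are invented for illustration.

```python
import random

def robbins_monro(noisy_f, x0, n_steps=20000, a=1.0):
    # Textbook Robbins-Monro root finding: x_{k+1} = x_k - (a/k) * Y_k,
    # where Y_k is a noisy measurement of f(x_k).  The 1/k step sizes
    # satisfy the usual conditions: sum a_k = inf, sum a_k^2 < inf.
    x = x0
    for k in range(1, n_steps + 1):
        x -= (a / k) * noisy_f(x)
    return x

random.seed(0)
# Hypothetical regression function f(x) = x - 2 observed with Gaussian
# noise; the iteration should converge to the root x* = 2.
estimate = robbins_monro(lambda x: (x - 2.0) + random.gauss(0.0, 0.5), x0=0.0)
```

The same recursion, with the noisy observation replaced by a stochastic gradient, is the skeleton of the distributed and networked algorithms the survey covers.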
1. Stochastic approximation methods-Powerful tools for simulation and optimization: A survey of some recent work on multi-agent systems and cyber-physical systems
SciTech Connect
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-10
Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
2. Approximate solution of two-term fractional-order diffusion, wave-diffusion, and telegraph models arising in mathematical physics using optimal homotopy asymptotic method
Sarwar, S.; Rashidi, M. M.
2016-07-01
This paper deals with the investigation of analytical approximate solutions for two-term fractional-order diffusion, wave-diffusion, and telegraph equations. The fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], (1,2), and [1,2], respectively. In this paper, we extend the optimal homotopy asymptotic method (OHAM) to two-term fractional-order wave-diffusion equations. A highly accurate approximate solution is obtained in series form using this extended method. The approximate solution obtained by OHAM is compared with the exact solution. It is observed that OHAM is a powerful and convergent method for the solutions of nonlinear fractional-order time-dependent partial differential problems. The numerical results show that the applied method is explicit, effective, and easy to use for handling more general fractional-order wave-diffusion, diffusion, and telegraph problems.
3. Summary of Time Period-Based and Other Approximation Methods for Determining the Capacity Value of Wind and Solar in the United States: September 2010 - February 2012
SciTech Connect
Rogers, J.; Porter, K.
2012-03-01
This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak - sometimes over a period of months or the average of multiple years.
4. Approximation of the Lévy Feller advection dispersion process by random walk and finite difference method
Liu, Q.; Liu, F.; Turner, I.; Anh, V.
2007-03-01
In this paper we present a random walk model for approximating a Lévy-Feller advection-dispersion process, governed by the Lévy-Feller advection-dispersion differential equation (LFADE). We show that the random walk model converges to LFADE by use of a properly scaled transition to vanishing space and time steps. We propose an explicit finite difference approximation (EFDA) for LFADE, resulting from the Grünwald-Letnikov discretization of fractional derivatives. As a result of the interpretation of the random walk model, the stability and convergence of EFDA for LFADE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
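The Grünwald-Letnikov discretization mentioned above is compact enough to sketch. This is a hedged illustration, not the paper's full advection-dispersion scheme: the test function f(t) = t, the order α = 0.5, and the step size are chosen only for the example. The weights follow the stable recurrence w_k = w_{k-1}(1 - (α+1)/k).

```python
import math

def gl_weights(alpha, n):
    # Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed
    # with the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k).
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    # First-order GL approximation of the fractional derivative of
    # order alpha at x (lower terminal 0):
    #   D^alpha f(x) ~ h^(-alpha) * sum_k w_k * f(x - k*h)
    n = int(round(x / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(x - k * h) for k in range(n + 1)) / h**alpha

# Known closed form to check against: D^0.5 of f(t) = t is
# t^0.5 / Gamma(1.5); evaluate both at x = 1.
approx = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

The same weight recurrence is what an explicit finite difference approximation of the kind the paper analyzes would use at every grid point.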
5. Approximate method for calculating transonic flow about lifting wing-body configurations: Computer program and user's manual
NASA Technical Reports Server (NTRS)
Barnwell, R. W.; Davis, R. M.
1975-01-01
A user's manual is presented for a computer program which calculates inviscid flow about lifting configurations in the free-stream Mach-number range from zero to low supersonic. Angles of attack of the order of the configuration thickness-length ratio and less can be calculated. An approximate formulation was used which accounts for shock waves, leading-edge separation and wind-tunnel wall effects.
6. A new look at the statistical assessment of approximate and rigorous methods for the estimation of stabilized formation temperatures in geothermal and petroleum wells
Espinoza-Ojeda, O. M.; Santoyo, E.; Andaverde, J.
2011-06-01
Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during a borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method using a β ratio parameter to evaluate the similarity of both solutions, and (iv) the evaluation of accuracy in each method using statistical tests of significance, and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints where the approximate and rigorous methods could provide consistent SFT estimates.
7. Two coupled particle-finite volume methods using Delaunay-Voronoi meshes for the approximation of Vlasov-Poisson and Vlasov-Maxwell equations
SciTech Connect
Hermeline, F.
1993-05-01
This paper deals with the approximation of Vlasov-Poisson and Vlasov-Maxwell equations. We present two coupled particle-finite volume methods which use the properties of Delaunay-Voronoi meshes. These methods are applied to benchmark calculations and engineering problems such as simulation of electron injector devices. 42 refs., 13 figs.
8. Existence of short-time approximations of any polynomial order for the computation of density matrices by path integral methods
Predescu, Cristian
2004-05-01
In this paper I provide significant mathematical evidence in support of the existence of direct short-time approximations of any polynomial order for the computation of density matrices of physical systems described by arbitrarily smooth and bounded from below potentials. While for Theorem 2, which is “experimental,” I only provide a “physicist’s” proof, I believe the present development is mathematically sound. As a verification, I explicitly construct two short-time approximations to the density matrix having convergence orders 3 and 4, respectively. Furthermore, in Appendix B, I derive the convergence constant for the trapezoidal Trotter path integral technique. The convergence orders and constants are then verified by numerical simulations. While the two short-time approximations constructed are of sure interest to physicists and chemists involved in Monte Carlo path integral simulations, the present paper is also aimed at the mathematical community, who might find the results interesting and worth exploring. I conclude the paper by discussing the implications of the present findings with respect to the solvability of the dynamical sign problem appearing in real-time Feynman path integral simulations.
9. Identifying Early Paleozoic tectonic relations in a region affected by post-Taconian transcurrent faulting, an example from the PA-DE Piedmont
SciTech Connect
Alcock, J. (Dept. of Environmental Science); Wagner, M.E. (Geology); Srogi, L.A. (Dept. of Geology and Astronomy)
1993-03-01
Post-Taconian transcurrent faulting in the Appalachian Piedmont presents a significant problem to workers attempting to reconstruct the Early Paleozoic tectonic history. One solution to the problem is to identify blocks that lie between zones of transcurrent faulting and that retain the Early Paleozoic arrangement of litho-tectonic units. The authors propose that a comparison of metamorphic histories of different units can be used to recognize blocks of this type. The Wilmington Complex (WC) arc terrane, the pre-Taconian Laurentian margin rocks (LM) exposed in basement-cored massifs, and the Wissahickon Group metapelites (WS) that lie between them are three litho-tectonic units in the PA-DE Piedmont that comprise a block assembled in the Early Paleozoic. Evidence supporting this interpretation includes: (1) Metamorphic and lithologic differences across the WC-WS contact and detailed geologic mapping of the contact that suggest thrusting of the WC onto the WS; (2) A metamorphic gradient in the WS with highest grade, including spinel-cordierite migmatites, adjacent to the WC indicating that peak metamorphism of the WS resulted from heating by the WC; (3) A metamorphic discontinuity at the WS-LM contact, evidence for emplacement of the WS onto the LM after WS peak metamorphism; (4) A correlation of mineral assemblage in the Cockeysville Marble of the LM with distance from the WS indicating that peak metamorphism of the LM occurred after emplacement of the WS; and (5) Early Paleozoic lower intercept zircon ages for the LM that are interpreted to date Taconian regional metamorphism. Analysis of metamorphism and its timing relative to thrusting suggest that the WS was associated with the WC before the WS was emplaced onto the LM during the Taconian. It follows that these units form a block that has not been significantly disrupted by later transcurrent shear.
10. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
11. An approximate but efficient method to calculate free energy trends by computer simulation: Application to dihydrofolate reductase-inhibitor complexes
Gerber, Paul R.; Mark, Alan E.; van Gunsteren, Wilfred F.
1993-06-01
Derivatives of free energy differences have been calculated by molecular dynamics techniques. The systems under study were ternary complexes of Trimethoprim (TMP) with dihydrofolate reductases of E. coli and chicken liver, containing the cofactor NADPH. Derivatives are taken with respect to modification of TMP, with emphasis on altering the 3-, 4- and 5-substituents of the phenyl ring. A linear approximation allows the encompassing of a whole set of modifications in a single simulation, as opposed to a full perturbation calculation, which requires a separate simulation for each modification. In the case considered here, the proposed technique requires a factor of 1000 less computing effort than a full free energy perturbation calculation. For the linear approximation to yield a significant result, one has to find ways of choosing the perturbation evolution, such that the initial trend mirrors the full calculation. The generation of new atoms requires a careful treatment of the singular terms in the non-bonded interaction. The result can be represented by maps of the changed molecule, which indicate whether complex formation is favoured under movement of partial charges and change in atom polarizabilities. Comparison with experimental measurements of inhibition constants reveals fair agreement in the range of values covered. However, detailed comparison fails to show a significant correlation. Possible reasons for the most pronounced deviations are given.
12. Comparison of methods for calculating Franck-Condon factors beyond the harmonic approximation: how important are Duschinsky rotations?
Meier, Patrick; Rauhut, Guntram
2015-12-01
Three different approaches for calculating Franck-Condon factors beyond the harmonic approximation are compared and discussed in detail. Duschinsky effects are accounted for either by a rotation of the initial or final wavefunctions - which are obtained from state-specific configuration-selective vibrational configuration interaction calculations - or by a rotation of the underlying multi-dimensional potential energy surfaces being determined from explicitly correlated coupled-cluster approaches. An analysis of the Duschinsky effects in dependence on the rotational angles and the anisotropy of the wavefunction is provided. Benchmark calculations for the photoelectron spectra of ClO2, HS-2 and ZnOH- are presented. An application of the favoured approach for calculating Franck-Condon factors to the oxidation of Zn(H2O)+ and Zn2(H2O)+ demonstrates its applicability to systems with more than three atoms.
13. A B-Spline-Based Colocation Method to Approximate the Solutions to the Equations of Fluid Dynamics
SciTech Connect
M. D. Landon; R. W. Johnson
1999-07-01
The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve complex curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.
14. A B-Spline-Based Colocation Method to Approximate the Solutions to the Equations of Fluid Dynamics
SciTech Connect
Johnson, Richard Wayne; Landon, Mark Dee
1999-07-01
The potential of a B-spline collocation method for numerically solving the equations of fluid dynamics is discussed. It is known that B-splines can resolve curves with drastically fewer data than can their standard shape function counterparts. This feature promises to allow much faster numerical simulations of fluid flow than standard finite volume/finite element methods without sacrificing accuracy. An example channel flow problem is solved using the method.
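The Cox-de Boor recursion that underlies B-spline evaluation is short enough to sketch. This is an illustrative sketch, not the authors' collocation solver: the quadratic degree and the knot vector are invented for the example. A standard sanity check is that the basis functions form a partition of unity at any parameter inside the valid range.

```python
def bspline_basis(i, p, knots, t):
    # Cox-de Boor recursion for the i-th B-spline basis of degree p.
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, t))
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, t))
    return left + right

# Quadratic (degree-2) basis on a clamped uniform knot vector.
knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 4.0, 4.0]
degree = 2
n_basis = len(knots) - degree - 1  # 6 basis functions
# Partition of unity: the basis values sum to 1 at any interior point.
total = sum(bspline_basis(i, degree, knots, 1.5) for i in range(n_basis))
```

In a collocation method these basis values (and their derivatives) would be assembled into the system matrix at each collocation point.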
15. An Approximate Method of Calculation of Relative Humidity Required to Prevent Frosting on Inside of Aircraft Pressure Cabin Windows, Special Report
NASA Technical Reports Server (NTRS)
Jones, Alun R.
1940-01-01
This report was prepared in response to a request for information from an aircraft company. A typical example was selected for the presentation of an approximate method of calculating the relative humidity required to prevent frosting on the inside of a plastic window in a pressure-type cabin on a high-speed airplane. The results of the study are reviewed.
16. Green Ampt approximations
Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.
2005-10-01
The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W_{-1} function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W_{-1} function and vice versa. An infinite family of asymptotic expansions to W_{-1} is presented. Although these expansions do not converge near the branch point of the W function (which corresponds to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W_{-1} that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10^-5 %. This error is orders of magnitude lower than that of any existing analytical approximation.
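The W_{-1} connection can be made concrete. The sketch below is illustrative only: it uses a plain Newton iteration seeded with the leading asymptotic expansion, not the paper's branch-point-exact approximations, and the parameter values K, S, and t are invented. It recovers cumulative infiltration F(t) from the implicit Green-Ampt relation K·t = F - S·ln(1 + F/S), with S = ψΔθ.

```python
import math

def lambert_w_minus1(x, tol=1e-12):
    # Newton iteration for the W_{-1} branch of w * exp(w) = x, valid
    # for x in (-1/e, 0); seeded with the leading asymptotic expansion
    # W_{-1}(x) ~ ln(-x) - ln(-ln(-x)).
    w = math.log(-x) - math.log(-math.log(-x))
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def green_ampt_cumulative(K, psi_dtheta, t):
    # Closed-form Green-Ampt cumulative infiltration F(t) obtained by
    # solving K*t = F - S*ln(1 + F/S) with S = psi_dtheta via W_{-1}:
    #   F = S * (-W_{-1}(-exp(-(1 + K*t/S))) - 1)
    S = psi_dtheta
    arg = -math.exp(-(1.0 + K * t / S))
    return S * (-lambert_w_minus1(arg) - 1.0)

# Invented parameters; verify F satisfies the implicit equation.
K, S, t = 1.0, 5.0, 2.0
F = green_ampt_cumulative(K, S, t)
residual = F - S * math.log(1.0 + F / S) - K * t
```

Any of the paper's approximations to W_{-1} could be dropped in for the Newton solver to trade accuracy for speed.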
17. The exact solutions and approximate analytic solutions of the (2 + 1)-dimensional KP equation based on symmetry method.
PubMed
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtained the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on the Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu differential characteristic set algorithm and reduced it. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters are taken as special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.
18. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology, known as Approximate Bayesian Computation (ABC), is applied to the problem of the atmospheric contamination source identification. The algorithm input data are on-line arriving concentrations of the released substance registered by the distributed sensors network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimation of probabilistic distributions of atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using the data from Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: contamination source starting position (x,y), the direction of the motion of the source (d), its velocity (v), release rate (q), start time of release (ts) and its duration (td). The online-arriving new concentrations dynamically update the probability distributions of search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) Model is used as the forward model to predict the concentrations at the sensors locations.
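The rejection flavour of ABC that underlies such samplers is easy to sketch. The toy below is an illustrative sketch only: the one-parameter "release rate" model, the uniform prior, and the tolerance are invented, and it is plain rejection ABC rather than the paper's Sequential ABC coupled to SCIPUFF. It keeps prior draws whose simulated data fall within a tolerance of the observation.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_trials=20000):
    # Minimal ABC rejection sampler: draw theta from the prior, simulate
    # data, and accept theta when the simulation is close to the data.
    accepted = []
    for _ in range(n_trials):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
# Toy release-rate inference: the datum is the mean of 10 noisy sensor
# readings generated with true rate q = 3; the prior is Uniform(0, 10).
true_rate = 3.0
observed = sum(random.gauss(true_rate, 1.0) for _ in range(10)) / 10

def simulate(q):
    return sum(random.gauss(q, 1.0) for _ in range(10)) / 10

posterior = abc_rejection(observed, simulate,
                          prior_sample=lambda: random.uniform(0.0, 10.0),
                          distance=lambda a, b: abs(a - b), eps=0.2)
posterior_mean = sum(posterior) / len(posterior)
```

Sequential ABC replaces the fixed tolerance with a shrinking schedule and reweights the accepted particles between rounds, which is what lets the distributions sharpen as new concentrations arrive.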
19. Hodgkin-Huxley revisited: reparametrization and identifiability analysis of the classic action potential model with approximate Bayesian methods.
PubMed
Daly, Aidan C; Gavaghan, David J; Holmes, Chris; Cooper, Jonathan
2015-12-01
As cardiac cell models become increasingly complex, a correspondingly complex 'genealogy' of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggests that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models.
20. Hodgkin–Huxley revisited: reparametrization and identifiability analysis of the classic action potential model with approximate Bayesian methods
PubMed Central
Daly, Aidan C.; Holmes, Chris
2015-01-01
As cardiac cell models become increasingly complex, a correspondingly complex ‘genealogy’ of inherited parameter values has also emerged. The result has been the loss of a direct link between model parameters and experimental data, limiting both reproducibility and the ability to re-fit to new data. We examine the ability of approximate Bayesian computation (ABC) to infer parameter distributions in the seminal action potential model of Hodgkin and Huxley, for which an immediate and documented connection to experimental results exists. The ability of ABC to produce tight posteriors around the reported values for the gating rates of sodium and potassium ion channels validates the precision of this early work, while the highly variable posteriors around certain voltage dependency parameters suggests that voltage clamp experiments alone are insufficient to constrain the full model. Despite this, Hodgkin and Huxley's estimates are shown to be competitive with those produced by ABC, and the variable behaviour of posterior parametrized models under complex voltage protocols suggests that with additional data the model could be fully constrained. This work will provide the starting point for a full identifiability analysis of commonly used cardiac models, as well as a template for informative, data-driven parametrization of newly proposed models. PMID:27019736
1. A Finite-Difference Numerical Method for Onsager's Pancake Approximation for Fluid Flow in a Gas Centrifuge
SciTech Connect
2007-11-12
Gas centrifuges exhibit very complex flows. Within the centrifuge there is a rarefied region, a transition region, and a region with an extreme density gradient. The flow moves at hypersonic speeds and shock waves are present. However, the flow is subsonic in the axisymmetric plane. The analysis may be simplified by treating the flow as a perturbation of wheel flow. Wheel flow implies that the fluid is moving as a solid body. With the very large pressure gradient, the majority of the fluid is located very close to the rotor wall and moves at an azimuthal velocity proportional to its distance from the rotor wall; there is no slipping in the azimuthal plane. The fluid can be modeled as incompressible and subsonic in the axisymmetric plane. By treating the centrifuge as long, end effects can be appropriately modeled without performing a detailed boundary layer analysis. Onsager's pancake approximation is used to construct a simulation to model fluid flow in a gas centrifuge. The governing 6th order partial differential equation is broken down into an equivalent coupled system of three equations and then solved numerically. In addition to a discussion on the baseline solution, known problems and future work possibilities are presented.
2. Computing travel time when the exact address is unknown: a comparison of point and polygon ZIP code approximation methods.
PubMed
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
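The geometric versus population centroid distinction is easy to illustrate. This is a toy sketch with invented coordinates and populations, not the study's New Hampshire/Arizona data: when people cluster in one corner of a ZIP code area, the two origins can be far apart.

```python
def geometric_centroid(points):
    # Mean of the vertex coordinates (a crude stand-in for the polygon
    # centroid used by geometry-based methods).
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

def population_centroid(points, weights):
    # Population-weighted centroid: each location counts in proportion
    # to how many people live there.
    w = sum(weights)
    return (sum(x * wi for (x, _), wi in zip(points, weights)) / w,
            sum(y * wi for (_, y), wi in zip(points, weights)) / w)

# A ZIP-code-like square whose population sits almost entirely in one
# corner town (all numbers invented).
locations = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
population = [970, 10, 10, 10]
geo = geometric_centroid(locations)
pop = population_centroid(locations, population)
```

Routing from `geo` rather than `pop` would here misplace the origin by several kilometres, which is the kind of error the study found to be largest in rural areas.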
3. Analysis of field-angle dependent specific heat in unconventional superconductors: A comparison between Doppler-shift method and Kramer-Pesch approximation
Hayashi, Nobuhiko; Nagai, Yuki; Higashi, Yoichi
2010-12-01
We theoretically discuss the magnetic-field-angle dependence of the zero-energy density of states (ZEDOS) in superconductors. Point-node and line-node superconducting gaps on spherical and cylindrical Fermi surfaces are considered. The Doppler-shift (DS) method and the Kramer-Pesch approximation (KPA) are used to calculate the ZEDOS. Numerical results show how the predictions of the DS method are corrected by the KPA.
4. The boundary-quality penalty: a quantitative method for approximating species responses to fragmentation in reserve selection.
PubMed
Moilanen, Atte; Wintle, Brendan A
2007-04-01
Aggregation of reserve networks is generally considered desirable for biological and economic reasons: aggregation reduces negative edge effects and facilitates metapopulation dynamics, which plausibly leads to improved persistence of species. Economically, aggregated networks are less expensive to manage than fragmented ones. Therefore, many reserve-design methods use qualitative heuristics, such as distance-based criteria or boundary-length penalties to induce reserve aggregation. We devised a quantitative method that introduces aggregation into reserve networks. We call the method the boundary-quality penalty (BQP) because the biological value of a land unit (grid cell) is penalized when the unit occurs close enough to the edge of a reserve such that a fragmentation or edge effect would reduce population densities in the reserved cell. The BQP can be estimated for any habitat model that includes neighborhood (connectivity) effects, and it can be introduced into reserve selection software in a standardized manner. We used the BQP in a reserve-design case study of the Hunter Valley of southeastern Australia. The BQP resulted in a more highly aggregated reserve network structure. The degree of aggregation required was specified by observed (albeit modeled) biological responses to fragmentation. Estimating the effects of fragmentation on individual species and incorporating estimated effects in the objective function of reserve-selection algorithms is a coherent and defensible way to select aggregated reserves. We implemented the BQP in the context of the Zonation method, but it could as well be implemented into any other spatially explicit reserve-planning framework.
5. Covariant approximation averaging
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
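The bias-correction idea behind AMA can be illustrated with a toy estimator: average a cheap, biased observable over every sample, then correct it with exact evaluations on a small subset. This is a hedged sketch, not lattice QCD: the "configurations", the constant bias, and the subset size are invented, and real AMA relies on covariant symmetry (and a relaxed-CG approximation) to make the correction unbiased.

```python
import random

def ama_estimate(exact, approx, configs, n_exact):
    # AMA-style improved estimator:
    #   O_imp = <O_approx over all configs>
    #           + <O_exact - O_approx over a small subset>
    cheap = sum(approx(c) for c in configs) / len(configs)
    subset = configs[:n_exact]
    correction = sum(exact(c) - approx(c) for c in subset) / n_exact
    return cheap + correction

random.seed(2)
# Toy ensemble: exact observable is c itself; the cheap approximation
# carries a constant offset of 0.3 (so the correction is exact here).
configs = [random.gauss(5.0, 1.0) for _ in range(5000)]
est = ama_estimate(lambda c: c, lambda c: c + 0.3, configs, n_exact=50)
naive_biased = sum(c + 0.3 for c in configs) / len(configs)
```

In practice the correction term fluctuates, so the gain comes from the approximation being strongly correlated with the exact observable, which keeps the correction's variance small.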
6. Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, the authors outline how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
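The expectation view of the integral that the article describes — the identity that the integral of f over [a, b] equals (b − a) times the mean of f at a uniform random point — can be sketched in a few lines (Python here rather than the article's Visual Basic):

```python
import random

random.seed(1)

def mc_integral(f, a, b, n=100_000):
    """Estimate the definite integral of f over [a, b] as (b - a) * E[f(U)],
    where U is uniform on [a, b] -- the probability-expectation view of the
    integral described in the article."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

est = mc_integral(lambda x: x * x, 0.0, 1.0)   # exact value is 1/3
```

The error shrinks like 1/sqrt(n), so this is rarely competitive with quadrature in one dimension, but the same code works unchanged for integrands with no closed-form antiderivative.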
8. An experimental and analytical method for approximate determination of the tilt rotor research aircraft rotor/wing download
NASA Technical Reports Server (NTRS)
Jordon, D. E.; Patterson, W.; Sandlin, D. R.
1985-01-01
The XV-15 Tilt Rotor Research Aircraft download phenomenon was analyzed. This phenomenon is a direct result of the two rotor wakes impinging on the wing upper surface when the aircraft is in the hover configuration. For this study the analysis proceeded along two lines. First was a method whereby results from actual hover tests of the XV-15 aircraft were combined with drag coefficient results from wind tunnel tests of a wing that was representative of the aircraft wing. Second, an analytical method was used that modeled the airflow caused by the two rotors. Formulas were developed in such a way that a computer program could be used to calculate the axial velocities; these velocities were then used in conjunction with the aforementioned wind tunnel drag coefficient results to produce download values. An attempt was made to validate the analytical results by modeling a model rotor system for which direct download values were determined.
9. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs
SciTech Connect
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
10. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs
SciTech Connect
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, both in spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with other approaches on elliptic stochastic partial differential equations with 1, 14 and 40 random dimensions.
11. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of the efficiency and accuracy of these methods.
12. Structure of Bergman-type W-TiZrNi approximants to quasicrystal, analyzed by lattice inversion method
Huang, H.; Meng, D. Q.; Lai, X. C.; Liu, T. W.; Long, Y.; Hu, Q. M.
2014-08-01
The combined interatomic pair potentials of TiZrNi, including Morse and Inversion Gaussian, are successfully built by the lattice inversion method. Some experimental controversies on atomic occupancies of sites 6-8 in W-TiZrNi are analyzed and settled with these inverted potentials. According to the characteristics of composition and site preference occupancy of W-TiZrNi, two stable structural models of W-TiZrNi are proposed and the possibilities are partly confirmed by experimental data. The stabilities of W-TiZrNi mostly result from the contribution of Zr atoms to the phonon densities of states in lower frequencies.
13. Use of Lagrange Multipliers to Provide an Approximate Method for the Optimisation of a Shield Radius and Contents
Warner, Paul
2017-09-01
For any appreciable radiation source, such as a nuclear reactor core or radiation physics accelerator, there will be the safety requirement to shield operators from the effects of the radiation from the source. Both the size and weight of the shield need to be minimised to reduce costs (and to increase the space available for the maintenance envelope on a plant). This needs to be balanced against legal radiation dose safety limits and the requirement to reduce the dose to operators As Low As Reasonably Practicable (ALARP). This paper describes a method that can be used, early in a shield design, to scope the design and provide a practical estimation of the size of the shield by optimising the shield internals. In particular, a theoretical model representative of a small reactor is used to demonstrate that the primary shielding radius, thickness of the primary shielding inner wall and the thicknesses of two steel inner walls, can be set using the Lagrange multiplier method with a constraint on the total flux on the outside of the shielding. The results from the optimisation are presented and an RZ finite element transport theory calculation is used to demonstrate that, using the optimised geometry, the constraint is achieved.
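As a hedged illustration of the general technique (a textbook analogue, not the paper's shield model), the classic constrained problem "minimise the surface area of a cylinder at fixed volume" shows the pattern: the constraint is eliminated by substitution, a one-dimensional search finds the optimum, and the Lagrange stationarity condition predicts the relation h = 2r that the numerical answer must satisfy:

```python
import math

# Fixed enclosed volume plays the role of the constraint (analogous to the
# total-flux constraint on the outside of the shielding); radius and height
# are the free design variables.
V = 1000.0

def surface_area(r):
    # Eliminate the constraint pi*r^2*h = V by substitution, leaving a
    # one-dimensional minimisation of the closed-cylinder surface area.
    h = V / (math.pi * r ** 2)
    return 2.0 * math.pi * r ** 2 + 2.0 * math.pi * r * h

# Golden-section search for the minimiser (surface_area is unimodal on r > 0).
lo, hi = 0.1, 20.0
inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
for _ in range(200):
    a = hi - inv_phi * (hi - lo)
    b = lo + inv_phi * (hi - lo)
    if surface_area(a) < surface_area(b):
        hi = b
    else:
        lo = a

r_opt = 0.5 * (lo + hi)
h_opt = V / (math.pi * r_opt ** 2)
# Stationarity of the Lagrangian L = A - lam*(pi*r^2*h - V) predicts h = 2r.
```

In the shielding problem the same stationarity conditions couple the wall thicknesses through a single multiplier on the flux constraint, which is what makes the early-design scoping tractable.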
14. Modified method of simplest equation: Powerful tool for obtaining exact and approximate traveling-wave solutions of nonlinear PDEs
Vitanov, Nikolay K.
2011-03-01
We discuss the class of equations ∑_{i,j=0}^{m} A_{ij}(u) (∂^i u/∂t^i)(∂^j u/∂t^j) + ∑_{k,l=0}^{n} B_{kl}(u) (∂^k u/∂x^k)(∂^l u/∂x^l) = C(u), where A_{ij}(u), B_{kl}(u) and C(u) are functions of u(x, t) as follows: (i) A_{ij}, B_{kl} and C are polynomials of u; or (ii) A_{ij}, B_{kl} and C can be reduced to polynomials of u by means of Taylor series for small values of u. For these two cases the above-mentioned class of equations consists of nonlinear PDEs with polynomial nonlinearities. We show that the modified method of simplest equation is a powerful tool for obtaining exact traveling-wave solutions of this class of equations. The balance equations for the sub-class of traveling-wave solutions of the investigated class of equations are obtained. We illustrate the method by obtaining exact traveling-wave solutions (i) of the Swift-Hohenberg equation and (ii) of the generalized Rayleigh equation for the cases when the extended tanh-equation or the equations of Bernoulli and Riccati are used as simplest equations.
15. Numerical approximation of higher-order time-fractional telegraph equation by using a combination of a geometric approach and method of line
Hashemi, M. S.; Baleanu, D.
2016-07-01
We propose a simple and accurate numerical scheme for solving the time fractional telegraph (TFT) equation with a Caputo-type fractional derivative. A fictitious coordinate ϑ is imposed onto the problem in order to transform the dependent variable u(x, t) into a new variable with an extra dimension. In the new space with the added fictitious dimension, a combination of the method of lines and the group preserving scheme (GPS) is proposed to find the approximate solutions. This method preserves the geometric structure of the problem. The power and accuracy of this method have been illustrated through some examples of the TFT equation.
16. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods
Chen, Peng; Quarteroni, Alfio
2015-10-01
In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of the "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDEs), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.
17. Description of spin–orbit coupling in excited states with two-component methods based on approximate coupled-cluster theory
SciTech Connect
Krause, Katharina; Klopper, Wim
2015-03-14
A generalization of the approximated coupled-cluster singles and doubles method and the algebraic diagrammatic construction scheme up to second order to two-component spinors obtained from a relativistic Hartree–Fock calculation is reported. Computational results for zero-field splittings of atoms and monoatomic cations, triplet lifetimes of two organic molecules, and the spin-forbidden part of the UV/Vis absorption spectrum of tris(ethylenediamine)cobalt(III) are presented.
18. Performance of a pen-type laser fluorescence device and conventional methods in detecting approximal caries lesions in primary teeth--in vivo study.
PubMed
Novaes, T F; Matos, R; Braga, M M; Imparato, J C P; Raggio, D P; Mendes, F M
2009-01-01
This in vivo study aimed to compare the performance of different methods of approximal caries detection in primary molars. Fifty children (aged 5-12 years) were selected, and 2 examiners evaluated 621 approximal surfaces of primary molars using: (a) visual inspection, (b) the radiographic method and (c) a pen-type laser fluorescence device (LFpen). As the reference standard method, the teeth were separated using orthodontic rubbers for 7 days, and the surfaces were evaluated by 2 examiners for the presence of white spots or cavitations. The area under the receiver-operating characteristics curve (A(z)) as well as sensitivity, specificity and accuracy (percentage of correct diagnosis) were calculated and compared with the McNemar test at both thresholds. The interexaminer reproducibility was calculated using the intraclass correlation coefficient (ICC-absolute values) and the kappa test (dichotomizing for both thresholds). The ICC value of the reference standard procedure was 0.94. At the white-spot threshold, none of the methods tested presented good performance (sensitivity: visual 0.20-0.21; radiographic 0.16-0.23; LFpen 0.16; specificity: visual 0.95; radiographic 0.99-1.00; LFpen 0.94-0.96). At the cavitation threshold, both the LFpen and radiographic methods demonstrated higher sensitivity (0.55-0.65 and 0.65-0.70, respectively) and A(z) (0.92 and 0.88-0.89, respectively) than visual inspection sensitivity (0.30) and A(z) (0.69-0.76). All methods presented high specificities (around 0.99) and similar ICCs, but the kappa value for LFpen at the white-spot threshold was lower (0.44). In conclusion, both the LFpen and radiographic methods present similar performance in detecting the presence of cavitations on approximal surfaces of primary molars. Copyright 2009 S. Karger AG, Basel.
19. Selection and drift in subdivided populations: a straightforward method for deriving diffusion approximations and applications involving dominance, selfing and local extinctions.
PubMed Central
Roze, Denis; Rousset, François
2003-01-01
Population structure affects the relative influence of selection and drift on the change in allele frequencies. Several models have been proposed recently, using diffusion approximations to calculate fixation probabilities, fixation times, and equilibrium properties of subdivided populations. We propose here a simple method to construct diffusion approximations in structured populations; it relies on general expressions for the expectation and variance in allele frequency change over one generation, in terms of partial derivatives of a "fitness function" and probabilities of genetic identity evaluated in a neutral model. In the limit of a very large number of demes, these probabilities can be expressed as functions of average allele frequencies in the metapopulation, provided that coalescence occurs on two different timescales, which is the case in the island model. We then use the method to derive expressions for the probability of fixation of new mutations, as a function of their dominance coefficient, the rate of partial selfing, and the rate of deme extinction. We obtain more precise approximations than those derived by recent work, in particular (but not only) when deme sizes are small. Comparisons with simulations show that the method gives good results as long as migration is stronger than selection. PMID:14704194
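For readers unfamiliar with the diffusion machinery, the simplest panmictic case (Kimura's classical result, not the structured-population expressions derived in the paper) already shows the kind of quantity being approximated — a fixation probability obtained from the expectation and variance of the allele-frequency change:

```python
import math

def fixation_probability(p, N, s):
    """Kimura's diffusion approximation for the fixation probability of an
    allele at initial frequency p, with genic selection coefficient s, in a
    panmictic population of effective size N (no dominance, no structure)."""
    if abs(s) < 1e-12:
        return p                      # neutral limit: u(p) = p
    return (1.0 - math.exp(-4.0 * N * s * p)) / (1.0 - math.exp(-4.0 * N * s))

N = 500
p0 = 1.0 / (2 * N)                    # a single new mutant among 2N gene copies
u_neutral = fixation_probability(p0, N, 0.0)     # exactly p0
u_selected = fixation_probability(p0, N, 0.01)   # ~2s when 4Ns >> 1
```

The paper's method generalises the ingredients of this formula — the first two moments of the frequency change — to subdivided populations via a fitness function and neutral probabilities of genetic identity.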
20. A minimalistic approach to static and dynamic electron correlations: Amending generalized valence bond method with extended random phase approximation correlation correction
Chatterjee, Koushik; Pastorczak, Ewa; Jawulski, Konrad; Pernal, Katarzyna
2016-06-01
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
1. Convergence of the standard RLS method and UDUT factorisation of covariance matrix for solving the algebraic Riccati equation of the DLQR via heuristic approximate dynamic programming
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is to present a proposal to solve, via UDUT factorisation, the convergence and numerical stability problems that are related to the covariance matrix ill-conditioning of the recursive least squares (RLS) approach for online approximations of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system, as well as the algebra of the Kronecker product, assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistence and polarisation when associated with reinforcement learning methods. The methodology contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.
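The RLS recursion whose covariance matrix P the paper stabilises can be sketched in its plain, unfactorised form on a toy identification problem (the system and noise level below are illustrative). A UDUT variant would propagate the factors of P = UDUᵀ instead of P itself, precisely to keep P numerically positive definite when it becomes ill-conditioned:

```python
import random

random.seed(2)

# Identify theta in y = theta^T x + noise with the plain RLS recursion
# (forgetting factor 1).
theta_true = [1.5, -0.7]
theta = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]    # large initial covariance

for _ in range(500):
    x = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    y = theta_true[0] * x[0] + theta_true[1] * x[1] + random.gauss(0.0, 0.01)
    Px = [P[0][0] * x[0] + P[0][1] * x[1],
          P[1][0] * x[0] + P[1][1] * x[1]]
    denom = 1.0 + x[0] * Px[0] + x[1] * Px[1]
    k = [Px[0] / denom, Px[1] / denom]          # gain k = P x / (1 + x^T P x)
    err = y - (theta[0] * x[0] + theta[1] * x[1])
    theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
    for i in range(2):                           # P <- P - k (P x)^T
        for j in range(2):
            P[i][j] -= k[i] * Px[j]
```

The rank-one downdate of P in the last loop is the numerically fragile step: repeated subtraction can destroy positive definiteness in finite precision, which is the failure mode the UDUT factorisation avoids.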
2. A minimalistic approach to static and dynamic electron correlations: Amending generalized valence bond method with extended random phase approximation correlation correction.
PubMed
Chatterjee, Koushik; Pastorczak, Ewa; Jawulski, Konrad; Pernal, Katarzyna
2016-06-28
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
3. Introducing the mean field approximation to CDFT/MMpol method: Statistically converged equilibrium and nonequilibrium free energy calculation for electron transfer reactions in condensed phases
Nakano, Hiroshi; Sato, Hirofumi
2017-04-01
A new theoretical method to study electron transfer reactions in condensed phases is proposed by introducing the mean-field approximation into the constrained density functional theory/molecular mechanical method with a polarizable force field (CDFT/MMpol). The method enables us to efficiently calculate the statistically converged equilibrium and nonequilibrium free energies for diabatic states in an electron transfer reaction by virtue of the mean field approximation that drastically reduces the number of CDFT calculations. We apply the method to the system of a formanilide-anthraquinone dyad in dimethylsulfoxide, in which charge recombination and cis-trans isomerization reactions can take place, previously studied by the CDFT/MMpol method. Quantitative agreement of the driving force and the reorganization energy between our results and those from the CDFT/MMpol calculation and the experimental estimates supports the utility of our method. The calculated nonequilibrium free energy is analyzed by its decomposition into several contributions such as those from the averaged solute-solvent electrostatic interactions and the explicit solvent electronic polarization. The former contribution is qualitatively well described by a model composed of a coarse-grained dyad in a solution in the linear response regime. The latter contribution reduces the reorganization energy by more than 10 kcal/mol.
4. Approximate kernel competitive learning.
PubMed
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
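The paper's AKCL works in a sampled subspace; as a stand-in for the general idea of "replace the full n×n kernel matrix with a cheap approximation", the snippet below uses random Fourier features — a different but standard kernel approximation, shown here only to make the scalability argument concrete — so that kernel values become inner products of low-dimensional feature vectors and the full matrix never needs to be formed:

```python
import math
import random

random.seed(3)

# Random Fourier features: k(x, y) = exp(-(x - y)**2 / 2) ~ z(x) . z(y),
# with w_i ~ N(0, 1) and b_i ~ U(0, 2*pi) for 1-D inputs and unit bandwidth.
D = 5000
w = [random.gauss(0.0, 1.0) for _ in range(D)]
b = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

def z(x):
    scale = math.sqrt(2.0 / D)
    return [scale * math.cos(w[i] * x + b[i]) for i in range(D)]

x, y = 0.3, 1.1
approx_k = sum(u * v for u, v in zip(z(x), z(y)))   # no kernel matrix needed
exact_k = math.exp(-((x - y) ** 2) / 2.0)
```

Any clustering routine that only consumes kernel evaluations can then run on the feature vectors, with memory linear in the number of points rather than quadratic.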
5. Approximate quantum chemical methods for modelling carbohydrate conformation and aromatic interactions: β-cyclodextrin and its adsorption on a single-layer graphene sheet.
PubMed
Jaiyong, Panichakorn; Bryce, Richard A
2017-06-14
Noncovalent functionalization of graphene by carbohydrates such as β-cyclodextrin (βCD) has the potential to improve graphene dispersibility and its use in biomedical applications. Here we explore the ability of approximate quantum chemical methods to accurately model βCD conformation and its interaction with graphene. We find that DFTB3, SCC-DFTB and PM3CARB-1 methods provide the best agreement with density functional theory (DFT) in calculation of relative energetics of gas-phase βCD conformers; however, the remaining NDDO-based approaches we considered underestimate the stability of the trans,gauche vicinal diol conformation. This diol orientation, corresponding to a clockwise hydrogen bonding arrangement in the glucosyl residue of βCD, is present in the lowest energy βCD conformer. Consequently, for adsorption on graphene of clockwise or counterclockwise hydrogen bonded forms of βCD, calculated with respect to this unbound conformer, the DFTB3 method provides closer agreement with DFT values than PM7 and PM6-DH2 approaches. These findings suggest approximate quantum chemical methods as potentially useful tools to guide the design of carbohydrate-graphene interactions, but also highlights the specific challenge to NDDO-based methods in capturing the relative energetics of carbohydrate hydrogen bond networks.
6. Comparative assessment of density functional methods for evaluating essential parameters to simulate SERS spectra within the excited state energy gradient approximation
2016-05-01
The challenges of reproducing and interpreting the resonance Raman properties of molecules interacting with metal clusters prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground state properties, excited state energies, and gradients are compared and discussed. Resonance Raman properties based on the time-dependent gradient approximation for the strongly low-lying charge transfer states are calculated and compared for different methods. We draw the following conclusions: (1) for calculating the binding energy and ground state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations; (2) GGA and meta-GGA functionals give good accuracy in calculating vibrational frequencies; (3) excited state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations; and (4) in calculating resonance Raman properties, GGA functionals give good and reasonable performance in comparison to the experiment; however, calculating the excited state gradient by using the hybrid functional on the hessian of GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface-enhanced resonance Raman spectra with experiment is improved significantly by using the excited state gradient approximation.
7. Application of the Method of Approximating Polynomials for the Determination of the Temperature and Concentration of Hot Carbon Dioxide from Its Transmission Spectrum
Voitsekhovskaya, O. K.; Egorov, O. V.; Kashirskii, D. E.; Emel'yanov, N. M.
2017-09-01
An advanced method of approximating polynomials for simultaneous determination of the temperature and concentration of a hot gas from its spectral characteristics is presented. The technique has been validated against the most accurate available measurements of the carbon dioxide transmission function at temperatures of 500-1770 K and partial pressures ρ(CO₂) = 0.17-1 atm. An arbitrary number (≥2) of spectral centers is used to solve unambiguously the inverse optical problem for the transmission function in the measured spectral region. The influence of the value of the transmission function on its approximation error by a polynomial of fixed degree is analyzed. Dependences of the errors in determining the temperature and concentration of carbon dioxide on the values of its transmission function and the number of spectral centers employed are obtained. The accuracy of determining experimental values of the thermodynamic parameters, with allowance for the error of measuring the transmission function, is increased.
8. Advanced Methods of Approximate Reasoning
DTIC Science & Technology
1990-11-30
possibilistic or " fuzzy " logic. Using a conceptual framework, previously employed to explain the meaning of the Dempster-Shafer calculus of I evidence (i.e...explanations for possibilistic constructs on the basis of previously existing notions rather than generalizations of modal frameworks by means of fuzzy ...the near future: 5 1. Control of unstable systems. such as helicopters, land vehicles, or weapon platforn. by means of possibilistic control
9. Derivation and evaluation of an approximate analysis for three-dimensional viscous subsonic flow with large secondary velocities. [finite difference method
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Briley, W. R.; Mcdonald, H.
1978-01-01
An approximate analysis is presented for calculating three-dimensional, low Mach number, laminar viscous flows in curved passages with large secondary flows and corner boundary layers. The analysis is based on the decomposition of the overall velocity field into inviscid and viscous components with the overall velocity being determined from superposition. An incompressible vorticity transport equation is used to estimate inviscid secondary flow velocities to be used as corrections to the potential flow velocity field. A parabolized streamwise momentum equation coupled to an adiabatic energy equation and global continuity equation is used to obtain an approximate viscous correction to the pressure and longitudinal velocity fields. A collateral flow assumption is invoked to estimate the viscous correction to the transverse velocity fields. The approximate analysis is solved numerically using an implicit ADI solution for the viscous pressure and velocity fields. An iterative ADI procedure is used to solve for the inviscid secondary vorticity and velocity fields. This method was applied to computing the flow within a turbine vane passage with inlet flow conditions of M = 0.1 and M = 0.25, Re = 1000 and adiabatic walls, and for a constant radius curved rectangular duct with R/D = 12 and 14 and with inlet flow conditions of M = 0.1, Re = 1000, and adiabatic walls.
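The implicit ADI step that this analysis relies on can be sketched on a much simpler model problem — the 2-D heat equation u_t = u_xx + u_yy on the unit square with homogeneous Dirichlet data, a toy stand-in for the viscous passage-flow equations: each half step is implicit in one coordinate direction only, so every solve reduces to a tridiagonal system handled by the Thomas algorithm.

```python
import math

n = 33                      # interior points per direction on the unit square
h = 1.0 / (n + 1)
dt = 0.001
r = dt / (2.0 * h * h)      # each half step advances dt/2

def thomas(a, b, c, d):
    # Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal).
    size = len(d)
    cp, dp = [0.0] * size, [0.0] * size
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, size):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * size
    x[-1] = dp[-1]
    for i in range(size - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Initial condition: the fundamental mode, which decays like exp(-2*pi^2*t).
u = [[math.sin(math.pi * (i + 1) * h) * math.sin(math.pi * (j + 1) * h)
      for j in range(n)] for i in range(n)]

def half_step(u, implicit_x):
    # Peaceman-Rachford half step: implicit in one direction, explicit in
    # the other; boundary values are zero (Dirichlet).
    new = [[0.0] * n for _ in range(n)]
    for k in range(n):
        d = []
        for s in range(n):
            i, j = (s, k) if implicit_x else (k, s)
            if implicit_x:      # explicit second difference in y
                lo = u[i][j - 1] if j > 0 else 0.0
                hi = u[i][j + 1] if j < n - 1 else 0.0
            else:               # explicit second difference in x
                lo = u[i - 1][j] if i > 0 else 0.0
                hi = u[i + 1][j] if i < n - 1 else 0.0
            d.append(u[i][j] + r * (lo - 2.0 * u[i][j] + hi))
        x = thomas([-r] * n, [1.0 + 2.0 * r] * n, [-r] * n, d)
        for s in range(n):
            i, j = (s, k) if implicit_x else (k, s)
            new[i][j] = x[s]
    return new

for _ in range(100):                    # advance to t = 0.1
    u = half_step(u, True)
    u = half_step(u, False)

peak = max(max(row) for row in u)       # continuous solution peak: ~0.139
```

The same sweep-by-sweep structure is what makes the implicit solution of the viscous pressure and velocity fields affordable in the passage-flow calculation.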
10. Taylor Approximations and Definite Integrals
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2007-01-01
We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
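The article's idea — integrate a Taylor polynomial of the integrand instead of applying numerical quadrature to the integral — can be sketched on the improper integral of sin(x)/x over [0, 1], whose series can be integrated term by term (the example choice is ours, not taken from the article):

```python
import math

def si1_by_taylor(terms=8):
    """Approximate the improper integral  I = integral_0^1 sin(x)/x dx  by
    integrating the integrand's Taylor series term by term:
        sin(x)/x = sum_k (-1)^k x^(2k) / (2k+1)!
        =>  I    = sum_k (-1)^k / ((2k+1) * (2k+1)!)
    Every term is integrable even though the integrand is defined only by a
    limit at x = 0."""
    return sum((-1) ** k / ((2 * k + 1) * math.factorial(2 * k + 1))
               for k in range(terms))

approx_I = si1_by_taylor()      # Si(1) = 0.9460830703671830...
```

Because the series alternates with rapidly shrinking terms, eight terms already give the integral to better than ten significant digits — far more accurate than a comparable-effort quadrature rule near the removable singularity.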
12. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.
PubMed
Thorn, Graeme J; King, John R
2016-01-01
The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
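The ABC idea underlying ABC-SMC can be illustrated with the simplest rejection variant (ABC-SMC adds a sequence of shrinking tolerances and importance weights on top of this). The one-parameter toy model below is hypothetical and is not the C. acetobutylicum network:

```python
import random

def abc_rejection(observed, simulate, prior_sample, eps, n_draws):
    """Plain ABC rejection: keep prior draws whose simulated summary
    statistic lies within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(0)
# Toy model: the data summary is theta plus Gaussian noise; true theta = 2.0.
observed = 2.0
posterior = abc_rejection(
    observed,
    simulate=lambda t: t + random.gauss(0.0, 0.1),
    prior_sample=lambda: random.uniform(0.0, 4.0),
    eps=0.2,
    n_draws=20000,
)
mean = sum(posterior) / len(posterior)
print(round(mean, 2))  # close to the true value 2.0
```

The accepted draws approximate the posterior; credible intervals such as those computed in the paper are then read off from the empirical quantiles of the accepted sample.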
13. An approximate theoretical method for modeling the static thrust performance of non-axisymmetric two-dimensional convergent-divergent nozzles. M.S. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
1995-01-01
An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design NPR, and to within 0.5 percent at off-design conditions.
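For a choked nozzle, the one-dimensional compressible flow theory mentioned here reduces to the isentropic area-Mach relation. The following generic sketch (not NPAC itself) solves that relation for the supersonic exit Mach number by bisection:

```python
def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* as a function of Mach number M."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def supersonic_mach(ar, gamma=1.4):
    """Solve A/A* = ar on the supersonic branch by bisection
    (area_ratio is monotonically increasing for M > 1)."""
    lo, hi = 1.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_exit = supersonic_mach(2.0)  # exit Mach for an area ratio of 2, gamma = 1.4
print(round(M_exit, 2))        # about 2.2, matching standard isentropic tables
```

Given the exit Mach number, the exit pressure and ideal thrust follow from the remaining isentropic relations, which is the thermodynamic core of a performance prediction of this kind.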
14. Influence of contact points on the performance of caries detection methods in approximal surfaces of primary molars: an in vivo study.
PubMed
Ribeiro, Apoena A; Purger, Flávia; Rodrigues, Jonas A; Oliveira, Patrícia R A; Lussi, Adrian; Monteiro, Antonio Henrique; Alves, Haimon D L; Assis, Joaquim T; Vasconcellos, Adalberto B
2015-01-01
This in vivo study aimed to evaluate the influence of contact points on approximal caries detection in primary molars, by comparing the performance of the DIAGNOdent pen and visual-tactile examination after tooth separation to bitewing radiography (BW). A total of 112 children were examined and 33 children were selected. In three periods (a, b, and c), 209 approximal surfaces were examined: (a) examiner 1 performed visual-tactile examination using the Nyvad criteria (EX1); examiner 2 used the DIAGNOdent pen (LF1) and took BW; (b) 1 week later, after tooth separation, examiner 1 performed the second visual-tactile examination (EX2) and examiner 2 used DIAGNOdent again (LF2); (c) after tooth exfoliation, surfaces were directly examined using DIAGNOdent (LF3). Teeth were examined by computed microtomography as a reference standard. Analyses were based on diagnostic thresholds: D1: D0 = health, D1-D4 = disease; D2: D0, D1 = health, D2-D4 = disease; D3: D0-D2 = health, D3, D4 = disease. At D1, the highest sensitivity/specificity were observed for EX1 (1.00)/LF3 (0.68), respectively. At D2, the highest sensitivity/specificity were observed for LF3 (0.69)/BW (1.00), respectively. At D3, the highest sensitivity/specificity were observed for LF3 (0.78)/EX1, EX2, and BW (1.00). EX1 showed higher accuracy values than LF1, and EX2 showed values similar to those of LF2. We concluded that visual-tactile examination showed better results in detecting sound surfaces and approximal caries lesions without tooth separation. However, the effectiveness of approximal caries lesion detection of both methods was increased by the absence of contact points. Therefore, regardless of the detection method, orthodontic separating elastics should be used as a complementary tool for the diagnosis of approximal noncavitated lesions in primary molars.
15. Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
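The GLA idea, a linearly varying rather than constant scaling factor, can be sketched as follows. The crude and refined models here are hypothetical stand-ins for the two FEM models of the beam example:

```python
def gla_approximation(f_crude, f_refined, df_crude, df_refined, x0):
    """Build a global-local approximation: correct the crude model by a
    scaling factor beta(x) = refined/crude, linearized about x0."""
    beta0 = f_refined(x0) / f_crude(x0)
    # derivative of beta = refined/crude at x0, via the quotient rule
    dbeta = (df_refined(x0) * f_crude(x0)
             - f_refined(x0) * df_crude(x0)) / f_crude(x0) ** 2
    return lambda x: (beta0 + dbeta * (x - x0)) * f_crude(x)

# Hypothetical models: crude response x^2, refined response x^2 + 0.1 x^3.
approx = gla_approximation(
    f_crude=lambda x: x * x,
    f_refined=lambda x: x * x + 0.1 * x ** 3,
    df_crude=lambda x: 2 * x,
    df_refined=lambda x: 2 * x + 0.3 * x * x,
    x0=1.0,
)
# Matches the refined model exactly at x0 and tracks it in a neighborhood.
print(abs(approx(1.0) - 1.1) < 1e-12)
```

Because the scaling factor is allowed to vary linearly, the corrected crude model reproduces both the value and the slope of the refined model at the anchor point, which is what extends the range of usefulness of the approximation.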
17. Analytical Method of Approximating the Motion of a Spinning Vehicle with Variable Mass and Inertia Properties Acted Upon by Several Disturbing Parameters
NASA Technical Reports Server (NTRS)
Buglia, James J.; Young, George R.; Timmons, Jesse D.; Brinkworth, Helen S.
1961-01-01
An analytical method has been developed which approximates the dispersion of a spinning symmetrical body in a vacuum, with time-varying mass and inertia characteristics, under the action of several external disturbances: initial pitching rate, thrust misalignment, and dynamic unbalance. The ratio of the roll inertia to the pitch or yaw inertia is assumed constant. Spin was found to be very effective in reducing the dispersion due to an initial pitch rate or thrust misalignment, but was completely ineffective in reducing the dispersion of a dynamically unbalanced body.
18. Configuration Interaction-Corrected Tamm-Dancoff Approximation: A Time-Dependent Density Functional Method with the Correct Dimensionality of Conical Intersections.
PubMed
Li, Shaohong L; Marenich, Aleksandr V; Xu, Xuefei; Truhlar, Donald G
2014-01-16
Linear response (LR) Kohn-Sham (KS) time-dependent density functional theory (TDDFT), or KS-LR, has been widely used to study electronically excited states of molecules and is the method of choice for large and complex systems. The Tamm-Dancoff approximation to TDDFT (TDDFT-TDA or KS-TDA) gives results similar to KS-LR and alleviates the instability problem of TDDFT near state intersections. However, KS-LR and KS-TDA share a debilitating feature: conical intersections of the reference state and a response state occur in F-1 instead of the correct F-2 dimensions, where F is the number of internal degrees of freedom. Here, we propose a new method, named the configuration interaction-corrected Tamm-Dancoff approximation (CIC-TDA), that eliminates this problem. It calculates the coupling between the reference state and an intersecting response state by interpreting the KS reference-state Slater determinant and linear response as if they were wave functions. Both formal analysis and test results show that CIC-TDA gives results similar to KS-TDA far from a conical intersection, but the intersection occurs with the correct dimensionality. We anticipate that this will allow more realistic application of TDDFT to photochemistry.
19. Speeding up spin-component-scaled third-order perturbation theory with the chain of spheres approximation: the COSX-SCS-MP3 method
Izsák, Róbert; Neese, Frank
2013-07-01
The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of the spin-component-scaled third-order Møller-Plesset perturbation (SCS-MP3) theory. The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.
20. Comparison of the auxiliary density perturbation theory and the noniterative approximation to the coupled perturbed Kohn-Sham method: case study of the polarizabilities of disubstituted azoarene molecules.
PubMed
Shedge, Sapana V; Carmona-Espíndola, Javier; Pal, Sourav; Köster, Andreas M
2010-02-18
We present a theoretical study of the polarizabilities of free and disubstituted azoarenes employing auxiliary density perturbation theory (ADPT) and the noniterative approximation to the coupled perturbed Kohn-Sham (NIA-CPKS) method. Both methods are noniterative but use different approaches to obtain the perturbed density matrix. NIA-CPKS is different from the conventional CPKS approach in that the perturbed Kohn-Sham matrix is obtained numerically, thereby yielding a single-step solution to CPKS. ADPT is an alternative approach to the analytical CPKS method in the framework of the auxiliary density functional theory. It is shown that the polarizabilities obtained using these two methods are in good agreement with each other. Comparisons are made for disubstituted azoarenes, which give support to the push-pull mechanism. Both methods reproduce the same trend for polarizabilities because of the substitution pattern of the azoarene moiety. Our results are consistent with the standard organic chemistry "activating/deactivating" sequence. We present the polarizabilities of the above molecules calculated with three different exchange-correlation functionals and two different auxiliary function sets. The computational advantages of both methods are also discussed.
1. An optimal implicit staggered-grid finite-difference scheme based on the modified Taylor-series expansion with minimax approximation method for elastic modeling
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, whereas its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme takes advantage of the TE method, which guarantees great accuracy at small wavenumbers, and at the same time keeps the property of the MA method, which keeps the numerical errors within a limited bound. Thus, it leads to great accuracy for the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
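The trade-off described here can be seen by evaluating the wavenumber response of conventional Taylor-series staggered-grid coefficients. This sketch shows only the TE baseline the paper improves upon; the MA/Remez optimization itself is not reproduced:

```python
import math

def staggered_response(coeffs, kh):
    """Normalized numerical wavenumber k_num*h produced by a staggered-grid
    first-derivative stencil with half-offset coefficients coeffs."""
    return 2.0 * sum(c * math.sin((m + 0.5) * kh) for m, c in enumerate(coeffs))

# Taylor-series coefficients: 2nd order [1], 4th order [9/8, -1/24].
for label, coeffs in [("O(h^2)", [1.0]), ("O(h^4)", [9.0 / 8.0, -1.0 / 24.0])]:
    kh = 1.0
    # relative error between the numerical and the exact wavenumber at kh = 1
    err = abs(staggered_response(coeffs, kh) - kh) / kh
    print(label, round(err, 5))  # error shrinks from ~4% to ~0.4%
```

A minimax-optimized scheme would instead choose the coefficients to keep this error below a prescribed bound over a whole band of kh values, trading a little small-kh accuracy for a much wider usable wavenumber range.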
2. Linear-scaling self-consistent field calculations based on divide-and-conquer method using resolution-of-identity approximation on graphical processing units.
PubMed
Yoshikawa, Takeshi; Nakai, Hiromi
2015-01-30
Graphical processing units (GPUs) are emerging in computational chemistry to include Hartree-Fock (HF) methods and electron-correlation theories. However, ab initio calculations of large molecules face technical difficulties such as slow memory access between central processing unit and GPU and other shortfalls of GPU memory. The divide-and-conquer (DC) method, which is a linear-scaling scheme that divides a total system into several fragments, could avoid these bottlenecks by separately solving local equations in individual fragments. In addition, the resolution-of-the-identity (RI) approximation enables an effective reduction in computational cost with respect to the GPU memory. The present study implemented the DC-RI-HF code on GPUs using math libraries, which guarantee compatibility with future development of the GPU architecture. Numerical applications confirmed that the present code using GPUs significantly accelerated the HF calculations while maintaining accuracy. © 2014 Wiley Periodicals, Inc.
3. Multicriteria approximation through decomposition
SciTech Connect
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
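Randomized rounding, which the authors note their decomposition technique is closely related to, can be sketched for fractional set cover. This is the textbook method on a hypothetical instance, not the authors' decomposition algorithm:

```python
import random

def round_until_cover(x_frac, sets, universe, seed=0):
    """Randomized rounding for a fractional set cover: include each set
    with probability equal to its LP value x_j per pass, repeating passes
    until the universe is covered (a Las Vegas variant). Assumes the
    fractional solution gives every element nonzero coverage."""
    rng = random.Random(seed)
    chosen, covered = set(), set()
    while covered != universe:
        for j, xj in enumerate(x_frac):
            if rng.random() < xj:
                chosen.add(j)
                covered |= sets[j]
    return chosen

# Toy instance: three sets over {1..4}, fractional LP values of 0.5 each.
sets = [{1, 2}, {2, 3}, {3, 4}]
cover = round_until_cover([0.5, 0.5, 0.5], sets, {1, 2, 3, 4})
print(set().union(*(sets[j] for j in cover)) == {1, 2, 3, 4})
```

The analysis of such schemes bounds the expected cost by the LP value times a small factor, which is the kind of provable performance guarantee the abstract refers to.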
5. The accuracy of the DDA (Discrete Dipole Approximation) method in determining the optical properties of black carbon fractal-like aggregates
Skorupski, Krzysztof
2015-05-01
Black carbon (BC) particles are a product of incomplete combustion of carbon-based fuels. One of the possibilities for studying the optical properties of BC structures is to use the DDA (Discrete Dipole Approximation) method. The main goal of this work was to investigate its accuracy and to identify the most reliable simulation parameters. For the light scattering simulations the ADDA code was used, and as the reference program the superposition T-Matrix code by Mackowski was selected. The study was divided into three parts. First, DDA simulations for a single particle (sphere) were performed. The results proved that the meshing algorithm can significantly affect the particle shape, and therefore, the extinction diagrams. The volume correction procedure is recommended for sparse or asymmetrical meshes. In the next step large fractal-like aggregates were investigated. When sparse meshes are used, the impact of the volume correction procedure cannot be easily predicted. In some cases it can even lead to more erroneous results. Finally, the optical properties of fractal-like aggregates composed of spheres in point contact were compared to much more realistic structures made up of connected, non-spherical primary particles.
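The volume correction mentioned here can be illustrated by its basic bookkeeping: after meshing a sphere onto a cubic dipole lattice, the dipole size is rescaled so that the total dipole volume matches the target volume. The sketch below is a schematic of that idea, not the ADDA implementation:

```python
import math

def dipole_count(radius, d):
    """Count cubic-lattice dipoles (spacing d) whose centers fall
    inside a sphere of the given radius."""
    n = int(math.ceil(radius / d))
    count = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if (i * i + j * j + k * k) * d * d <= radius * radius:
                    count += 1
    return count

radius, d = 1.0, 0.1
N = dipole_count(radius, d)
# Volume correction: rescale the dipole size so that N * d_c^3 equals the
# true sphere volume, compensating for the staircased surface.
volume = 4.0 / 3.0 * math.pi * radius ** 3
d_c = (volume / N) ** (1.0 / 3.0)
print(abs(N * d_c ** 3 - volume) < 1e-9)
```

For sparse meshes the discrepancy between N * d^3 and the true volume is largest, which is why the abstract recommends the correction there, while for aggregates the net effect on the scattered field is harder to predict.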
6. Analytic energy gradients for the coupled-cluster singles and doubles with perturbative triples method with the density-fitting approximation
Bozkaya, Uǧur; Sherrill, C. David
2017-07-01
An efficient implementation of analytic gradients for the coupled-cluster singles and doubles with perturbative triples [CCSD(T)] method with the density-fitting (DF) approximation, denoted as DF-CCSD(T), is reported. For the molecules considered, the DF approach substantially accelerates conventional CCSD(T) analytic gradients due to the reduced input/output time and the acceleration of the so-called "gradient terms": formation of particle density matrices (PDMs), computation of the generalized Fock-matrix (GFM), solution of the Z-vector equation, formation of the effective PDMs and GFM, back-transformation of the PDMs and GFM, from the molecular orbital to the atomic orbital (AO) basis, and computation of gradients in the AO basis. For the largest member of the molecular test set considered (C6H14), the computational times for analytic gradients (with the correlation-consistent polarized valence triple-ζ basis set in serial) are 106.2 [CCSD(T)] and 49.8 [DF-CCSD(T)] h, a speedup of more than 2-fold. In the evaluation of gradient terms, the DF approach completely avoids the use of four-index two-electron integrals. Similar to our previous studies on DF-second-order Møller-Plesset perturbation theory and DF-CCSD gradients, our formalism employs 2- and 3-index two-particle density matrices (TPDMs) instead of 4-index TPDMs. Errors introduced by the DF approximation are negligible for equilibrium geometries and harmonic vibrational frequencies.
8. An Efficient Method for Calculating the Characteristics of the Integrated Lens Antennas on the Basis of the Geometrical and Physical Optics Approximations
Mozharovskiy, A. V.; Artemenko, A. A.; Mal'tsev, A. A.; Maslennikov, R. O.; Sevast'yanov, A. G.; Ssorin, V. N.
2015-11-01
We develop a combined method for calculating the characteristics of the integrated lens antennas for millimeter-wave wireless local radio-communication systems on the basis of the geometrical and physical optics approximations. The method is based on the concepts of geometrical optics for calculating the electromagnetic-field distribution on the lens surface (with allowance for multiple internal re-reflections) and physical optics for determining the antenna-radiated fields in the Fraunhofer zone. Using the developed combined method, we study various integrated lens antennas on the basis of the data on the used-lens shape and material and the primary-feed radiation model, which is specified analytically or by computer simulation. Optimal values of the cylindrical-extension length, which ensure the maximum antenna directivity equal to 19.1 and 23.8 dBi for the greater and smaller lenses, respectively, are obtained for the hemispherical quartz-glass lenses having the cylindrical extensions with radii of 7.5 and 12.5 mm. In this case, the scanning-angle range of the considered antennas is greater than ±20° for an admissible 2-dB decrease in the directivity of the deflected beam. The calculation results obtained using the developed method are confirmed by the experimental studies performed for the prototypes of the integrated quartz-glass lens antennas within the framework of this research.
9. Approximate constants of motion for classically chaotic vibrational dynamics - Vague tori, semiclassical quantization, and classical intramolecular energy flow
NASA Technical Reports Server (NTRS)
Shirts, R. B.; Reinhardt, W. P.
1982-01-01
Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Pade summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Henon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.
11. Applied Routh approximation
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
12. An exact formulation of k-distribution methods in non-uniform gaseous media and its approximate treatment within the Multi-Spectral framework
ANDRE, Frédéric; HOU, Longfeng; SOLOVJOV, Vladimir P.
2016-01-01
The main restriction of k-distribution approaches for applications in radiative heat transfer in gaseous media arises from the use of a scaling or correlation assumption to treat non-uniform situations. It is shown that those cases can be handled exactly by using a multidimensional k-distribution that addresses the problem of spectral correlations without using any simplifying assumptions. Nevertheless, the approach cannot be suggested for engineering applications due to its computational cost. Accordingly, a more efficient method, based on the so-called Multi-Spectral Framework, is proposed to approximate the previous exact formulation. The model is assessed against reference LBL calculations and shown to outperform usual k-distribution approaches for radiative heat transfer in non-uniform media.
13. Ab initio study of the lattice thermal conductivity of Cu2O using the generalized gradient approximation and hybrid density functional methods
Linnera, J.; Karttunen, A. J.
2017-07-01
The lattice thermal conductivity of Cu2O was studied using ab initio density functional methods. The performance of the generalized gradient approximation (GGA) functional PBE and the hybrid PBE0 exchange-correlation functional was compared for various electronic and phonon-related properties. The 3d transition metal oxides such as Cu2O are known to be a challenging case for pure GGA functionals, and in comparison to GGA-PBE the PBE0 hybrid functional clearly improves the description of both electronic and phonon-related properties. The most striking difference is found in the lattice thermal conductivity, where the GGA underestimates it by as much as 40% in comparison to experiments, while the difference between experiment and the PBE0 hybrid functional is only a few percent.
14. Part I: Microscopic description of liquid He II. Part II: Uniformly approximated WKB method as used for the calculation of phase shifts in heavy-ion collision problems
SciTech Connect
Suebka, P.
1984-01-01
In Part I, the excitation spectrum of liquid He II is obtained using a two-body potential consisting of a hard-core part plus an attractive outer part. The sum of two Gaussian potentials of Khanna and Das, which is similar to the Lennard-Jones potential, is chosen as the attractive potential. The t-matrix method due to Brueckner and Sawada is adopted, with modifications, to replace the interaction potential. The spectrum gives the phonon branch and the roton dip that resemble the excitation spectrum of liquid He II. The temperature dependence of the excitation spectrum enters the calculation through the zero-momentum-state occupation number. A better approximation of the thermodynamic functions is obtained by extending Landau's theory to the situation where the excitation energy is a function of temperature as well as of momentum. Our thermodynamic calculations are also in qualitative agreement with measurements on He II, as expected.
15. A comparative study of the centroid and ring-polymer molecular dynamics methods for approximating quantum time correlation functions from path integrals
Pérez, Alejandro; Tuckerman, Mark E.; Müser, Martin H.
2009-05-01
The problems of ergodicity and internal consistency in the centroid and ring-polymer molecular dynamics methods are addressed in the context of a comparative study of the two methods. Enhanced sampling in ring-polymer molecular dynamics (RPMD) is achieved by first performing an equilibrium path integral calculation and then launching RPMD trajectories from selected, stochastically independent equilibrium configurations. It is shown that this approach converges more rapidly than periodic resampling of velocities from a single long RPMD run. Dynamical quantities obtained from RPMD and centroid molecular dynamics (CMD) are compared to exact results for a variety of model systems. Fully converged results for correlation functions are presented for several one-dimensional systems and para-hydrogen near its triple point using an improved sampling technique. Our results indicate that CMD shows very similar performance to RPMD. The quality of each method is further assessed via a new χ² descriptor constructed by transforming approximate real-time correlation functions from CMD and RPMD trajectories to imaginary time and comparing these to numerically exact imaginary time correlation functions. For para-hydrogen near its triple point, it is found that adiabatic CMD and RPMD both have similar χ² error.
16. Beyond the Kirchhoff approximation
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1989-01-01
The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.
17. Robust methods to detect disease-genotype association in genetic association studies: calculate p-values using exact conditional enumeration instead of simulated permutations or asymptotic approximations.
PubMed
Langaas, Mette; Bakke, Øyvind
2014-12-01
In genetic association studies, detecting disease-genotype association is a primary goal. We study seven robust test statistics for such association when the underlying genetic model is unknown, for data on disease status (case or control) and genotype (three genotypes of a biallelic genetic marker). In such studies, p-values have predominantly been calculated by asymptotic approximations or by simulated permutations. We consider an exact method, conditional enumeration. When the number of simulated permutations tends to infinity, the permutation p-value approaches the conditional enumeration p-value, but calculating the latter is much more efficient than performing simulated permutations. We have studied case-control sample sizes with 500-5000 cases and 500-15,000 controls, and significance levels from 5 × 10(-8) to 0.05, thus our results are applicable to genetic association studies with only a few genetic markers under study, intermediate follow-up studies, and genome-wide association studies. Our main findings are: (i) If all monotone genetic models are of interest, the best performance in the situations under study is achieved for the robust test statistics based on the maximum over a range of Cochran-Armitage trend tests with different scores and for the constrained likelihood ratio test. (ii) For significance levels below 0.05, for the test statistics under study, asymptotic approximations may give a test size up to 20 times the nominal level, and should therefore be used with caution. (iii) Calculating p-values based on exact conditional enumeration is a powerful, valid and computationally feasible approach, and we advocate its use in genetic association studies.
18. Handbook for quick cost estimates. A method for developing quick approximate estimates of costs for generic actions for nuclear power plants
SciTech Connect
Ball, J.R.
1986-04-01
This document is a supplement to a ''Handbook for Cost Estimating'' (NUREG/CR-3971) and provides specific guidance for developing ''quick'' approximate estimates of the cost of implementing generic regulatory requirements for nuclear power plants. A method is presented for relating the known construction costs for new nuclear power plants (as contained in the Energy Economic Data Base) to the cost of performing similar work, on a back-fit basis, at existing plants. Cost factors are presented to account for variations in such important cost areas as construction labor productivity, engineering and quality assurance, replacement energy, reworking of existing features, and regional variations in the cost of materials and labor. Other cost categories addressed in this handbook include those for changes in plant operating personnel and plant documents, licensee costs, NRC costs, and costs for other government agencies. Data sheets, worksheets, and appropriate cost algorithms are included to guide the user through preparation of rough estimates. A sample estimate is prepared using the method and the estimating tools provided.
19. ASEP/MD: A program for the calculation of solvent effects combining QM/MM methods and the mean field approximation
Galván, I. Fdez; Sánchez, M. L.; Martín, M. E.; Olivares del Valle, F. J.; Aguilar, M. A.
2003-11-01
ASEP/MD is a computer program designed to implement the Averaged Solvent Electrostatic Potential/Molecular Dynamics (ASEP/MD) method developed by our group. It can be used for the study of solvent effects and properties of molecules in their liquid state or in solution. It is written in the FORTRAN90 programming language, and should be easy to follow, understand, maintain and modify. Given the nature of the ASEP/MD method, external programs are needed for the quantum calculations and molecular dynamics simulations. The present version of ASEP/MD includes interface routines for the GAUSSIAN package, HONDO, and MOLDY, but adding support for other programs is straightforward. This article describes the program and its usage. Program summary: Title of program: ASEP/MD. Catalogue identifier: ADSF. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSF. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed: it has been tested on Intel-based PC and Sun. Operating systems under which the program has been tested: Red Hat Linux 7.2 and SunOS 5.6. Programming language used: FORTRAN90. Memory required to execute with typical data: greatly depends on the system. No. of processors used: 1. Has the code been vectorized or parallelized?: no. No. of bytes in distributed program, including test data, etc.: 44 544. Distribution format: tar gzip file. Keywords: solvent effects, QM/MM methods, mean field approximation, geometry optimization. Nature of physical problem: The study of molecules in solution with quantum methods is a difficult task because of the large number of molecules and configurations that must be taken into account. The quantum mechanics/molecular mechanics methods proposed to date either require massive computational power or oversimplify the solute quantum description. Method of solution: A non-traditional QM/MM method based on the mean field approximation was developed where a classical molecular
20. A novel solution procedure for a three-level atom interacting with one-mode cavity field via modified homotopy analysis method
Abdel Wahab, N. H.; Salah, Ahmed
2015-05-01
In this paper, the interaction of a three-level configuration atom and a one-mode quantized electromagnetic cavity field has been studied. The detuning parameters, the Kerr nonlinearity and the arbitrary form of both the field and the intensity-dependent atom-field coupling have been taken into account. The wave function, when the atom and the field are initially prepared in the excited state and a coherent state, respectively, has been obtained by using the Schrödinger equation. The analytical approximate solution of this model has been obtained by using the modified homotopy analysis method (MHAM). The homotopy analysis method is summarized briefly. MHAM can be obtained from the homotopy analysis method (HAM) combined with the Laplace transform, the inverse Laplace transform and Padé approximants. MHAM is used to increase the accuracy and accelerate the convergence rate of the truncated series solution obtained by the HAM. The time-dependent parameters of the anti-bunching of photons, the amplitude-squared squeezing and the coherence properties have been calculated. The influence of the detuning parameters, the Kerr nonlinearity and the photon number operator on the temporal behavior of these phenomena has been analyzed. We notice that the considered system is sensitive to variations in these parameters.
1. Approximate option pricing
SciTech Connect
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
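The binomial model referred to above is easy to make concrete. Below is a minimal sketch (my own illustration, not the authors' algorithm) of n-period binomial pricing for a plain European option: the value is the risk-neutral expected, time-discounted payoff, computed by backward induction. The Cox-Ross-Rubinstein parameterization and all numeric parameters are illustrative assumptions.

```python
import math

def binomial_option(S0, K, T, r, sigma, n, kind="call"):
    """Price a European option on an n-period binomial (CRR) tree.

    The option value is the risk-neutral expected, time-discounted
    payoff at maturity, computed by backward induction over the tree.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up factor
    d = 1.0 / u                            # down factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)               # one-period discount factor
    payoff = (lambda s: max(s - K, 0.0)) if kind == "call" else (lambda s: max(K - s, 0.0))
    # option values at the n+1 terminal nodes (j = number of up-moves)
    values = [payoff(S0 * u**j * d**(n - j)) for j in range(n + 1)]
    # roll back through the tree: each node is the discounted expectation
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]

c = binomial_option(100, 100, 1.0, 0.05, 0.2, 200, "call")
p = binomial_option(100, 100, 1.0, 0.05, 0.2, 200, "put")
# put-call parity, C - P = S0 - K*exp(-r*T), holds exactly on the tree
```

Because the tree's risk-neutral expectation of the terminal stock price is exact, put-call parity is a built-in consistency check on any implementation like this one.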
2. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.
PubMed
2017-02-15
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
3. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries
2017-02-01
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
4. Phenomenological applications of rational approximants
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
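The pedagogical exercise mentioned here can be reproduced with a few lines of SciPy: build a Padé approximant of (1/z)ln(1 + z) from its Taylor coefficients and compare it with the exact value at z = 1. The [2/2] order and the evaluation point are my own illustrative choices, not taken from the paper.

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of f(z) = ln(1+z)/z = 1 - z/2 + z^2/3 - z^3/4 + z^4/5 - ...
an = [1.0, -1.0 / 2, 1.0 / 3, -1.0 / 4, 1.0 / 5]

# [2/2] Pade approximant: numerator p and denominator q (numpy poly1d objects)
p, q = pade(an, 2)

exact = math.log(2.0)            # f(1) = ln 2
approx = p(1.0) / q(1.0)
# the rational approximant resums the slowly converging Taylor series:
# at z = 1 the truncated series is off by about 0.09, the Pade by about 2e-4
```

This is the basic point of PAs as a summation method: the rational function inherits information about the singularity at z = -1 that a truncated polynomial cannot represent.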
5. Dynamic mean field theory for lattice gas models of fluids confined in porous materials: higher order theory based on the Bethe-Peierls and path probability method approximations.
PubMed
Edison, John R; Monson, Peter A
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
6. The calculation of ionization energies by perturbation, configuration interaction and approximate coupled pair techniques and comparisons with green's function methods for Ne, H 2O and N 2
Bacskay, George B.
1980-05-01
The vertical valence ionization potentials of Ne, H2O and N2 have been calculated by Rayleigh-Schrödinger perturbation and configuration interaction methods. The calculations were carried out in the space of a single determinant reference state and its single and double excitations, using both the N and N - 1 electron Hartree-Fock orbitals as hole/particle bases. The perturbation series for the ion states were generally found to converge fairly slowly in the N electron Hartree-Fock (frozen) orbital basis, but considerably faster in the appropriate N - 1 electron RHF (relaxed) orbital basis. In certain cases, however, due to near-degeneracy effects, partial, and even complete, breakdown of the (non-degenerate) perturbation treatment was observed. The effects of higher excitations on the ionization potentials were estimated by the approximate coupled pair techniques CPA' and CPA″ as well as by a Davidson type correction formula. The final, fully converged CPA″ results are generally in good agreement with those from PNO-CEPA and Green's function calculations as well as experiment.
7. Dynamic mean field theory for lattice gas models of fluids confined in porous materials: Higher order theory based on the Bethe-Peierls and path probability method approximations
SciTech Connect
Edison, John R.; Monson, Peter A.
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
8. Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
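The underlying trade — cheap, slightly inaccurate gradients in exchange for far fewer expensive analyses — can be sketched with an ordinary forward-difference approximation. This is my own generic illustration, not the specific approximation proposed in the paper.

```python
def approx_gradient(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x.

    Uses n + 1 function evaluations for n variables: a cheap stand-in
    for explicit closed-form gradients of the constraints/objective.
    """
    fx = f(x)
    grad = []
    for i in range(len(x)):
        xh = list(x)
        xh[i] += h           # perturb one coordinate at a time
        grad.append((f(xh) - fx) / h)
    return grad

# example: f(x, y) = x^2 + 3y; exact gradient at (1, 2) is (2, 3)
g = approx_gradient(lambda v: v[0]**2 + 3 * v[1], [1.0, 2.0])
```

In a structural setting each call to `f` would be a finite element analysis, so the n + 1 evaluation cost (rather than an analytic sensitivity computation) is the whole economic argument.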
9. Approximate flavor symmetries
SciTech Connect
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. The relation between approximate flavor symmetries and natural flavor conservation and democracy models is explored. Implications for neutrino physics are also discussed.
10. Adaptive approximation models in optimization
SciTech Connect
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
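A toy, one-dimensional version of the contracting-domain idea can make the mechanism visible. The actual method is for several variables and iteratively refines approximation models; the sampling rule and the factor-of-two contraction below are my own simplifications.

```python
def contract_optimize(f, lo, hi, iters=40, samples=5):
    """Minimize a 1-D unimodal f by repeatedly evaluating a few points
    in the current domain and contracting the domain around the best
    one -- a toy version of the contracting-approximation-domain idea.
    No starting point or step length needs to be specified by the user.
    """
    best = lo
    for _ in range(iters):
        # evaluate f on a small uniform grid over the current domain
        xs = [lo + (hi - lo) * k / (samples - 1) for k in range(samples)]
        best = min(xs, key=f)
        # contract the domain to half its width, centered on the best point
        width = (hi - lo) / 2
        lo, hi = best - width / 2, best + width / 2
    return best

x = contract_optimize(lambda t: (t - 1.7)**2, -10.0, 10.0)
```

For a unimodal function the best grid point is always within one grid spacing of the true minimizer, so the halved domain still contains it and the bracket shrinks geometrically.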
11. Approximation of Laws
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
12. Fast approximate stochastic tractography.
PubMed
Iglesias, Juan Eugenio; Thompson, Paul M; Liu, Cheng-Yi; Tu, Zhuowen
2012-01-01
Many different probabilistic tractography methods have been proposed in the literature to overcome the limitations of classical deterministic tractography: (i) lack of quantitative connectivity information; and (ii) robustness to noise, partial volume effects and selection of seed region. However, these methods rely on Monte Carlo sampling techniques that are computationally very demanding. This study presents an approximate stochastic tractography algorithm (FAST) that can be used interactively, as opposed to having to wait several minutes to obtain the output after marking a seed region. In FAST, tractography is formulated as a Markov chain that relies on a transition tensor. The tensor is designed to mimic the features of a well-known probabilistic tractography method based on a random walk model and Monte-Carlo sampling, but can also accommodate other propagation rules. Compared to the baseline algorithm, our method circumvents the sampling process and provides a deterministic solution at the expense of partially sacrificing sub-voxel accuracy. Therefore, the method is strictly speaking not stochastic, but provides a probabilistic output in the spirit of stochastic tractography methods. FAST was compared with the random walk model using real data from 10 patients in two different ways: 1. the probability maps produced by the two methods on five well-known fiber tracts were directly compared using metrics from the image registration literature; and 2. the connectivity measurements between different regions of the brain given by the two methods were compared using the correlation coefficient ρ. The results show that the connectivity measures provided by the two algorithms are well-correlated (ρ = 0.83), and so are the probability maps (normalized cross correlation 0.818 ± 0.081). The maps are also qualitatively (i.e., visually) very similar. The proposed method achieves a 60x speed-up (7 s vs. 7 min) over the Monte Carlo sampling scheme, therefore
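The core trade described here — replacing Monte Carlo sampling of random walks with deterministic propagation of probability mass through a transition matrix — can be shown on a toy Markov chain. Nothing below is the FAST algorithm itself; the 1-D absorbing chain is an invented stand-in for the transition tensor.

```python
# Toy version of the idea: propagate probability mass deterministically
# through a transition matrix instead of sampling individual walks.
# States 0..4; 0 and 4 are absorbing; elsewhere move left/right with prob 1/2.
P = [[1.0, 0.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0, 0.0],
     [0.0, 0.5, 0.0, 0.5, 0.0],
     [0.0, 0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 0.0, 1.0]]

def propagate(start, steps=200):
    """Push the full distribution through P for a fixed number of steps."""
    v = [0.0] * 5
    v[start] = 1.0
    for _ in range(steps):
        v = [sum(v[i] * P[i][j] for i in range(5)) for j in range(5)]
    return v

v = propagate(2)
# symmetric walk started in the middle: absorbed at each end with prob 1/2,
# obtained exactly and without any sampling noise
```

One matrix-power computation yields the whole "probability map" at once, which is exactly why the deterministic formulation is so much faster than averaging many sampled walks.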
13. Forensic facial approximation: an overview of current methods used at the Victorian Institute of Forensic Medicine/Victoria Police Criminal Identification Squad.
PubMed
Hayes, S; Taylor, R; Paterson, A
2005-12-01
Forensic facial approximation involves building a likeness of the head and face on the skull of an unidentified individual, with the aim that public broadcast of the likeness will trigger recognition in those who knew the person in life. This paper presents an overview of the collaborative practice between Ronn Taylor (Forensic Sculptor to the Victorian Institute of Forensic Medicine) and Detective Sergeant Adrian Paterson (Victoria Police Criminal Identification Squad). This collaboration involves clay modelling to determine an approximation of the person's head shape and feature location, with surface texture and more speculative elements being rendered digitally onto an image of the model. The advantages of this approach are that through clay modelling anatomical contouring is present, digital enhancement resolves some of the problems of visual perception of a representation, such as edge and shape determination, and the approximation can be easily modified as and when new information is received.
14. Approximate symmetries of Hamiltonians
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
15. Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
16. Approximation techniques for neuromimetic calculus.
PubMed
Vigneron, V; Barret, C
1999-06-01
Approximation Theory plays a central part in modern statistical methods, in particular in Neural Network modeling. These models are able to approximate a large amount of metric data structures in their entire range of definition or at least piecewise. We survey most of the known results for networks of neurone-like units. The connections to classical statistical ideas such as ordinary least squares (LS) are emphasized.
17. Gadgets, approximation, and linear programming
SciTech Connect
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of 0.801. This improves upon the previous best bound of 0.7704.
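The duality-based optimality proofs mentioned at the end can be illustrated on a toy primal/dual LP pair. This example is mine and has nothing to do with the MAX CUT gadgets themselves: the point is only that solving both problems gives matching objective values, so the dual solution certifies optimality of the primal.

```python
import numpy as np
from scipy.optimize import linprog

# primal: min c^T x  s.t.  A x >= b, x >= 0
# dual:   max b^T y  s.t.  A^T y <= c, y >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 4.0])

# linprog minimizes subject to A_ub x <= b_ub, so flip signs for ">=" rows
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# the dual "max" becomes "min" of the negated objective
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

# strong duality: optimal primal value equals optimal dual value,
# so the dual solution is a certificate of primal optimality
```

By weak duality any feasible dual point already lower-bounds the primal; equality of the two optima is the "proof of optimality" role that LP duality plays in the gadget constructions.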
18. Non-Linear Vibration Problems Treated by the Averaging Method of W. Ritz. Part 2. Single Degree of Freedom Systems Single Term Approximations
DTIC Science & Technology
1951-06-01
[OCR-garbled abstract; legible fragments only: "… coefficient of the displacement … a linear differential … were treated. Higher order approximations generally lead to a set of two or more (coupled) algebraic equations for the two …"]
19. Comparison of Three Efficient Approximate Exact-Exchange Algorithms: The Chain-of-Spheres Algorithm, Pair-Atomic Resolution-of-the-Identity Method, and Auxiliary Density Matrix Method.
PubMed
Rebolini, Elisa; Izsák, Róbert; Reine, Simen Sommerfelt; Helgaker, Trygve; Pedersen, Thomas Bondo
2016-08-09
We compare the performance of three approximate methods for speeding up evaluation of the exchange contribution in Hartree-Fock and hybrid Kohn-Sham calculations: the chain-of-spheres algorithm (COSX; Neese , F. Chem. Phys. 2008 , 356 , 98 - 109 ), the pair-atomic resolution-of-identity method (PARI-K; Merlot , P. J. Comput. Chem. 2013 , 34 , 1486 - 1496 ), and the auxiliary density matrix method (ADMM; Guidon , M. J. Chem. Theory Comput. 2010 , 6 , 2348 - 2364 ). Both the efficiency relative to that of a conventional linear-scaling algorithm and the accuracy of total, atomization, and orbital energies are compared for a subset containing 25 of the 200 molecules in the Rx200 set using double-, triple-, and quadruple-ζ basis sets. The accuracy of relative energies is further compared for small alkane conformers (ACONF test set) and Diels-Alder reactions (DARC test set). Overall, we find that the COSX method provides good accuracy for orbital energies as well as total and relative energies, and the method delivers a satisfactory speedup. The PARI-K and in particular ADMM algorithms require further development and optimization to fully exploit their indisputable potential.
20. Approximate spatial reasoning
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.822519838809967, "perplexity": 1250.4936497341428}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948597295.74/warc/CC-MAIN-20171217171653-20171217193653-00420.warc.gz"} |
https://link.springer.com/article/10.1007/s11229-020-02531-4 | # Intrinsic local distances: a mixed solution to Weyl's tile argument
## Abstract
Weyl’s tile argument purports to show that there are no natural distance functions in atomistic space that approximate Euclidean geometry. I advance a response to this argument that relies on a new account of distance in atomistic space, called the mixed account, according to which local distances are primitive and other distances are derived from them. Under this account, atomistic space can approximate Euclidean space (and continuous space in general) very well. To motivate this account as a genuine solution to Weyl’s tile argument, I argue that this account is no less natural than the standard account of distance in continuous space. I also argue that the mixed account has distinctive advantages over Forrest’s (Synthese 103:327–354, 1995) account in response to Weyl’s tile argument, which can be considered as a restricted version of the mixed account.
## Notes
1. My presentation of the argument follows Salmon (1980).
2. I put “atoms” in quotes because it is not entirely clear what philosophical theory of spacetime we should explicate from Hogan (2012). More technically, the tested hypothesis implies that the geometry of spacetime is not commutative below the Planck level. Among other things, this means that unextended points do not exist because the coordinates of a point are necessarily commutative (e.g., in the (x, y)-coordinate system, for any point (a, b), $$ab-ba=0$$).
3. Strictly speaking, it is more natural to think that the distance between a and b is the length of a shortest path from a to b minus one. For example, while the length of the side AB is four in Fig. 1, it’s more natural to think that the distance between A and B is three. However, for the sake of generalization in later discussions, it’s better to use Distance.
4. Another intuitive option is to assume that two atoms are adjacent iff their representing tiles are horizontally or vertically adjacent. Under this option, the diagonal BC is represented by the zigzag region along the diagonal direction (Fig. 3). But this option has the same problem: the ratio of the diagonal to the side is about 2:1 rather than $$\sqrt{2}:1$$.
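The ratio claim in this footnote is easy to check computationally. The sketch below is my own illustration, not from the paper (`grid_distance` and the adjacency sets are assumed names): a breadth-first search over an $n\times n$ tile grid, counting steps between adjacent tiles. Under either intuitive adjacency rule the diagonal-to-side ratio comes out as 2 or 1, never the Euclidean $\sqrt{2}\approx 1.414$:

```python
from collections import deque

def grid_distance(n, neighbors):
    """BFS step counts from tile (0,0) to every tile of an (n+1)x(n+1) grid."""
    dist = {(0, 0): 0}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        for dx, dy in neighbors:
            nx, ny = x + dx, y + dy
            if 0 <= nx <= n and 0 <= ny <= n and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

ORTHO = [(1, 0), (-1, 0), (0, 1), (0, -1)]            # footnote-4 adjacency
KING = ORTHO + [(1, 1), (1, -1), (-1, 1), (-1, -1)]   # diagonal tiles adjacent too

n = 200
for name, nbrs in [("orthogonal only", ORTHO), ("with diagonals", KING)]:
    d = grid_distance(n, nbrs)
    print(name, "diagonal/side =", d[(n, n)] / d[(n, 0)])   # 2.0 and 1.0
```

(Counting atoms instead of steps, as in the paper's Distance, shifts each count by one and leaves the limiting ratios unchanged.)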
5. Here I am using “dimension” in an informal (and hopefully intuitive) way that every region of N-dimensional atomistic space is also N-dimensional. In other words, dimensionality is an intrinsic property of an atom. But we can have alternative definitions of dimension in atomistic space, which will be briefly discussed in Sect. 6.
6. Note that this example does not solve Weyl’s tile argument: even though the sides and the diagonal of the square region satisfy the Pythagorean theorem, the distances along other directions don’t.
7. In Fritz’s formalism, atomistic space is modeled by an infinite graph composed of $$\mathbb {Z}^d$$-translates of a certain finite pattern—call each of those translates a “cell.” For example, in the hexagonal tile space, each cell contains just one vertex and six edges. According to Fritz, a cell must contain a very large number of edges in order for the metric of the graph to approximate Euclidean geometry closely at the large scale. This means that, if there is an atomistic space represented by a tile space that approximates Euclidean space very well at the large scale, the repeated pattern must be very complicated. I thank Fritz for clarifying the gist of Fritz (2013) in personal correspondence.
8. See McDaniel (2007) for more discussion of the view. McDaniel argued that the intrinsic account is true in some possible worlds, and in such worlds, atomistic space can approximate Euclidean distance.
9. In a general context, I use “point” to simply refer to an ultimate part of an arbitrary space.
10. Here, “connected” is used in the sense that a path $$a_1,\ldots ,a_k$$ can be connected with a path $$a_k,\ldots ,a_n$$ to form a single path $$a_1,\ldots ,a_n$$ ($$1\le k\le n$$).
11. A semimetric is a generalized distance function that does not satisfy triangle inequality. Under the intrinsic account, it is hard to see why a space cannot have a semimetric.
12. This condition is violated in some approaches to discrete spacetime, such as that of Crouse and Skufca (2018). According to Crouse and Skufca, a particle can jump in any direction as long as the minimal length of a step is a constant number $$\chi$$. This allows every point in continuous space to be a potential position of a particle. So it may be more natural to consider their approach to be about a discrete dynamics rather than a discrete spacetime.
13. The construction of distance from proto-distance is closely related to the definition of geodesic distance in a weighted graph in graph theory, and to the construction of metric from semi-metric or quasi-metric (for example, see Harary 1969; Paluszyński and Stempak 2009).
In more general settings, especially for continuous space, it is standard to define the distance between two points as the infimum of the lengths of paths between them, since a shortest path between them may not exist. However, this definition coincides with my definition in the case of atomistic space due to the requirement that for any atom a and any real number r, there are only finitely many atoms x with $$\mathbf{d} (a,x)< r$$.
14. The mixed account can accommodate curved space as well. I will not go into details here, but one can refer to Forrest (1995, pp. 334–340), in which Forrest explained how an atomistic model can approximate curved space once we have a model that approximates Euclidean space.
15. For instance, in two-dimensional Euclidean space (or any flat two-dimensional Riemannian manifold), the length of a tangent vector expressed by $$(\frac{dx}{dt}, \frac{dy}{dt})$$ is $$\sqrt{(\frac{dx}{dt})^2+(\frac{dy}{dt})^2}$$.
16. More formally, consider a path in two-dimensional Euclidean space. Let g be a metric tensor and T range over tangent vectors along a path. Then the length of that path is $$\int \sqrt{g(T,T)}dt$$.
17. For example, Weatherson (2006) argued that we should define duplicates in terms of fundamental properties and relations in a way that weeds out neighborhood-dependent aspects. Bricker (1993) suggested that local metrics are distances in infinitesimal neighborhoods of points.
18. Van Bendegem (1987, 1995) also proposed solutions to Weyl’s tile argument. I consider his later proposal as a restricted version of Forrest’s account. We can have a one-to-one correspondence between points (a technical notion) in Bendegem’s model and atoms in Forrest’s model that preserves distance. But Forrest’s account allows models that are incompatible with Bendegem’s account.
19. I change some of Forrest’s terminology to align with mine. He calls atomistic space “discrete space” and atoms “points.”
20. For the proof, see Forrest (1995, pp. 344–346).
21. The parameter $$m=10^{30}$$ is a number given by Forrest to ensure the model to approximate Euclidean geometry at the large scale (Forrest 1995, p. 333).
22. For example, see ’t Hooft (2016).
23. As shown in Appendix A, in order for the atomistic model to approximate Euclidean space, the longest primitive distance needs to be about as large as the shortest primitive distance divided by the permitted distortion (as expressed by “$$M>3r/\delta$$” in the Appendix).
24. Suppose the Euclidean mixed model were more than two-dimensional locally; then there would be more than three atoms in a local neighborhood equidistant from each other. But their distances just are the Euclidean distances among their representative pairs of integers, so there would be more than three pairs of integers equidistant from each other on the Euclidean plane, which is known to be impossible. Thus the Euclidean mixed model is no more than two-dimensional locally. Moreover, it is clearly not one-dimensional locally, so it is exactly two-dimensional.
25. Forrest needs the definition of dimensionality to be relative to the scale because he wants to recover some sense in which space is three (or four) dimensional.
26. This definition is analogous to the definition of the dimension of a manifold (i.e., a continuous space). One may try to translate this definition into a more intrinsic form such as this:
Dimension$$\dagger$$. A space is N-dimensional iff N is the least number that there are at most $$N+1$$ atoms that bear the same primitive distance to each other.
The problem with Dimension$$\dagger$$ is that it leads to counterintuitive results. For instance, if no two pairs of atoms in the same local neighborhood have the same primitive distance, then Dimension$$\dagger$$ would imply that the space is one-dimensional. But when such a space is not embeddable into one-dimensional continuous space, it is intuitively not one-dimensional.
27. Here, the notion of approximation is cast in a different way from Forrest’s (1995). Forrest showed that his model approximates Euclidean space in the sense that we can map Euclidean space into his model such that distances are approximately preserved. Here, it is the other way around: a model approximates Euclidean space in the sense that we can map the model into Euclidean space in a way that approximately preserves distances. I do not consider either interpretation of approximation better than the other, but I work with this one because I find it a bit more natural.
28. Here’s a proof for the simple case in which two circles in question have the same radius, which is adequate for our purpose. Let two circles be $$x_1=r\cos \theta _1$$, $$y_1=r\sin \theta _1$$, $$x_2=r\cos \theta _2+n$$, $$y_2=r\sin \theta _2.$$ Then, $$(x_1-x_2)^2+(y_1-y_2)^2=n^2-2r^2\cos (\theta _1-\theta _2)-2nr(\cos \theta _1-\cos \theta _2)+2r^2\le n^2+2r^2+4nr+2r^2=(n+2r)^2.$$ That is, for two circles with the same size, the largest distance between two points on them is equal to the distance between their centers plus their radii.
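The inequality in this footnote can also be sanity-checked numerically. The sketch below is my own code, not from the paper (the function name is an assumption): it grid-searches over pairs of points on two radius-$r$ circles whose centers are $n$ apart and confirms that the maximum distance is $n+2r$:

```python
import math
import itertools

def max_circle_distance(n, r, steps=360):
    """Grid-search the largest distance between a point on the circle of radius r
    centered at (0, 0) and a point on the circle of radius r centered at (n, 0)."""
    angles = [2 * math.pi * k / steps for k in range(steps)]
    return max(
        math.hypot(r * math.cos(a) - n - r * math.cos(b),
                   r * math.sin(a) - r * math.sin(b))
        for a, b in itertools.product(angles, angles)
    )

print(max_circle_distance(5, 1))   # ≈ 7.0, i.e. n + 2r
```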
## References
• Baez, J. (2018). Struggles with the Continuum. arXiv:1609.01421 [math-ph].
• Bricker, P. (1993). The fabric of space: Intrinsic vs. extrinsic distance relations. Midwest Studies in Philosophy, 18(1), 271–94.
• Crouse, D., & Skufca, J. (2018). On the nature of discrete space-time. Logique et Analyse, 62(246), 177–223.
• Forrest, P. (1995). Is space-time discrete or continuous?—-an empirical question. Synthese, 103, 327–354.
• Fritz, T. (2013). Velocity polytopes of periodic graphs and a no-go theorem for digital physics. Discrete Mathematics, 313, 1289–1301.
• Harary, F. (1969). Graph theory (Volume 2787 of Addison-Wesley series in mathematics). Boston: Addison-Wesley Pub.Co.
• Hogan, C. (2012). Interferometers as probes of Planckian quantum geometry. Physics Review D, 85(6), 064007.
• Lewis, D. (1986). Philosophical Papers: Volume II. Oxford: Blackwell Publishers.
• Maudlin, T. (2014). New foundation for physical geometry: The theory of linear structures. Oxford: Oxford University Press.
• Maudlin, T. (2007). The metaphysics within physics. Oxford: Clarendon.
• McDaniel, K. (2007). Distance and discrete space. Synthese, 155, 157–162.
• National Institute of Standards and Technology (abbr. NIST). (n.d.) Lattice parameter of silicon. The NIST reference on constants, units and uncertainty. Retrieved November 23, 2018, from https://physics.nist.gov/.
• Paluszyński, M., & Stempak, K. (2009). On quasi-metric and metric spaces. Proceedings of the American Mathematical Society, 137(12), 4307–4312.
• Perlick, V. (2007). On the radar method in general-relativistic spacetimes. In H. Dittus, C. Lammerzahl, & S. G. Turyshev (Eds.), Lasers, clocks and drag-free control (pp. 131–152). Berlin: Springer.
• Riemann, B. (1866). On the hypotheses which lie at the foundations of geometry. In Spivak (1999), A comprehensive introduction to differential geometry: volume II (pp. 153–164). Houston: Publish or Perish.
• Rosser, W. G. V. (1991). Introductory special relativity. London: Taylor & Francis.
• Salmon, W. (1980). Space, time, and motion. Minneapolis: University of Minnesota Press.
• Sorkin, R. D. (1990). Spacetime and causal sets. In Relativity and gravitation: Classical and quantum (Proceedings of the SILARG VII Conference) (pp. 150–173).
• ’t Hooft, G. (2016). How quantization of gravity leads to a discrete space-time. Journal of Physics: Conference Series, 701, 012014.
• Van Bendegem, J. P. (2019). Finitism in geometry. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2019 Edition). Retrieved Mar, 2019 from https://plato.stanford.edu/archives/fall2019/entries/geometry-finitism/.
• Van Bendegem, J. P. (1995). In defence of discrete space and time. Logique et Analyse, 38(150–152), 127–150.
• Van Bendegem, J. P. (1987). Zeno’s paradoxes and the Weyl tile argument. Philosophy of Science, 54(2), 295–302.
• Weatherson, B. (2006). The asymmetric magnets problem. Philosophical Perspectives, 20, 479–92.
• Weyl, H. (1949). Philosophy of mathematics and natural sciences. Princeton: Princeton University Press.
## Author information
### Corresponding author
Correspondence to Lu Chen.
I thank Philip Bricker and Jeffrey Russell for very helpful guidance, feedback, and discussions. I thank the audience at my talks based on this paper in Metaphysical Mayhem at Rutgers University in 2018, and in Philosophy of Logic, Mathematics, and Physics Graduate Conference at the University of Western Ontario in 2019. Among the audience, I especially thank Cian Dorr for his helpful feedback. I’d also like to thank a referee of Synthese for pressing me on the application of my account to relativistic settings, which helps clarify the relevance of the account.
## Appendix A
### Appendix A
Now I shall turn to how well space approximates Euclidean space under the mixed account. Under this account, an atomistic space can be represented by a set of points with a shortest path metric that assigns some pairs of points real-valued distances (bounded by a finite number) and derives other distances as their least sums.
We will understand “approximation” in terms of “almost isometry.” Let $e(p,q)$ be the Euclidean distance between two points $p,q$ in Euclidean space. Let $\epsilon , r$ be two positive numbers. A metric space X with a metric d is $\epsilon$-isometric to Euclidean space E with regard to r iff there is a map f from X to E such that (1) for $x,y\in X$, we have
\begin{aligned} 1-\epsilon \le \frac{e(f(x),f(y))}{d(x, y)}\le 1+\epsilon \end{aligned}
(the smallest $\epsilon$ such that f satisfies this condition is called the distortion of f);Footnote 27 (2) for every $p\in E$, there is an $x\in X$ such that $e(p,f(x))\le r$. In other words, the embedded points cover E reasonably well, so that there are no obvious “clusters” or “holes.”
### Theorem A.1
For any $$\epsilon$$ and r, there is a set of points with a shortest path metric (with distances being bounded by a finite number) that is $$\epsilon$$-isometric to Euclidean space with regard to r.
### Proof
For brevity, I will resort to the following abbreviations when applicable. Given an embedding f of a metric space into Euclidean space, for any points $x,y$ in the space, let $\Vert xy\Vert _f=e(f(x),f(y))$ (the subscript “f” is omitted if it is clear which embedding we refer to). Also, for any points $p,q$ in Euclidean space, let $\Vert pq\Vert =e(p,q).$
Let G be an embedding of an infinite set X into Euclidean space E for which there is an r such that, for any $p\in E$, we can find an $x\in X$ with $e(p,G(x))<r$. (For example, if G maps members of X to the Euclidean points represented by pairs of integers, then the r in question is at least $\sqrt{2}/2$.) We will construct a metric over X such that the resulting metric space is $\epsilon$-isometric to Euclidean space under G, where $\epsilon$ is a small number we choose.
M is a real-number parameter that will play an important role in assigning weights and in determining the distortion of the intended embedding. For any $x,y\in X$, if $\Vert xy\Vert > M$, we can find a sequence of points $p_0,p_1,\ldots ,p_n$ in E such that $p_0=G(x)$, $p_n=G(y)$, $\Vert p_0p_1\Vert =\Vert p_1p_2\Vert =\cdots =\Vert p_{n-2}p_{n-1}\Vert =M$ and $\Vert p_{n-1}p_n\Vert <M$. Let $N=\Vert p_{n-1}p_n\Vert$. Consider $p_i,p_{i+1}$, where $i=1,\ldots ,n-2$. We can find $x_i,x_{i+1}\in X$ such that $e(G(x_i),p_i)<r$ and $e(G(x_{i+1}),p_{i+1})<r$. We know that the largest distance between points on two circles is equal to the distance between their centers plus their radii.Footnote 28 Thus, $\Vert x_ix_{i+1}\Vert <M+2r$. Now, for any two $a,b\in X$, if $\Vert ab\Vert <M+2r$, then let them be connected by an edge with the weight $d(a,b)=\Vert ab\Vert$; otherwise, a, b are not connected by an edge. Then, $M\le d(x_i,x_{i+1})<M+2r$. Moreover, it’s easy to see that $M\le d(x,x_1)\le M+r$ and $N\le d(x_{n-1},y)\le N+r$. It follows that $d(x,y)\le d(x,x_1)+d(x_1,x_2)+\cdots +d(x_{n-1},y)< n\cdot (M+2r)+(N+r)$. Furthermore, if $x, x_1, \ldots , x_{n-1}, y$ is a shortest path, then $d(x,y)=d(x,x_1)+d(x_1,x_2)+\cdots +d(x_{n-1},y)=\Vert xx_1\Vert +\cdots +\Vert x_{n-1}y\Vert \ge \Vert xy\Vert$. Thus, we have:
\begin{aligned} 1\le \frac{d(x,y)}{\Vert xy\Vert }<\frac{n\cdot (M+2r)+(N+r)}{nM+N}=1+\frac{(2n+1)r}{nM+N} \end{aligned}
The distortion $$\displaystyle \delta =\frac{d(x,y)}{\Vert xy\Vert }-1<\frac{(2n+1)r}{nM+N}<\frac{(2n+1)r}{nM}<\frac{3r}{M}$$. Then, for any small positive number $$\epsilon$$, we can make $$\delta <\epsilon$$ by letting $$M=3r/\epsilon$$. (Note that if we are only concerned with distances that involve a large n, we only need M to be $$2r/\epsilon$$.) This completes the case for any $$x,y\in X$$ with $$\Vert xy\Vert > M$$. If $$\Vert xy\Vert \le M$$, then we have $$d(x,y)=\Vert xy\Vert$$, in which case there is no distortion. Therefore, we have found a metric space, in which all distances are bounded by $$3r/\epsilon +2r$$, that is $$\epsilon$$-isometric to Euclidean space at any scale. $$\square$$
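For concreteness, the construction in the proof can be instantiated on a small patch of the integer lattice. The code below is my own illustration, not the author's (the names and the choices $M=5$, $L=30$ are assumptions): pairs of lattice points closer than $M+2r$ (with $r=\sqrt{2}/2$) receive a primitive distance equal to their Euclidean distance, all other distances are least sums along paths (Dijkstra's algorithm), and the measured distortion indeed stays below $3r/M\approx 0.42$:

```python
import heapq
import math

M, L = 5.0, 30
r = math.sqrt(2) / 2          # covering radius of the integer lattice
cutoff = M + 2 * r            # pairs closer than this get a primitive distance

def shortest_path_distance(src, dst):
    """Dijkstra over lattice points in [0, L]^2 with Euclidean edge weights."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    reach = int(cutoff) + 1
    while heap:
        d, (x, y) = heapq.heappop(heap)
        if (x, y) == dst:
            return d
        if d > dist.get((x, y), math.inf):
            continue                       # stale heap entry
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                w = math.hypot(dx, dy)
                nx, ny = x + dx, y + dy
                if 0 < w < cutoff and 0 <= nx <= L and 0 <= ny <= L:
                    nd = d + w
                    if nd < dist.get((nx, ny), math.inf):
                        dist[(nx, ny)] = nd
                        heapq.heappush(heap, (nd, (nx, ny)))
    return math.inf

for dst in [(L, 0), (L, 7), (L, L)]:
    e = math.hypot(*dst)
    d = shortest_path_distance((0, 0), dst)
    assert 1.0 - 1e-9 <= d / e <= 1 + 3 * r / M
    print(dst, "distortion:", round(d / e - 1, 4))
```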
Chen, L. Intrinsic local distances: a mixed solution to Weyl’s tile argument. Synthese 198, 7533–7552 (2021). https://doi.org/10.1007/s11229-020-02531-4
### Keywords
• Weyl’s tile argument
• Atomistic space
• Discrete space
• Intrinsic distance
• Path-dependent distance
• Locality
• Metric tensor | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9240057468414307, "perplexity": 704.7356432220505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104204514.62/warc/CC-MAIN-20220702192528-20220702222528-00126.warc.gz"} |
https://www.tutorialspoint.com/voice-of-which-of-the-following-is-likely-to-have-a-minimum-frequency-a-baby-girl-b-baby-boy-c-a-man-d-a-woman | # Voice of which of the following is likely to have a minimum frequency?
(a) Baby girl
(b) Baby boy
(c) A man
(d) A woman
The correct answer is: (c) A man
Explanation:
Let us see whose voice has the minimum frequency in the given group:
Voices with the least and highest shrillness:
Of a baby girl, a baby boy, a woman, and a man, the man's voice is the least shrill, so a man's voice has the least pitch. Similarly, the baby girl's voice is the highest-pitched in the given group.
Sounds with minimum and maximum frequencies:
We know that the frequency of a sound rises with its pitch: the higher the pitch of the sound, the higher its frequency. So, in the given group of a baby girl, a baby boy, a woman, and a man, the man's voice will have the lowest frequency, as a man's voice is the lowest-pitched. Similarly, the baby girl's voice will have the highest frequency.
Therefore, option (c) is correct.
Updated on 10-Oct-2022 13:25:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16065578162670135, "perplexity": 4181.9283548201465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00223.warc.gz"} |
http://mathhelpforum.com/new-users/207355-exercise-conversion.html | # Math Help - Exercise with Conversion
1. ## Exercise with Conversion
Hello. I am doing research on representations in trigonometry. I am trying to find exercises with conversions from one representation to another. The five types of trigonometric representation I will use are trigonometric functions, right triangles, the trigonometric circle, trigonometric identities, and trigonometric graphs. For example, "Sketch y=sinx" is a conversion from a trigonometric function to a trigonometric graph. Any ideas?
2. ## Re: Exercise with Conversion
please post trig questions in the trig forum | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9821094274520874, "perplexity": 3199.144629868438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00102-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/233652/generalized-catalan-numbers | # Generalized Catalan Numbers
I'm concerned here with Catalan numbers.
There are many combinatorial interpretations of these numbers. Here I will focus on the interpretation in terms of words built from 2 symbols, say [ and ]. The Nth Catalan number for such a pair of symbols counts the words of length 2N in which no initial subword contains more ] than [.
I'm interested in what we can say if we take not 2 symbols but 4, say [, ] and (, ).
Then the Nth "generalized" Catalan number would be interpreted as the number of words of length 2N built from these 4 symbols such that every subword is, say, "simple Catalan" for the pair [, ], or "simple Catalan" for the pair (, ), or both. In other words, for any subword there is a pair of symbols which is "simple Catalan" for that subword.
Hope I'm clear enough.
-
As you can read in the answer by A.Schulz and its OEIS link, for length $N$ there are $2^N$ times as many properly nested words on two bracket pairs as there are Catalan words. Be careful how you define those words. If you only test subwords on each of the bracket pairs separately, you might end up with ([)], which probably is not what you want? – Hendrik Jan Nov 9 '12 at 17:00
The question is a little bit ambiguous. Are these the pairs that you want to count for $N=2$? (()), [()], ([]), [[]], ()(), [](), ()[], and [][]. – Brian M. Scott Nov 9 '12 at 19:09
Hi Brian. No, not exactly. In your set for N=2, the following are OK: (()), [()], ([]), [[]]. The others, ()(), [](), ()[], [][], are not, because they have balanced initial subwords. For example, in ()[] you have the balanced initial subword (). Maybe two things were not clear in my first question: 1) I ask that in any INITIAL subword (and not any subword) there is at least one pair of symbols which is not balanced. 2) I realize that it's not exactly linked to the Catalan numbers as such, but quite the same, just a shift of index. Hope it's less ambiguous. Gianfranco – Gianfranco OLDANI Nov 10 '12 at 16:45
Unless you restate this question more clearly, you will not get any satisfactory answers. You never define what property defines "simple Catalan" for one pair of symbols, but assuming this means "balanced after throwing out all symbols not belonging to that pair", you are allowing things like ][][()() because, even though it is nonsense for [,], it is "simple Catalan" for (,). You said "or", not "and". – Marc van Leeuwen Mar 4 '13 at 6:06
## 1 Answer
For two different types of parenthesis this the sequence is listed in the OEIS here.
Words with balanced $k$-type parentheses are known as $\text{Dyck}(k)$ words. Maybe this helps for further investigations.
-
Thanks Schulz and Hendrik. In fact, at step 2N, yes, I want a balanced pattern like ([)] for example. But before 2N I don't: I mean that at any step before 2N I want at least one unbalanced pair of symbols, () unbalanced or [] unbalanced. Example: [((]))[] is not good because it is balanced for all pairs at the final step but also at a previous step. [((])[)] is good: only at the end are all pairs simultaneously balanced. Thanks again – Gianfranco OLDANI Nov 9 '12 at 17:19
https://kb.osu.edu/dspace/handle/1811/53178 | # The Effects of Numeracy and Brand Preference on the Left-Digit Effect
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/53178
File: DavidWeinerHonorsThesis.pdf (1.191 MB, PDF)
Title: The Effects of Numeracy and Brand Preference on the Left-Digit Effect
Creators: Weiner, David
Advisor: Peters, Ellen
Issue Date: 2012-12
Abstract: A left-digit effect (LDE) is said to occur when a change in the left-most digit of a value (e.g., when \$4.00 drops to \$2.99 versus when it drops from \$4.01 to \$3.00) significantly increases consumer judgments of the price difference. Thomas and Morwitz (2005) showed that it was the change in the left digit, rather than the one-cent drop, that affects these perceptions. This effect has domain invariance, meaning the left-digit effect manifests not only in prices, but also in other types of nine-ending numbers (Thomas & Morwitz, 2005). The finding has important implications for pricing practices. The present study examined whether individual differences in numeracy or affective information based on brand preference influence the left-digit effect. Peters et al. (2006) defined numeracy as the ability to use and understand basic probability and mathematical concepts. People range widely in their ability to use numbers, and the highly numerate encode numeric and non-numeric information differently than the less numerate when making decisions. We predicted that numeracy ability along with brand preference can help negate the left-digit effect and lead to better magnitude perceptions. Using a modification of the experimental paradigm from Thomas and Morwitz, we showed participants two 12-packs of soda that either had a brand name (Coke and Pepsi) or were generic (the control condition). Subsequently, subjects stated their preference between the sodas (e.g., ranging from strongly prefer Coke to strongly prefer Pepsi in the experimental condition). Subjects then took an eight-question math test to assess their numeracy ability (Weller et al., 2012). Results indicated that, overall, more numerate individuals paradoxically showed larger LDEs. Participants did not show significantly larger LDEs when brand names were used versus when they were not.
Embargo: No embargo
Series/Report no.: The Ohio State University. Department of Psychology Honors Theses; 2012
Keywords: decision making; brand; consumer behavior; individual differences; numeracy
URI: http://hdl.handle.net/1811/53178
https://cstheory.stackexchange.com/questions/18675/complexity-class-for-optimization-problems-over-p-functions | # Complexity class for Optimization problems over #P functions
Is there any complexity class which contains problems that can be expressed as an optimization over polynomially many #P functions? That is:
$$\tilde{f}(x) = \text{Max}_{f \in F}f(x)$$
where each $f\in F$ is a $\#P$ function.
Moreover, if I can reduce a #P-Complete function to this, by means of a polynomial time Turing reduction, am I allowed to say formally that:
$$\tilde{f} \in \#P-\text{Complete}$$
OR
$$FP^{\tilde{f}} \in FP^{\#P}$$
The latter seems more plausible to me than the former, but could anyone please tell me if I could use the former claim.
It's in $\mathrm{FP^{\#P}}$, since you can compute it in poly time using oracle calls to compute the $f$s. If you can poly-time Turing reduce a $\mathrm{\#P}$-complete problem to it, then $\tilde{f}$ is $\mathrm{\#P}$-hard but you can't say it's $\mathrm{\#P}$-complete unless you can show that it's in $\mathrm{\#P}$. That seems unlikely, to me, as I can't see any way of building a nondeterministic Turing machine to have that many accepting paths.
Since $\tilde{f}\in\mathrm{FP^{\#P}}$ and $\mathrm{FP^{FP^{\#P}}}=\mathrm{FP^{\#P}}$ (just compose the reductions), $\mathrm{FP}^{\tilde{f}}\subseteq \mathrm{FP^{\#P}}$. Note that $\mathrm{FP}^{\tilde{f}}$ is a complexity class (all problems that are poly-time Turing reducible to $\tilde{f}$), not a problem.
• Since I mentioned that $F$ is the set of all $\#P$ functions and $\tilde{f}$ is one of them, can't we say that $\tilde{f} \in\#P-\text{Complete}$ ? – Pavithran Iyer Sep 3 '13 at 18:11
• $\tilde{f}$ is the maximum of a set of \#P functions, and that isn't obviously in \#P. Why do you think it's in \#P? – David Richerby Sep 3 '13 at 20:09
• Sorry if I am missing something. "$\tilde{f}$ is the maximum amongst the set of all $\# P$ functions", which means $\tilde{f}$ is in $\# P$ because the maximization is only over the set of $\# P$ functions. Right ? – Pavithran Iyer Sep 5 '13 at 17:31
• The definition given was $\tilde{f}(x) = \max_{f\in F}f(x)$, for some set $F\subseteq \#P$. That doesn't imply that $\tilde{f}\in F$, since the max is performed separately for each value of $x$. – David Richerby Sep 5 '13 at 21:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8203837871551514, "perplexity": 316.6678085037866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998844.16/warc/CC-MAIN-20190618223541-20190619005541-00424.warc.gz"} |
http://mathhelpforum.com/advanced-applied-math/177057-qm-proving-uncertainty-relation-commutators-print.html | # QM- Proving the uncertainty relation - Commutators
• Apr 6th 2011, 12:52 PM
bugatti79
QM- Proving the uncertainty relation - Commutators
Folks,
I am stuck on the derivation of the uncertainty relation when using the commutator and anti commutator...
given $\Delta A=\hat A -\langle A \rangle$,
$$\Delta A \,\Delta B=\frac{1}{2}\,[\Delta A, \Delta B]+\frac{1}{2}\,\{\Delta A, \Delta B\}$$
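The quoted identity is nothing QM-specific — it is just the split of a product into its antisymmetric and symmetric halves, valid for any two operators:

```latex
XY \;=\; \tfrac{1}{2}\,(XY - YX) \;+\; \tfrac{1}{2}\,(XY + YX)
   \;=\; \tfrac{1}{2}\,[X, Y] \;+\; \tfrac{1}{2}\,\{X, Y\}
```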
Above on the RHS are the commutator and anti-commutator, respectively. I don't understand the next line for the commutator:
$$[\Delta A, \Delta B]_\pm=[\hat A -\langle A \rangle , \hat B -\langle B \rangle ]_\pm=[\hat A, \hat B -\langle B \rangle ]_\pm-[\langle A \rangle , \hat B-\langle B \rangle ]_\pm$$
The last term in the above line is 0 because $\langle A \rangle$ is a c-number:
$$=[\hat A, \hat B]-[\hat A, \langle B \rangle ]_\pm$$
where the last term in the above line is also 0 for the same reason.
Now I do know $[A,B]$ is $AB-BA$, but I don't know where this algebraic derivation comes from.
Thanks
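The step being asked about can be checked symbolically: SymPy's noncommutative symbols model the operators, while ordinary (commutative) symbols `a` and `b` stand in for the c-numbers $\langle A \rangle$ and $\langle B \rangle$ (names chosen here for illustration):

```python
import sympy as sp

A, B = sp.symbols('A B', commutative=False)  # operators
a, b = sp.symbols('a b')                     # c-numbers <A>, <B>

def comm(X, Y):
    """Commutator [X, Y] = XY - YX, fully expanded."""
    return sp.expand(X*Y - Y*X)

# Every term containing a scalar cancels, so [A - <A>, B - <B>] = [A, B]:
assert sp.expand(comm(A - a, B - b) - comm(A, B)) == 0
# ...because a scalar commutes with anything, killing the dropped terms:
assert comm(a, B) == 0 and comm(a, b) == 0
```

This verifies only the commutator ($-$) case of the derivation; for the anticommutator the scalar terms do not cancel the same way.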
• Apr 8th 2011, 10:12 AM
bugatti79
Quote:
Originally Posted by bugatti79
Above on the RHS are the commutator and anti-commutator, respectively. I don't understand the next line for the commutator:
$$[\Delta A, \Delta B]_\pm=[\hat A -\langle A \rangle , \hat B -\langle B \rangle ]_\pm=[\hat A, \hat B -\langle B \rangle ]_\pm-[\langle A \rangle , \hat B-\langle B \rangle ]_\pm$$
The last term above can be written as
$$[\langle A \rangle, \hat B] - [\langle A \rangle, \langle B \rangle]$$
where $\langle A \rangle$ and $\langle B \rangle$ are numbers and $\hat B$ is an operator.
The commutator $[\langle A \rangle,\langle B \rangle]$ is $\langle A \rangle\langle B \rangle-\langle B \rangle\langle A \rangle$, but we know that $\langle A \rangle\langle B \rangle=\langle B \rangle\langle A \rangle$, which implies $[\langle A \rangle,\langle B \rangle]=0$. Similarly for $[\langle A \rangle, \hat B]$.
The procedure is the same for the second-to-last term above and finally yields
$$[\Delta A, \Delta B]_\pm = [\hat A, \hat B]$$
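For completeness — this is the standard next step, not something spelled out in the thread: combining $[\Delta A, \Delta B] = [\hat A, \hat B]$ with the Cauchy–Schwarz inequality applied to the states $\Delta A\,\psi$ and $\Delta B\,\psi$ gives the Robertson uncertainty relation,

```latex
\sigma_A \,\sigma_B \;\ge\; \frac{1}{2}\,
  \Bigl|\bigl\langle\, [\hat A, \hat B] \,\bigr\rangle\Bigr|,
\qquad \sigma_A^2 = \bigl\langle (\Delta A)^2 \bigr\rangle .
```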
http://tex.stackexchange.com/tags/theorems/hot | # Tag Info
8
This does what you want, though I find it cumbersome and not really informative. \documentclass{book} \usepackage{amsthm,xpatch} \makeatletter \let\qed@empty\openbox % <--- change here, if desired \def\@begintheorem#1#2[#3]{% \deferred@thm@head{% \the\thm@headfont\thm@indent \@ifempty{#1} {\let\thmname\@gobble} ...
7
If we use the amsthm package then we can do this by hijacking the \qedsymbol command and hacking the way that the theorem environments are constructed internally. This comes down to adding some code to \@begintheorem to overwrite \qedsymbol so that it becomes a boxed version of the last theorem number. There are two issues with the code below. The first ...
5
Use a variant of the \tmark command defined in my answer to Moving an object to the right margin \documentclass{article} \usepackage{amsthm} \makeatletter \renewcommand\@endtheorem{\vvv@endmarker\endtrivlist\@endpefalse} \newcommand\vvv@endmarker{% {\unskip\nobreak\hfil\penalty50 \hskip2em\vadjust{}\nobreak\hfil\openbox \parfillskip=0pt ...
4
You're on the wrong track. ;-) Just setup a new counter and use it. \documentclass{book} \usepackage{xparse} \newcounter{study} \NewDocumentCommand{\study}{om}{% \refstepcounter{study}\IfValueT{#1}{\label{#1}}% \section{Study \thestudy: #2}% } \begin{document} \frontmatter \tableofcontents \mainmatter \chapter{Title} \section{A regular ...
4
Asssume you are using \newtheorem from amsthm. Here \newtheorem defines new theorem-alike environments and ends them by \@endtheorem. The later is originally defined as \def\@endtheorem{\endtrivlist\@endpefalse } and you can insert your ending symbol here. \documentclass{article} \usepackage{amsthm} \usepackage{manfnt} \makeatletter ...
3
You want to use different \mdcreateextratikz for the two environments: \documentclass[a4paper,10pt]{memoir} \usepackage[utf8]{inputenc} \usepackage{xcolor} \usepackage[framemethod=tikz]{mdframed} \usetikzlibrary{shadows,shadings} \usepackage{lipsum} \usepackage{calc} \usepackage{tikz} \usetikzlibrary{shapes,snakes} \newcounter{demo_counter} ...
3
Define a new key and use it for setting the thickness of the top rule: \documentclass{report} \usepackage{thmbox} \usepackage{xcolor} \makeatletter \def\thmbox@color{black} \define@key{thmbox}{color}{\def\thmbox@color{#1}} \define@key{thmbox}{topthickness}{\def\thmbox@topthickness{#1}} \def\thmbox@topthickness{\thmbox@thickness}% default ...
3
The following likely won't work in the real world so you'll have to try it and see. However, it does at least produce the right result for the MWE. \documentclass{beamer} \usepackage{lmodern} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage[spanish]{babel} \uselanguage{Spanish} \languagepath{Spanish} ...
3
As it has been already explained, Beamer uses translator (its particular babel system). If the dictionary exists, you can active the system introducing the language name in beamer options: \documentclass[spanish]{beamer}. This way spanish is applied to beamer-translator system and babel. If spanish is passed only as babel option, you need to apply it for ...
3
amsthm is a package to create theorems and theorem-related environments. It does not do this by default. So, you could issue \newtheorem{theorem}{Theorem} to define the theorem environment:

\documentclass{article}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}
Test
\end{theorem}
\end{document}
3
This is adapted from what I have used in the past (not sure where I got it from): Code: \documentclass{article} \usepackage{amsmath} \newcommand*{\QED}[1]{% \ifmmode% Check for math mode. \tag*{\fbox{#1}}% \else% {\rightskip\fill\parfillskip-\rightskip% \linepenalty100% \exhyphenpenalty0% ...
3
The idea is to add a conditional that's true when a proof starts and change the code accordingly: if the conditional is false, we're not nesting, so we set \qed@current to \qed@empty, otherwise we use the same mechanism as in the other answer. \documentclass{book} \usepackage{amsthm,xpatch} \makeatletter \let\qed@empty\openbox % <--- change here, if ...
3
As Barbara Beeton says this is expected behaviour. However, you can circumvent it by adding \unskip before the \pagebreak: An alternative suggested by egreg is \addpenalty{-10000} instead of the combination \unskip\pagebreak. The code for \addpenalty essentially includes \unskip and \pagebreak (with no argument) is essentially \penalty-10000. ...
2
I am not sure that I understand what you are asking as you seem to be saying that you want two theorem 0.1s an two theorem 0.2s. This does not make sense to me, so I think that you probably want something like this: To do this I have defined a fake \section command that uses a mysection counter, which is also used to number the theorems. ...
2
You should add the definition of \@dotsep to your preamble:

\makeatletter
\newcommand\@dotsep{4.5}
\makeatother

The above definition is similar to the other sectional units' dot separations, which originally was taken from one of the default document classes (see, for example, article.cls).
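A compilable sketch of where that snippet goes — assuming a situation in which \@dotsep is missing (the standard classes already define it, hence \providecommand rather than \newcommand here):

```latex
\documentclass{article}
\makeatletter
\providecommand\@dotsep{4.5}% dot separation used by \@dottedtocline, in mu
\makeatother
\begin{document}
\tableofcontents
\section{A section}
Some text.
\end{document}
```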
1
I don't have enough reputation to add a comment to the previous answer, so let me add the following as an answer instead. I found what Philippe Goutet said in the comment to lockstep's answer to be true, namely that a footnote appears in references to the theorem too. Perhaps it is obvious to others how to implement Philippe's fix, but it wasn't to me: it ...
1
changing the format of the \section command is trivial, and a much better approach (as pointed out by ulrike fischer) than skipping that level and using \subsection. this is the definition of \section in amsart.cls: \def\section{\@startsection{section}{1}% \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}% {\normalfont\scshape\centering}} just ...
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945344090461731, "perplexity": 4159.339943980135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663036.27/warc/CC-MAIN-20140930004103-00178-ip-10-234-18-248.ec2.internal.warc.gz"} |
https://qanda.ai/en/solutions/wgAC2Emb88-bora%20bne%20Herlasd%20diod%20bavsla%2020%20tedyshiov%20bns%20lediesd%20diod%20bsvsla | Symbol
Problem
bora bne Herlasd diod bavsla 20 tedyshiov bns lediesd diod bsvsla TEST $53$ Expressing Rational Numbers from Fraction Form to Decimal Form and $Mlce-versa$ $A$ Express the Fractions to Decimals forms, Round off your answers to hundredths place. $1.\right)$ $2/6=$ $0.P0$ $f0$ $1$ $C$ $-31$ $5Q.\left(=3$ $S$ $\sum$ $2$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5409132838249207, "perplexity": 3204.6751687326714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487584018.1/warc/CC-MAIN-20210612132637-20210612162637-00216.warc.gz"} |