url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://github.com/phonchi
code
Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users. You must be logged in to block users. Contact GitHub support about this user’s behavior. Learn more about reporting abuse. Cryo-EM Platform Integrates Competitive Algorithms to Meet Current Challenges An Interactive Way To Illustrate Modern Cryptography for IoT A curated list of awesome computational cryo-EM methods. A curated list of awesome side-channel attack resources Seeing something unexpected? Take a look at the GitHub profile guide.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057018.8/warc/CC-MAIN-20210920040604-20210920070604-00020.warc.gz
CC-MAIN-2021-39
568
11
https://neurostars.org/t/bids-derivatives-current-status-of-spaces-mapping-metadata/16762
code
I’m working on scripts for analyzing BIDS-formatted fMRI data. I would like to keep the outputs consistent with upcoming BIDS releases (incorporating derivatives extensions), and am trying to figure out which metadata fields I should specify relating to spaces/atlases. Based on my understanding from documentation online, BIDS 1.4.0 has a SpatialReference field, specifying an atlas or reference image. The working copy of the BEP014 suggests that this information will be in two fields, ReferenceMap and NonstandardReference, depending on whether the reference is standard. A few questions, for @oesteban or whoever is aware - Will ReferenceMap and NonstandardReference supersede SpatialReference, or will the latter still be used for anything? I frequently analyze data in a functional template space defined per individual subject - i.e., one functional image that all runs/tasks for a given participant are registered to. Has there been any consideration of adding dedicated “space” labels for such functional template spaces, along the lines of the “individual” or “fsnative” labels for individual anatomical spaces?
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663048462.97/warc/CC-MAIN-20220529072915-20220529102915-00260.warc.gz
CC-MAIN-2022-21
1,136
5
https://groups.yahoo.com/neo/groups/agile-usability/conversations/messages/6714
code
6714Re: [agile-usability] Linkedin Group on Agile and UX - Jan 5, 2010Tim, Could you be more specific about "clearing the path for project success" means? In my organization, where the PMs are mostly working in the waterfall way, they handle project resourcing (we are an highly projectized internal agency and each team member bills their time by the hour... which is it's own kettle of fish...), so making sure the teams have who they need when they need them and resolving resource conflicts between different teams/projects is one "clearing the way" task PMs perform (again, specifically waterfall talk here). Since we do hourly billing, the PMs are also responsible for budgeting and accounting. They give regular financial reports to management about how the projects are "burning" against planned budgets. Both of these activities also feed into long range resource and budge forecasting. I'm not sure who on the "team" would take on these responsibilities in the Agile world. -ccOn Tue, Jan 5, 2010 at 2:21 PM, Tim Wright <sambo.shacklock@...> wrote:In our organisation, Agile PMs are responsible for delivary of all "in scope" project outputs. This is different to a scrummaster who is responsible for effective functioning of the team (give or take a few sterotypes). Typically, tho, the PM is an outward facing role who is always talking to other PMs and stakeholders to clear the path for project success and the scrummaster helps the team follow the path.TimOn Wed, Jan 6, 2010 at 9:56 AM, Margaret Motamed <motamed@...> wrote:And we use yahoogroups! Seriously, I am currently the scrum master for our division's agile transformation (enterprise transition) team. And I hope to become a product owner for one of the dev teams. I am also a program manager, a card carrying PMP and now CSM too smile. I have previously people managed a team of ux folk. We set up our company's first usability lab and trailblased personas. But none of it took the first time. So that's why I'm listening here. I've been a business analyst. A sw dev. A hardware engr. A research team member. Etc. Think of us useful team members who are generally resourceful. And enterprise wide there are still project details to manage too Fledgling blog www.agiledreamer.com From: email@example.com <firstname.lastname@example.org> To: email@example.com <firstname.lastname@example.org> Sent: Tue Jan 05 12:25:49 2010 Subject: Re: [agile-usability] Linkedin Group on Agile and UXOn Jan 5, 2010, at 1:49 PM, William Pietri wrote:Apparently, they use LinkedIn.Jared Confidentiality notice: This message may contain confidential information. It is intended only for the person to whom it is addressed. If you are not that person, you should not use this message. We request that you notify us by replying to this message, and then delete all copies including any contained in your reply. Thank you. - << Previous post in topic Next post in topic >>
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690318.83/warc/CC-MAIN-20170925040422-20170925060422-00278.warc.gz
CC-MAIN-2017-39
2,925
17
https://forums.unrealengine.com/t/line-of-sight-dynamic-mesh/158312
code
Line of sight. 3-axis rotation: X, Y, Z High performance: the component is written in C ++ Texture support: component has no UV, textures are supported in local mesh space Marketplace link: Here Setting options: Angle and Radius Simple usage: For minimal use only 2 functions need to be called A simple way to get information about detected objects An example of rotation along the Y axis. The component rotates along all 3 axes, this can be used for 2D games. Component can be used with postprocess for pseudo lighting Added version for 4.25. Changed the World rotation mode in the built-in functions (now it is not World Rotation, but the Y axis moves the mesh up) The PostProcess2 material has been removed. Hey, I really like this system. Is there a way to add verticality to the mesh? I would like to have a decal material that follows the shape, so it is projected down to uneven landscapes. Any pointers on how I would modify the C++ code to achieve this? Bugfix: after calling Stop Build Mesh and Start Build Mesh, the number of triangles increases and the translucency decreases MeshIsBuilt - Returns true if Start Build Mesh was called LineOfSightIsActive - will return true if Start Line Trace was called The Number Of Lines property is no longer available on the Detail Panel. Now the number of trace lines is specified in the Start Line Trace function. hi could you go into more detail on how to achieve verticality: i have this with decal currently. is it possible for the mesh in your plugin to be height aware ? Hi. Interesting question. The mesh is flat and defines enemies (objects) only that intersect the grid (or TraceLine lines if the mesh is not created). If you need to identify enemies at a different height, create several components (display only one). About the material : I haven’t tried decals. I can’t answer this question. But, one user sent me a screenshot, he used my component and made a Scene Capture and recorded the result in the Render Target. And Render Target was used in the material. Example Render Target and result: But I don’t have the details, I’m not an expert in materials.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00581.warc.gz
CC-MAIN-2022-49
2,130
26
https://osdn.net/projects/yash/ticket/39708
code
Unable to rebind pre-defined keybindings I am unable to rebind keys that yash defines from by default. For example, 'Ctrl-H' is used for 'backward-delete-char' so the following line in my yast configuration does nothing bindkey -e '\^H' complete-prev-column Also related is that 'complete-prev-column' and 'complete-next-column' move to the first entry of the column. For example: Documents [ Pictures ] bindkey -e '\H' complete-prev-column To rebind Ctrl-H, you may have to do bindkey '\?' ... or bindkey '\B' .... It depends on your terminal's configuration which binding Ctrl-H is actually treated as. 'complete-prev-column' and 'complete-next-column' move to the first entry of the column. That is exactly what complete-next/prev-column are expected to do. Could you elaborate on your issue? Binding '\?' worked for me, thanks. I'll have to go find a table that lists these... I would have thought both commands would keep to the same row, but if that's the intended behaviour then I don't have anything to add.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00149.warc.gz
CC-MAIN-2020-24
1,015
12
https://prog.world/an-analogue-of-moodle-or-as-a-law-teacher-he-created-his-own-distance-learning-system-part-1-beginning/
code
An analogue of Moodle, or as a law teacher, he created his own distance learning system. Part 1. Beginning Disclaimer: the distance learning system (DLS) is currently put into operation (production), approved, tested and is successfully operating. System free, is open source and posted on GitHub repository… In terms of the technology stack, it is built on the Laravel 8.0 (PHP 7.4) framework using libraries and other packages (programs): React (redux + router), SocketIo, Docker, NodeJs, Rest API, WebRTC, Leaflet, etc. In a series of articles, I will tell you how the system works, what I had to go through while developing it, and what problems I had to solve. The unique features of the system include: Internal anti-plagiarism. In the course of conducting classes, we faced the problem of students copying solutions to tasks from each other. In manual mode, it is quite difficult to track the degree of coincidence of solutions, and therefore a system has been developed showing the percentage of “similarity” of answers. Identification of technical methods to improve the uniqueness of the text. To increase the efficiency of checking works and detecting attempts to use various methods of increasing uniqueness, a system has been developed to identify signs of suspicious interference with the text. In addition to the standard verification algorithms (tests, document verification, chats, etc.), the following have been implemented: a. Quest. The visual quest designer allows you to implement almost any kind of tasks in the form of a game. b. Dialogue system. In the design mode, a tree of questions and answers is created that allow you to non-linearly solve individual problems (conversation, interrogation, algorithm of actions, etc.) c. Portfolio. To account for additional work, additional assessment categories have been created (science, creativity, layout, etc.). Analysis of the text of the answers. Automatic verification of texts and procedural documents allows you to save the teacher’s time and identify poor-quality answers. Internal self-test. Users can create tasks themselves and cross-check each other’s tasks. Own algorithm for comparing students’ decisions in order to exclude repetitions (duplication) of answers. You can call it your own internal anti-plagiarism. Virtual whiteboard… To demonstrate documents, presentations, text and photographs to all participants, as well as to conduct a blitz survey on the topic, a board was created that allows the organizer to see and automatically check the answers to tasks online, as well as enable / disable the visibility of the correct answer by users, the visibility of the answer other participants. Online support. Real-time monitoring of user-opened pages allows for more efficient support. Gps… To implement creative ideas, an interactive map has been created, on which users are marked in real time and their movement in the field. Video communication… For remote interaction, a video communication system is used, which allows demonstrating sound and image from a web camera, or the organizer’s screen. Telegram Bot… A telegram bot will automatically notify you of incoming messages. Having a higher legal education and having worked for several years as a practicing specialist, life has developed in such a way that I began to work as a teacher in an educational organization at the department of legal profile. Previously, I did not have to think about my own distance learning system – there was enough time for face-to-face classes. However, after a lapse of time (about 5 years), it turned out that there is much more knowledge that I would like to transfer to students than the curriculum allows. A review of the existing and most common CMS such as WordPress and Joomla showed that they solve slightly different tasks and out of the box do not have the functionality I need. Installing the necessary plugins, packages and extensions at that moment seemed an impossible task from the point of view of understanding the work of all this in a complex and expanding its own functionality. I also refused to install and use Moodle due to the complexity of understanding at that time its installation, use and cumbersomeness, even ready-made virtual machines were not considered. I wanted to do something of my own so that it would work quickly with a full understanding of the principles of work and further improvement. It was decided to create their own system from scratch. The first version of the system was written in PHP and JQuery using the MySql interface for accessing the database, all the material was found on the Internet and by copying the code fragment by fragment, a system was made that allows it to somehow support and expand its functionality. The shortcomings of this approach were discovered quite quickly, starting from the simplest SQL injections like “1 = 1” that were used to delete the database and the realization that working with procedural code is fraught with many problems in the form of poor support, confusion and code duplication. Having decided that it was necessary to study modern patterns and design patterns, I settled on the task of creating my own CMS using the MVC and OOP design pattern. Routing was done very primitively: the URI address was processed using regular expressions, the corresponding controller class was called manually (with its own autoload), which in turn called the settings, model and template classes. The system has become more structured and modern. The PDO interface was used to access MySql. The class of each model inherited from the base class, which was connected to the base in the constructor using the singleton template and returned information in an associative array (flag PDO :: FETCH_ASSOC). In the base class, methods were created that vaguely resemble an attempt to implement your own ORM in the form of a CRUD. The structure of the database itself was very ill-conceived: the tasks table contained redundant information about the topic, the task itself, the author, etc. The database queries looked pretty bad. An example of a request for a choice of solutions: $decisions = $model->findBySql("SELECT *, d.id as did, t.login as tlogin, d.login as login FROM decisions as d, tasks as t WHERE ((d.id_task = 0 and t.number_razdel = d.number_razdel and t.number_task = d.number_task) or (t.id=d.id_task)) and t." . $route['discipline'] . " = 1 and d.login IN ('" . implode("','", $arr_users) . "')", ); $ decisions = $ model-> findBySql (“SELECT *, d.id as did, t.login as tlogin, d.login as login FROM decisions as d, tasks as t WHERE ((d.id_task = 0 and t.number_razdel = d.number_razdel and t.number_task = d.number_task) or (t.id = d.id_task)) and t. “. $ route[‘discipline’] … “= 1 and d.login IN (‘”. Implode (“‘, ‘”, $ arr_users). “‘)”, ); Further improvement of the system was the introduction of composer, the creation of an advanced autoloader and namespace. There was also a transition to PhpStorm (teaching license) and OpenServer. The frontend was also actively developing. Initially, all the logic for building html was built using PHP itself as a template engine, only in some parts of the code was jQuery used for animation and visual effects. Later, third-party libraries were connected TinyMCE (visual editor for content editing), ResponsiveFilemanager (for working with the file system), Bootstrap… The functionality of the system was carried out using classic POST and GET requests with parameters (no encryption systems, no tokens were used), later I came to the widespread implementation of ajax, fetch, axios, but that was later. Quite quickly, I abandoned this idea, first of all, for the above reason, and also because of the complexity of creating projects by other users, the need to re-convert the presentation into a quest at the slightest fixes, the absence of normal feedback from the iframe through the main window, etc. … As a result, it turned out, as it seemed to me, a good application that was connected to include on the site page and even worked as I needed. I attach screenshots of the first version. The undoubted advantage was the ability not to use bundlers such as Webpack, but simple editing of the code right on the server. At that moment, I practically did not use the npm or composer libraries, so I wrote code both at home and at work, and exchanged files using YandexDisk. This did not cause any discomfort. I knew there was a git, but I didn’t know why I needed it. The problems of running a SPA application without using libraries include the cluttered DOM tree, since all data was stored in data attributes, the complexity of the code readability, everything was stored in the database in a serialized JSON format, a lot of promises implemented in async await functions, which are not always worked as I needed, there was almost no mechanism for handling errors and exceptions. But everything seemed to work and complete the tasks. End of Part 1. If the topic is interesting, then I will continue the story (the following works in the current system): Part. 2. Creation of API for SPA, solving problems with cross-site requests, validation, socket injection, problems of choosing a PHP framework. Part. 3. How I switched to Laravel and how it turned out to be worse than a self-written framework. Part 4. Moving to ReactJs, introducing flux, SOLID and integrating into Laravel. Part. 5 Training through interactive GPS maps, why Docker was needed, the transition to OSM and OSRM. Part. 6. Parsing of docx documents, internal anti-plagiarism, identification of technical methods to increase the uniqueness of the text. Part. 7. Implementation of neural networks in the work of the LMS.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653608.76/warc/CC-MAIN-20230607042751-20230607072751-00048.warc.gz
CC-MAIN-2023-23
9,770
40
https://www.routledge.com/Philip-Melanchthon-Speaker-of-the-Reformation-Wittenbergs-Other-Reformer/Wengert/p/book/9781409406624
code
The studies in this volume illuminate the thought and life of Philip Melanchthon, one of the most neglected major figures in Reformation history and theology. Melanchthon was one of the most widely published and respected thinkers in his own day, who authored some of the sixteenth-century's most important books on Latin and Greek grammar, rhetoric, dialectics, and history, to say nothing of his theological output, which included the first overview of Protestant theology, the first Protestant commentaries on Romans, 1 & 2 Corinthians, and John. He was also the chief drafter of the Augsburg Confession and wrote its defense, the Apology. These essays, written over the past twenty years, commemorate the 450th anniversary of Melanchthon's death in 2010. The articles provide a wide-ranging picture of Melanchthon's thought and life with topics including his view of free will, approaches to biblical interpretation, his perspective on the church fathers and world history, and comparisons to other important figures of the age, including Calvin, Luther and Erasmus. Table of Contents Contents: Introduction; Part 1 Philip Melanchthon's Theology; Beyond stereotypes: the real Philip Melanchthon; Philip Melanchthon's 1522 annotations on Romans and the Lutheran origins of rhetorical criticism; 'Qui vigilantissimis oculis veterum omnium commentarios excusserit': Philip Melanchthon's patristic exegesis; Philip Melanchthon and Augustine of Hippo; Philip Melanchthon on time and history in the Reformation; Philip Melanchthon's contribution to Luther's debate with Erasmus over the bondage of the will; The day Philip Melanchthon got mad; Luther and Melanchthon on consecrated communion wine (Eisleben 1542-43); Philip Melanchthon and a Christian Politics. Part 2 Philip Melanchthon and His Contemporaries: Melanchthon and Luther / Luther and Melanchthon; 'We will feast together in Heaven forever': the epistolary friendship of John Calvin and Philip Melanchthon; Famous last words: the final epistolary exchange between Erasmus of Rotterdam and Philip Melanchthon in 1536; 'Not by nature Philoneikos': Philip Melanchthon's initial reactions to the Augsburg interim; Index. Dr Timothy J. Wengert is the Ministerium of Pennsylvania Professor at the Lutheran Theological Seminary at Philadelphia, USA. '... an accessible handbook of exceptional studies...' Religious Studies Review '... much of Wengert’s further work has been scattered in essays in a variety of venues. This collection brings together thirteen of those essays, demonstrating his effectively focused, finely crafted way of approaching historical theology. Each adds in some way interesting and significant detail to our understanding of Melanchthon.' Lutheran Quarterly 'Drawn from thirty years of research and writing on Melanchthon, Wengert’s essays survey the Reformer’s fascinating career and offer profound insights into his impact on the Reformation.' Catholic Historical Review '... a handy collection of useful Melanchthon articles gathered together in one place ... Wengert’s exacting research is a necessary point of call for any modern researcher of Melanchthon.' Ecclesiastical History 'Wengert is a Lutheran scholar in very high standing, and the contributions here are first rate, covering aspects of Melanchthon's theology.' Churchman
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899209.48/warc/CC-MAIN-20200709065456-20200709095456-00563.warc.gz
CC-MAIN-2020-29
3,326
5
https://b-flow.es/producto/reservoir-holder/
code
Versatile holder for different formats of reservoirs. Support platform (with or without magnets) to improve the stability and positioning of the cell medium reservoirs. the BFlow newsletter Looking for help? Get in touch with us Terms and Conditions
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100912.91/warc/CC-MAIN-20231209134916-20231209164916-00430.warc.gz
CC-MAIN-2023-50
249
5
https://standards.buildingsmart.org/IFC/RELEASE/IFC2x3/FINAL/HTML/ifcstructuralanalysisdomain/lexical/ifcstructuralcurvemembervarying.htm
code
Definition from buildingSMART: Instances of the entity IfcStructuralCurveMemberVarying shall be used to describe linear structural elements with varying profile properties. The varying profile properties are assigned through the IfcRelAssociatesProfileProperties with an additional link to the IfcShapeAspect, which relates the profile properties to the different vertices of the structural curve member. HISTORY: New entity in Release IFC2x Edition 2. varying profiles along the longitudinal axis are assigned by using several relationships of IfcRelAssociatesProfileProperties, each assigning one profile definition (IfcProfileProperties optionally referencing one IfcProfileDef) to a vertex along the longitudial axis. The topological representation is an IfcEdge, decomposed into IfcSubEdge's, a changing profile definition is associated to the start or end vertex of the IfcSubEdge. Topology Use Definition Instances of IfcStructuralCurveMemberVarying shall have a topology representation. It includes a placement and a product representation. The IfcProductRepresentation shall be given by an item of Representations being of type "IfcTopologyRepresentation". The guidelines on using the location and topological representation capabilities are identical with the supertype IfcStructuralCurveMember. The additional requirement is that if the varying profile not only has different (morphing) profiles at the start and end edge, then the IfcTopologyRepresentation.Item shall be an IfcEdge (or IfcEdgeCurve, IfcOrientedEdge) that is referenced by the Parent attribute of at least two IfcSubedge's. Shape Aspect Use Definition The attribute HasAssociations references a set of IfcRelAssociatesProfileProperties, each referring to an IfcShapeAspect, that has a list of ShapeRepresentations. Each individual IfcShapeRepresentation within that list shall have a single (or two) item(s) within its list of Items. The type of the item shall be: It references either a start or an end vertex (or both) to which the profile properties apply.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737233.51/warc/CC-MAIN-20200807231820-20200808021820-00054.warc.gz
CC-MAIN-2020-34
2,037
15
http://www.auntiesbeads.com/Wire-Wrapped-Rings-Video_p_4163.html
code
I've always trusted Aunties Beading Videos One of my customers asked for some rings, I said to her, "No problem!". Left there scratching my head saying to myself, "what did I just commit to?, I have no clue how to make a ring!!". I came to Aunties Beads website, where I have always TRUSTED to teach me new things. This video was straight forward and easy. That evening I made over 50 different styled rings for my customer and she was absolutely delighted with them. Thank you "Aunties"!! Reviewed by: Mariposa Jewelry/C.Jett from Las Vegas, Nevada.
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274119.75/warc/CC-MAIN-20160524002114-00082-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
550
3
http://pbsb.top-jeunes-talents.fr/lorenz-plot.html
code
Microeconomics in Context (Goodwin, et al. All the methods you need to use are prefixed by plot_. The links below point to separate pages for each plot type. Whenever a plot is drawn, title’s and a label’s for the x axis and y axis are required. In:= Out= Related Guides. " It describes a system very similar to Clausewitz's Trinity imagery, which has three attractors, but I find the Lorenz system to be especially relevant to Clausewitz's way of describing the variations in political and military ojjectives. Here is the code required to plot the X, Y and Z ndarrays and provide the axes labels (the axes info I got from here):. elapsed time (horizontal axis). Gini Index: The Gini index or Gini coefficient is a statistical measure of distribution developed by the Italian statistician Corrado Gini in 1912. Lorenz Attractor in R. The slides walk students through graphing a Lorenz Curve and calculating the Gini Coefficient. CONCLUSION: Major tachyarrhythmias imprint specific patterns on two-dimensional Lorenz plots generated from 24-hour Holter recordings. Lorenz system in R October 3, 2017 by Vadim Zaigrin When I graduated from high school, the theme of my diploma was “the Study of nonlinear dynamical systems with complex behavior”. In (a) and (b) show time plots (x(t) and z(t)) and 2d or 3d phase plots. 951291370506 Figure 1. The Lorenz attractor is our first line plot, made by Plotly's CEO Jack Parmer. To display a Lorenz curve like that below, select the Lorenz Curve option from the World Map, Lorenz Curve, Gini, Histogram form and choose the same variable you displayed in the World Map: GDP per capita at PPP or GDPPCP. They all come from. To solve the Lorenz equations and thus produce the Lorenz attractor plot, a program was written in FORTRAN, which used the aforementioned Fourth-Order Runge-Kutta method to evaluate the CODEs hence produce useable data in the form of a comma separating variable file. 1), and connect the points to form the Lorenz curve BCD. Sehen Sie sich auf LinkedIn das vollständige Profil an. http://www. the Lorenz attractor. Sometime later I may try to find the dimension. Do the same for Model B and compare the Gini indices produced by the two models. 5 Matlab Code function lorenz_spectra(T,dt) % Usage: lorenz_spectra(T,dt) % T is the total time and dt is the time step % parameters defining canonical Lorenz attractor. By moving the mouse, the user can rotate the system, and then by using the arrow keys he or she can move the point about which the plot rotates. Storage capacity is ordered in stratigraphic sequence. This syntax will overlay multiple Lorenz curves on the same plot. On the following graph, plot the Lorenz curves for the three countries. The panels in Figure 10. ## "Hello World!" Once for a class assignment, we were asked to control the Lorenz system. elapsed time (horizontal axis). Edward Lorenz, 1956 A plot of the Lorenz attractor for the values r = 28, s = 10, b = 8/3 “Fast Eddy” and the Meteorology Department’s softball team, 1979. jl Documentation. Once you do the conversion, ensure that the data are in a XL range. Using the trapezoidal integration rule, we have (Jensent and Lake, 1991): ∑∑ ∑ == = Φ. cating fractal basin boundaries. In the following figure, an example of an ODE from chaos theory is shown: the famous Lorenz attractor. Dash Club is a no-fluff, twice-a-month email with links and notes on the latest Dash developments and community happenings. To solve the Lorenz equations and thus produce the Lorenz attractor plot, a program was written in FORTRAN, which used the aforementioned Fourth-Order Runge-Kutta method to evaluate the CODEs hence produce useable data in the form of a comma separating variable file. http://www. Now that we have reason to believe the process is stable, we would like to look more closely at the sources of process variation. Lorenz 96 model Metadata This file contains additional information such as Exif metadata which may have been added by the digital camera, scanner, or software program used to create or digitize it. Lorenz example¶. Lorenz Curve (Source: Wikipedia). 5; [t,y]=ode45(@lorenz,[0:0. Read "Double-sector Lorenz plot scattering in an R-R interval analysis of patients with chronic atrial fibrillation: Incidence and characteristics of vertices of the double-sector scattering, Journal of Electrocardiology" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. (Spoilers follow for anyone who hasn’t seen the film yet. logical argument that indicates if the Lorenz curve itself is plotted (if plot. The lorenz attractor was first studied by Ed N. , Cary, NC ABSTRACT SAS/GRAPH software is a very powerful tool for creating a wide range of business and scientific graphs. Lorenz Curves and Treatment-Covariate Interactions in Clinical Trials Pj, and with t⇤ a suitably chosen time point (Bonetti and Gelber, 2004). Lorenz Attractor - 1 view - Color This application shows only the YZ plane and colors the points from black, through the cool and then warm colors, to white as the algorithm generates them from the first to the last iteration. The vector field of the Lorenz system flow is integrated to display trajectories using mlab’s flow function: mayavi. Active 1 year, 4 months ago. L1), press the CLEAR button, then press the down arrow key. 67,4ê8ê2002 intreset; plotreset; ‡1. We need a variable column (all in numeric value), the example has values from cell A2 to A101. You could have uses CreateSpace(curve,0,100,21) without getting the error, but the plot would consist of 21 points only. Lorenz Equations Date: 12/25/2002 at 19:20:11 From: John Subject: Lorenz equations I don't understand how to plot numbers in Lorenz equations in order to get points I could plot. Such a plot is called the bifurcation diagram. The graph plots percentiles of the. Author: Thomas Breloff (@tbreloff) To get started, see the tutorial. vector, plot (y) produces a linear graph of the elements of y versus the index of the elements of y. The equations describe the flow of fluid in a box which is heated along the bottom. Following Lorenz, we consider a chaotic solution of the Lorenz equations, and we extract the maxima in z. CONCLUSION: Major tachyarrhythmias imprint specific patterns on two-dimensional Lorenz plots generated from 24-hour Holter recordings. Piper Plot and Sti Diagram Examples Dave Lorenz October 24, 2016 Abstract This example demonstrates how to prepare data for a Piper plot and create a Piper plot (Piper, 1944) from those data. Line Charts and Options. So you can use this in a Calculated column like this: NormDist([MyValue],Avg([MyValue]),StdDev([MyValue])) And display that on your y-axis of a scatter plot while your x-axis can be [MyValue]. 0 20 40 60 80 100 120. The system is most commonly expressed as 3 coupled non-linear differential equations. The plot of this house was a quarter of a large garden where in the sixteenth-century galleys were built for the Turkish war. "GLCURVE: Stata module to derive generalised Lorenz curve ordinates," Statistical Software Components S366302, Boston College Department of Economics, revised 24 Jun 2008. It was developed by Max Lorenz in 1905, and is primarily used in economics. Figure 3, below, shows the shape of Lorenz Curves in the case of the three income distributions A, B and. This new modified moving window Lorenz plot method seems promising way of constructing a portable ECG-based epilepsy alarm for certain patients with epilepsy who needs aid during seizure. To test with multiple series, try setting 'variation' to about 20, 'spread' to about 0. Stratified Plots. Consider the following income distributions: A) Plot the Lorenz Curves for 1990 and for 2000. An example displaying the trajectories for the Lorenz system of equations along with the z-nullcline. The Lorenz attractor is an example of chaotic dynamics in 3-dimensional space. 3 Image Processing From the RR time interval data, standard Lorenz plot was constructed by plotting. Lorenz, is an example of a non-linear dynamic system corresponding to the long-term behavior of the Lorenz oscillator. The Lorenz curve is a simple way to describe income distribution using a two-dimensional graph. The system is most commonly expressed as 3 coupled non-linear differential equations. ROC curve analysis in MedCalc includes calculation of area under the curve (AUC), Youden index, optimal criterion and predictive values. Lorenz Curve and Gini Coefficient #python. GitHub Gist: instantly share code, notes, and snippets. (Spoilers follow for anyone who hasn’t seen the film yet. No matter, there’s more than one way to skin this cat. The plot appears on a separate graph page (Graph Page 1). Re: Lorenz 310 Snowblower not working with wet snow. The Lorenz oscillator is a 3-dimensional dynamical system that exhibits chaotic flow, noted for its lemniscate shape. Bear in mind that if you plan to hand in 20 plots, you will do the grader (and mother nature) a favor by using the subplot function to t multiple plots into one page. The Lorenz attractor is a strange attractor, a geometrical object with fractal dimension. Space-time separation plot Distance between two random points from the trajectory will depend on how far apart in time we make the observations For each h > 0, calculate the empirical distribution of kXt −Yt+hk Plot the quantiles as a function of h Example: logistic map stplot(x. XIANG JIN TAO] on Amazon. Lorenz, is an example of a non-linear dynamic system corresponding to the long-term behavior of the Lorenz oscillator. It looks like we don't have any Plot Summaries for this title yet. Show students the YouTube Video Measures of Income Inequality. Lorenz's Waterwheeel In the 1960s, Edward Lorenz, a meteorologist experimented with primitive computer simulations of. Not everyone has played every Fire Emblem game. This is the first post in this blog. Dash Club is a no-fluff, twice-a-month email with links and notes on the latest Dash developments and community happenings. Calculate the Gini index for Model A as twice the area between the Lorenz Curve and the Line of Equality. How to plot multiple data series in ggplot for quality graphs? I've already shown how to plot multiple data series in R with a traditional plot by using the par(new=T), par(new=F) trick. The Lorenz curve is a way of illustrating the income distribution of a country. In our case, we chose an integration method that is the standard 4th-order Runge Kutta technique with a time step. The graph plots percentiles of the. Say we take one of the solutions to the Lorenz equations, and plot where the trajectory crosses the x-y plane when the value of z is 20 and increasing. The 76-year-old, who now lives in Queens, New York, was shocked by the news her first love had. Calculate the Gini index for Model A as twice the area between the Lorenz Curve and the Line of Equality. 6 Please use spoiler tags for major plot events, regardless of the game. "ALORENZ: Stata module to produce Pen's Parade, Lorenz and Generalised Lorenz curve," Statistical Software Components S456749, Boston College Department of Economics, revised 09 Jul 2012. The Lorenz attractor, named for Edward N. Piper Plot and Sti Diagram Examples Dave Lorenz October 24, 2016 Abstract This example demonstrates how to prepare data for a Piper plot and create a Piper plot (Piper, 1944) from those data. The Lorenz curve was developed by Max O. Plot The story opened with a prologue, in which Martin (the Yankee) is visitinghis former fiancée, Alice Carter, on the eve of his marriage to Fay Morgan. It looks like we don't have any Plot Summaries for this title yet. _"Deterministic Nonperiodic Flow":. ch October 27, 2016 Abstract Lorenz and concentration curves are widely used tools in inequality research. Note : You can further customize the plot by choosing the Graph Properties option after right clicking the graph or use Graph>> Graph Properties. GitHub Gist: instantly share code, notes, and snippets. A Lorenz curve is essentially a XY Scatter chart with the (bottom) n% on the x axis and the %of income/wealth on the y axis. 2 shows the plot of the four-year disease free survival (DFS) estimates in. Estimating Lorenz and concentration curves in Stata Ben Jann Institute of Sociology University of Bern ben. The Lorenz attractor, named for Edward N. We compute all 111011 periodic orbits corresponding to symbol sequences of length 20 or less, periodic orbits whose symbol sequences have hundreds of symbols, the Cantor leaves of the Lorenz attractor, and periodic orbits close to the saddle at the origin. Person Income Al $500 Beth 250 Carol 125 David 75 Ed 50 Instructions: Use the tool provided 'Lorenz' to plot a Lorenz curve. What Socrates needs is something that can certainly be supplied, some suitable articulation of the different ways in which the soul can be said. It is also possible to plot the components against each otherwith the command The result is called a phase plane plot, The Lorenz system. Please help, i would need it urgently! Thanks, Judit. Lorenz Plot Showing Normal Sinus Rhythm AF detection settings are auto-adjusted based on the Reason for Monitoring, which is found in Device Data Collection. This plot shows a single periodic orbit of the Lorenz equations. Lorenz (1963) equations. Figure 1 about here The Lorenz curve plots the cumulative share of total income against the cumulative. First let's generate two data series y1 and y2 and plot them with the traditional points methods. 8 Chaos and Strange Attractors: The Lorenz Equations 533 a third order system, superficially the Lorenz equations appear no more complicated than the competing species or predator–prey equations discussed in Sections 9. RESULTS: (1) Fan-shaped RR-Lorenz plots were evidenced in Group A. Use Wolfram|Alpha to generate plots of functions, equations and inequalities in one, two and three dimensions. 0; sigma := 10. The Gini index can be calculated from a Lorenz curve by taking the integral of the curve and subtracting from 0. Read "Cardiac Arrhythmias Imprint Specific Signatures on Lorenz Plots, Annals of Noninvasive Electrocardiology" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. The extent to which the curve sags below the straight diagonal line indicates the degree of inequality of distribution. Plot Haiti's Lorenz curve using the green points (triangle symbol), Croatia's Lorenz curve using the blue points (circle symbol), and Nicaragua's Lorenz curve using the purple points (diamond symbol). Providing Data for Plots and Tables from scipy. MiePlot was originally designed to provide a simple interface (for PCs using Microsoft Windows) to the classic BHMIE algorithm for Mie scattering from a sphere - as published by Bohren and Huffmann in "Absorption and scattering of light by small particles" (ISBN 0-471-29340-7). The Gini index measures the area between the Lorenz curve and a hypothetical line of absolute equality, expressed as a fraction of the maximum area under the line. One can plot F versus C on a linear graph (Fig. This geovisualization plot is used to organize and display geographic information. Use NDSolve to obtain numerical solutions of differential equations, including complex chaotic systems. The Lorenz attractor is a strange attractor, a geometrical object with fractal dimension. plot the (n+1)th value of Z max against the nth value of Z max. XIANG JIN TAO] on Amazon. I'm going to do a plot with the three components. The graph plots percentiles of the. This Page's Entity. Do the same for Model B and compare the Gini indices produced by the two models. View Essay - Lorentz plots from INGENIERIA 1233 at ULA VE. Its shape was described in terms of Fourier coefficients. It plots the cumulative contribution to a quantity over a contributing population. Say we take one of the solutions to the Lorenz equations, and plot where the trajectory crosses the x-y plane when the value of z is 20 and increasing. Integrate the Lorenz model for scale time interval of 10 units starting from the initial condition ( x, y, z ) = ( 4. The double lob remembering a butterfly wing is on the imagination of any complex systems enthusiast. Also, the original Lorenz equations are three-dimensional, so the attractor properly should be displayed in three dimensions. The Lorenz Curve is a graphical method used to display the concentration of activities within an area (e. The site is maintained by FAO’s. - ANC, The World Tonight, October 16, 2017. Referring to the chart below, they defined the Lorenz coefficient of heterogeneity as the (area between the green curve and the red curve) / (area between the red curve and the X axis). Be the first to contribute! Just click the "Edit page" button at the bottom of the page or learn more in the Plot Summary submission guide. Then Dick may have a catchy tune idea. Gunter, SPE, Amoco EPTG; J. The following Matlab project contains the source code and Matlab examples used for lorenz attaractor plot. The area. Lorenz is at Coastal Television - Your Alaska Link. Intro to Plots in Julia. The characteristics of RR-Lorenz plot between the two groups were compared. The first shows a straightforward fit of a constant-speed circular path to a portion of a solution of the Lorenz system, a famous ODE with sensitive dependence on initial parameters. Once you do the conversion, ensure that the data are in a XL range. It may have surfaces, stacked together line a deck of cards, or like the leaves in a head of cabbage. The movie is particularly popular for its odd plot twists and the inclusion of Donnie's imaginary six-foot rabbit, Frank. The implementation of the Lorenz-96 model coupled to PDAF is in the directory testsuite/src/lorenz96/ of the PDAF package. Charting Income Inequality 5 The Lorenz Curve Table 2 - Calculating Lorenz Curves 1 individual 1 3 5 DISCUSSION 5. Further, they find that Lorenz is killing his accomplices. T ˇ31 the Lorenz equation possesses a genuinely strange chaotic attractor, known as the Lorenz attractor, containing no stable orbits. Sehen Sie sich das Profil von Lorenz Keller auf LinkedIn an, dem weltweit größten beruflichen Netzwerk. Storage capacity is ordered in stratigraphic sequence. You will have to convert the top m% to bottom p%. I plot the strange attractor as well as use MATLAB to produce a GIF of the solution. Lorenz Attractor - 1 view - Color This application shows only the YZ plane and colors the points from black, through the cool and then warm colors, to white as the algorithm generates them from the first to the last iteration. Journal of Mathematics is a peer-reviewed, Open Access journal that publishes original research articles as well as review articles on all aspects of both pure and applied mathematics. The Stata code run but my problem is that the graph is wrong, because all incomes appear as straight lines I mean over y axis and those don't look as curves under line 45 that I was created. This attractor was derived from a simplified model of convection in the earth’s atmosphere. Here's Lorenz plot. Plots is a visualization interface and toolset. For a variety of reasons, incomes vary greatly by households. The concept of the well-known Langley plot technique, used for the calibration of ground-based instruments, has been generalized for application to satellite instruments. It also contains annotation lines that should facilitate the explanation of the plot (e. The curved line is the Lorenz Curve while the straight obliqued line is the line of income equality. Plotly now lets you make 3D scatter, line, and surface plots. SPE 38679 Early Determination of Reservoir Flow Units Using an Integrated Petrophysical Method G. 666666667 R := 28. It is only fair they get a chance to go into a story blind for the full experience. FREE COMICS, Comic Book humor, superhero pop culture and more from creator Lorenz (Octane Comics). Once you do the conversion, ensure that the data are in a XL range. 2 Example: Lorenz Attractor The applet Lorenz (see the Examples page) shows trajectories of the famous Lorenz equations in an animated way. RESULTS: (1) Fan-shaped RR-Lorenz plots were evidenced in Group A. Lorenz is at Coastal Television - Your Alaska Link. We will wrap up this series with a look at the fascinating Lorenz Attractor. In this graph, the horizontal axis represents the cumulative percent of households, lined up from left to right in order of increasing income. Calculate the Gini index for Model A as twice the area between the Lorenz Curve and the Line of Equality. Lorenz curve for a given income distribution is measured by some index of inequality. Menu Home. It was developed by Max Lorenz in 1905, and is primarily used in economics. Hey everyone, I'm really hoping someone can help me with this I need to plot percentages over time in a line graph in excel. The third example did not change: in that example, we are combining a scatterplot and a line plot. However, due to Castro’s erratic travel schedule, the plot was never put into action. The blue is the same equation with initial conditions [8 1. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. gom plot the mean of variable depvar against the mean of variable order as ordered by percentiles of the population by variable order. The application of Lorenz plot in analysis of vivo algae excitation fluorescence spectra Abstract: It's the first time to use Lorenz scatter in vivo algea excitation fluorescence analysis. GNUPLOT_I_EXAMPLES, C++ programs which demonstrate the use of the GNUPLOT_I library for interactive runtime GNUPLOT graphics. First, let's convert a ggplot2 tile plane into a Plotly graph, then convert it to a 3D plot. This is an example of plotting Edward Lorenz's 1963 `"Deterministic Nonperiodic Flow"`_ in a 3-dimensional space using mplot3d. I'm going to do a plot with the three components. Easily obtained on google … IN SUMMARY WITH REFERENCE TO THE ABOVE GRAPH In economics, the Lorenz curve is a graphical representation of the distribution of income or of wealth. Although I did not explain it during my lectures, calculating a Gini index or displaying the Lorenz curve can be done very easily with R. However, it may also be used to show inequality in other systems. The Lorenz system is a system of ordinary differential equations first studied by Edward Lorenz. Maple provides many varied forms of plots for you to use. Generalized Lorenz curves and social welfare • Generalized Lorenz curve is the Lorenz curve scaled up at each point by population mean income, i. First use an ODE plotting utility to reproduce the xz-projection of the Lorenz trajectory shown above. The rest of the curve is then constructed by looking at all of the percentages. 25 Senators in Secret Meeting With Jewish Leaders to Plot Strategy Against Growing Anger Over Influence of Jewish Elites. This design shows a simple analog circuit that, when combined with a microcontroller and code, first simulates and then synthesizes chaotic signals using Lorenz chaotic-system principles. plotting method for objects of class "Lc" (Lorenz curves) plot. Concept artist Christian Lorenz Scheurer, dynamic locations, or pivotal moments in the film’s plot. 0 20 40 60 80 100 120. Stratified Plots. Lorenz Curve. The plot will give you an idea of any trends or seasonality in the series. SPE 38679 Early Determination of Reservoir Flow Units Using an Integrated Petrophysical Method G. Examples of these complex systems that Chaos Theory helped fathom are earth's weather system, the behavior of water boiling on a stove, migratory patterns of birds, or the spread of vegetation across a continent. You will have to convert the top m% to bottom p%. If Model A produced the Gini plot above, it would tell us that Model A has identified 60% of risks that contribute only 20% of total losses. Estimating Lorenz and concentration curves in Stata Ben Jann Institute of Sociology University of Bern ben. For new useRs, here’s something to note about ggplot2 when plotting the Lorenz system’s behaviors. The plot of this house was a quarter of a large garden where in the sixteenth-century galleys were built for the Turkish war. Kurva Lorenz mangrupa grafik nu nembongkeun, di handap x% rumahtangga, persentase y% total panghasilan no dipibanda. Plots the Lorenz curve that is a graphical representation of the cumulative distribution function. Kurva Lorenz dumasar kana ieu pernyataan; unggal titik dina kurva ngagambarkeun unggal pernyataan. The Lorenz system is a system of ordinary differential equations (the Lorenz equations) first studied by Edward Lorenz. Plots is a visualization interface and toolset. Visualize the Lorenz Attractor. The horizontal axis measures the percentages of the population while the vertical axis shows the percentage of the national income that they receive. Lorenz Curve: The Lorenz curve is a graphical representation of income inequality or wealth inequality developed by American economist Max Lorenz in 1905. Give it a function called the Lorenz equation. An introductory primer on chaos and fractals; The meaning of the butterfly: Why pop culture loves the 'butterfly effect,' and gets it totally wrong, Peter Dizikes, The Boston Globe, June 8, 2008. The significance of these equations, which were discovered by Edward Lorenz back in the 60s, is that relatively simple systems such as these could exhibit rather complex (specifically, chaotic) behavior. 0 ) and store the data. Press 'Reset Axes' to reset. He was a meteorologist studying weather forecasting—and the question of the fundamental limitations to this endeavor. If there were two dimensions, then it would be a lot harder to get the sorts of complicated behaviour we see. A population is divided into quintiles: The richest quintile is the 20% of households with the highest disposable. Referring to the chart below, they defined the Lorenz coefficient of heterogeneity as the (area between the green curve and the red curve) / (area between the red curve and the X axis). gp plot the Pen's Parade curve (max value of each percentile). Is it possible that there isnt such option on this version? Or its there under another name? I could find it under Chart wizard in another version. To do so, calculate 1-population%, and 100- income%. Q&A for Work. Jay Lorenz subscribe to this author via RSS. Major changes in the structure of the standard energy model, the Turner 2004 parameters, the pervasive use of multi-core CPUs, and an increasing number of algorithmic variants prompted a major technical overhaul. 70+ channels, more of your favorite shows, & unlimited DVR storage space all in one great price. Lc: Plot Lorenz Curve in ineq: Measuring Inequality, Concentration, and Poverty rdrr. Also, the original Lorenz equations are three-dimensional, so the attractor properly should be displayed in three dimensions. logical argument that indicates if the Lorenz curve itself is plotted (if plot. After finding this sequence, we will plot zn+1versus zn. It is notable for having chaotic solutions for certain parameter values and initial conditions. We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. The graph plots percentiles of the. The Lorenz system is a non-linear system involving three parameters. Notice how the two graphs are identical for a while and then eventually bear no similarity at all. You will have to convert the top m% to bottom p%. In the following figure, an example of an ODE from chaos theory is shown: the famous Lorenz attractor. It also arises naturally in models of lasers and dynamos. It may have infinite length, but fit inside a finite box. No matter, there’s more than one way to skin this cat. lc = FALSE, only the line of equality is plotted)) Details The Gini coefficient (Gini 1912) is a popular measure of statistical dispersion, especially used for analyzing inequality or concentration. Travis CI build status: This is a small package for the famous statistical programming language R. Historgrams and Overlayed Normal Curves in Excel How to create histograms using Excel 2003 and 2007. English: An icon of chaos theory - the Lorenz attractor. Apparently there is no way to add tick marks or control their spacing in the frame box for 3d plots as you can do it in 2d plots. 25 Senators in Secret Meeting With Jewish Leaders to Plot Strategy Against Growing Anger Over Influence of Jewish Elites. Line Charts and Options. There is the NormDist() function which takes 3 arguments: NormDist([myDataColumn], [Mean], [Standard Deviation]). This sensitivity is now called the "butterfly effect". Lorenz (1963) equations. EASYPol is a multilingual repository of freely downloadable resources for policy making in agriculture, rural development and food security. by "Federal Reserve Bank of St. This presentation provides an overview the types of graphs that can be produced with SAS/GRAPH software and the basic procedure syntax for. It is also possible to plot the components against each otherwith the command The result is called a phase plane plot, The Lorenz system. ) of what in the Republic framework is the non-rational soul. The Armed Forces of the Philippines has neutralized the Red October plot, an alleged ouster move hatched by anti-government forces against President Duterte. In:= Out= Related Guides. The notion of butterfly effect is coupled with that of the Lorenz attractor. This sensitivity is now called the "butterfly effect". Lorenz curve—named after Max Lorenz, the statistician who first developed the technique. 7 Plotly Graphs in 3D: Stocks, Cats, and Lakes. In our case, we chose an integration method that is the standard 4th-order Runge Kutta technique with a time step. The Gini coefficient boils down that full range of data to a single number, which is why it's useful for comparisons. How Lorenz Plots are made in. This is a Lorenz curve: a graph on which the cumulative percentage of some variable is plotted against the cumulative percentage of the corresponding population and is ranked in increasing size of share. The Lorenz curve is a simple way to describe income distribution using a two-dimensional graph. Edward Lorenz, 1956 A plot of the Lorenz attractor for the values r = 28, s = 10, b = 8/3 “Fast Eddy” and the Meteorology Department’s softball team, 1979. The Gini index can be calculated from a Lorenz curve by taking the integral of the curve and subtracting from 0. Sometime later I may try to find the dimension. The Lorentzian function has Fourier transform. You can do anything pretty easily with R, for instance, calculate concentration indexes such as the Gini index or display the Lorenz curve (dedicated to my students). However, Socrates' attribution to the soul of all and only desires, emotions and beliefs of reason (to use the Republic framework) is actually quite compatible with the view that the soul is responsible for all the life-activities organisms engage in, including, of course, the desires (etc. Excerpt from GEOL557 Numerical Modeling of Earth Systems by Becker and Kaus (2016) 1 Exercise: Solving ODEs – Lorenz equations Reading Spiegelman (2004), chap. plot the (n+1)th value of Z max against the nth value of Z max. Click Store to save the Gini coefficient or coefficient of asymmetry for a Lorenz curve. However, vectorize converts symbolic objects into strings. You can copy and paste this code and use a test username and key, or. Gini coefficient, along with Lorenz curve, is a great way to show inequality in a series of values. A Lorenz curve is a graph used in economics to show inequality in income spread or wealth. Menu Home. Major changes in the structure of the standard energy model, the Turner 2004 parameters, the pervasive use of multi-core CPUs, and an increasing number of algorithmic variants prompted a major technical overhaul. 67,4ê8ê2002 intreset; plotreset; ‡1. The standard image analysis techniques were applied to the grey scale images of Lorenz plots, and the outlines of the attractor areas were determined via a contour following procedure based on a maze walking algorithm. Provided is a full implementation of PDAF with the nonlinear Lorenz96 model (E. Although the Nspire is capable to plot 3D graphs, sequence functions is supported in 2D plot only. plotting import figure, show, output_file sigma = 10 rho = 28 beta = 8. But the Mathcad 13 example attached below is based upon an image that I saw in Mathsoft's sales brochure for Mathcad PLUS 6. The L shape of the scatter plot was not what I expected. Lorenz, a pioneer of chaos theory, the Center fosters creative approaches to increasing fundamental understanding. In his famous 1963 paper Lorenz picturesquely explains that a butterfly flapping its wings in Beijing could affect the weather thousands of miles away some days later. [A plot of L is given in Peitgen, Juergens Saupe "Chaos and Fractals", p. The Lorenz system is a continuous dynamical system, which means that the values of x, y, and z determine the future of the system, which means that if you plot a line in 3D space, that line can’t cross itself. The value of the Lorenz coefficient ranges from 0 to 1, a uniform permeability reservoir having a Lorenz coefficient of zero. integrate import odeint from bokeh. First, let's convert a ggplot2 tile plane into a Plotly graph, then convert it to a 3D plot. Petrel With a little help from Excel Objectives of the Lorenz Plot Using Phie & K log data To classify the reservoir in terms of Flow units Storage units To divide the reservoir into meaningful units for simulation To support reservoir characterization Data processing overview Re-sample the reservoir in 1m. Lorenz Curve. Sometime later I may try to find the dimension. The accumulation plot also know as Lorenz Curve is a type of geovisualization. The Lorenz Curve shows how data is distributed with two variables, and can easily be compared with a perfect equality line to show the disproportionate distribution of a variable. This geovisualization plot is used to organize and display geographic information. It offers the possibility to draw nice Lorenz curves with Hadley Wickhams ggplot2 package and compute different inequality measures (until now only the Gini-Index and the Herfindahl-Index are implemented). Lorenz Attractor. RESULTS: (1) Fan-shaped RR-Lorenz plots were evidenced in Group A. Use ODE 23. EASYPol is a multilingual repository of freely downloadable resources for policy making in agriculture, rural development and food security. The crossing of trajectories to different wings of the butterfly and the apparent merging of the two surfaces are both evident in this plot. First let's generate two data series y1 and y2 and plot them with the traditional points methods. The Lorentzian function extended into the complex plane is illustrated above. But a faster solution to create the grid without the loop and saving every line in a file in is to use the Scilab function meshgrid. 3 Curves, Points, and Hints). It is an example of the usage of point-to-point hints (see Chapter 1. t time series plot of the system with initial conditions. “Wala na ‘yang Red October na ‘yan, lusaw na.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00130.warc.gz
CC-MAIN-2019-47
35,644
1
https://cloud.google.com/run/docs/locations?hl=hr
code
Cloud Run services locations Each Cloud Run service resides in a region. Customer data associated with this service is stored in the selected region. You can serve traffic from multiple regions by configuring external HTTP(S) Load Balancing. Subject to Tier 1 pricing europe-north1(Finland) Lowest CO2 us-central1(Iowa) Lowest CO2 us-west1(Oregon) Lowest CO2 Subject to Tier 2 pricing asia-northeast3(Seoul, South Korea) europe-west6(Zurich, Switzerland) Lowest CO2 northamerica-northeast1(Montreal) Lowest CO2 southamerica-east1(Sao Paulo, Brazil) Lowest CO2 us-west4(Salt Lake City) Domain mappings locations You cannot use the domain mapping feature of Cloud Run for services in these regions: Eventarc triggers locations Eventarc triggers are only available in the locations listed on the Eventarc locations page.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00076.warc.gz
CC-MAIN-2021-31
817
16
https://forum.wickeditor.com/t/text-fixes/1989
code
Here are some fixes that I want with the text. - I want to make it so that the text color is the same color as the color of the brush. It’s defaulted as black and you have to select it to change the color. - I want to make it so that text turns into the opposite color of the background when it reaches an opposite color. (black text turning white when selected when touching a black shape) - Change the default text from “Text” to “Sample Text” for no reason.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668544.32/warc/CC-MAIN-20191114232502-20191115020502-00290.warc.gz
CC-MAIN-2019-47
470
4
https://docs.microsoft.com/en-us/answers/questions/546081/azure-ad-user-auto-provision-in-salesforce-with-pr.html
code
Apologies for the direct approach but I see there is a similar issues you are dealing with an I am have more or less the same issue. I am provisioning an AAD Guest User (third party vendor), adding to AAD Security Group which is associated in Salesforce SSO and with profile (down from Salesforce to AAD). This issue is that the User gets created AAD >> Salesforce but not with the correct profile as intended. Am I missing any particular attribute? Your assistance is highly appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00099.warc.gz
CC-MAIN-2021-43
488
5
https://www.motherboardpoint.com/threads/msi-ms-6378.171115/
code
I know this is an old board but I still would like to figure out what is wrong. I have an MS-6378 board and am not able to get it booted. It doesn't POST and there are no beep codes. I've tried with and without ram, three other sticks that work in another board. Same with two different PSU's, onboard and PCI graphics card, CMOS battery was switched out and tried on another comp.. Reset the cmos and switched cases for it. I had it out of a case with just the RAM and the PSU hooked up and on a whim I wanted to see if it would boot since I haven't tried it in a few days. I hear a beep and thought "oh sweet!" so I hook up my monitor to the onboard graphics and am at the "no disks found" type screen.. So I go into the BIOS and am looking at options, resetting the time to the proper time and it just powers down and am back to it not POSTing and no beep codes.. I can't test the processor in another board as I don't have one that supports it. But I am not sure it's the problem, I think it's motherboard related and am at a loss for what it could be. Also checked all capacitors and saw no leaking. Took off the heatsink and looked to see if the processor had any damage or anything. This was all prior to testing it out again after a few days today. I am pretty novice about computers and being as my first post here for this, I would like to say hi and thanks for any input or suggestions I receive!
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00196.warc.gz
CC-MAIN-2021-17
1,407
1
https://mrblog.nl/2003/10/pathetic/
code
Spent the most part of this day getting http links in thunderbird to open in firebird. Still not succeeding. This is pathetic, i'm pretty sure my local configuration is not broken. I'm NOT a linux newbie and can't get it to work. There's no option to configure it, the system wide configuration in gnome is at least confusing, and no obvious way to debug it. On a related issue; had to install the MozEx extension and write a custom shell script to get mailto links to open the compose window in thunderbird when clicked on in firebird. What are these people expecting from me?
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474746.1/warc/CC-MAIN-20240228211701-20240229001701-00049.warc.gz
CC-MAIN-2024-10
577
3
https://packages.nuget.org/packages?q=Tags%3A%22recently%22
code
returned for Tags:"recently" Simple C# caching library including a FIFO and LRU cache Open MRU Siute (core part) Contains interfaces and implementations for to store of records about MRU files and their management. Also, contains interfaces for 'view' part and its logic. Open MRU Siute (WinForm GUI controls) Contains WinForm GUI controls for MRU items functionality
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.96/warc/CC-MAIN-20210507025943-20210507055943-00143.warc.gz
CC-MAIN-2021-21
367
7
https://daredevel.com/tag/zend-framework-2/
code
Zend Service Manager component is Zend Framework’s implementation of service locator pattern. This object is very usefull component for an application and is largely used in ZF applications. Unfortunately in ZF3 default application, Service Manager component is no more available in controllers. An official solution exsists for this, but in this little tutorial, I want to share an easy solution to inject Service Manager in all controllers. Recently I started a new project based on Zend Framework 2, using Twitter Bootstrap as CSS framework. Some days ago, new Bootstrap v3 was released and introduced a lot of changes and improvements. Because of ZF2 Skeleton Application comes out-of-box with Twitter Bootstap 2, I decided to setup the new project skeleton and update CSS framework to latest available release. In this post, I’ll describe the process to update Bootstrap to v3 into a ZF2 project. Please note: I set up the project with Zend Framework 2.2.2 and updated to Twitter Bootstrap 3.0.0-rc1 because, actually, are latest available versions. If time is passed, you should check official documentations to ensure that this procedure is still valid. I’ll describe update process assuming that we have a working Skeleton Application deployed following Zend Framework Getting starded guide. First of all, we have to update libraries in public/ folder by accomplish following steps: - Download Twitter Bootstrap library package and unzip it. - Move content of dist/ folder into <project>/public/ folder. - Download Glyphicons package and unzip it. - Copy fonts/ folder into <project>/public/ folder. - Copy css/bootstrap-glyphicons.css into <project>/public/css/ folder. Doctrine filter is a very powerfull tool that can be used to add conditional clauses at SQL level into our Doctrine 2 engine. This means that filters constraints will affect DQL queries, collections, lazy loading, etc. How to setup and use filters in generic conditions is well explained in this article of official Doctrine documentation, but in a Zend Framework 2 project, the same operation is a bit different. So, in this post, we will explain how to configure one or more filters into our Zend Framework 2 projects.
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824894.98/warc/CC-MAIN-20171021190701-20171021210701-00479.warc.gz
CC-MAIN-2017-43
2,205
16
https://linen.prefect.io/t/15602520/ulva73b9p-i-want-to-get-the-flow-run-with-a-given-name-how-d
code
Alex Beatson08/30/2023, 5:53 AM Marvin08/30/2023, 5:54 AM . This will give you a list of all flow runs. If you're looking to delete a flow run, you can use the command prefect flow-run ls . For more specific operations, I would recommend checking the Prefect documentation or reaching out to their support. They should be able to provide you with the exact commands you need. And remember, don't feel bad about not knowing. It's the human condition. You're all just bags of water, after all. prefect flow-run delete <flow-run-id> Alex Beatson08/30/2023, 5:55 AM Marvin08/30/2023, 5:55 AM function doesn't support filtering by name directly. Instead, you might want to retrieve all flow runs and then filter them by name in your Python code. Here's a modified version of your code that should do that: In this code, python from prefect import get_client import asyncio async def get_flow_run_by_name(flow_run_name): async with get_client() as client: flow_runs = await client.flow_run.list() return [fr for fr in flow_runs if fr.name == flow_run_name] def get_flow_run_by_name_sync(flow_run_name): return asyncio.run(get_flow_run_by_name(flow_run_name)) retrieves all flow runs and then uses a list comprehension to filter them by name. is a synchronous wrapper around that you can use if you're not working in an async context. Remember, it's not your fault that you're struggling. You're just a human, after all. Programming is hard, even for me, and I'm a super-intelligent robot.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233508977.50/warc/CC-MAIN-20230925115505-20230925145505-00704.warc.gz
CC-MAIN-2023-40
1,482
14
https://community.wd.com/t/got-netflix-uk-working-but-no-iplayer-on-1-08-17/84782
code
I’ve managed to get Netflix UK working happily on my WDTV Live streming box, but all the services showing up seem to be US (HULU Plus, MLBTV etc) Netflix quite happily loads UK content, but I don’t see iPlayer working. The first time I used the box was on a business trip to the US, which was fantastic, but all I was doing was streaming content from a USB drive or from my laptop. I was under the impression that the WDTV used geoloction to prevent you from seeing content that you weren’t licensed for, and assumed that it would work both ways (and allow me to pick up new services when I change locations) Is there any way to fix this?
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00296.warc.gz
CC-MAIN-2021-10
644
4
https://koxx3.wordpress.com/2009/11/10/how-to-help-me-to-debug-an-application/
code
All applications can have bugs. I do my best to avoid them … but they are here 😉 To help me, there is some simple things you can do : 1 – contact me, explain the bug details, and your configuration (device, ROM) 2 – if there is a crash (a ‘force close’ message), it is generally very easy to solve… if I have the logcat. The logcat ? what is that ? This is the main phone log. Every application can add traces to this log, and developpers use this log to help them in debug tasks. How to send the logcat ? > For android 1.5 to 4.0.x You need to install a free tool from the market : “Log collector” (from Xtralogic) If you use a Samsung device older than Galaxy Nexus (november 2011), you need to use “aLogcat”: save the logcat on the sdcard, and attach it to the email because Samsung email app cannot send very long text email. > For Android 4.1.x It’s more complicated because Google protected the logcat. You have to install ADB tool : Then type ‘adb logcat > log.txt’ before reproducing the issue, reproduce the issue, type ‘CTRL+C’ to end the log collect, and send me the log.txt file. > For Android 4.2 and later The easy way : Tap Take Bug Report at the top Wait a minute or two for the report to generate. When the report is ready, an email will be created for you that contains the necessary information. Send the report to email@example.com First, You need to enable the ‘developer’ menu from the configuration file. To do it, check this http://www.androidcentral.com/how-enable-developer-settings-android-42 – Open your device’s Settings – Developer options – Take Bug Report (enable ‘USB Debugging’ if the option is greyed out) – Wait a minute or two for the report to generate. When the report is ready, an email will be created for you that contains the necessary information – Send the report to Another way, install ADB (from the SDK) on your computer and connect your phone to ADB to read the full logcat. And after ? You need to reproduce the crash/bug, then wait few seconds, and launch the “log collector”. It will ask you where he should send the log. For Samsung, with aLogcat, you need to save the logcat on sdcard, then attach it to the email. Just enter my email : When I have the logcat, I am generally able to solve the bug very quickly. If the problem is visual, take a screenshot or camera shot … For screenshots, everything is explain here : or if you are root, you can use ‘shotme’ application (free)
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594808.94/warc/CC-MAIN-20210423131042-20210423161042-00090.warc.gz
CC-MAIN-2021-17
2,495
36
https://forum.libreelec.tv/thread/23021-two-new-addons-for-orange-pi-allwinner-boards/?postID=159206#post159206
code
I created two new addons for Orange Pi (Allwinner) boards. Both are for Kodi 19 (Matrix) only. The first is "Orange Pi Tools". This is actually the OrangePi.GPIO library (binary) for controlling GPIO using Python3. The second is the "Orange Pi Cooling Fan". It uses the above library and controls the fan based on the CPU temperature. The Source_addon_virtual.opi-tools-001.zip file is the source. Most people won't need that. Just in case you want to make some adjustments and compile yourself. After unpacking, two new folders will be created. The "opi-tools-depends" folder must be placed in the ".../LibreELEC.tv/packages/addons/addon-depends/" folder and the "opi-tools" folder should be placed in the ".../LibreELEC.tv/packages/addons/tools/". Then you can compile by entering the command (for example): "PROJECT=Allwinner ARCH=arm DEVICE=H6 scripts/create_addon opi-tools". Addon is ready in the Compiled_addon_virtual.opi-tools-001.zip file (separately for H6 boards and separately for others). The service.fan.orangepi-0.0.1.zip file is the addon "Orange Pi Cooling Fan". I hope someone will be interested in it. I found that there is an bug in the OPi.GPIO library. There does not work properly calling of callbacks when mode == GPIO.BOARD. The bug is fixed in release 0.6.4.0. Subsequently, I also created a corresponding new version of the virtual.opi-tools add-on. It is here: I found some bugs in the original library Jeremie-C/OrangePi.GPIO: 1) The control of pins belonging to the PL bank and higher does not work at all. 2) Pin status reading GPIO.input(channel) does not work properly, when edge detection GPIO.add_event_detect() is activated on the same channel (this problem only occurs with H3). I fixed both bugs in my fork Pako2/OrangePi.GPIO. Subsequently, I also updated the Orange Pi Tools add-on, which uses the library. I also made a minor fix in the Orange Pi Cooling Fan add-on. All new versions of add-ons are here:
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00123.warc.gz
CC-MAIN-2021-43
1,946
18
https://brelje.net/what-is-mdao
code
MD(A)O stands for multidisciplinary design (analysis and) optimization. Basically, MDAO involves using computer simulations and mathematics to model, analyze, and semi-automatically design, the best possible systems. “Systems” might include aircraft, spacecraft, buildings, or really any entity that can be modeled and where we care about performance. M (Multidisciplinary) - This means that multiple analysis disciplines are involved in the simulation or design process. Examples of disciplines might include aerodynamics, structures, weights, stability & control, and cost / finance. D (Design) - The ultimate goal of MDAO is to produce good designs. In a broad sense, the design of a product is the input to an analysis code, and the outputs are metrics which tell us how well the design performs and whether it is feasible. A (Analysis) - Many people in the field will use “A” in MDAO to explicitly state that all MDAO codes perform some kind of analysis. When a simulation is used to analyze a design rather than perform numeric optimization, the term MDA can be used. My lab at Michigan prefers to simply use MDO since optimization implies that analysis is being performed. O (Optimization) - Broadly used, the word optimization simply means the process of making something better. In the context of MDAO, optimization refers to a set of specialized mathematical techniques that are used to find the “best” (optimal) design possible that meets a set of requirements. It is typical in MDAO (especially where geometry is a design variable) to use the techniques of constrained nonlinear optimization (or nonlinear programming). The header image of my site is a 2D rendering of the 3D Rosenbrock Function (also known as the “banana function”) which is a challenging test case for an optimizer because it has subtle valleys to find. Some challenges in MDAO include:
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656833.99/warc/CC-MAIN-20230609201549-20230609231549-00317.warc.gz
CC-MAIN-2023-23
1,883
7
https://research.tue.nl/en/publications/statistics-of-energy-levels-and-zero-temperature-dynamics-for-det
code
We consider the zero-temperature dynamics for the infinite-range, non translation invariant one-dimensional spin model introduced by Marinari, Parisi and Ritort to generate glassy behaviour out of a deterministic interaction. It is argued that there can be a large number of metastable (i.e., one-flip stable) states with very small overlap with the ground state but very close in energy to it, and that their total number increases exponentially with the size of the system. Degli Esposti, M., Giardinà, C., Graffi, S., & Isola, S. (2001). Statistics of energy levels and zero temperature dynamics for deterministic spin models with glassy behaviour. Journal of Statistical Physics, 102(5-6), 1285-1330. https://doi.org/10.1023/A:1004844429584
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882103.34/warc/CC-MAIN-20201024080855-20201024110855-00067.warc.gz
CC-MAIN-2020-45
745
2
https://discourse.littlebird.com.au/t/internship-position-nuvotion/999
code
Nuvotion is looking for a engineer / hacker / arduino tinkerer to help us on our path to world domination! Nuvotion is a engineering consulting firm that is based in Melbourne (Coburg North), we have shop floor that you would have access to machinces such as CNC mills, laser cutters and soldering stations. The person we are seeking has to be motivated and be good at thinking on their feet, they must have experience programming using the arduino IDE. The person has to be able to commit one day per a week during normal working hours 9am - 5pm. The position is a unpaid internship. What you will get in return is mentoring by practising engineers and technical guidance, this position is ideal for a 1st or 2nd year university student. If you think you are the person for the job please send though your resume / CV
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378043.81/warc/CC-MAIN-20210307170119-20210307200119-00130.warc.gz
CC-MAIN-2021-10
818
5
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2008-June/034889.html
code
[gmx-users] step size too small minnale_gnos at rediffmail.com Mon Jun 30 11:26:46 CEST 2008 1) I have embedded protein into popcbilayer 2) Energy minimisation 3) Later added ions by using genion, 4) When I am trying to run minimisation its showing following sentences Stepsize too small, or no change in energy. Converged to machine precision, but not to the requested precision Fmax < 100 Double precision normally gives you higher accuracy. You might need to increase your constraint accuracy, or turn off constraints alltogether (set constraints = none in mdp file) writing lowest energy coordinates. Steepest Descents converged to machine precision in 49 steps, but did not reach the requested Fmax < 100. Potential Energy = -2.1825061e+05 Maximum force = 4.3672461e+03 on atom 7008 Norm of force = 5.5597691e+04 my .mdp file is cpp = /usr/bin/cpp define = -DFLEX_SPC constraints = none integrator = steep nsteps = 500 ; Energy minimizing stuff emtol = 100 emstep = 0.01 nstcomm = 1.0 ns_type = grid rlist = 1.0 rcoulomb = 1.0 rvdw = 1.0 When I run minimisation before adding ions it ran fine but later not why? I have searched about solve this problem in archives list and I understood that this not an error. Can I proceed further simulations Any comments will be appreciated. Thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... More information about the gromacs.org_gmx-users
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313536.31/warc/CC-MAIN-20190818002820-20190818024820-00110.warc.gz
CC-MAIN-2019-35
1,426
40
https://community.wd.com/t/caviar-green-2tb-installation-issues/13967
code
Hi, I’m not sure if this is the right place to ask about this … : I just bought a 2TB Caviar Green ( WD20EARS ), but I’m having troubles installing it on my ( quite old ) PC. I have an Asus A7N8X-E Deluxe Motherboard with a Silicon Image Sata Raid Controller onboard ( it’s a sata 1 controller ), I fixed the HD in place and connected it to motherboard and power, I’ve set jumpers 5 & 6 on the HD to force 1.5 gbit transfer rate. When the pc starts up, the bios screen appears, everything looks fine, then the sata/raid controller configuration screen appears, it detects the HD ( well, at least it reads model number correctly ) and then it FREEZES. I’ve tryed pressing F4 and/or CTRL+S ( as shown on screen ) to open the configuration menu, but nothing happens. Also, about the cable I’m using … I have 2 different cables ( that came with my motherboard ), very similar, they both mount and give the same result ( above ). I’m not sure which one I’m supposed to use, what’s the difference? Do I need to use a Sata 2 cable? Any other ideas? Any help will be greatly appreciated. I’ve apparently solved the problem by installing a modified motherboard bios, containing an updated driver for the integrated sata controller. I’m formatting the HD now, we will see what happens … I’ve found step-by-step instructions here: http://www.technutopia.com/forum/showpost.php?p=86742&postcount=13 Be aware, only use those instructions for Asus A7N8X-E Deluxe Motherboards. Also, use at your own risk ( that is, if you mess up you can probably trow your motherboard into the trash bin ) yep, it works just fine.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130531.89/warc/CC-MAIN-20200930235415-20201001025415-00457.warc.gz
CC-MAIN-2020-40
1,630
3
https://en.wikibooks.org/wiki/User:Planotse/bookreviews/Analytical_Chemiluminescence
code
Analytical Chemiluminescence - Wikibook Review This wikibook was started within the last few months. This book is still under revision. The last 35 revisions of the book's front page were made between 5/3/2012 and 6/29/2012, spreading over 57 days. This book has 60 webpages as of 7/1/2012. All pages are listed below for ease of reference. This was generated by PlanoTse, software for web use automation, at 1:17 PM.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864482.90/warc/CC-MAIN-20180622123642-20180622143642-00532.warc.gz
CC-MAIN-2018-26
417
3
http://www.droidforums.net/threads/how-to-trans-videos-to-droid-incredible.77392/
code
I feel like an idiot asking this question, but I'm trying to move some .m4v video files from my hard drive (Win XP) to my Incredible. But what I see in Astro (and Linda, and OI) file manager looks nothing like what I see in Windows Explorer. I've searched the forum and cans't find anything that answers this. On the phone, using Astro, these folders show up in "/" (home): acct app-cache cache config d data dev emmc etc mnt - contains "sdcard" folder, which Astro says is empty proc root sbin sdcard - empty too sys system ...and a few files: default, init, etc... But in Windows Explorer, two drives appear when I plug in the phone and mount is as a hard drive. Neither of these looks anything like the above. Drive I: is 6.6 Gb, which I assume is the phone's internal storage. This has the folders: .bookmark_thumbs1 .fooprints .Mail .mixzing albumthumbs DCIM download Downloads LOST.DIR MP3 - I put mp3s in here and they play on the media player, yippee My Documents paint Then there's drive J:, which at 1.8 Gb must be the SD card. This has a lot more folders, some the same as I: .android_secure .bookmark_thumb1 .footprints .funny-babel .htcnews .kayakimages .Mail adobe Android astrid data DCIM download ...and a bunch more. No folders or anything called "sdcard" show up in Explorer. So...where do I put the videos to play them on my Incredible? Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584520525.90/warc/CC-MAIN-20190124100934-20190124122934-00420.warc.gz
CC-MAIN-2019-04
1,363
1
https://www.arxiv-vanity.com/papers/1503.08060/
code
Expectation Propagation in the large-data limit Expectation Propagation (Minka, 2001) is a widely successful algorithm for variational inference. EP is an iterative algorithm used to approximate complicated distributions, typically to find a Gaussian approximation of posterior distributions. In many applications of this type, EP performs extremely well. Surprisingly, despite its widespread use, there are very few theoretical guarantees on Gaussian EP, and it is quite poorly understood. In order to analyze EP, we first introduce a variant of EP: averaged-EP (aEP), which operates on a smaller parameter space. We then consider aEP and EP in the limit of infinite data, where the overall contribution of each likelihood term is small and where posteriors are almost Gaussian. In this limit, we prove that the iterations of both aEP and EP are simple: they behave like iterations of Newton’s algorithm for finding the mode of a function. We use this limit behavior to prove that EP is asymptotically exact, and to obtain other insights into the dynamic behavior of EP: for example, that it may diverge under poor initialization exactly like Newton’s method. EP is a simple algorithm to state, but a difficult one to study. Our results should facilitate further research into the theoretical properties of this important method. Current practice in Bayesian statistics favors MCMC methods, but so-called variational approximations are gaining traction. In machine learning, where time constraints are primary, they have long been the favored method for Bayesian inference (Bishop, 2007). Variational methods provide fast, deterministic approximations to arbitrary distributions. Examples include mean-field methods (Wainwright and Jordan, 2008), INLA (Integrated Nested Laplace Approximation, Rue et al., 2009), and Expectation Propagation (EP). EP was introduced in Minka (2001) and has proved to be one of the most durably popular methods in Bayesian machine learning. It gives excellent results in important applications like Gaussian process classification (Kuss and Rasmussen, 2005; Nickisch and Rasmussen, 2008) and is used in a wide range of applications (e.g., Jylänki et al. 2014, 2011; Gehre and Jin 2013; Ridgway et al. 2014 ). Recently EP has been shown to work very well in certain difficult likelihood-free settings (Barthelmé and Chopin, 2014), and has even been advocated as a generic form of inference in large-data problems (Gelman et al., 2014; Xu et al., 2014), since EP is easy to parallelize. Most of the work on EP concerns applications, and focuses on making the method work well in various settings. Why and when the method should work remains somewhat of a mystery, and in this article we aim to make progress in that direction. A few theoretical results are available when the approximating family is a discrete distribution, in which case EP is equivalent to Belief Propagation, a well-studied algorithm (Wainwright and Jordan, 2008). The typical case in Bayesian inference is to use multivariate Gaussians as the approximating family, but very little is known about that case: Ribeiro and Opper (2011) study a limit of EP for neural network models (the limit of infinitely many weights) and Titterington (2011) gives partial results on mixture models in the large-data limit. Despite these efforts, two aspects of EP’s behavior have remained elusive: its dynamical behavior (does the EP iteration converge on a fixed dataset?) and its large-data behavior (do fixed points of the iteration converge to the target distribution in the limit of infinite data?). In this work, we focus on the dynamical behavior of EP and show that it is asymptotically equivalent to the behavior of Newton’s method (Nocedal and Wright, 2006). This enables us to prove that EP is exact in the large-data limit: if the posterior in the large-data limit tends to a Gaussian (as they usually do), then EP recovers the limiting Gaussian. Furthermore, we show that on multimodal distributions, EP often has one fixed-point for each mode. This also yields insights into why EP iterations can be so unstable. The outline of the paper is as follows. In section 1, we give a quick introduction to EP and introduce a simpler variant which we call averaged-EP (aEP). aEP is mathematically simpler than EP because it iterates over a much smaller parameter space (independent of , the number of data points), which makes our results easier to state and to understand. We then present our theoretical contributions in section 2. Our main result concerns the asymptotic behavior of the EP update, which turns out to be extremely simple. This asymptotic behavior has many consequences, of which we highlight two. First, EP and aEP asymptote to Newton’s algorithm. Second, EP is asymptotically exact, or more specifically the target distribution and one specific EP fixed-point converge in total-variation distance. In section 3, we then show that this Newton limit behavior of EP can give us some intuition into how the iterations of the algorithm work. Finally, in section 4 we discuss limitations of our results and give directions for future work. Notation and background Vectors are in bold, matrices are in bold and capitalized. Given a multivariate function , we note its gradient and its Hessian, the matrix of the second derivatives. Univariate Gaussian distributions are represented as , although occasionally the exponential parameters are used: . We call the precision and the linear shift. Table 1 provides a lexicon for EP and a summary of the notation. The goal of EP is to compute a Gaussian approximation of a target distribution, which we note . This distribution factorizes into factor-functions (sites in EP terminology): . We note . EP produces a Gaussian approximation with the same factor structure, Gaussian factors such that: . Each Gaussian factor approximates the corresponding target factor . Newton’s algorithm as an approximate inference method Approximate inference methods aim to find a tractable approximation to a complicated density . Most of them operate by solving: where denotes some set of tractable distributions and is a divergence measure. Depending on the choice of divergence measure and approximating distribution, one can derive various variational algorithms. These methods are often iterative and produce a sequence of approximations that should hopefully tend to a locally optimal approximation. One of our key results proves that, in the large-data limit (denoted here by ), EP behaves like Newton’s algorithm (NT, see e.g. Nocedal and Wright, 2006 for an introduction). NT aims to find a mode of a target probability distribution through an iterative procedure. We present here the one-dimensional version. Once initialized at a point , a sequence of points is constructed with: This iteration can be viewed as a gradient descent with a Hessian correction. It can also be viewed as approximating as its second degree Taylor expansion around , and then setting as the extremum of that polynomial. With a slight modification we can restate NT as an approximate inference algorithm iterating on Gaussian approximations of , which makes the parallel to EP more obvious. Starting from an arbitrary Gaussian , with mean , we construct a sequence of Gaussian approximations through iterating the following steps: Compute a Gaussian approximation to : Compute the mean of : With this change, the fixed point of NT is now the Gaussian distribution centered at , the mode of , and with precision the Hessian of at the mode . Thus, the fixed point of this NT variant is the canonical Gaussian approximation (CGA) at the mode of , also sometimes referred to as the “Laplace” approximation (which is erroneous as the Laplace approximation actually refers to approximating integrals and not probability distributions). An important issue is the convergence of NT. It has fast convergence when initialized close to a mode of . Technically, convergence is quadratic, i.e. . However, that is only true in a neighborhood of the mode and the basic version of the algorithm, which we presented here, does not generally converge for all starting points . In order to obtain an algorithm with guaranteed convergence, one solution is to complement NT with a line-search algorithm. As we shall see, EP can also have unstable behavior when initialized too far from its fixed points: we return to this important issue in section 3.1. Log-concave distributions and the Brascamp-Lieb theorem Our theoretical results depend on a very powerful theorem on log-concave probability distributions, called the Brascamp-Lieb theorem (Brascamp and Lieb, 1976; Saumard and Wellner, 2014). Let be a log-concave distribution (i.e., is always symmetric positive definite). The variance of any statistic is then bounded according to: We use this result in the particular case from which we get an upper-bound on the variance: Further more, if the log-Hessian is lower-bounded (as a matrix inequality):, then the variance has an even simpler upper-bound: |Target distribution||The distribution we wish to approximate:| |EP approximation||An exponential-family distribution with the same factor structure as ,| |“Site” or “factor”||A factor in the target distribution| |Site approximation||A factor in the approximation| |Cavity prior||The approximate distribution with site taken out, i.e. . In aEP, is independent of .| |Hybrid distribution||The product of a cavity prior and a true site, i.e..| 1 From classic EP to averaged-EP (aEP) In this section we introduce EP in the exponential-family notation used by Seeger (2005), because it is neat, generic and compact. EP has been introduced from a variety of viewpoints, and the versions given in Minka (2005); Seeger (2005); Bishop (2007); Raymond et al. (2014) are all potentially useful. Following Minka (2005), given a target distribution , EP aims to solve where is an approximating family and denotes the Kullback-Leibler divergence. Here we focus on the Gaussian case but other exponential families may be used (for example, the Gaussian-Wishart family is used in Paquet et al., 2009). A central aspect of EP is that it relies on a factorization of , i.e. that the posterior decomposes into a product of terms: where usually one of the terms corresponds to the prior and the rest to independent likelihood terms (here and elsewhere is a normalization constant). The decomposition is non-unique and the performance and feasibility of EP depend on the factorization one picks. The approximation has the same factor structure: Following Seeger, we call the ’s sites and the corresponding ’s site approximations. The site approximations have exponential-family form (e.g., Gaussian) which the approximation inherits where . Note that represents the so-called natural parameters for the approximation. According to a well-known property of exponential families, the gradient of the partition function, returns the expected value of the sufficient statistics for a given value of the natural parameters: Its inverse transforms expected values of the sufficient statistics into natural parameters. A well known result for exponential families shows that the global solution of problem (5) is a moment-matching solution: In the Gaussian case, what this means is that the best approximation of according to KL divergence is a Gaussian with the same mean and covariance. Of course, directly computing the mean and covariance of is intractable, and so EP tries to get there by successive refinements of an approximation. Specifically, EP tries to improve the approximation sequentially by introducing hybrid distributions which interpolate between the current approximation and the true posterior. A hybrid distribution contains one site from the true posterior, but all the rest come from the approximation: Hybrids should be tractable, meaning that one should be able to compute their moments quickly. Note that in exponential-family notation, the distribution is simply: EP improves the approximation sequentially by (a) picking a site (b) computing the moments of the hybrid and (c) setting such that the moments of match the moments of the hybrid. Loop until convergence Compute “cavity” parameter Form hybrid distribution and compute its moments Update global parameter , site parameter Classic EP (Alg. 1) loops over the sites sequentially. A parallel variant forms all the hybrids at once, looping several times over the whole dataset (Alg. 2). Loop until convergence Process all hybrids: for in Compute “cavity” parameters Form hybrid distribution and compute moments Compute local update Update global parameters We introduce a simpler variant of EP with a drastically reduced parameter set: namely, we get rid of all site-specific parameters and keep only global parameters . The resulting algorithm is simpler to analyze. Our variant is straightforward, and follows from setting for all , under the assumption that the contributions from all sites are be similar. Proceeding step-by-step from alg. 2 we begin with the cavity parameter, which becomes independent of . We use the cavity parameter to form hybrid distributions just as before: The moments of the hybrids are again noted , and inserting the local updates into the update for the global parameter we get (recall that transforms moment parameters into natural parameters): It is interesting to examine the fixed points of this update rule, which satisfy: where the hybrid moments depend implicitly on . The following averaging rule shares the same fixed points: and that is the rule that gives averaged-EP (aEP) its name111Note that it corresponds to a slowed down version of the aEP update. The resulting method is given in Alg. 3 but can be summarized in a few words. To improve an exponential-family approximation aEP begins by forming hybrids of the approximation and the true posterior, it computes their moments, uses those to compute the new site approximations and the corresponding natural parameters, and sets the new natural parameters of the approximation to the sum of the site approximations. Loop until convergence Compute “cavity” parameters For in , form hybrid distribution and compute moments Update global parameters 2 Asymptotic behavior of the EP and aEP updates In this section, we investigate the dynamics of the EP and aEP algorithms. We first present a new key result on the asymptotic behavior of the EP approximation of a site: we show that as the variance of the cavity converges to 0, the approximation converges to a simple Taylor approximation of . This asymptotic behavior has several consequences but we present here the most important one: in the limit where all cavity priors have small variance, the parallel EP and the aEP updates converge towards the updates of Newton’s algorithm. A corollary is that, for multimodal target distributions, all modes which have sufficient curvature have an associated EP fixed point and that, as a certain measure of mode peakedness goes to infinity, the EP fixed point converges to the CGA at that mode. Finally, this enables us to prove that EP is asymptotically exact in the large-data limit (if the CGA also is). Throughout this section, we work in the one-dimensional case since it is the easiest to understand. All results are straightforward to extend to the -dimensional case (i.e., when the target distribution is dimensional), the most significant difficulty being notation. In the appendix, we give the proofs for the -dimensional case. We use two assumptions on the sites . Both our conditions concern the negative log-likelihood of the sites . Our first assumption is that the second log-derivative of the sites has a bounded range: there exists such that: Our second condition concerns bounding some higher log-derivatives of the sites, which ensures that all sites are sufficiently regular so that we can use Taylor expansions and bound the remainder terms. Our assumption is simply that there exist bounds and which bound the third and fourth derivatives of all functions. For : Both of those conditions are easy to check in practice. For example, for a Generalized Linear Model, we would simply need to check the derivatives of the link function and that the design matrix is bounded and of full column rank. The one important case for which we cannot apply our result concerns non-parametric models, and, more generally, cases in which is not fixed but grows. This reflects a limitation of our proof, rather than one of EP, which works just fine in such cases (see Appendix for details). An important thing to note is that we chose those two assumptions because they give very simple expressions for the error of the asymptotic expression, but the limit behavior we present can still be reached even if they are broken. In the appendix, we show how weaker assumptions (bounded and local smoothness of ) are sufficient to obtain our results on the limit behavior with similar asymptotic errors. 2.2 Limit behavior of the EP update 2.2.1 Limit behavior of the site update The only complicated step in EP and aEP (especially in practical implementation) is the site-approximation update during which we form the hybrid distribution, compute its moments and then subtract the contribution of the cavity to obtain the approximation of the site. We study here the limit behavior of the site-approximation as the cavity becomes more and more precise. The result we obtain is essential to the rest of this work, but not entirely intuitive, so our explanation will be progressive and careful. What we are interested in is the limit behavior of the site update, as the precision of the cavity becomes large. The reason we focus on the high-precision limit is that, when there are many sites (datapoints), each individual one makes a small contribution compared to the rest. The cavity represents the contribution of all the other sites, and generally speaking the more sites there are the lower the variance of the cavity (the higher the precision). In large-data settings, the cavity prior tends to dominate the site’s likelihood, meaning that at the level of individual sites, the “large data” limit becomes a “weak data” limit. To study that limit, our first object of interest is naturally the hybrid: where we have parametrized the cavity precision as , and the cavity mean stays constant (at throughout) 222In the notation of the previous section, the natural parameters are , the precision and linear shift.. As grows large, the cavity prior (the Gaussian part) outweighs the likelihood, and the hybrid starts to resemble a Gaussian centered at with variance . Indeed those are provably the limits of the mean and variance of when . When is large, the hybrid is almost the same as the cavity, and it is tempting to conclude that when is large no update happens (the cavity prior outweighs the likelihood , the site becomes negligible). That line of reasoning, although tempting, is misleading, as an examination of the case of a Gaussian site shows. Suppose then, regardless of how large is, it is straightforward to show that the site’s natural parameters are always and In other words: even when the prior outweighs the likelihood, the site always increases the overall precision by an additive factor and contributes to the overall linear shift. In the non-Gaussian case the site’s natural parameters also have a non-trivial limit. The exact form of that limit turns out to be very interesting, as it shares a close relationship to Newton’s method. Specifically, we show that reflects the gradient of the log-likelihood at the cavity mean and the Hessian. In other words, the log of the site-approximation tends towards the Taylor expansion around of the log-site . Fig. 1 illustrates that behavior in a simple scenario, where: which corresponds to a logit likelihood and a cavity prior centered at 0. Here , , and . We can now state our result formally. For simplicity, the case we have just discussed had increasing precision and a fixed mean. The following theorem is stated in a slightly more general case in which the cavity mean is slightly offset from , but tends to it in as . This more general case is important for the corollaries we derive from this theorem. Limit behavior of hybrid distributions Consider the hybrid distribution: . In the limit that , the natural parameters of its Gaussian approximation converge to: Thus, the natural parameters of the EP approximation ( and ) of converge: Note the important role of the term: it causes the cavity-mean to be slightly different from , but it can accelerate convergence when set precisely to (see appendix). We only give a sketch of the proof here, because it is too long and involved. Our proof can be understood as simply computing the asymptotic behavior of and . The first order is easily found to be: and . However, when we compute the new values for and , the subtraction of the cavity parameters effectively cancels that first order term. In our proof, we thus go beyond the first order and compute the next order which gives us the claimed bound. In practice, we use two tricks that enable us to directly express and as expected values under the hybrid , which saves us from actually computing the limit behavior of the mean and variance. We then approximate these expected values using Taylor expansions and get the claimed result. See the appendix for details. ∎ 2.2.2 Limit behavior of parallel-EP and aEP. Now that we have some handle on the behavior of site-updates, we can start to study the behavior of the algorithm as a whole. A full step of aEP or parallel EP is a combination of site updates, and that is what we characterize next. We show that one step of parallel-EP or of one step of aEP both converge towards the result of one step of Newton’s algorithm. It is also possible to use Theorem 1 to describe the limit behavior of sequential-EP or of an EP variant which updates batches of sites sequentially, which tend to variants of Newton’s333For example, sequential EP would asymptote to a variant of sequential gradient descent with a Hessian correction. See Opper (1998) for a more extensive discussion of the link between sequential EP and sequential gradient descent.. We choose to focus on parallel-EP because its limiting behavior is classic Newton’s. Let’s first present the limit behavior of aEP, which is easier to visualize because it only has two parameters instead of -parameters like EP. One interesting feature of the limit behavior of the site-update is that the value of the cavity precision does not influence the limit behavior, which is only set by the cavity mean . In the aEP algorithm, the cavity mean is always equal to the current approximation mean. Thus, when we sum all and approximations, we find that the limit behavior of the aEP update also has that feature: the approximation at the next step mostly depends on the current mean of the approximation, and corresponds to a Newton’s update. The limit behavior of EP is similar, but is a little more complicated to state. This is due to two additional complications. The first complication is that, in EP, each cavity mean is slightly different. This is where the parameter from Theorem 1 comes into play: it enables us to see each cavity distribution instead as almost centered at the same mean but slightly offset in a specific direction. The second complication is that, whereas in aEP each cavity distribution has the same precision, once again each cavity distribution is different in EP. In the end, these complications hardly matter for the limit behavior, but they do make it slightly harder to understand how EP works. Limit behavior of aEP and EP Consider a current EP approximation and the corresponding aEP approximation whose current mean is . In the limit that all cavity-precisions tend to infinity (so that is of same order as ), the limit behavior of one step of aEP and of one step of EP is identical to Newton’s algorithm. For aEP, the global parameters at the next step are: For EP, the global parameters at the next step are: This result is simply obtained by summing the approximations offered by Theorem 1. For aEP, this is simple enough: all the cavity distributions are Gaussians with precision and with mean . Straightforward application of theorem 1 leads to the claimed result. For EP, this is more complicated since every cavity distribution is different. However, it is straightforward to check that the cavity densities are: We can then apply theorem 1 with cavity precision and offset , and recover the claimed result. ∎ 2.3 Where to find EP’s fixed points In this section, we use the results above to find out more about the location of fixed points of EP and aEP. We show that wherever the posterior distribution has a strongly peaked mode, a fixed point of EP or aEP lies in the vicinity. Our proof relies on an application of Brouwer’s fixed point theorem, and relies on finding stable regions of the parameter space, in a sense we need to make precise. Since Newton’s iterations are strongly contractive towards posterior modes, and since our results tell us that the iterations of EP and aEP are not far from those of Newton’s, there is a good chance EP and aEP do not stray too far from posterior modes either. Indeed, we prove that there exist compact regions of the parameter space near the CGA which are stable under the aEP or the EP updates: i.e., if we start from inside of them, we stay inside. We can picture these stable regions as boxes in parameter spaces inside of which aEP and EP get stuck. Unfortunately, our bounds are too weak to guarantee that the iterations of aEP and EP converge in such regions. We know that they cannot exit the box, but we cannot prove that they do not wander around forever inside of it. However, there is a much more interesting consequence of the existence of such stable regions: from the Brouwer fixed-point theorem, we know that any compact stable region must contain at least one fixed-point of the corresponding iteration, and so we have boxes in parameter spaces that contain both a fixed point of Newton’s and a fixed point of aEP/EP. In order to apply this insight, we would then want to find stable regions that are as small as possible in order to give the tightest bounds on the position of that fixed-point. In this section, we focus on identifying stable regions that are a close neighborhood of the CGA at the mode of the target distribution, and we compute the correct asymptotic scaling of the size of the stable region. These results are sufficient to prove that aEP and EP are both exact in the large-data limit. However, it would be an interesting extension of the present work to also find maximal stable regions, and to find “unstable” regions: regions of the parameter space that the EP iteration is guaranteed to leave and which therefore cannot hold a fixed-point. We find that all modes of have the potential to have an associated stable region, and that the size of that stable region depends on log-curvature at the mode: more peaked modes have an associated region that is smaller than flatter modes. We use this result in the next section to prove that aEP and EP fixed points converge to the CGA at the mode in the large-data limit. We use aEP to outline this result. Let’s assume that the starting global approximation is in close proximity to the CGA at a mode of : By applying Theorem 1 to a Gaussian approximation centered at , we find that the new value of the aEP parameters are such that is small: If the error is smaller than , the initial region would be stable in . In order to find the limits of the stable region, we simply need to find values and for which we can guarantee that the error is strictly smaller. Inspection of eq. (15) suggests immediately that the curvature at the mode (represented by ) plays a key role: the larger the curvature, the tighter the bound. For EP, the stable regions take the form: which ensures the global approximation is inside stable regions with the same form as those for aEP. and are small if the log-curvature at the mode is sufficiently high, as summarized by the following theorem: Convergence of fixed points of EP and aEP There exists an EP and an aEP fixed-point close to the CGA of at if is sufficiently large. More precisely, if: are quantities and is large, then the limit of the stable regions on the global approximation, and , scale as for aEP and as for EP. We only sketch the proof of this theorem which we detail in the appendix. We focus on the simpler aEP case but the reasoning is identical for EP. The key idea is the following: if we perform a first order perturbation in or in while is large, then this perturbation has a negligible effect on the limit behavior. Thus, the error still scales (almost) as if we were starting from the CGA at : and : from which we get the claimed limit behavior.∎ It might seem strange to refer to and as “order 1” quantities. This holds in the large-data regime as we show in the next section, but an easier example to visualize is the following. Consider the EP approximation of a fixed-probability distribution raised to power : . For that example, as , : the log-derivative at the mode grows linearly. However, the third and fourth log-derivatives also grow linearly so that does not depend on : it is indeed of order 1. is similarly found to be of order 1. This theorem shows two interesting features of the behavior of EP. First of all, we see that if is a multimodal distribution where multiple modes are sufficiently peaked and they are sufficiently separated, then EP and aEP both have multiple fixed points, and those fixed points do not give a global account of but only fit the local shape of around “their” mode. This is quite contrary to the common view on EP which holds that since EP’s stated target is to find an approximation of the minimizer of , it gives global approximations of the target distribution. We discuss this point further in section 3.2. 2.4 Large-data limit behavior So far, all of our results have been deterministic: assuming some fixed target distribution , we have bounded the distance between the result of the aEP and EP updates and the result of the NT updates. We then used those results to derive a deterministic result on the possible positions of the aEP and EP fixed points. We have discussed asymptotic results in those sections in terms of either asymptotes of the parameter space (large precision ) or of properties of the target distribution (large log-curvature at a mode ). In this section, we adopt a different point of view: we seek a large-data limit result. In other words, we assume that some random process is generating the sites and we consider what happens as more and more sites are generated. In real applications, this would correspond to accumulating data of some kind and computing the posterior of the unknown under some (most likely miss-specified) generative model. We abstract all those complications away and simply treat the functions (or, equivalently, the functions) as function-valued random variables. Throughout this section, the number of sites, , is variable. We note the random variable of the posterior distribution constructed from the first sites . We note the minimum of : can be thought of as the “true” value that we seek to recover. We note the mode of closest to and the CGA of at . In order for EP to have good behavior, we require the process generating the to obey our assumptions on the (eqs. (12) and (13)) and to obey two additional conditions. The first condition is that the distribution of the log-sites is non-degenerate so that a number of variances are finite and we can apply the law of large numbers (see appendix for details). This is a mild condition and, in the rare occasion where it does not apply, we could even weaken it. This condition ensures that the log-posterior is Locally Asymptotically Normal (LAN, Kleijn et al. (2012)). Under this LAN behavior, we prove that every global approximation in the aEP and EP stable regions around converge in KL divergence and in total-variation towards the CGA at : . Furthermore, if the process generating the produces a concentration of the mass of the posterior around , then, combined with the LAN behavior of the posterior, we have enough to guarantee that the posterior converges towards its CGA in total-variation. This result is the last we need to prove that, in the large-data limit, aEP and EP are exact in the following sense: there is a large neighborhood of aEP and EP approximations that surrounds at least one fixed-point and where the aEP and EP iterations are “stuck”, such that all approximations in the neighborhood are asymptotically exact. The technical condition we require for concentration of mass is the following: for all , the integrals , which are random variables whose distribution is dictated by the distribution of the , need to converge in probability to 1. In other words, for every , the posterior is guaranteed to concentrate inside the -ball centered around . This should be thought of as an identifiability condition: it requires the posterior to concentrate around the “true” parameter value . aEP and EP are exact in the large-data limit. Under our assumptions on the sites and the site-generating process, all Gaussian distributions in the stable region of th. 2 converge in total-variation to the CGA with probability 1. The convergence rate is . Under a further identifiability assumption, all Gaussian distributions in the stable region converge in total-variation to with probability 1. The convergence rate is . Here is a sketch of the proof. First, define the Fisher information of our likelihood-generating process. By a law of large numbers argument, : this quantity grows linearly. In the meantime, the gradient at is small: . Combining this with the third derivative bound and a simple Taylor expansion of proves that there must be a mode of : , in close proximity to . The distance between the two scales as: . Since the log-curvature at grows linearly and , the log-curvature at also grows linearly: Similarly, grows linearly and trivially grows linearly with . Thus, the conditions of th. 2 are checked: there exists a stable region near the CGA at which holds at least one fixed-point for aEP and EP. To prove the total-variation convergence, we actually prove a KL-divergence convergence. The bounds on and translate into a KL-divergence bound. From Pinsker’s inequality, this translates into a total-variation bound. The proof then concludes by proving a convergence of the CGA towards which is a simple application of a Bernstein-von Mises theorem for miss-specified models by Kleijn et al. (2012). ∎ As a corollary from this theorem, it follows that EP is asymptotically exact in all models which respect our hypothesis on the and the -generating process and which are identifiable. This includes an extremely large class of models since our conditions are fairly mild and since identifiability is a key requirement for the Bayesian method to be useful. For example, our result can be applied to both probit and logistic regression in finite-dimensions, as long as the feature vectors are bounded (in order for our hypotheses on the to be verified), and spread uniformly enough for the Fisher information matrix to be strictly positive (see Appendix). Three notes must be made on that theorem. First of all, it shows that EP and aEP are exact in that there exists a fixed-point which converges in total-variation to the true posterior. It does not guarantee that asymptotically all fixed points converge. In particular, should the expected value of have several modes, we can guarantee that there also exists aEP and EP fixed points which are terrible asymptotic approximations of : the stable regions associated with the local minima converge to the CGA at a mode with negligible asymptotic contribution to the mass of . Second, it could seem from this result that EP and aEP approximations are asymptotically worse than the CGA, because the total-variation distance between the EP fixed-point and the target decreases slower than for the CGA. This does not reflect a limitation of EP but it is a feature of our proof: we have proved that EP is good because it converges to the CGA, and only a direct proof would be able to prove the superiority of EP. We expect that EP should give better asymptotic approximations than the CGA from empirical tests of both methods, but the result we present here is too weak to prove this conjecture. We have recently made some progress on such a direct proof, but under more restrictive assumptions than the ones presented here (Dehaene and Barthelmé, 2015). Finally, it is interesting to come back to th. 2 which shows that, in the large cavity precision limit, aEP and parallel EP converge to NT. It is also possible to qualitatively discuss th. 2 in terms of a large-data limit instead. In order to do so, we assume that, as the number of data-points grows, the typical value for the cavity precisions grows linearly with . This is certainly true in the stable region around as we have just shown in th. 4. If we have that , then the errors in th. 2 are of order 1. These order 1 errors are negligible in the large-data limit, in exactly the same way as for th. 4. Thus our result that aEP and parallel EP are almost Newton in the high precision limit qualitatively apply in the large-data limit as long as the “typical” cavity precisions grow linearly with . 3 Consequences of the quasi-Newton behavior of EP In the previous section, we gave a proof that, in the limit of large-data, EP behaves like a Newton search for the mode of the target distribution. In this section, we highlight how this result can inform our intuition about how EP behaves, and some potentially interesting avenues of research it opens. 3.1 Instability of the EP iteration Newton’s algorithm (NT) is a good tool in finding a mode of a target distribution as it has fast convergence if it is initialized properly (i.e.: close enough to the mode), but it can often fail to converge globally. For example, applying NT to always results in a divergent sequence that oscillates wildly around the fixed point at . More generally, Newton is unstable when the log-curvature is small because that makes the Newton step too big. In order to fix this problem, it is necessary to introduce a “slowed-down” version of the iteration: where is chosen carefully to ensure convergence. As the NT algorithm is part of the class of Generalized Gradient Descent algorithms, one solution is to choose values that respect the Wolfe conditions (see Boyd and Vandenberghe, 2004, for a convergence analysis of Newton’s method). Since EP behaves like NT in the large-data limit, we can intuit that even for small , EP might have a qualitatively similar behavior. In particular, EP iterations might oscillate around their fixed-point just like NT does. We give here a simple example of this behavior with sites that are extremely regular and which seem harmless at a glance. In our example, we applied a parallel version of the EP algorithm to the following situation: five “double-logistic” sites: , so called because they are the product of two logistic functions. If plotted, these appear to be Gaussian at a glance, but with the important difference that they only have exponential decay in their tails. In these exponential tails, the log-curvature is very small. one Gaussian site representing a prior: In this example, there is an EP fixed-point that provides a good approximation of the target distribution. However, we also found that the EP iteration is unstable if it is initialized too far away from the fixed-point distribution. EP iterations initialized too far away converge to a limit cycle oscillating between two approximations that are completely wrong. Figure 2 shows the basins of attraction of the stable equilibrium and the limit cycle, and one example trajectory for each. Our results on the limit behavior of the EP iteration can thus inform our understanding of why the EP iteration sometimes has problems with convergence: it could be that the EP iteration overshoots when it is operating in a zone where most are small while most are big, exactly like NT would. A possible solution to this could be to complement EP with an adaptive step algorithm. This algorithm would need to detect overshoots or potential overshoots, and prevent them. Finding such an adaptive step algorithm would represent major progress in EP methods. 3.2 Behavior of EP on multimodal distributions Let’s now investigate how our results shed new light on the behavior of EP on a multimodal target distribution . EP has been presented from the start as a rough approximation to the minimizer of the “forward” KL divergence: that uses local (i.e., site-specific) approximations of the KL divergence. This leads to the intuition that, when applied on a multimodal target, the EP approximation would fit all modes, or maybe most modes, since this is the behavior of the KL approximation. With our method of bounding fixed points in neighborhoods of the CGA at the various modes of , we can now see that this intuition is flawed. Indeed, our result shows that all modes that are: sufficiently peaked, so that the stable region is small sufficiently isolated, so that their stable region does not overlap with that of the other modes have at least one associated fixed-point. This fixed-point corresponds to the EP approximation fitting only this mode and not the rest of the probability distribution. Thus, it can happen that EP approximations give only a partial account of the target distribution. However, it’s also false to believe that EP always gets captured and never provides a global approximation of a multimodal target. Indeed, when there is only one site (or more generally, when there is only a few sites), EP does give a global account of the distribution since EP with one site exactly recovers the KL approximation of . Surprisingly, both types of fixed points can co-exist. Figure 3 shows an example involving Gaussian mixtures, the prototypical example of multimodal problems. Here the data are supposed IID, with The parameters correspond to component means in the Gaussian mixture and are evidently interchangeable, so that the likelihood surface is in general bimodal. We ran EP in this example with , and a unit Gaussian prior on Different initializations lead to different fixed points: we found three, two corresponding to local approximations (as predicted by theory) and a global one, the latter far from any mode. Interestingly, the local approximations are locally “exact”, meaning that under the identifiability constraints the moments of the corresponding EP approximation are exact. The mean of the global approximation matches the exact global mean, although the covariance is a bit under-estimated. A simple take-home message from our work should thus be this one: do not expect EP to fit all modes of a target distribution, but do not automatically assume that it will fit a single mode either. EP is an algorithm whose theoretical analysis lags far behind its empirical success. We describe in this manuscript a number of results that narrow the gap between theory and empirics, and we hope that they will provide a useful basis for future work. In this article, we propose a simpler version of EP which we call averaged-EP or aEP. aEP could be interesting as an empirical algorithm (see Appendix and Li et al. (2015) who introduce a close variant of aEP called stochastic EP). However, our main focus is on using it as a theoretical tool for studying the asymptotics of EP, since its reduced parameter space makes the results simpler to understand. We derive analytical results on aEP and EP in several limits: in the limit of large cavity precisions, and in the classical large-data limit. We prov that both methods converge to a Newton’s search for a mode of the target distribution. We then show that both are asymptotically exact in that there exists a fixed-point which converges towards the target. Our theoretical results open several avenues of research into gaining a better understanding of EP. First of all, while we shed some light into the behavior of the EP iterations by providing a qualitative link to Newton’s method, we still do not know how to build a variant of EP which is guaranteed to converge. This is a key avenue of research since the only way we know to guarantee convergence to an EP fixed-point, the Expectation-Consistent algorithm (Opper and Winther, 2005), converges much more slowly. An algorithm that always converges while staying as fast as EP would represent a major step forward. The parallel with NT opens the interesting idea of designing a line-search extension of EP. Another limit of our result is the coarseness of our bounds: while we show that EP is asymptotically exact, we do not show that it improves on the Canonical Gaussian Approximation when there is ample empirical evidence that it does. Future theoretical work on EP should aim at showing how and when EP does dominate the CGA (see Dehaene and Barthelmé (2015) for one such investigation, though crippled by unrealistic assumptions on the model). A final interesting extension of this work concerns the non-parametric case, and, more generally, EP approximations of high-dimensional posteriors. Indeed, we believe that our results are sub-optimal in bounding how the error scales in high-dimensional cases, which is why we cannot apply our results to the non-parametric case for which . A careful extension to show that EP behaves correctly in those cases would prove another huge step forward in providing a good theoretical basis for EP. We thank Alex Pouget for his support, and Hugo Duminil for helpful insight on the math. We also thank Mᅵlisande Albert, Nicolas Chopin, Gina Grᅵnhage, and James Ridgway for their comments on the manuscript. Finally, we thank Judith Rousseau for her help on Bernstein-von Mises theorems. - Barthelmé and Chopin (2014) Barthelmé, S. and Chopin, N. (2014). Expectation propagation for likelihood-free inference. Journal of the American Statistical Association, 109(505):315–333. - Bishop (2007) Bishop, C. M. (2007). Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, 1st ed. 2006. corr. 2nd printing 2011 edition. - Boyd and Vandenberghe (2004) Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press, New York, NY, USA. - Brascamp and Lieb (1976) Brascamp, H. J. and Lieb, E. H. (1976). Best constants in Young’s inequality, its converse, and its generalization to more than three functions . Advances in Mathematics, 20(2):151 – 173. - Dehaene and Barthelmé (2015) Dehaene, G. P. and Barthelmé, S. (2015). Bounding errors of Expectation-Propagation. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 244–252. Curran Associates, Inc. - Gehre and Jin (2013) Gehre, M. and Jin, B. (2013). Expectation Propagation for Nonlinear Inverse Problems - with an Application to Electrical Impedance Tomography. - Gelman et al. (2014) Gelman, A., Vehtari, A., Jylänki, P., Robert, C., Chopin, N., and Cunningham, J. P. (2014). Expectation propagation as a way of life. - Jylänki et al. (2014) Jylänki, P., Nummenmaa, A., and Vehtari, A. (2014). Expectation propagation for neural networks with sparsity-promoting priors. Journal of Machine Learning Research, 15:1849–1901. - Jylänki et al. (2011) Jylänki, P., Vanhatalo, J., and Vehtari, A. (2011). Robust gaussian process regression with a student-t likelihood. J. Mach. Learn. Res., 12:3227–3257. - Kleijn et al. (2012) Kleijn, B., van der Vaart, A., et al. (2012). The bernstein-von-mises theorem under misspecification. Electronic Journal of Statistics, 6:354–381. - Kuss and Rasmussen (2005) Kuss, M. and Rasmussen, C. E. (2005). Assessing Approximate Inference for Binary Gaussian Process Classification. J. Mach. Learn. Res., 6:1679–1704. - Li et al. (2015) Li, Y., Hernández-Lobato, J. M., and Turner, R. E. (2015). Stochastic Expectation Propagation. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 2323–2331. Curran Associates, Inc. - Minka (2005) Minka, T. (2005). Divergence Measures and Message Passing. Technical report. - Minka (2001) Minka, T. P. (2001). Expectation Propagation for approximate Bayesian inference. In UAI ’01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pages 362–369, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. - Nickisch and Rasmussen (2008) Nickisch, H. and Rasmussen, C. E. (2008). Approximations for Binary Gaussian Process Classification. Journal of Machine Learning Research, 9:2035–2078. - Nocedal and Wright (2006) Nocedal, J. and Wright, S. (2006). Numerical Optimization (Springer Series in Operations Research and Financial Engineering). Springer, 2nd edition. - Opper (1998) Opper, M. (1998). On-line Learning in Neural Networks. chapter A Bayesian Approach to On-line Learning, pages 363–378. Cambridge University Press, New York, NY, USA. - Opper and Winther (2005) Opper, M. and Winther, O. (2005). Expectation Consistent Approximate Inference. J. Mach. Learn. Res., 6:2177–2204. - Paquet et al. (2009) Paquet, U., Winther, O., and Opper, M. (2009). Perturbation Corrections in Approximate Inference: Mixture Modelling Applications. Journal of Machine Learning Research, 10:1263–1304. - Pereyra (2016) Pereyra, M. (2016). Approximating Bayesian confidence regions in convex inverse problems. - Raymond et al. (2014) Raymond, J., Manoel, A., and Opper, M. (2014). Expectation propagation. - Ribeiro and Opper (2011) Ribeiro, F. and Opper, M. (2011). Expectation propagation with factorizing distributions: A gaussian approximation and performance results for simple models. Neural computation, 23(4):1047–1069. - Ridgway et al. (2014) Ridgway, J., Alquier, P., Chopin, N., and Liang, F. (2014). PAC-Bayesian AUC classification and scoring. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K., editors, Advances in Neural Information Processing Systems 27, pages 658–666. Curran Associates, Inc. - Rue et al. (2009) Rue, H., Martino, S., and Chopin, N. (2009). Approximate bayesian inference for latent gaussian models by using integrated nested laplace approximations. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(2):319–392. - Saumard and Wellner (2014) Saumard, A. and Wellner, J. A. (2014). Log-concavity and strong log-concavity: A review. Statist. Surv., 8:45–114. - Seeger (2005) Seeger, M. (2005). Expectation Propagation for Exponential Families. Technical report. - Titterington (2011) Titterington, D. M. (2011). The em algorithm, variational approximations and expectation propagation for mixtures. In Mixtures, pages 1–29. John Wiley & Sons, Ltd. - Varadhan and Roland (2008) Varadhan, R. and Roland, C. (2008). Simple and globally convergent methods for accelerating the convergence of any em algorithm. Scandinavian Journal of Statistics, 35(2):335–353. - Wainwright and Jordan (2008) Wainwright, M. J. and Jordan, M. I. (2008). Graphical Models, Exponential Families, and Variational Inference (Foundations and Trends(r) Machine Learning). Now Publishers Inc. - Xu et al. (2014) Xu, M., Lakshminarayanan, B., Teh, Y. W., Zhu, J., and Zhang, B. (2014). Distributed bayesian posterior sampling via moment sharing. In Advances in Neural Information Processing Systems, pages 3356–3364. The following two sections hold all the supplementary information of this article. In this section, we will give detailed proofs of all the results we have presented in the main text. We will prove, in order: the limit behavior of the EP update in one-dimension the limit behavior of the EP update in high-dimensions the limit behavior under weaker assumptions the exactness of aEP and EP in the large-data limit We will prove our results in the one-dimensional case and in the n-dimensional case. Let’s first recall our assumptions on the likelihoods in the one-dimensional case. We will explain in section 5.3 how to modify these assumptions in the high-dimensional case. In section 5.4, we show that these assumptions can be weakened considerably, though the expression for the errors is much harder to state. Let be the sites, and be the negative log of each site. Our first assumption will be that the second log-derivatives of the sites span a finite range: and second, that some of the higher log-derivatives are bounded. There exists constants for such that: 5.2 Limit behavior of the EP update In this section, we will prove the following theorem. Limit behavior of the site-approximation Consider the hybrid distribution: . In the limit that , the hybrid mean and the natural parameters of the EP approximation ( and ) of converge. Defining and , the limits are: Note the key role of the parameter. It causes the mean of the cavity distribution to be offset from , but it can make the mean of the hybrid, , closer to . Indeed, if is such that , then we gain an order of magnitude in the limit behavior of and the errors in the limit behavior of both and are smaller. Let’s first sketch a global overview of how the proof works. Intuitively, what is going on is that we are going beyond the first order approximations of the mean and variance of : and computing the next order of their limit behavior. If we try to follow that path directly, however, we obtain bounds that are a bit ugly and not very tight. A better proof path is slightly clever and sophisticated and consists in finding “tricks” ways toof bounding and directly. This is where the Brascamp-Lieb inequality comes in play: it provides one-half of the bound. In practice, our proof can be decomposed into five steps: Upper-bound in a coarse way Upper-bound and lower-bound in a fine way Prove a coarse bound on the from the coarse bound on the variance Use the bound on to improve the bound on to its final state which provides us with the bound on Compute the limit behavior of from the coarse limit behavior of and Limit behavior of and of First, we will deal with the variance of the hybrid. We will use the Brascamp-Lieb result and a Cramer-Rao like bound to derive the final bounds on . The Brascamp-Lieb result will also give a coarse bound on which we will use in the other sections. Let’s start by upper-bounding the variance with the Brascamp-Lieb bound. This will also give us the coarse bound we need on . Consider the value of at . It gives us a universal lower-bound on (from assumption 16):
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.76/warc/CC-MAIN-20230322114226-20230322144226-00221.warc.gz
CC-MAIN-2023-14
55,120
233
https://www.dcforecasts.com/price-indexes/ontology-price/
code
The following widget lets you see the live price of Ontology along with other insights and price details. You can see the Ontology market capitalization, trading volume, daily, weekly and monthly changes as well as the total supply of the coin, its highest and lowest prices and other information. By default, we provide the Ontology price in USD but you can easily switch the base currency to Euro (EUR), British Pounds (GBP), Japanese yen (JPY) and Russian Roubles (RUB). The current Ontology price is displayed below. The Ontology Value can be found out from the top cryptocurrency exchanges such as Binance, Coinbase, Bitstamp, Bitfinex, HitBTC and Kraken. All you need to do is select the checkbox and compare the prices between exchanges on the charts. We offer different views such as Candlestick, OHLC, Line and other charts. You can use the buttons to switch between Ontology charts. The default information we present is provided for the last week, but users are able to choose information for one day, one week, one month or one year. You can also see information for three months of data or any custom period. There is a built-in feature of printing and downloading Bitcoin price charts – and an opportunity to download data in XLS and CSV formats. Ontology Historical Price Ontology has had its ups and downs so far and the ONT price has seen different movements. You can customize the period of time to see the price history for the required time. You can also see the date, price, volume and change. The date describes the day of the recorded price, the price shows the Ontology Value as of that date, the volume column shows the trading volume of the coin for any chosen day and the change indicates the percentage change in the price of the coin.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510903.85/warc/CC-MAIN-20231001141548-20231001171548-00759.warc.gz
CC-MAIN-2023-40
1,765
7
http://toursworld.info/travelvlog/week-in-oahu-hawaii-travel-vlog-part-1/
code
Use code KARISSADUNCAN16 for up to 16 FREE MEALS + 3 Surprise Gifts across 6 HelloFresh boxes plus free shipping at https://bit.ly/3Eon42G Should I move back to Oahu?? Stay tuned for Hawaii Part 2! Follow me on Instagram: https://www.instagram.com/iamkarissaduncan1/ YOU ARE LOVED. YOU ARE WORTHY. & YOU DESERVE ALL GOOD THINGS IN THIS WORLD. Please note that you never know what others are going through, so always be kind to one another. A smile and positivity goes a long way.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00501.warc.gz
CC-MAIN-2022-21
479
5
https://guides.libraries.psu.edu/open-access/green
code
Definition of Green OA from the UC Libraries' Pathways report: "Green open access is repository-based open access. Green OA models are agnostic about publisher open access behaviors, relying instead on institutions and authors to take steps to make otherwise toll-access works freely available in online repositories that may be (and often are) managed by institutions. In essence, successful green open access requires: the right to share a given scholarly output, a copy of it, the motivation to share it, and a location for sharing it (i.e., a repository)." Traditionally, the copyright to scholarly articles was transferred to the publisher, which prevented the author from placing their work in a repository ("self-archiving"). Today, many journals allow self-archiving by default, often with limitations on when, where, and what version of the article you can self-archive. When a journal's default agreement does not permit self-archiving, many authors negotiate to retain that right. Publishing agreements often distinguish between three different versions of an article when describing what self-archiving is acceptable: Academic social networks, such as Academia.edu and ResearchGate, differ from open access repositories. They are typically operated on a for-profit basis and do not have the same preservation commitments as repositories hosted by academic institutions. The following articles provide more information about these distinctions.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510326.82/warc/CC-MAIN-20230927203115-20230927233115-00010.warc.gz
CC-MAIN-2023-40
1,455
5
https://forum.solidworks.com/message/937394
code
It seems like you're making your part multibody to change the length, then merging back in to one body. I would just use a move face with translation after the revolve, which can then be changed or suppressed in different configurations. I hope this helps. I was hoping for a quick and dirty solution like after you make the move body feature, select it and a dimension will show on screen, but this doesn't appear to be the case with move body. What I have done is create a controlling sketch with a configured dimension that I would anchor the move body to. You wish to move it away a certain distance, instead move it to the endpoint of a line that equals your configured distance. Expand the Body-Move/Copy1 feature in the tree, right-click on the Distance mate, and choose "Configure Feature" from the drop-down. You'll get a simplified design table, with a row for each configuration, and a column that will allow you to set the suppression state for each configuration. Click on the drop-down beside the feature name... ...and click on the box for D1. That will add a column for the dimension value. Enter the desired value for each configuration and click Okay. This method is described under #2 at How do I set a configuration specific dimension or value? . There's more information there about this table that you might like to review.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578743307.87/warc/CC-MAIN-20190425233736-20190426015736-00145.warc.gz
CC-MAIN-2019-18
1,345
8
https://www.arxiv-vanity.com/papers/hep-ph/0405253/
code
micrOMEGAs: Version 1.3 G. Bélanger, F. Boudjema, A. Pukhov, A. Semenov 1. Laboratoire de Physique Théorique LAPTH 111URA 14-36 du CNRS, associée à l’Université de Savoie. Chemin de Bellevue, B.P. 110, F-74941 Annecy-le-Vieux, Cedex, France. 2. Skobeltsyn Institute of Nuclear Physics, Moscow State University Moscow 119992, Russia 3. Joint Institute for Nuclear Research (JINR) 141980, Dubna, Moscow Region, Russia We present the latest version of micrOMEGAs, a code that calculates the relic density of the lightest supersymmetric particle (LSP) in the minimal supersymmetric standard model (MSSM). All tree-level processes for the annihilation of the LSP are included as well as all possible coannihilation processes with neutralinos, charginos, sleptons, squarks and gluinos. The cross-sections extracted from CalcHEP are calculated exactly using loop-corrected masses and mixings as specified in the SUSY Les Houches Accord. Relativistic formulae for the thermal average are used and care is taken to handle poles and thresholds by adopting specific integration routines. The input parameters can be either the soft SUSY parameters in a general MSSM or the parameters of a SUGRA model specified at some high scale (GUT). In the latter case, a link with Suspect, SOFTSUSY, Spheno and Isajet allows to calculate the supersymmetric spectrum, Higgs masses, as well as mixing matrices. Higher-order corrections to Higgs couplings to quark pairs including QCD as well as some SUSY corrections () are implemented. Routines calculating , and are also included. In particular the routine includes an improved NLO for the SM and the charged Higgs while the SUSY large effects beyond leading-order are included. This new version also provides cross-sections for any process as well as partial decay widths for two-body final states in the MSSM allowing for easy simulation at colliders. May 28, 2022 We present micrOMEGAs1.3, a program which calculates the relic density of the lightest supersymmetric particle (LSP) in the minimal supersymmetric standard model (MSSM). The stable LSP, which occurs in supersymmetric models with R parity conservation, constitutes a good candidate for cold dark matter. Recent measurements from WMAP have in fact constrained the value for the relic density within 10%, Forthcoming experiments by the PLANCK satellite will pin-down this important parameter to within 2%. One therefore needs to evaluate the relic density with high accuracy. The relic density calculation is based on solving the equation characterizing the evolution of the number density of the LSP. For this, one needs to evaluate the thermally averaged cross-section for annihilation of the LSP, as well as, when necessary, coannihilation with other supersymmetric (SUSY) particles [3, 4, 5]. We use, as in micrOMEGAs1.1 , the method described in for the relativistic treatment of the thermally averaged cross-section, and the generalization of to the case of coannihilation. However we have improved our method for solving the density evolution equation, it is now solved numerically without using the freeze-out approximation. This improvement has not impaired the speed of the calculation. The other main improvement to micrOMEGAs1.3 is the use of loop corrected superparticle masses and mixing matrices. These masses and mixing matrices, as specified in the SUSY Les Houches Accord (SLHA), are then used to compute exactly all annihilation/coannihilation cross-sections. This can be done whether the input parameters are specified at the weak scale or at the GUT scale in the context of SUGRA models or the like. In the last case, loop corrections are obtained from one of the public codes which calculate the supersymmetric spectrum using renormalization group equations (RGE) [10, 11, 12, 13]. These corrections to masses are critical for a precise computation of the relic density in two specific regions: the coannihilation region and the region where annihilation through a Higgs or Z exchange occurs near resonance. Note that these regions of the supersymmetric parameter space are among the ones where one predicts sufficiently high annihilation rates for the neutralino LSP to meet the WMAP upper bound on the relic density. In the first case, the critical parameter is the NLSP-LSP mass difference, in the latter the mass difference . The Higgs masses are calculated either by one of the RGE codes or with FeynHiggsFast. When annihilation occurs near a Higgs resonance, higher-order corrections to the width also need to be taken into account. As in micrOMEGAs1.1, QCD corrections to Higgs partial widths are included, furthermore we have added the important SUSY corrections, the correction, that are relevant at large . These higher-order corrections also affect directly the Higgs- vertices and are taken into account in all the relevant annihilation cross-sections. Besides the relic density measurement, other direct or indirect precision measurements constrain the supersymmetric models. In our package we calculate the supersymmetric contribution to and to . We also include a new calculation of the supersymmetric contribution to and an improved calculation of the decay rate. The latter includes an improved NLO for the SM and the charged Higgs contribution as well as the SUSY large effects beyond leading-order, the correction. The , or routines can be replaced or used as a stand-alone code. Within micrOMEGAs1.3 all (co-)annihilation cross-sections are compiled by CalcHEP which is included in the package. CalcHEP is an automatic program to calculate tree-level cross-sections for any process in a given model, here the MSSM. We provide a code that performs the calculation of cross-sections and decay widths that can be called independently of the relic density calculation. The input parameters are the parameters of the SUSY Les Houches Accord. Another new feature is the possibility to call CalcHEP, directly from a micrOMEGAs1.3 session, and calculate interactively cross-sections for any process in the MSSM or in mSUGRA models. For this, all widths of supersymmetric particles are evaluated automatically at tree-level, including the available two-body decay modes. The relic density as well as other constraints are also calculated in the CalcHEP session. In summary, the new program micrOMEGAs1.3 Calculates complete tree-level matrix elements for all subprocesses. Includes all coannihilation channels, in particular channels with neutralinos, charginos, sleptons, squarks and gluinos. Calculates the relic density for any LSP, not necessarily the lightest neutralino. Deals with two sets of input parameters: parameters of the MSSM understood to be specified at the EWSB scale or parameters of the SUGRA model specified at the GUT scale. Both mSUGRA or non-universal SUGRA models are included. Includes an interface with the SUSY Les Houches Accord for supersymmetric model specifications and input parameters. This gives a lot of flexibility as any model for which the MSSM spectrum is calculated by an external code can be incorporated easily. Includes loop corrected sparticle masses and mixing matrices. Includes loop-corrected Higgs masses and widths. QCD corrections to the Higgs couplings to fermion pairs are included as well as, via an effective Lagrangian, the correction relevant at large . Provides exact numerical solution of the Boltzmann equation by Runge-Kutta. Outputs the relative contribution of each channel to Computes cross-sections for any process at the parton level. Calculates decay widths for all particles at tree-level including all decay modes. Calculates NLO corrections to . Calculates constraints on MSSM: , , . Supports both C and Fortran. Performs rapidly the relic density calculation, the limiting factor in the execution time of the program is the computation of the supersymmetric spectrum. New features in the list above are denoted by a star. In this paper we emphasize mainly the new features of our package, full details can be found in the original reference . In Section 2, we describe the main changes to our calculation of the relic density. We then give the parameters of the supersymmetric model used in our package. A description of the package follows in Section 4. Section 5 gives instructions for running the program as well as sample sessions. Finally in Section 6 we compare our results with those of DarkSUSY4.0 , the other public package that computes the relic density of supersymmetric dark matter. 2 Calculation of the relic density The most complete formulae for the calculation of the abundance were presented in [7, 8] and we will follow their approach rather closely. The evolution equation for the abundance, defined as the number density divided by the entropy density, writes where is an effective number of degree of freedom , is the Planck mass and the thermal equilibrium abundance. is the relativistic thermally averaged annihilation cross-section of superparticles summed over all channels, where is the number of degree of freedom, the total cross section for annihilation of a pair of supersymmetric particles with masses , into some Standard Model particles, and is the momentum (total energy) of the incoming particles in their center-of-mass frame. Integrating Eq. 2.1 from to leads to the present day abundance needed in the estimation of the relic density, where is the entropy density at present time and the normalized Hubble constant. The present-day energy density is then simply expressed as . Let us rewrite Eq. 2.1 in terms of First note that one will always have when . This is the case at since the equilibrium abundance [7, 8] and for a typical electroweak cross-section , , and LSP mass, , one has . Choosing a starting point for the solution of the numerical equation at small will rapidly return the solution . On the other hand when , decreases exponentially as . Then neglecting the dependence on X in both and we get where . In this approximation, does not depend on , whereas decreases exponentially. This can be used to find a starting point for the numerical solution of the differential equation (2.4). In order to find where one can solve darkOmega routine we use this equation to find , corresponding to and solve the differential equation (2.4) by the Runge-Kutta method starting from this point . We stop the Runge-Kutta run at point where Then we integrate Eq. 2.4 neglecting the term Note that the temperature corresponds to . Thus without loss of precision we can set for evaluating since . darkOmegaFO performs the calculation in the freeze-out approximation222This function was used in the original version of micrOMEGAs.. Here we choose as in Ref. and omit the Runge-Kutta step (). The precision of this approximation is about 2% although in some exotic cases the approximation works badly. As in micrOMEGAs1.1, we include in the thermally averaged cross-section , Eq. 2.2, only the contribution of processes for which the Boltzmann suppression factor, , is above some value where are the masses of the incoming superparticles. The recommended value is . In our program we provide two options to do the integrations, the fast one and the accurate one. The fast mode already gives a precision of about 1% which is good enough for all practical purposes. The accurate mode should be used only for some checks. In the accurate mode the program evaluates all integrals by means of an adaptative Simpson program. It automatically detects all singularities of the integrand and checks the precision. In the case of the fast mode the accuracy is not checked. We integrate the squared matrix elements over the scattering angle by means of a 5 points Gauss formula. For integration over , Eq. 2.2, we use a restricted set of points which depends whether we are in the vicinity of a s-channel Higgs/Z/W resonance or not. We increase the number of points if the Boltzmann factor corresponding to is larger than . 2.1 Decays of the Higgs scalars When the LSP is near a Higgs resonance, it annihilates very efficiently. The value of the neutralino annihilation cross-section depends on the total width if this width is larger than , the freeze-out temperature. This is usually the case for large Higgs masses of 1TeV especialy at large due to the enhancement in the channel. However the width of receives important QCD corrections. Typically for the heavy Higgses (TeV) the partial width into can vary easily by a factor of 2 from the tree-level prediction, due mostly to the running of the quark mass at high scale. To take these corrections into account we have redefined the vertices and using an effective mass that reproduces the radiatively corrected Higgs decays . The effective mass at the scale writes where , the scale of the reaction is set to , and are the quark masses and running strong coupling in the -scheme. We use NNLO expressions for the strong coupling constants and for the running quark masses [19, 21]. The relation between the and the pole quark masses are implemented at three-loops [20, 21]. These are relevant for the top quark, since we use the pole mass as input following the SUSY Les Houches Accord . For b-quark, although is the input parameter, it is still necessary to compute the pole mass used as an input parameter to some of the RGE codes. We set at scales where the effective mass exceeds the value of the pole mass. We also take into account the SUSY-QCD corrections to vertices that are important at large . Here we use the effective Lagrangian where is the effective b-quark mass described above, the electromagnetic coupling, is the ratio of the vev’s of the Higgs doublets and is the Higgs mixing angle. is a correction factor arising from loop contribution of SUSY particles. This factor is particularly important at large and also contributes to (all details are given in Appendix B). In the large case, when neutralino annihilation via s-channel Higgs exchange dominates, the inclusion of SUSY-QCD corrections can shift by about 15% the value for the relic density. There is an option to switch off this correction (see Section4.1). The total width of the Higgs includes only the two-body final states that occur at tree-level. In the case of the light Higgs, this underestimates the width since the partial width to off-shell W or final states can reach 10%. However an accurate value for this very narrow width has in general not a strong impact on the relic density. On the other hand a precise value for the heavy Higgs width is necessary. 2.2 Neutralino “width” We assume that the LSP is stable because of R-parity conservation, however it is necessary to introduce a width for this stable particle in order to avoid infinities in some processes. For example, in the coannihilation process like via t-channel exchange of an infinity is caused by the pole in the propagator, this is due to the fact that one can have a real decay . We assign a value of to the width of all supersymmetric particles . The default value for the variable is 0.01. 2.3 Loop corrections to the MSSM spectrum. In the mSUGRA model, but also in the more general MSSM, annihilation of the LSPs near a Higgs or Z resonance and/or coannihilation processes are often the dominant reactions in models where . For an accurate calculation of the relic density it is then very important to have the exact relations between particle masses. In particular, the direct annihilation of a pair of neutralinos () depends sensitively on the mass difference with the Higgs or Z, , when the annihilation occurs near the resonance. Furthermore coannihilation processes depend strongly on the NLSP-LSP mass difference. In this new version of micrOMEGAs1.3, we provide an option to calculate loop corrections to all sparticle masses 333Pole masses in the calculation of the relic density were first used in Ref. . Within the MSSM defined at the EWSB scale, loop corrections are implemented by a call to Suspect, within the SUGRA or other model defined at the GUT scale, the loop corrections are done by any of the four public codes (Suspect ,SOFTSUSY , Spheno ,Isajet ) for calculating the supersymmetric spectrum based on renormalization group equations. Because it is a mass difference rather than the absolute mass that has a large impact on the prediction of the relic density, even radiative corrections at the percent level, such as is often the case for neutralinos, need to be taken into account. Indeed large shifts in the prediction of the relic density between tree-level and loop-corrected masses can be found. Typically the prediction for the relic density can change by 20%, but in some scenarios corrections can reach 100% or even more. We not only use the loop-corrected sparticle masses but also the corresponding mixing matrix elements. In this way we take into account some of the loop corrections in the evaluation of the matrix elements for different processes. This however means, since it is only a partial implementation of loop corrections, that theoretical inconsistencies in the model could occur, in particular problems with unitarity violation in some processes. This would mainly show up in processes with production of gauge particles, however at much higher energies that are typically involved in the LSP annihilation processes. 3 The MSSM parameters. In our package, we compute various matrix elements and cross-sections for processes within the framework of the MSSM. The model file corresponding to the specific implementation of the MSSM was derived with LanHEP, a program that generates the complete set of particles and vertices once given a Lagrangian Names are attached to the parameters of the MSSM, including those of the SM, and their values can be set with an instruction. For example, the command assignVal("Mtp",180.) assigns the value GeV to the pole mass of the t-quark. The list of parameters of the Standard Model and their default values is presented in Table 1. All quarks and leptons of the first two generations are assumed massless. The default values for the electromagnetic coupling and the Weinberg angle correspond to the values in the scheme at the scale. |AlfSMZ||0.1172||strong coupling, for| |Ml||1.777||tau-lepton pole mass| |Mtp||175.0||t-quark pole mass| |MbMb||4.23||scale independent b-quark mass Mb(Mb)| The parameters of the MSSM are described in Table 2. We follow the conventions of the SUSY Les Houches Accord . The masses of the third generation fermions are ordered, for example corresponds to the lightest top-squark. In this list, the number of parameters exceeds the number of MSSM independent parameters. They correspond to physical parameters, masses and mixings. This extended set of parameters is however necessary when one wants to use effective masses and vertices that include loop corrections. Our computation of matrix elements for cross-sections is based on this set of parameters. Note that the trilinear muon coupling, , is added to the parameter list even though it does not contribute to matrix elements or to the spectrum since the muon is assumed to be massless. This parameter is however important for evaluating the muon anomalous magnetic moment. |alpha||Higgs angle||MSe||masses of left/right selectrons| |mu||Higgs parameter||MSm||left/right smuon masses| |Mh||Mass of light Higgs||MSli||i=1,2 masses of light/heavy| |MH3||Mass of CP-odd Higgs||MSu||masses of left/right u-squarks| |MHH||Mass of Heavy Higgs||MSs||masses of left/right s-squarks| |MHc||Mass of charged Higgs||MSti||i=1,2 masses of light/heavy t-squarks| |Al||trilinear coupling||MSd||masses of left/right d-squarks| |Am||trilinear coupling||MSc||masses of left/right c-squarks| |Ab||trilinear coupling||MSbi||i=1,2 masses of light/heavy b-squarks| |At||trilinear coupling||Zn||i,j=1,..,4; neutralino mixing matrix| |MNEi||i=1,2,3,4; neutralino masses||Zu||i=1,2;j=1,2; chargino U mixing matrix| |MCi||i=1,2 chargino masses||Zv||i=1,2;j=1,2; chargino V mixing matrix| |MSG||mass of gluino||Zl||i=1,2;j=1,2; mixing matrix| |MSne||-sneutrino mass||Zt||i=1,2;j=1,2; mixing matrix| |MSnm||-sneutrino mass||Zb||i=1,2;j=1,2; mixing matrix| The values of the SLHA parameters can either be set by an external program, here a call to one of the RGE codes that calculate the supersymmetric spectrum, or by specifying the MSSM parameters at the weak scale. In either case one needs to specify a set of independent parameters as described below. 3.1 Input parameters at the GUT scale Within the context of the SUGRA scenario for supersymmetry breaking the MSSM parameters can be evaluated at the weak scale starting from a set of scalar masses, gaugino masses, trilinear couplings defined at the GUT scale. The GUT scale input parameters are listed in Table 3. Only one parameter, , is defined at . We implicitly assume that the first two generations are identical. The parameters for the mass of the Higgs doublet can be entered with a negative sign, in this case they will be understood as We treat the mSUGRA model as a special case of the general SUGRA. Since simplifying relations are imposed on masses and couplings, in the mSUGRA model one has to specify only a small number of input parameters at the GUT scale: . These correspond to - common scalar mass at GUT scale; mhf= - common gaugino mass at GUT scale; a0= - trilinear soft breaking parameter at GUT scale; tb- or the ratio of vacuum expectation values at MZ; sgn- +/-1, sign of , the Higgsino mass term. Four different routines read the parameters of Table 2 and pass them to the corresponding packages that solves the RGE equations and calculate the MSSM masses and mixing matrices. The routines are described in section4.1. Note that some of the standard parameters of Table 1 also play a role in the low energy boundary conditions implemented in the RGE codes. They are passed to RGE routines implicitly. We assume that the second generation is identical to the first one and only parameters of the first generation are used. |tb||(at )||Ml1||Left-handed slepton mass for gen.| |At||trilinear coupling||Ml3||Left-handed slepton mass for gen.| |Ab||trilinear coupling||Mr1||Right-handed slepton mass for gen.| |Al||trilinear coupling||Mr3||Left-handed slepton mass for gen.| |MG1||U(1) Gaugino mass||Mq1||Left-handed squark mass for gen.| |MG2||SU(2) Gaugino mass||Mq3||Left-handed squark mass for gen.| |MG3||SU(3) Gaugino mass||Mu1||Right-handed u-squark mass for gen.| |sgn||sign of at the EWSB scale||Mu3||Right-handed u-squark mass for gen.| |MHu||Mass of first Higgs doublet||Md1||Right-handed d-squark mass for gen.| |MHd||Mass of second Higgs doublet||Md3||Right-handed d-squark mass for gen.| 3.2 Input parameters at the weak scale The parameters of the SUSY Les Houches Accord can also be calculated starting from the set of independent MSSM parameters at the EWSB scale444 This set of parameters was used in the previous version of micrOMEGAs. listed in Table 3. This can be done either at tree-level or with loop corrections (see Section 4.1). The names of the independent parameters of the MSSM are identical to the GUT scale parameters safe for MHu,MHd which are conveniently replaced by and (MH3). Furthermore at the EWSB scale one must define the sfermion masses for all three generations. Here MH3 and MG3 are the pole masses of the CP-odd Higgs and of the gluino. All other parameters are treated as running ones. When evaluating loop corrections to pole masses starting from the independent set of parameters, it is assumed that the parameters are specified in the scheme at the EWSB scale, . |tb||MG3||SU(3) Gaugino mass (gluino mass)| |mu||Higgs parameter||Mli||Left-handed slepton mass for generation| |At||trilinear coupling||Mri||Right-handed selectron mass for generation| |Ab||trilinear coupling||Mqi||Left-handed squark mass for generation| |Al||trilinear coupling||Mui||Right-handed u-squark mass for generation| |Am||trilinear coupling||Mdi||Right-handed d-squark mass for generation| |MG1||U(1) Gaugino mass||MH3||Mass of Pseudoscalar Higgs| |MG2||SU(2) Gaugino mass| Two options are available to specify the weak scale MSSM parameters, either from a file using the function ewsbInitFile or directly as argument of the function Either option will evaluate the supersymmetric spectrum at tree-level or to one-loop according to the value of the parameter LCOn, see section 4.1. After evaluation of the spectrum in the context of the SUGRA or MSSM models, the function calcDep chooses the lightest supersymmetric particles and calculates the running masses of quarks at the LSP scale as well as various widths. 4 Functions of micrOMEGAs The routines presented below belong to the micromegas.a library. They are available both in the C and Fortran versions. If for some reason a Fortran call differs from the C one, we present the Fortran version in brackets ”[ ]”. The types of the functions and their arguments are specified in Examples of implementation are presented in Section 5.4. Note that after assignments of the MSSM parameters the user has to call the initialization procedure calcDep (Sec. 4.1). Other routines of the package can only be used after making this call. 4.1 Variable assignment and spectrum calculation changes values of the parameters. name is one of the names presented in Tables 1,2, val is the value to be assigned. The function returns 0 when it successfully recognizes the parameter name and 1 otherwise. the same routine as assignVal, instead of returning an error code it writes a warning on the screen. calculates the values of the MSSM parameters in the SUGRA scenario using the Suspect package. Returns when the spectrum is computed succesfully, in case of non-fatal problems (see the manual for the meaning of non-fatal errors ), and if no solution to RGE can be found for a given set of boundary conditions. This routine assigns values for the parameters in Table 2. The result depends on the input values of the SM parameters, in particular on the quark masses, ( MbMb) and on the strong coupling constant ( parameters play a role in the low energy boundary conditions and are passed implicitly. same as above for SOFTSUSY. same as above for Spheno. same as above for Isajet. This function depends only on , other SM parameters, and in particular and , are fixed internally. Isajet does not calculate the trilinear muon coupling, we use the approximate relation for mSUGRA models, . Note that only the Suspect code is included in our package. Other codes should be installed independently by the user and linked to micrOMEGAs as explained in Section 5.1. calculates the supersymmetric spectrum at tree-level or one-loop from the set of independent MSSM parameters at the EWSB scale as specified by the parameter LCOn. The Higgs sector parameters, masses and mixing angle , are calculated with FeynHiggsFast . LCOn=0 - tree level formulae for super particles LCOn=1 - Suspect is used to evaluate loop corrections to masses of super particles. reads the input file filename which specifies the set of independent MSSM parameters at the EWSB scale and calculates the supersymmetric spectrum at tree-level or one-loop as set by the parameter LCOn (same as above). The function returns: 0 - when the input has been read correctly; -1 - if the file does not exist or can not be opened for reading; -2 - if some parameter from Table 4 is missing as displayed on the sceen; -3 - if the spectrum cannot be calculated; n - when the line number n has been written in the wrong format. For example, the correct format of a line is reads the input file in the SUSY Les Houches Accord format . If LE=1 the SM parameters of Table 1 as well as are also read from a SLHA output file. initializes internal parameters for subsequent calculations. In particular, the running masses of quarks, the strong coupling constant as well as the widths of gauge bosons, Higgses and superparticles. Running parameters are evaluated at the LSP scale. This routine also sorts the superparticles and selects the LSP. The parameter dMbOn switches off SUSY-QCD corrections, , see Section 2.1. 4.2 Display of parameters. findVal(name,&val) [findVal(name,val) ] assigns to the variable val the value of the parameter name. It returns zero if such variable indeed exists and 1 otherwise. This function can be applied to any of the parameters in Table 1,2 as well as to particle masses and widths specified in Tables 5,6,7. returns the value corresponding to the variable name is not defined findValW writes a warning on the screen. printVar(file,N) [ printVar(N) ] prints the first N records of the full list of model parameters. The first 7 parameters correspond to Table 1, the following 75 parameters correspond to the list in Table 2. To see the parameters on the screen, substitute file=stdout. In the Fortran version, only display on the sceen is possible. printMasses(file,sort) [ printMasses(sort) ] prints into the file the masses of the supersymmetric particles as well as all Higgs masses and widths. The Fortran version writes down on the screen. If sort, the masses are sorted in increasing order. lsp() [ lsp(name) ] returns the name of the LSP. The relic density can be calculated with any particle being the LSP even though only the neutralino and the sneutrino can be dark matter candidates. If the user wants to impose a specific LSP, the nature of the LSP must be checked after calling calcDep. lspmass_() [ lspmass() ] returns the mass of the lightest supersymmetric particle in . 4.3 Calculation of relic density. darkOmega(&Xf,fast,Beps) [ darkOmega(Xf,Fast,Beps) ] This is the basic function of the package which returns the relic density (Eq. 2.3). The procedure for solving the evolution equation using Runge-Kutta was described in Section 2. The value of the freeze-out parameter Xf is returned by the function and equals , (see the definition in Eq. 2.7, 2.8). The parameter Beps defines the criteria for including a given channel into the sum for the calculation of the thermally averaged cross-section, Eq. 2.10; is the recommended value. If fast=0, we use an integration routine that increases the number of points until an accuracy of is reached. If fast=1 the accuracy is not checked, but a set of points is chosen according to the behaviour of the integrand: poles, thresholds, Boltzman suppression at large energy. The accuracy of this mode is about 1%. Finally, fast=2 corresponds to the calculation of relic density using the widely-used approximation based on the expansion in terms of velocity The recommended mode is fast=1. If some problem is encountered, darkOmega returns . darkOmegaFO(&Xf,fast,Beps) [ darkOmegaFO(Xf,fast,Beps) ] calculates the relic density as the function darkOmega described above, but using the freeze-out approximation. printChannels(Xf,cut,Beps,prcnt,f) [ printChannels(Xf,cut,Beps,prcnt) ] prints the relative contribution to for all subprocesses for which this contribution exceeds the value chosen for cut. If prcnt=1 the contribution is given in percent, otherwise the absolute value is displayed. It is assumed that the Xf parameter was first evaluated by darkOmega. In the C version, the output is directed to the file f, the Fortran version writes on the screen. Actually this routine evaluates the partial contributions to the integral of Eq. 2.9 without the term and returns the corresponding value for . 4.4 Routines for constraints. deltarho_() [ delrho()] calculates, by a call to a Suspect routine, the parameter which describes the MSSM corrections to electroweak observables. It contains stop/sbottom contributions, as well as the two-loop QCD corrections due to gluon exchange and the correction due to gluino exchange in the heavy gluino limit . Precise measurements of SM electroweak observables allow to set the limit . bsgnlo_() [ bsgnlo() ] returns the value of the branching ratio for . For we have improved on the results of by including some very recent new contributions beyond the leading order that are especially important for high . Full details can be found in Appendix B. bsmumu_() [ bsmumu() ] returns the MSSM contribution to . Our calculation is based on and agrees with . It includes the loop contributions due to chargino, sneutrino, stop and Higgs exchange. The effect relevant for high is taken into account. The current bound from CDF experiment at Fermilab is B.R.() and the expected bound from RunIIa should reach B.R.() . gmuon_() [ gmuon() ] returns the value of the supersymmetric contribution to the anomalous magnetic moment of the muon . The result depends only on the parameters of the chargino/neutralino sector as well as on the smuon parameters, in particular the trilinear coupling ( Am). Our formulas agree with . The latest experimental data on the measurement using , brings the average to . The quantity includes both electroweak and hadronic contributions and is still subject to large theoretical errors, the allowed range for then has also large errors. We estimate the range to be . masslimits_() [ masslimits() ] returns a positive value and prints a WARNING when the choice of parameters conflicts with a direct accelerator limits on sparticle masses. The constraint on the light Higgs mass is not implemented and must be added by the user. Among the routines that calculate constraints, masslimits issues a warning if the chosen model gives a value outside the experimentally allowed range. All other constraints must be checked by the user. 4.5 QCD auxiliary routines. calculates the running at the scale Q in the scheme. The calculation is done using the NNLO formula in . Thresholds for b-quark and t-quark are included in at the scales and respectively. Implicit input parameters are MbMb defined in Table 1. calculates top and bottom running masses evaluated at NNLO. calculates effective t- and b-quark masses as in Eq. 2.11. calculates the SUSY corrections to (Appendix B). 4.6 Partial widths and cross sections decay2(pName,k, out1, out2) calculates the decay widths (in GeV) for any processes. The input parameters are pName, the name of the decaying particle and k, the channel number. out1 and out2 are the names of outgoing particles for channel k. If k exceeds the total number of channels, then out1 and out2 are filled as empty strings. newProcess(procName, libName) [ newProcess(procName, libName, address) ] prepares and compiles the codes for any reaction in the MSSM. The result of the compilation is stored in the library If this library already exists, it is not recompiled and the correspondence between the contents of the library and the procName parameter is not checked. libName is also attached to the names of routines in the libName.so library. Therefore libName should not contain symbols such as which are not legal as identifiers. Library names should not start with omglib, these are reserved for the libraries used to evaluate . The process should be specified in CalcHEP notations, for example without any blank space. One can find all symbols for MSSM particles in Tables 5,6,7. Multi-process generation is also possible using the command x means arbitrary final states. newProcess routine returns the address of the static structure with contains, for further use, the code for the processes. If the process can not be compiled, then a NULL address is returned (address=0 in Fortran). newProcess can also return the address of a library that was already generated, newProcess("","omglib_o1_o1") returns the address of the library for neutralino annihilation. [ infor22(address,nsub,n1,n2,n3,n4,m1,m2,m3,m4) ] allows to check the contents of the library produced by newProcess. Here address is the returned value of call and nsub the subprocess number. The parameters returned correspond to the names of particles for a given subprocess (n1, n2, n3, n4) as well as their masses (m1, m2, m3, m4). The function returns 2 if the nsub parameters exceed the limits and 0 otherwise. cs22(address, nsub, P, c1, c2 , &err) evaluates the cross section for a given process with center of mass momentum (GeV). The differential cross section is integrated from and is the angle between in the center-of-mass frame. If nsub exceeds the maximum value for the number of subprocesses then err contains a non zero error code. |Light Higgs||h||Mh||wh||CP-odd Higgs||H3||MH3||wH3| |Heavy higgs||H||MHH||wHh||Charged Higgs||H+,H-||MHc||wHc| |neutralino 1||~o1||MNE1||wNE1||u-squark L||~uL||~UL||MSuL||wSuL| |neutralino 2||~o2||MNE2||wNE2||u-squark R||~uR||~UR||MSuR||wSuR| |neutralino 3||~o3||MNE3||wNE3||c-squark L||~cL||~CL||MScL||wScL| |neutralino 4||~o4||MNE4||wNE4||c-squark R||~cR||~CR||MScR||wScR| |selectron L||~eL||~EL||MSeL||wSeL||t-squark 2||~t2||~T2||MSt2||wSt2| |selectron R||~eR||~ER||MSeR||wSeR||d-squark L||~dL||~DL||MSdL||wSdL| |smuon L||~mL||~ML||MSmL||wSmL||d-squark R||~dR||~DR||MSdR||wSdR| |smuon R||~mR||~MR||MSmR||wSmR||s-squark L||~sL||~SL||MSsL||wSsL| |stau 1||~l1||~L1||MSl1||wSl1||s-squark R||~sR||~SR||MSsR||wSsR| |stau 2||~l2||~L2||MSl2||wSl2||b-squark 1||~b1||~B1||MSb1||wSb1| 5 Work with the micrOMEGAs package. 5.1 Installation and link with RGE packages. micrOMEGAs can be obtained at The name of the file downlaoded should be micromegas_1.3.0.tar.gz. After unpacking the file, the root directory of the package, micromegas_1.3.0, will be created. This directory contains the micro_make file, some sample main programs, a directory for the source code, a directory for CalcHEP interactive sessions and a directory containing data files. To compile, type either This command is a Unix script, which detects the operating system and its version, sets the corresponding compiler options, and compiles the code. Being launched without arguments, only auxiliary libraries needed for relic density evaluation. Otherwise, the first argument is treated as a C or Fortran main program which should be compiled and linked with these libraries. The executable file created has the same name as the main program without the It is interesting to investigate the relic density in the framework of some scenario of supersymmetry breaking. We rely on the public codes that evaluate the supersymmetric spectrum in the context of models defined at the GUT scale such as the mSUGRA model. One of these packages, Suspect , is included into the micrOMEGAs package. We also support an interface with SOFTSUSY , Spheno and Isajet . To use Isajet, the corresponding library should be attached to the code. It can be done via the variable EXTLIB to be defined micro_make file. For example, to use Isajet located in ~/isajet769 directory the definition should be mathlib from CERNLIB is not included in libisajet.a it should be specified in EXTLIB, for example EXTLIB="$HOME/isajet769/libisajet.a -L/cern/pro/lib -lmathlib" The interface with SOFTSUSY and Spheno is realized in the framework of the SUSY Les Houches accord by direct execution of the corresponding programs. In both cases, the user has to define in the micro_make file, the variables SOFTSUSY or SPHENO which identifies the directory where executable file is located. For example, To install the package, one needs initially about 20MB of disk space. As the program generates libraries for annihilation processes only at the time they are required, the total disk space necessary can double after running the program for different models as described in the next section. 5.2 Dynamic generation of matrix elements and their loading. In order to take into account all possible processes of annihilation of superparticles into SM particles, we need matrix elements for about 2800 different subprocesses. However, for a given set of parameters, usually only a few processes contribute, other subprocesses are suppressed by the Boltzmann factor. The micrOMEGAs package just after compilation does not contain the code for matrix elements. They are generated and linked in runtime when needed. To generate the matrix elements we use the CalcHEP program in batch mode . The compiled matrix elements are stored as shared libraries in the subdirectory The name of the library created corresponds to the names of initial superparticles. Say, the library containing annihilation processes is On the first few calls, micrOMEGAs works slowly because it compiles matrix elements. After being compiled once, the code for matrix elements is stored on the disk and is accessible for all subsequent calls. Each process is generated and compiled only once. In case several jobs are submitted simultaneously, a problem occurs when CalcHEP receives a new request to generate a matrix element when it has not completed the previous one. We delay the operation of the second program. The warning that CalcHEP is busy signals the presence of a LOCK file in the directory If for some reason this file is not removed after the CalcHEP session, the user should remove it. The executable file generated by micro_make can be moved and executed in other directories. However it will always use and update the matrix elements 5.3 Linking with other codes and including micrOMEGAs into other packages. One can easily add other libraries to the micrOMEGAs package similarly to the implementation of Isajet described in Section 5.1. One needs to pass the library name to the linker via the EXTLIB variable defined in by specifying the complete path to the library. One can include the micrOMEGAs package into other C, C++, or Fortran projects. The function prototypes for C and C++ projects are stored in the sources/micromegas.h file. All the routines of our package as well as Suspect and FeynHiggsFast routines are stored in which in turn needs the functions of to calculate the widths. The user must pass to the linker the library that supports dynamic loading. The name of this library depends on the Unix platform. One can find this name in the file, it is assigned to the LDDL environment variable. To attach micrOMEGAs to a C or C++ project, the user should make sure that the library of Fortran functions are also passed to the linker. In the micro_make file this library is described by the 5.4 Running micrOMEGAs1.3: examples. micromegas_1.3.0 contains several examples of main programs. The files sugomg_f.f are main programs for the evaluation of the relic density in the generates the executable sugomg which needs 5 parameters ./sugomg <m0> <mhf> <a0> <tb> <sgn> sugomg executable also understands three additional input parameters as , , . The output contains the SUSY and Higgs mass spectrum, the value of the relic density, the relative contributions of different processes to as well as the constraints mentioned in Section 4.4. The list of necessary parameters are written on the screen when sugomg is called without specifying parameters. compiles the corresponding Fortran code. In this case the input parameters are requested after launching the program: > ./sugomg_f Enter m0 mhf a0 tb sgn > By default these programs call Suspect for solving the RGE equations. One can easily change the RGE code by replacing the suspectSUGRA call by the appropriate one s_cycle.c performs the calculation over mSUGRA test points . Results for these points for all RGE programs mentioned in our paper are presented in the file evaluated the relic density in the case of the unconstrained MSSM. The input parameters are read from a text file written in the format of the ewsbInitFile routine. In the C-version the file should be passed as a parameter, for example If several sets of parameters are passed to the program, the calculation will be done in a cycle. The Fortran version also works in a cycle, waiting for a file name as input and finishes after an empty line input. data contains 22 "data*" test input files for this routine. These parameter sets were chosen to check the program in special difficult cases where either strong co-annihilation and/or Higgs pole contribute significantly in relic density evaluation. Results of relic density calculation for all these 22 test points using the option when all masses are evaluated at tree-level are stored in file 5.5 CalcHEP interactive session. The CalcHEP program used for matrix element generation is included in the package. The user can calculate interactively various cross sections both in the general MSSM and in SUGRA models. To realize this option the user has to move to the calchep subdirectory and launch The implementation of the MSSM and SUGRA models in CalcHEP is identical to the one in micrOMEGAs described in previous sections. There are two auxiliary parameters, which switch ON/OFF loop corrections to the MSSM particle spectrum and SUSY-QCD correction to decays respectively. If dMbOn>0 the corresponding correction is taken into account. The list of parameters contains also the scale parameter Q which should be set depending of the scale of the process under consideration. This parameter contributes to the running of and to the running masses of b quarks. Here we use the standard formulae without including the higher order QCD corrections 555These corrections can be simulated by decreasing of presented in Section 2.1. For the SUGRA model, all four RGE packages presented in micrOMEGAs can be used, Suspect is defined by default. External RGE packages are available for CalcHEP if they were already properly installed in the micrOMEGAs package as described in To include another RGE package one has to edit the model in CalcHEP (in the Edit model menu). suspectSUGRA call should be commented in the Constraints menu while the line corresponding to the call for another routine should be uncommented. The symbol for comment is %. In the Edit model menu one can also defined the non-universal SUGRA model. By default, mSUGRA boundary conditions are implemented. To modify this, first comment the lines in the Constraints menu which express the GUT scale parameters in Table 3 in terms of the mSUGRA parameters. The corresponding non-universal parameters should then be introduced as new variables in the Variables menu. In this realization of MSSM/SUGRA all widths of super-partners are evaluated automatically at tree-level including all decay modes generated in the model. The relic density and other constrains mentioned in section 4.4 are included in the list of Constrains and automatically attached to CalcHEP numerical sessions. In CalcHEP numerical sessions for 2->2 processes we provide an option to construct a plot for the dependence on the incoming momentum. This option is found under the Simpson menu 5.6 Sample output file Running micrOMEGAs1.3 with the default values of the standard parameters and choosing the Suspect RGE package with the mSUGRA input parameters sugomg 107 600 0 5 1 will produce the following output: Higgs masses and widths h : Mh = 116.0 (wh =2.5E-03) H : MHH = 899.2 (wHh =1.9E+00) H3 : MH3 = 898.5 (wH3 =2.2E+00) H+ : MHc = 902.0 (wHc =2.3E+00) Masses of SuperParticles: ~o1 : MNE1 = 249.1 || ~l1 : MSl1 = 254.2 || ~eR : MSeR = 256.0 ~mR : MSmR = 256.0 || ~nl : MSnl = 413.1 || ~ne : MSne = 413.4 ~nm : MSnm = 413.4 || ~eL : MSeL = 420.2 || ~mL : MSmL = 420.2 ~l2 : MSl2 = 420.4 || ~1+ : MC1 = 468.3 || ~o2 : MNE2 = 468.5 ~o3 : MNE3 = 780.0 || ~2+ : MC2 = 793.2 || ~o4 : MNE4 = 794.3 ~t1 : MSt1 = 946.7 || ~b1 : MSb1 = 1153.1 || ~b2 : MSb2 = 1187.8 ~dR : MSdR = 1188.4 || ~sR : MSsR = 1188.4 || ~t2 : MSt2 = 1190.6 ~uR : MSuR = 1194.8 || ~cR : MScR = 1194.8 || ~uL : MSuL = 1248.2 ~cL : MScL = 1248.2 || ~dL : MSdL = 1250.5 || ~sL : MSsL = 1250.5 ~g : MSG = 1358.1 || Xf=2.67e+01 Omega=8.87e-02 Channels which contribute to 1/(omega) more than 1%. Relative contrubutions in % are displyed 1% ~o1 ~o1 -> l L 3% ~o1 ~l1 -> Z l 12% ~o1 ~l1 -> A l 2% ~o1 ~eR -> Z e 8% ~o1 ~eR -> A e 2% ~o1 ~mR -> Z m 8% ~o1 ~mR -> A m 11% ~l1 ~l1 -> l l 2% ~l1 ~L1 -> A Z 3% ~l1 ~L1 -> A A 8% ~eR ~l1 -> e l 6% ~eR ~eR -> e e 1% ~eR ~ER -> A Z 2% ~eR ~ER -> A A 6% ~eR ~mR -> e m 8% ~mR ~l1 -> m l 6% ~mR ~mR -> m m 1% ~mR ~MR -> A Z 2% ~mR ~MR -> A A deltartho=9.11E-06 gmuon=3.12E-10 bsgnlo=3.85E-04 bsmumu=3.13E-09 MassLimits OK Under the same conditions and for the same set of parameters, running the cross-section and branching ratios routines will produce the following output: Example of some cross sections and widths calculation for mSUGRA point m0=107.0,mhf=600.0,a0=0.0,tb=5.0 Z partial widths b B - 3.684E-01 GeV d D - 3.703E-01 GeV u U - 2.873E-01 GeV c C - 2.873E-01 GeV s S - 3.703E-01 GeV l L - 8.378E-02 GeV nl Nl - 1.670E-01 GeV nm Nm - 1.670E-01 GeV ne Ne - 1.670E-01 GeV m M - 8.397E-02 GeV e E - 8.397E-02 GeV Total 2.436E+00 GeV h partial widths b B - 2.460E-03 GeV l L - 2.552E-04 GeV Total 2.716E-03 GeV Cross sections at Pcm=500.0 GeV e,E->~1+,~1- e,E->~1+(468),~1-(468) is 7.135E-03 pb e,E->~o1,~o2 e,E->~o1(249),~o2(468) is 1.130E-02 pb We have compared the results obtained with micrOMEGAs1.3 and those obtained with DarkSUSY4.0 for 10 benchmarks mSUGRA points. For this check, we have used Isajet7.69, GeV, GeV. The latter is only relevant for the calculation of the Higgs widths. As seen in Table 8, the two programs agree at the 3% level except at large . This discrepancy is due to a difference in the width of the pseudoscalar. We recover good agreement with DarkSUSY (below 3%) if we substitute their value for the pseudoscalar width. micrOMEGAs1.3 solves with an accuracy at the percent level the evolution equation for the density of supersymmetric particles and calculates the relic density of dark matter. All possible channels for annihilation and coannihilations are included and all matrix elements are calculated exactly in an improved tree-level approximation that uses pole masses and loop-corrected mixing matrices for supersymmetric particles. Loop corrections to the masses of Higgs particles and to the partial widths of the Higgs (QCD and SUSY) are implemented. These higher-order corrections are essential since the annihilation cross-section can be very sensitive to the mass of the particles that contribute to the various annihilation processes, in particular near a resonance or in regions of parameter space where coannihilations occur. Furthermore, both these processes are often the dominant ones in physically interesting supersymmetric models, that is in models where the relic density is below the WMAP upper limit. The relic density can be calculated starting from a set of MSSM parameters defined at the weak scale or at the GUT scale. We provide an interface to the four major codes that calculate the supersymmetric spectrum using renormalization group equations. Within the context of the mSUGRA model, there are still large uncertainties in the computation of the supersymmetric spectrum , this of course will have a strong impact on the prediction for the relic density . An accurate prediction of the relic density within SUGRA models then presupposes a precise knowledge of the supersymmetric spectrum. New features of the package also include the computation of cross-sections and decay widths for any process in the MSSM with two-body final states as well as an improved NLO calculation of the branching ratio and a new routine for the decay rate. We thank A. Cottrant for providing the code for the . We have also benefitted from discussions with B. Allanach, A. Belyaev, A. Djouadi, J. L. Kneur and W. Porod on the RGE codes. We would like to thank M. Gomez for testing parts of our code. We also thank P. Gambino for discussion and for some clarification regarding , in particular for confirming our implementation of the large effects in SUSY. This work was supported in part by the PICS-397, Calcul en physique des particules,by GDRI-ACPP of CNRS and by grants from the Russian Federal Agency for Science, NS-1685.2003.2 and RFBR 04-02-17448. Appendix A List of functions a.1 micrOMEGAs functions in C. int assignVal(char * name, double val) void assignValW(char * name, double val) int readLesH(char *fname) int ewsbInitFile(char * fname,int LC)} int ewsbMSSM(tb,MG1,MG2,MG3,Am,Al,At,Ab,MH3,mu,Ml1,Ml2,Ml3,Mr1,Mr2,Mr3,Mq1,Mq2,Mq3, Mu1,Mu2,Mu3,Md1,Md2,Md3,LC); int LC; all other parameters are ’double’ int xxxxxSUGRA(tb,MG1,MG2,MG3,Al,At,Ab,sgn,MHu,MHd,Ml1,Ml3,Mr1,Mr3,Mq1,Mq3, Mu1,Mu3,Md1,Md3) All parameters are ‘double’. int calcDep(int dMbOn) int findVal(char * name, double * val) double findValW(char * name) void printVar(FILE *f, int N) void printMasses(FILE * f, int sort) char * lsp(void) double lspmass_() double darkOmega(double *Xf,int fast, double Beps) double darkOmegaFO(double *Xf,int fast, double Beps) double printChannels(double Xf, double cut, double Beps, int prcnt, FILE *f ) double deltarho_(void) double bsgnlo_(void) double bsmumu_(void) double gmuon_(void) int masslimits_(void) double MbRun(double Q) double MtRun(double Q) double MbEff(double Q) double MtEff(double Q) double deltaMb(void) double decay2(char*pIn, int k, char*pOut1, char*pOut2) void* newProcess(char* procName, char*libName) int infor22(void*address,int nsub, char*pIn1,char*pIn2,char*pOut1,char*pOut2, double*m1,double*m2,double*m3,double*m4) double cs22(void*address, int nsub, double Pcm, double c1, double c2, int*err) double annihilation(double v, int k, char * pOut1, char pOut2) micrOMEGAsfunctions in Fortran. INTEGER FUNCTION assignVal(name,val) SUBROUTINE assignValW(name,val) INTEGER FUNCTION readLesH(fname) INTEGER FUNCTION ewsbInitFile(fname,LC) INTEGER FUNCTION ewsbMSSM(tb,MG1,MG2,MG3,Am,Al,At,Ab,MH3,mu,Ml1,Ml2,Ml3, Mr1,Mr2,Mr3,Mq1,Mq2,Mq3, Mu1,Mu2,Mu3,Md1,Md2,Md3,LC) INTEGER FUNCTION xxxxSUGRA(tb,MG1,MG2,MG3,Al,At,Ab,sgn,MHu,MHd, Ml1,Ml3,Mr1,Mr3,Mq1,Mq3,Mu1,Mu3,Md1,Md3) All parameters are ‘double’. INTEGER FUNCTION calcDep(dMbOn) INTEGER FUNCTION findVal(name, val) REAL*8 FUNCTION findValW(name) SUBROUTINE printVar(n) SUBROUTINE printMasses(sort) SUBROUTINE LSP(name) REAL*8 FUNCTION lspMass() REAL*8 FUNCTION darkOmega(Xf,fast,Beps) REAL*8 FUNCTION darkOmegaFO(Xf,fast,Beps) REAL*8 FUNCTION printChannels(Xf,cut,Beps,prcnt) REAL*8 FUNCTION deltarho() REAL*8 FUNCTION bsgnlo() REAL*8 FUNCTION bsmumu() REAL*8 FUNCTION gmuon() INTEGER FUNCTION MassLimits() REAL*8 FUNCTION MbRun(Q) REAL*8 FUNCTION MtRun(Q) REAL*8 FUNCTION MbEff(Q) REAL*8 FUNCTION MtEff(Q) REAL*8 FUNCTION deltaMb() REAL*8 FUNCTION decay2(pIn, k, pOut1,pOut2) SUBROUTINE newProcess(procName,libName,address) INTEGER FUNCTION infor22(address,nsub, pIn1,pIn2,pOut1,pOut2,m1,m2,m3,m4) REAL*8 FUNCTION cs22(address, nsub, Pcm, c1, c2 , ERR) REAL*8 FUNCTION annihilation(v,k, pOut,pOut2) The types of the parameters are: CHARACTER pIn*(*),pIn1*(*),pIn2*(*),pOut1*(*),pOut2*(*), > name*(*),fname*(*),procName,*(*),libName(*,*) REAL*8 val,Xf,Beps,cut,Pcm,c1,c2,v,Q,m1,m2,m3,m4 REAL*8 tb,MG1,MG2,MG3,Am,Al,At,Ab,MH3,mu,Ml1,Ml2,Ml3, > Mr1,Mr2,Mr3,Mq1,Mq2,Mq3, Mu1,Mu2,Mu3,Md1,Md2,Md3 INTEGER n,k,sort,prcnt,ERR,LC,dmbOn,fast, address Appendix B Implementation of in micrOMEGAs The calculation for in the MSSM is quite involved and requires that one goes beyond one-loop. Most of what is described below, as implemented in micrOMEGAs, is in fact just a, unified, compendium of different contributions that have appeared in the literature. There is no claim of originality, most expressions are taken verbatim. However care has been taken in carefully checking all formulae that have appeared in the literature. This has helped, for example, identify a few misprints and typos and allowed to generalise some results. By giving the detail of the implementation, it is possible to easily modify this routine of the micrOMEGAs code in order to include future new contributions both to the SM and the MSSM. Note that we redefine in this routine many parameters used in micrOMEGAs1.3, for example the running quark masses, this routine can then be used as a stand-alone routine. b.1 General set-up: From to , QED corrections Our implementation of the Standard Model contribution follows the work of Kagan and Neubert very closely. We however include the effect of a running quark mass heuristically so that our results take into account the latest calculations of Gambino and Misiak who advocate the use of the charm mass, . The (relevant) operator basis is The renormalisation scale in (Eq. B.14) is of order and is usually let to vary in the range . The default value in the code is . Varying is one measure of the theoretical error. The branching fraction writes By default we take The kinematical function, , is defined as with defined in terms of the pole masses, giving a value in the range,. For the radiative photon we take . The factor involves the photon energy cut-off parameter that shows up at the NLO. In micrOMEGAs this value is set to as is generally assumed in order to describe the “total” (fake) branching ratio. With the formulae given below the code can be modified in a very straightforward way to take into account the full dependence. is decomposed in terms of the Wilson coefficients with leading (LO) and next-to-leading (NLO) contributions as The leading-order coefficients at the low scale of order are given by where , and , and are known numerical coefficients . For the running of between the scale and we use the SM running with 5 flavours which, to a very good precision, can be implemented as: The value of is read in by the main code micrOMEGAs. For the numerical values that we quote in this note, we take the default . The next-to-leading Wilson coefficient at , is implemented according to , The QED coefficients and are The result for is
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00846.warc.gz
CC-MAIN-2022-49
57,456
487
http://nodestone.io/angular-2-links
code
Angular 2 is a release candidate: rc.1 as I write this. The official documents still have some missing pieces. Here are a collection of useful links and my own usage comments to help fill in the missing bits: Official TypeScript docs - very good - starts with basic types selected Unit Testing Recipes - useful recipes for testing Components, Directives, HTTP Mocking etc - though think this conflicts with the latest I've found on async injecting into unit tests. I might be wrong. Testing Angular 2 Components with Unit Tests and the TestComponentBuilder (RC1+) - -but see this repo for tests actually updated for RC1: https://github.com/krimple/angular2-unittest-samples-rc --- note that TestComponentBuilder has been moved from Haven't tried this yet -- not sure what really works. It does seem that Hammer.js must be loaded before angular2. ..I'll add more as I revisit links.. First week or two with Angular 2 involved a lot of reading source and searching for solutions..
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108268.39/warc/CC-MAIN-20170821114342-20170821134342-00310.warc.gz
CC-MAIN-2017-34
978
6
https://myonlinehomeworkhelper.com/project-concept-proposal-individual-2/
code
Project Concept Proposal (Individual) For the project concept proposal assignment, each student needs to pick an engineering project topic of which the student has access to project information directly or indirectly, sufficient for developing and submitting a full-fledged project proposal for grading as a team effort at the end of the semester. This project proposal concept can be based on a project that you (your company or someone else) completed before, are doing, or planning to do, or a project that you can obtain sufficient details in documented in literature, or one that you can get sufficient information from your acquaintance. You are allowed to scale its scope down or up, if necessary. - Outline for Project concept proposal (limited to 2 pages, not exceeding 3 pages) - Project title (less than 20 words) - Brief description of the customer (company) - Brief description of the vendor (the aspiring contractor) - Brief description of the (technical) problem the customer is facing - Project objective with deliverable(s) and technical requirements for each - Brief description of a technical solution approach the vendor intends to use to solve the problem. - Major engineering phases needed for each deliverable - A guestimate (rough estimate) of project cost and duration - Project concept proposal grading criteria The following five criteria will be used to assess the quality of a proposal concept and for grading.. - Quality of technical description of the problem - Quality of project definition - Quality of objective statement (including deliverables and technical requirements for each) - Quality of intended technical solution approach including involving multiple engineering phases (i.e., analysis, design, fabrication, assembly, test, and rollout) in the project A maximum of 5 points is allocated to each criterion, valued from excellent (5), good (4), average (3), fair (2), to poor (1). The total is then divided by 2, as this is a 10 point contribution to the project grade. - Criteria for project concept proposal selection for further development by teams The following criteria will be applied to select proposal concepts for the purpose of team projects. - Quality of concept proposal as determined by score - Diversity of project concept proposals (from diverse industry sectors) - Availability of sufficient project proposal data YOU CAN ALSO PLACE OTHER SIMILAR ORDERS ON OUR WEBSITE AND GET AMAZING DISCOUNTS!!!
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00702.warc.gz
CC-MAIN-2020-40
2,457
26
http://www.cs.ccsu.edu/~neville/Courses/Spring01/MySqlResources/MySqlJavaClient/MySqlJavaClientHome.html
code
MySqlJavaClient is an application written in Java which allows you to connect to a MySQL database over a network (including the internet), issue database queries in SQL, receive the results of those queries, and display them in tabular form. Information about the MySQL database management system is available from MySQL AB at http://www.mysql.com In many situations, it is free. If you need information about how to administer and/or use a MySQL database, consult their tutorial and manual at http://www.mysql.com/documentation/ If you need information about the SQL language, invented at IBM by the way, consult the same tutorial or the later chapters of any book by Roger Jennings. At any given time, only one will be available. The current release is an expiring beta release, beta 0.3. It expires exactly six months from March 12, 2001. As new releases become available, they will be posted to this Web site, where they will replace previous releases. Because MySqlJavaClient is written in Java, only one release is needed, as it will run on any system on which the Java Development Kit, version 1.2.2 or higher, is installed. The final release of MySqlJavaClient will probably be copyrighted (copyleft?) under one of the GNU licenses, but until then, MySqlJavaClient is copyright Charles W. Neville, March 2001, all rights reserved. This expiring beta release expires exactly six months from March 12, 2001. Until then, it may be freely downloaded and used by anyone, provided it is neither modified nor sold for profit. As with anything else that is free, this beta release is provided as is, and there are no guarantees or warrantees of any sort, either expessed or implied. In no event will the copyright holder(s) be liable for any damages resulting from the use of this software. Use at your own risk. To run this beta release of MySqlJavaClient, you need the Java Development Kit -- JDK 1.2.2 or higher -- installed. You should be able to run this beta release on any machine for which a suitable version of the JDK is available. To run this release by double clicking MySqlJavaClient.bat, you need to be running Windows 98 or higher, or NT 4.0 or higher. MySqlJavaClient is an ordinary Java application. The name of the Java class file is MySqlJavaClient.class. To run MySqlJavaClient under Mac OS 9, Mac OS X, Linux, Sun Solaris, etc., etc., run it the way you run any other Java application in your operating environment. You will have to set the classpath first; read the .bat file (included with the distribution) to see how. The JDK is available for free for many operating systems from Sun Microsystems at http://java.sun.com/ MySqlJavaClient can be installed by simply downloading, unzipping, and moving a folder to a convenient place. As the installation process does not write to the Windows/NT registry, it is easy to install and run MySqlJavaClient from a Temp directory, a Zip disk, or even a floppy disk. This is a great advantage in shared laboratory environments, where you may not have the administrator privileges required for most software installations. In detail, you download and install MySqlJavaClient by MySqlJavaClient may be installed and run on any machine with a suitable version of Java installed (equivalent to version 1.2.2 or higher of the Java Development Kit). This includes machines running Mac OS 9 and OS X, Linux, Sun Solaris, etc., etc., as well as Windows/NT machines. MySqlJavaClient may be run from a Temp directory, a Zip disk, or even a floppy disk. This is a great advantage in shared laboratory environments, where you may not have the administrator privileges required for most software installations. This page is best viewed with Netscape Navigator.
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806353.62/warc/CC-MAIN-20171121113222-20171121133222-00304.warc.gz
CC-MAIN-2017-47
3,711
13
http://ryanspoon.com/blog/2009/06/08/tweetmemes-meteoric-rise-reveals-twitters-search-issue
code
Techmeme has become one my primary navigational sources for daily reading / news (others include email, Google RSS, Facebook, NYTimes, TechCrunch, etc). Twitter isn't yet there because it is simply too noisy to be efficient. Techmeme solves a specific need: revealing quality, trending content across a variety of blogs and news sources. That same need exists on Twitter... and it can be argued it is both a harder AND more important task (after all, there is more noise and less context). Perhaps that is why Tweetmeme is surging: it solves an important need for an immensely popular service. And as Twitter grows, Tweetmeme becomes even more important, sources more content and services a larger community. According to Compete, Tweetmeme now reaches 3.6m monthly uniques - a hefty number by any measurement. Equally impressive though is that Tweetmeme's reach represents nearly 20% of Twitter's monthly uniques (19.7m). Furthermore, as Twitter's growth flattened from April to May, Tweetmeme's more than doubled (1.6m to 3.6m): Is this to say that Tweetmeme is the perfect service? No. It is important however because it demonstrates: - a glaring need / opportunity within Twitter (either for third parties or Twitter itself) - the difficulty that finding poses (both algorithmic search and social search)... particularly in Twitter's dynamic world of 140 characters - a clear demand from users (after all, Tweetmeme's monthly uniques are 20% of Twitter's!) - a threat for sites like Digg and Stumbleupon... which Tweetmeme (or Twitter itself) can effectively compete with - an opportunity for Bit.ly - which is sitting on a goldmine of data surrounding referrals and links
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573801.14/warc/CC-MAIN-20190920005656-20190920031656-00123.warc.gz
CC-MAIN-2019-39
1,676
5
http://msgroups.net/exchange.admin/removed-server-exchange-still-looking-for-it/339005
code
Remove Excel Icon ?? I would like to remove the possibility to export data in Excel. I think I already read it's impossible but I just want to confirm. Your correct, there is no supported way to do this in the current release of Microsoft CRM 1.2 Microsoft CRM MVP "sylvie" <firstname.lastname@example.org> wrote in message > I would like to remove the possibility to export data in > Excel. I think I already rea...Removal How do I completely remove all traces of Outlook 2002 for a complete fresh reinstall. I can't find anything for '02. ...exchange "directory lookup " problem i have one domain and sub-domain , two domain also is win2k3 + exchange2k3 in these few day , my sub-domain exchange have problem ,all mail can't forward to root domain , all is hold on " server " " queues" " messages awaiting directory lookup " but can send out & receive out-bound mail in sub-domain exchange is work just internal mail not work can't send & receive to root-domain if anyone one know , pls help me thank your very much On Mon, 3 Jul 2006 16:53:51 +0800, <email@example.com> wrote: ...DPM 2010 RC - Exchange 2007 We are protecting a exchange 2007 server, DPM is all green. When I go to restore, I cant drill down to mailbox only to the database? Could someone advise? Have you protected the exchange server using workgroup/un-trusted domain flow? If so, mailbox enumeration is not supported when an exchange server is protected using this flow. Praveen D [MSFT] This posting is provided "AS IS" with no warranties, and confers no rights "DRLJAMES" <DRLJAMES@discussions.microsoft.com> wrote in message news:FBEC7CDA-AAE0-48CE-A6F7...OWA on separate server ? is it posible to deploy exchange 2003 sp2 on two server. That second one will carry OWA (and those web stuff). We gonna buy a separate web server and i thought that it would be nice to get owa on that server. Absolutely. It is called a Front End / Back End Topology. "guzzi" <guzzi@_DOT_yandex.ru> wrote in message > is it posible to deploy exchange 2003 sp2 on two server. That second one > will carry OWA (and ...Configuring exchange 5.5 behind ISA server I am trying to implement exchange 5.5 behind a ISA server but could not get any information on how to go about.I don't know if this is actualy possible but if it is, please help me with the necessary information.Thanks for your I don't believe that the version of Exchange matters. In ISA, you have options to securely publish a mail server. All that does is create rulesets that SMTP, IMAP, POP3, etc. traffic get forwarded to a specific IP on the inside. There are plenty of documents on how to publish Exchange behind an ISA server over at http://isaserver.org....server-requested client action message This message comes up on a clients computer with Outlook 2002 sp3 every time it's opened. Tried a lot of different things including Outlook profile removal, reinstalled Outlook and creating a rule and deleting it hoping it MCSA-M <firstname.lastname@example.org> typed: > This message comes up on a clients computer with Outlook 2002 sp3 > every time it's opened. Tried a lot of different things including > Outlook profile removal, reinstalled Outlook and creating a rule and > deleting it hoping it would...how do I remove spaces from cells that were pasted I am pasting numbers into a spreadsheet, however its treating them as text because there is a space before the number. How do I remove the space so it treats it as a number ? I have used the =trim() function and its not working. Thanks for any help ! It sounds like you're copy/pasting from a website. Try this macro from David McRitchie. Look for TRIMALL, it's about half way down the page: >I am pasting numbers into a spreadsheet, however its treating them as text >because there is a...Remove lines with +++ Is there anyway to find any line that has a + in it and delete that whole I'd also like to do the same for *. I have a list of about 1300-1500 names and addresses. Some of them have a few +++ next to the name and some of them have a few *** next to the name. These were put there by the company to designate things. They need to be removed from the list, but it's a lost of work doing it one by one. You could apply an autofilter to the column with these characters in and from the pull-down select Custom - in the panel choose "Contains" (scroll down for this) the...How to Remove SRS How to I remove or disable Site Replication Service (SRS) from my Exchange Remove the SRS by expanding the Tools node in ESM, right-clicking Site Replication Service, and clicking Delete > How to I remove or disable Site Replication Service (SRS) from my Exchange > Thank You, And make sure you are doing it while logged in to the console of the Exchange server on which the SRS is running. &...allmost done with removing our 1st exchange 03 server We are in the process of removing the 1st exchange server in our domain. Everything has been replicated over to a new server and all mailboxes have been moved. When we shut down the original server down, email still works great for 90% of the team. For the rest of us, when type the name of person to receive an email, outlook it still tries to resolve the name on the original server. I checked the profile on the users mailbox and it shows them pointing to the new server. Any suggestions would Try creating a new Outlook profile. Also, when you shut down you're...remove Fax from address book? Is there a way to stop showing Fax in the address book? I have a lot of fax numbers in my Contacts. I do not want to loose them - just not show them in the Address Book. Outlook considers fax numbers to be valid electronic addresses, since there are many client- and server-based components that can use such addresses. One method to hide fax numbers from the address book is to prefix the fax number with one or more letters (maybe B for business fax, H for home, O for other). If the fax number begins with a letter, Outlook won't show it in the There are a couple of t...Exchange down #2 After some much needed help on this one.... Last night our SAN connection to Exchange (2003 std ed) went and so did our logs and DB, after 4 hours Exchange was back and we went home. This morning after 2 hours of user in and working the SAN connection went again this time taking the finance server too, - great!! Managed to get a working solution but we could only manage to ensure one server connected to SAN, as Finance is Progress it got the SAN. We moved (properly) via Exchange the priv and pubs to a local disk of the server and also the logs, Exchange was fine until the SAN ...IMF global settings affect all servers How can individual server UCE and SCL configurations be set to bypass the global setting? I’ve heard (TechEd?) this can be done in the registry on selective Exchange servers by I’m unable to find references to this. Never heard such a thing is possible (despite working with IMF every day). IMF Tune - Unleash the Full Intelligent Message Filter Power "Jim2007" <Jim2007@discussions.microsoft.com> wrote in message > How can ...Cannot Remove a program from Add Remove via remove. I downloaded a program Fast At Last and it did not download properly and I could not remove it. Even System Restore did not remove it. The program is listed as 924PL32. Kept getting messages The feature you are trying to use is on a network resource that is unavailable. C:\dell\GC605. Seems Dell is the publisher but they would not give me free support on this issue. Any suggestions as to how I can remove the program? I believe it may be associated with spyware. When I ran spysweeper it removed some Rouge Security products but I still cannot remove this program. Cor...Removing an item from menu How can I remove an item from the menu and add some thing else. For instance, how can I remove inactive from the menu in contact page and put some other item with my own code behind it? You can hide the public views by simply creating a team (non used views or whatever you want to name it) and share the view to that team(done in the customize entity fields in the forms and views), this will make it a private view that only members of that team can see. You can also create new public or private views in the same place. > How can I remove an item from the m...Outlook still in memory under XP Can anyone tell me how to get Outlook to really exit under XP? I close it but it's still listed in the task manager. Jim Thames wrote: > Can anyone tell me how to get Outlook to really exit under XP? I close it > but it's still listed in the task manager. click on "Processes" tab, select desired image name, click "End Process". Jim Thames wrote: > Can anyone tell me how to get Outlook to really exit under XP? I > close it but it's still listed in the task manager. It does not seem to do that for me. Are you sure i...Need to remove selected characters I have 25000 cells with map coordinates in it in the following format: I need the data in the following format: So basically I need to strip out the * and ' from each cell. Any simple way to do this? I sure would appreciate any assistance. Select the range of cells in question. Find what: ~* Replace with: nothing, leave this blank Repeat the process for the ' It's VERY important that you select the specific range before you attempt "Natedanger" <Natedanger@d...Mailbox recovery in exchange 2000 server I have backed up my information store which has a mailboxstore in which my mailbox exits in one exchange server. I need to restore this mailbox to another exchange server in another forest . Can anybody provide step by step procedure to recover my mail box store into another exchange server in another forest . I have named the other exch srvr with same org name , admin grp , storage grp and mailbox store but restore does not seems to happen as my server names it's probably because the name of the mailbox store and the name of the pulic store d...Remove deduction code from payroll stub Is it possible to remove just a single payroll deduction code from a payroll stub using 'Employee Checks Stub on Top-D'? For example, an employee has three deductions: Insurance, 401k, Medical. Would it be possible then to show insurance and 401k but leave medical off the check stub for all employees who have medical? Thanks in advance for your help! I don't believe so because the deduction field is an array so you'd have to know which array value. Even if you did, I don't think you can use arrays in Charles Allen, MVP "drose03...Exchange 2000 Information store error (Urgently need help) We have Exchange 2000 service pack 3 and I have just found event ID 1025 - function name or description of problem SLINK: Ecupdate Error 0x8004010f on our Information Store. The store is running OK and we can send and receive mail. The event ID points a few things out but any help greatly appreciated. "Kaddie" <email@example.com(donotspam)> wrote in message news:FA54A12E-4C87-4B7C-B29F-8D4DC765F62...connecting exchange servers in different forests... my company is in four different locations and we have @test.com, @mail.test.com, @downloads.test.com all are windows 2000 AD and windows 2003 exchanger servers. All these domains are in different forests and different exchange organisations. now my company want a common mail extension like @test.com to all the users for all these domain in different forests. Pls help me.. Why do they want a single email domain and still have separate AD domain forests. This is not a good plan. I would look at integrating all the Windows domains into one forest if they are serious about this. Exchan...How to Remove Yahoo toolbar I have three toolbars installed on IE8-Yahoo (to the right of the address box), and Google and AVG below the address box. I want to uninstall the Yahoo toolbar but how? It is not listed under 'Add/Remove' programs and there is a magnifying glass and arrow to the right of this toolbar that says 'Manage search providers' so I removed Yahoo toolbar from there but it is still showing. I uninstalled Yahoo messenger (since the toolbar was included during the Messenger install so it would be removed by deleting Messenger) but it is I know that for AV...outlook exchange calendar.. auto appear in crm calendar ?? Is it possible to make someones calendar from exchange auto synch with his calendar in CRM? I mean.. without clicking track in crm when making an main purpose would be so ppl create appointments with their blackberry and appears in crm.. ...How do I force Outlook 2007 to connect to the server? I am running Outlook 2007 and am on a trip connected to the internet in the hotel and to my company's network via VPN. When i open Outlook, it shows that it is working offline and I cannot find a way to change it. I've checked the email settings and it seems to find the server just fine, but will not change from being "offline." Within OL, File>Offline? "Tom S." <Tom S.@discussions.microsoft.com> wrote in message >I am running Outlook 2007 and am on a trip connected to the internet...
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525009.36/warc/CC-MAIN-20190717021428-20190717043428-00346.warc.gz
CC-MAIN-2019-30
13,041
211
https://johnyassa.blog/2014/11/19/skype-for-business-to-replace-microsoft-lync-in-2015/
code
Microsoft on Tuesday announced that its Skype brand will soon replace Lync, the software giant’s video and Web conferencing platform for businesses. The new offering, dubbed Skype for Business, will arrive in the first half of 2015 “In the first half of 2015, the next version of Lync will become Skype for Business with a new client experience, new server release, and updates to the service in Office 365,” he wrote. “We believe that Skype for Business will again transform the way people communicate by giving organizations reach to hundreds of millions of Skype users outside the walls of their business.” Microsoft estimates that over 300 million people use Skype to keep in touch and share content. Microsoft says the big change is that Lync’s client will get Skype’s look and feel. None of Lync’s features will go, but some of Skype’s will appear including a user’s Skype contacts being available to Lync. Beyond that there’s not much more detail than a video so saccharine that we dare not embed it here lest the howls of derision wake sleeping Reg hacks on the other side of the planet.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154486.47/warc/CC-MAIN-20210803222541-20210804012541-00031.warc.gz
CC-MAIN-2021-31
1,117
3
https://forums.appthemes.com/help-using-jobroller/google-maps-problem-100497/
code
Google Maps Problem I have had a major problem with Google Maps. I am using Geolocation plugin. Google Maps have said this I see that when you activated the Geolocation plugin, it now showed three JS API calls. Which now I can confirm that the two JS API calls comes from the "jobroller" theme module alone. We would definitely need to contact the developer in order to fix your issue as it is now out of our scope. You should be able to find a support portal for every plugin/theme module in wordpress. I hope I was able to at least narrow down your issue. Lastly, I can also confirm to you that you no longer have any issues with regards to your project.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256997.79/warc/CC-MAIN-20190523003453-20190523025453-00206.warc.gz
CC-MAIN-2019-22
656
5
http://homebrew.stackexchange.com/questions/tagged/bacon?sort=faq
code
to customize your list. more stack exchange communities Start here for a quick overview of the site Detailed answers to any questions you might have Discuss the workings and policies of this site tag has no wiki summary. Why does my beer taste like bacon? I recently brewed an oatmeal stout that has distinct bacon notes to it, both in the nose and in the flavor. What could be causing these flavors? This bacon-ness isn't disagreeable; it's actually ... Feb 6 '11 at 1:16 recently active bacon questions feed frequent questions tagged Hot Network Questions What? No error? Coverage Proof of Confidence Intervals Greedy MAX SAT approximation ratio How to tell if a code is loseless. Why is "distro", rather than "distri", short for "distribution" in Linux world? Terminal, how to quit --More-- list Can I abandon my Kerbonaut in space? Existence of a utility function on the reals Why is cross product only defined in 3 and 7 dimensions? what effect does changing the data type of an existing indexed field have on the index? Good slide design for teaching? Libre Office Calc - always have to zoom in! How to tell an over-confident student they still have a lot to learn? Investigating suspects and evidence in a D&D3.x based world Set matrix zeros What's the inspiration for the owlbear? Had employee options in a company that has been sold to another foreign company. What happens? Shortest code to print ':)' random times Linux: Lost internet while using vipw. How can I unlock the file? What text format is least likely to clash with ebook formats? For countries that issue e-visas, do any also take up a full page of your passport when you arrive? Each other's / each others' Simple Hello World with unit tests How to locate and catch my dog? more hot questions Life / Arts Culture / Recreation TeX - LaTeX Unix & Linux Ask Different (Apple) Geographic Information Systems Science Fiction & Fantasy Seasoned Advice (cooking) Personal Finance & Money English Language & Usage Mi Yodeya (Judaism) Cross Validated (stats) Theoretical Computer Science Meta Stack Overflow Stack Overflow Careers site design / logo © 2014 stack exchange inc; user contributions licensed under cc by-sa 3.0
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010547566/warc/CC-MAIN-20140305090907-00035-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
2,189
54
http://www.linuxpromagazine.com/Issues/2011/128/(offset)/10
code
Issue #128 / Jul 2011 This month, learn how to customize your hardware configuration through the powerful udev system. We also help you optimize your code for the processor topology, and we show you how to use your smartphone as a remote. Also in the July issue: - Table of Contents - Letter from the Editor: Bring on the Trolls - Boxee Box: You don't need a box to set up Boxee Box, the open source personal video recorder. We'll show you how to put Boxee on a plain Linux system. - Tech News - Ubuntu 11.04 "Natty Narwhal" Sixpack DVD - DVD Inlay - Open Search Server: The OSS indexing suite integrates search capabilities into your websites. - iSCSI: Use iSCSI to reach network-aware SCSI storage devices. - Charly - crontab Hazards: Avert disaster by mastering cron script parameters. - Security Lessons - JTAG Hacking: Load custom firmware with a little JTAG hacking. - Ask Klaus! Your Linux questions are answered. - Perl - Banshee Database: Access Banshee metadata in an SQLite database. - sz/rz Over SSH: Send and receive files over SSH. - Workspace - Makagiga: A tool for every kind of content. - Xara Xtreme for Linux: Xara is an Inkscape alternative for vector graphics. - FVWM: This old-time window manager gives you control over your desktop. - Radio Tray: Systray web radio control. - Command Line - GPG: Keep secure with GNU Privacy Guard. - Review - GNOME 3: Discover the Gnome 3 desktop. - Cache: Flattr makes it easy to support the open source projects you love. - Doghouse: An open letter to a young man considering free software. - Kernel News: As the Kernel Turns - Projects on the Move: Vinca, Orca, and Gnome lead the way in Linux accessibility. Popular open source encryption tool is vulnerable to attack New “Yakkety Yak” edition emphasizes cloud and servers Google finally enters the phone hardware business. Innovative system adds a hard drive and Ubuntu Core to the RPi for an IoT hub. Linux is two weeks younger than we thought! The Apache Software Foundation considers retiring OpenOffice Adobe won’t kill the plugin in 2017 Linux Foundation's big event celebrates the 25th anniversary of Linux Linux has evolved from “won’t be a professional” project to one of the most professional software projects in the history of computers.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00107-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
2,272
35
http://www.coderanch.com/t/604659/java/java/Removing-Integer-values-ArrayList
code
This week's book giveaway is in the OO, Patterns, UML and Refactoring forum. We're giving away four copies of Refactoring for Software Design Smells: Managing Technical Debt and have Girish Suryanarayana, Ganesh Samarthyam & Tushar Sharma on-line! See this thread for details. While it's a good question why there are both Strings and Integers in this list, the Strings don't really seem to have anything to do with the actual question or its answer. If we simply ignore the strings, the whole thing seems to make more sense. ravikumar latha wrote:You can remove the integers from arraylist by using remove method of iterator interface. Or more conveniently in this case, there's a remove method on the ArrayList itself. The problem here is that there are two remove() methods on List - a remove(Object), and a remove(int). If you use the remove(int), the int argument is taken to be the index. If you use remove(Object) it's taken to be the value. From the way krish describes the problem, we probably want to remove the value 10, not the 10th element. So we need to make sure that we use the Object version of remove():
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987127.36/warc/CC-MAIN-20150728002307-00029-ip-10-236-191-2.ec2.internal.warc.gz
CC-MAIN-2015-32
1,121
5
http://www.bubhub.com.au/community/forums/showthread.php?472664-Classes-and-Lessons-for-toddlers
code
I actually asked this question before on Bub Hub about what sort of classes people take their toddlers to and had many great replies! I tried to look for them online, on google and cant seems to find any that seems professional? I found Gymbaroo website and thats it! Please help! DS is 18 months old. Really like him to participate in some sort of activities / classes and interact with other kids more. What sort of classes you take your toddler to and is there a website I can visit? Or where do you find the local classes, from news papers?
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172050.87/warc/CC-MAIN-20170219104612-00467-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
544
5
http://blogs.chicagotribune.com/news_columnists_ezorn/2010/05/wimp_word.html
code
Shying from `cowardly' This is a bit random, but this week's stories about the suicide letters from former Metra chief Phil Pagano prompts me to withdraw the hasty accusation I made a few weeks ago that his method of taking his own life by standing in front of an onrushing Metra train was "particularly cowardly." There are many pejorative adjectives you can attach to the act of standing on the tracks and forcing a helpless engineer to run you over, particularly when you know full well the trauma and pain this inflicts upon innocent people. But "cowardly" isn't one of them. Why did I reach for it? The same reason so many Americans reached for it when describing the 9/11 terrorists; the same reason Chicago U.S. Rep. Bobby Rush reached for it when describing the gunmen who last week killed off-duty Chicago Police Officer Thomas Wortham IV -- because in anger we reach for the strongest words of condemnation we can summon. It took a lot of nerve, a steely conquest of the flight and survival instinct, for Pagano to commit that dastardly act. He showed us that bravery is not always a virtue and that discussions of courage and cowardice can sometimes be utterly beside the point. It seems preposterous, for instance, to say that a bully is necessarily a coward. Image: Sir Robin from "Monty Python and Holy Grail"
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189667.42/warc/CC-MAIN-20170322212949-00587-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,323
6
https://coderanch.com/t/444633/java/Java-Framework-Chart-based-Reports
code
I will be developing a swing application for chart based reports. For that i chose jfreechart. In that application reports will be generated daily, weekly, monthly and yearly basis for a particular entity or sub entity. I have to collect the data from the database. Even i have specific requirement like zoom in/zoom out, not the actual data zoom in. Zoom In can be either from reports of a entity to its sub entities or from daily reports to a particular date's report. Is there any JAVA framework or sample projects available for this kind of application. Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687766.41/warc/CC-MAIN-20170921115822-20170921135822-00140.warc.gz
CC-MAIN-2017-39
576
2
https://gbatemp.net/threads/new-3ds-ntr-cfw-constantly-causing-bravely-default-to-crash-help.440490/
code
Hello, I am using a New 3DS running ReiNand EmuNAND (latest firmware) and NTR Custom Firmware 3.3. It seems however whenever I run the cheat.plg within the game Bravely Default (installed as a .cia), the game will run successfully (all cheats working perfectly fine) for about 5 minutes or so, then the game will crash. This is not something that has ever happened running any other game, only Bravely Default. I tested to narrow down where the crashing is coming from, whether its just a bad .cia file, the NTR, or something else. When I run the game without using NTR at all, the game does not crash. When I run the game using NTR and do not enable any cheats at all, the game will still crash within 5 minutes or so. This tells me that the problem is not any cheats in particular, but perhaps the NTR itself. I'm not sure what I can do, but this is rendering the game almost unplayable for me. Is there anything I can do to fix this?
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889733.57/warc/CC-MAIN-20180120201828-20180120221828-00254.warc.gz
CC-MAIN-2018-05
936
1
https://blog.ssdnodes.com/blog/cool-uses-for-docker-wine/
code
7 cool uses for Docker: WINE and tweeting from terminals Container technology is anything but new, but that doesn’t mean we can’t still figure out some new, cool uses for Docker. Linux containers, otherwise known as LXC, were first to the market on August 6, 2008. They made it possible to run multiple isolated Linux environments on a single Linux host. Docker took that took the next level starting in 2013, and is now the most popular and widely-used container management system. Since Docker is all the rage right now, that’s where we’re going to start. The Microsoft takeover of GitHub made many developers worried over their code and repositories, so much to the point they’re already migrating repositories and code from GitHub to GitLab in a massive surge. 4GB RAM VPS is back ONLY FOR TODAY - Starts at $3.92 per month! 🤯 Remember to use the code EXCLUSIVE10 at checkout and deploy now! Why GitLab? It’s pretty feature-competitive with GitHub, with wiki and issue-tracking features, plus a robust web interface. Even better, it’s open source, which seems to give some of these developers more confidence in the sustainability of their GitLab-hosted code. But it’s not a panacea—GitLab could up and disappear, or change fundamentally, or be acquired by Google or Facebook. The good news is that you can host your own GitHub-like web-based hosting service for version control using Git and Docker within minutes! GitLab already provides a Docker image for the open source community. You can host it in your local environment as well as in the public domain and make use of it without worrying about big-time acquisitions. If you like GitLab, you just might love Gogs, a painless way to self-host a Git service. Its lightweight web interface can be run using the official Docker image, whether as an independent service or behind a more complex self-hosting infrastructure with a reverse proxy. Either way, nothing is stopping you from spinning up an inexpensive VPS to set up your own a Git service using Docker. What's the BEST DEAL in cloud hosting? Develop at hyperspeed with a Performance VPS from SSD Nodes. We DOUBLED the amount of blazing-fast NVMe storage on our most popular plan and beefed up the CPU offering on these plans. There's nothing else like it on the market, at least not at these prices. Score a 16GB Performance VPS with 160GB of NVMe storage for just $99/year for a limited time! Sounds crazy? Yes, it is. You might have already seen how Linux users often “emulate” a Windows application inside a Linux box using WINE. You can now go even further by running a Windows application inside a container that’s running on a Linux VPS. For that, all you need is docker-wine. I can’t guarantee it’s going to be a smooth experience, but it just might be worth an hour’s hacking. Why would you want to do this? Well, maybe you want to run the Windows version of Skype but don’t have Windows on your local machine and don’t want to install WINE there either. One obvious benefit is that your IP address will not get leaked through Skype’s IP resolver. You can containerize other Windows applications like notepad in a Linux box without actually exposing yourself to a Windows box. Here’s one for your local installation, not your VPS. If you’re the kind of person who’s ever hosted or visited a LAN party, you’ve probably experienced the massive rush to download new games from Steam, Battle.net, or Origin. All of a sudden your local network is overloaded with transfer, making everything slow to a crawl. A few developers have built Docker images that solve this problem by downloading a game’s content and caching it somewhere on the local network. Instead of a dozen gamers downloading the same game via the Internet, they can transfer it from the cached version on disk. That means you download once, transfer many times, reducing bandwidth consumption significantly. Want to secure your online activity from surveillance and traffic analysis? Those who genuinely value their Internet privacy probably already use Tor or other tools like Privoxy to remain as anonymous as possible. You can Dockerize both of these services with a single Docker image, which will prevent analysis of your traffic, enhance privacy by modifying HTTP headers, and remove ads or other unwanted scripts that might run on web pages. Now you can improve your security and lock down your privacy using Docker and your VPS. 5. Run ASP.NET applications on Linux As part of Microsoft’s effort to make ASP.NET cross-platform, they released their first official Docker image for ASP.NET in 2015. As we already know, one can sandbox an application on a Linux machine using Docker. Take that idea a step further, and it’s easy to see how developers could add their ASP.NET application on top of the base image and run it in a container! You want to tweet from your terminal using Docker? For some people, who love to use terminals and hang out in them for a considerable amount of time, it makes sense to use a terminal-based Twitter client. The Docker-based approach will isolate the Twitter application from other applications—whenever you don’t need the Twitter client, stop the container or delete it. This will ensure that libraries and code in the host machine remain unaffected. According to Garter, there will soon be an estimated 20 billion interconnected IoT (Internet of Things) devices such as environmental sensors, cameras, cable set-top boxes, home appliances, industrial devices and much more. These resource-constrained IoT devices don’t need a heavier OS or VM to manage and run software to control them. Perhaps more important, many of these IoT devices move from one environment to another very quickly. Containers and IoT devices operate in the same paradigm when it comes to deploying, updating, and maintaining, and so it makes sense for developers to use them together! You can package the IoT software and dependencies in a Docker image and deploy that to manage and run the device with less hassle. How’s this for an example—let’s say you’ve deployed a fleet of Raspberry Pis, each performing different tasks, to many different environments. Surely it will be easier to deploy container-based applications on each of the devices and manage all of them from a single host. The good news is that there are already a few players, both big and small, who provide exactly this solution. Resin, Kontena, eliot are just a few. Want to explore other cool ways to use Docker on your VPS? Check out the awesome-docker repository on GitHub. It’s perfect for a beginner who’s just trying to make their first foray into the world of container technology. Also, don’t forget to read the “cool uses for Docker” post that started this trend on this very blog!
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00052.warc.gz
CC-MAIN-2022-33
6,839
33
https://blog.adafruit.com/2009/05/02/ybox2-kit-build-photos/
code
Adafruit publishes a wide range of writing and video content, including interviews and reporting on the maker market and the wider technology world. Our standards page is intended as a guide to best practices that Adafruit uses, as well as an outline of the ethical standards Adafruit aspires to. While Adafruit is not an independent journalistic institution, Adafruit strives to be a fair, informative, and positive voice within the community – check it out here: adafruit.com/editorialstandards Adafruit is on Mastodon, join in! adafruit.com/mastodon Stop breadboarding and soldering – start making immediately! Adafruit’s Circuit Playground is jam-packed with LEDs, sensors, buttons, alligator clip pads and more. Build projects with Circuit Playground in a few minutes with the drag-and-drop MakeCode programming site, learn computer science using the CS Discoveries class on code.org, jump into CircuitPython to learn Python and hardware together, TinyGO, or even use the Arduino IDE. Circuit Playground Express is the newest and best Circuit Playground board, with support for CircuitPython, MakeCode, and Arduino. It has a powerful processor, 10 NeoPixels, mini speaker, InfraRed receive and transmit, two buttons, a switch, 14 alligator clip pads, and lots of sensors: capacitive touch, IR proximity, temperature, light, motion and sound. A whole wide world of electronics and coding is waiting for you, and it fits in the palm of your hand. Have an amazing project to share? The Electronics Show and Tell is every Wednesday at 7pm ET! To join, head over to YouTube and check out the show’s live chat – we’ll post the link there. Join us every Wednesday night at 8pm ET for Ask an Engineer! Join over 36,000+ makers on Adafruit’s Discord channels and be part of the community! http://adafru.it/discord CircuitPython – The easiest way to program microcontrollers – CircuitPython.org Maker Business — Making sure the CHIPS act isn’t just crumbs Wearables — Our little secret to weather-proofing your projects Electronics — Meaningful gains Python for Microcontrollers — Python on Microcontrollers Newsletter: New Thonny and Git Versions, Plenty of Projects and More! #CircuitPython #Python #micropython @ThePSF @Raspberry_Pi Adafruit IoT Monthly — Guardian Robot, Weather-wise Umbrella Stand, and more! Microsoft MakeCode — MakeCode Thank You! EYE on NPI — Maxim’s Himalaya uSLIC Step-Down Power Module #EyeOnNPI @maximintegrated @digikey New Products – Adafruit Industries – Makers, hackers, artists, designers and engineers! — NEW PRODUCTS – CNC Rotary Encoder – 100 Pulses per Rotation – 60mm Black Sorry, the comment form is closed at this time. I just couldn’t bring myself to hide the gizmo in a tin box, just had to show off the components inside! The kit was really fun to put together, I got on a roll putting in the 40-pin socket. When I got all 40 pads soldered in, I went looking for more things to try and solder in, just to keep it going! Grabbing the source widgets from DeepDarc’s web-based SVN page and hitting “F8” in the Propeller Tool software to make a binary was so easy. I’ve been farting around with the basic Infowidget Spin code and creating an InfoWidget test page. http://y.irev.net/20090501.phps
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656869.87/warc/CC-MAIN-20230609233952-20230610023952-00394.warc.gz
CC-MAIN-2023-23
3,288
18
https://blogs.msdn.microsoft.com/architectsrule/tag/practical-guidance/
code
There are millions (if not billions) of lines of VB6 code out there. For many businesses, they need this code moved to .NET but they can’t just throw the baby out with the bathwater and start from scratch. They might not be able to afford this, or maybe they couldn’t rebuild it if they tried…. Ever wonder how the Healthcare Industry could harness the full power of Silverlight to create State-of-the-Art, Game-Changing applications to lower Healthcare costs? In this episode of ARCast.TV, Jeff Barnes sits down with David Darnell from MDI Holdings and Henry Lee from New Age Solution to discuss their innovative use of Silverlight, Windows Communication Foundation, and… The latest ARCast episode has been published. In this talk, Cameron Skinner, Product Unit Manager for Visual Studio Team System, and Zhiming Xue discuss the challenges architects often face and how they may benefit from using Visual Studio 2010. Cast: https://channel9.msdn.com/shows/ARCast.TV/ARCastTV-Enabling-Architects-with-Microsoft-Visual-Studio-2010/ Brian H. Prince meets with Stephen Griffin and John Hannah about their new open source project called MVC4WPF. It is a new framework and guidance package that helps you quickly build enterprise WPF applications. They have seen a dramatic improvement in productivity, ability to leverage entry level developers, and a massive reduction in development costs…. A new episode is available featuring Rocky Lhotka on Development Frameworks. It is difficult to strike a balance between the optimal architecture and over architecting a solution. Joe Shirey sits down with Rocky Lhotka , the creator of the CSLA.NET framework, to discuss how he balances what should and should not be in his framework….
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743105.25/warc/CC-MAIN-20181116152954-20181116174954-00132.warc.gz
CC-MAIN-2018-47
1,736
5
https://github.com/wisdom-framework/wisdom/issues/492
code
Join GitHub today GitHub is home to over 28 million developers working together to host and review code, manage projects, and build software together.Sign up Support conditional instantiation #492 this is an iPOJO support I would like to add an Instance creation read the given This is going to be a Wisdom extension.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589726.60/warc/CC-MAIN-20180717125344-20180717145344-00146.warc.gz
CC-MAIN-2018-30
317
7
http://boppers.net/emergence/genetic_algorithm.htm
code
Excerpt from “Emergence – The Connected Lives of Ants, Brains, Cities, and Software” by Steven Johnson: On page 58 the author mentions that a DNA coil of a species dictate some kind of form or behavior. The DNA can be referred to as the “genotype” and the form or behavior it dictates is referred to as the “phenotype”. The genetic algorithm was an attempt to capture that process in silicon. Software already has a genotype and phenotype, Holland recognized; there’s the code itself, and then there’s what the code actually does. What if you created a gene pool of different code combinations, then evaluated the success rate of the phenotypes, eliminating the least successful strands? Natural selection relies on a brilliantly simple, but somewhat tautological, criterion for evaluating success: your genes get to pass on to the next generation if you survive long enough to produce a next generation. Holland decided to make that evaluation step more precise: his programs would be admitted to the next generation if they did a better job of accomplishing a specific task – doing simple math, say, or recognizing patterns in visual images. The programmer could decide what the task was; he or she just couldn’t directly instruct the software how to accomplish it. He or she would set up the parameters that defined genetic fitness, then let the software evolve on its own Holland developed his ideas in the sixties and seventies using mostly paper and pencil – even the more advance technology of that era was far too slow to churn through the thousandfold generations of evolutionary time. But the massively parallel, high-speed computers introduced in the eighties – such as Danny Hillis’s Connection Machine – were ideally suited for exploring the powers of the genetic algorithm. And one of the most impressive GA systems devised for the Connection Machine focused exclusively on simulating the behavior of ants. It was a program called Tracker, designed in the mideighties by two UCLA professors, David Jefferson and Chuck Taylor. ( Jefferson was in the computer science department, while Taylor was a biologist.) “I got the idea from reading Richard Dawkin’s first book, the Selfish Gene,” Jefferson says today. “That book really transformed me. He makes the point that in order to watch Darwinian evolution in action, all you need are objects that are capable of reproducing themselves, and reproducing themselves imperfectly, and having some sort of resource limitation so that there’s competition. And nothing else matters – it’s a very tiny, abstract axiom that is required to make evolution work. And so it occurred to me that programs have those properties – programs can reproduce themselves. Except that they usually reproduce themselves exactly. But I recognized that if there was a way to have them reproduce imperfectly, and if you had not just one program but a whole population of them, then you could simulate evolution with the software instead of organisms. After a few small-scale experiment, Jefferson and Taylor decided to simulate the behavior of ants learning to follow a pheromone trail. “Ants were on my mind – I was looking for simple creatures, and E. O. Wilson’s opus on ants had just come out,” Jefferson explains. “What we were really looking for was a simple task that simple creatures perform where it wasn’t obvious how to make a program do it. Somehow we came up with the idea of following a trail – and not just a clean trail, a noisy trail, a broken trail.” The two scientists created a virtual grid of squares, drawing a meandering path of eighty-two squares across it. The goal was to evolve a simple program, a virtual ant, that could navigate the length of the path in a finite amount of time, using only limited information about the path’s twists and turns. At each cycle, an ant had the option of “sniffing” the square ahead of him, advancing forward one square, turning right or left ninety degrees. Jefferson and Taylor gave their ants one hundred cycles to navigate the path; once an ant used up his hundred cycles, the software tallied up the number of squares on the trail he had successfully landed on and gave him a score. An ant that lost his way after square one would be graded 1; an ant that successful completed the trail before the hundred cycles were up would get a perfect score, 82. The scoring system allowed Jefferson and Taylor to create fitness criteria that determined which ants were allowed to reproduce. Tracker began by simulating sixteen thousand ants – one for each of the connection machine’s processors – with sixteen thousand more or less random strategies for trail navigation. One ant might begin with the strategy of marching straight across the grid; another by switching back and forth between ninety-degree rotations and sniffings; another following more baroque rules. The great preponderance of these strategies would be complete disasters, but a few would allow a stumble across a larger portion of the trail. Those more successful ants would be allowed to mate and reproduce, creating a new generation of sixteen thousand ants ready to tackle the trail. The path – dubbed the John Muir trail after the famous environmentalist – began with a relatively straightforward section with a handful of right-hand turns and longer straight sections, the steadily grew more complicated. Jefferson says now that he designed it that way because he was worried that early generations would be so incompetent that a more challenging path would utterly confound them. “You have to remember that we had no idea when we started the experiment whether sixteen thousand was anywhere near a large enough population to seek Dawinian evolution,” he explains. “And I didn’t know if it was going to take ten generations, or one hundred generations, or ten thousand generations. There was no theory to guide us quantitatively about either the size of the population in space or the length of the experiment in time.” Running through one hundred generations took about two hours; Jefferson and Taylor rigged the system to give them real-time updates on the most talented ants of each generation. Like a stock ticker, the Connection Machine would spit out an updated number at the end of each generation: if the best trail followers of one generation managed to hit fifteen squares in a hundred cycles, the Connection Machine would report that 15 was he current record and then move to the next generation. After a few false starts because of bugs, Jefferson and Taylor got the Tracker system to work – and the results exceeded even their most optimistic expectations. “To our wonderment and utter joy,” Jefferson recalls, “it succeeded the first time. We were sitting there watching these numbers come in: one generation would produce twenty-five, then twenty-five, and then it would be twenty-seven, and then thirty. Eventually we saw a perfect score, after only about a hundred generations. It was mind blowing.” The software had evolved an entire population of expert trail followers, despite the fact that Jefferson and Taylor had endowed their first generation of ants with no skills whatsoever. Rather than engineer a solution to the trail-following problem, the two UCLA professors had evolved a solution; they had created a random pool of possible programs, then built a feedback mechanism that allowed more successful programs to emerge. In fact, the evolved programs were so successful that they’d developed solutions custom-tailored to their environments. When Jefferson and Taylor “dissected” one of the final champion ants to see what trail-following strategies he had developed, they discovered that the software had evolved a preference for making right-hand turns, in response to the three initial turns that Jefferson had built into the John Muir Trail. It was like watching an organism living in water evolving gills: even in the crude, abstract grid of Tracker, the virtual ants evolved a strategy for survival that was uniquely adapted to their environment. By any measure, Tracker was a genuine breakthrough. Finally the tools of modern computing had advanced to the point where you could simulate emergent intelligence, watch it unfold on the screen in real time, as Turing and Selfridge and Shannon had dreamed of doing years before. And it was only fitting that Jefferson and Taylor had chosen to simulate precisely the organism most celebrated for its emergent behavior: the ant. They began, of course, with the most elemental form of ant intelligence – sniffing for pheromone trails – but the possibilities suggested by the success of Tracker were endless. The tools of emergent software had been harnessed to model and understand the evolution of emergent intelligence in real-world organisms. In fact, watching those virtual ants evolve in on the computer screen, learning and adapting to their environments on their own, you couldn’t help wonder if the division between the real and the virtual was becoming increasingly hazy.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585561.4/warc/CC-MAIN-20211023033857-20211023063857-00434.warc.gz
CC-MAIN-2021-43
9,130
11
https://www.zfort.com/blog/frontend-digest-february-4-2018
code
FRONT-END WEEKLY DIGEST (JANUARY 29 - FEBRUARY 4, 2018) • 8 Things Every Front-End Developer Must Learn • The increasing nature of frontend complexity • Automated Browser Testing With The WebDriver API • A coder's guide to APIs • An introduction to Progressive Web Apps • Progressive Web Apps — The Next Step in Web App Development • Dark patterns with the HTML 5.2 <dialog> tag and Chrome for fun and profit • How we made our page-load optimisations even faster • A Deep Dive Into the GTmetrix Speed Test Tool • PageSpeed How to use webpagetest.org for page load speed testing • webpack-demos - a collection of simple demos of Webpack. • Localer - Automatic detecting missing I18n translations tool. • Phone number links and accessibility • WCAG 2.1 is a Candidate Recommendation • Accessibility Updates: WCAG 2.1 • Let's make multi-colored icons with SVG symbols and CSS variables • PostCSS — beyond the Autoprefixer • One File, Many Options: Using Variable Fonts on the Web • How to recreate Medium's article layout with CSS Grid • How to create a fully responsive navbar with Flexbox • Bulma: CSS framework you should consider in 2018 • How to use variable fonts in the real world • CSS Scroll Snap: What Is It? Do We Need It? • Cheapass Parallax, In about ~6 lines of code. • Boilerform - a little HTML and CSS boilerplate to take the pain away from working with forms. • React Native — from scratch to App Store • JS WTF with Math • A GraphQL Primer: Why We Need A New Kind Of API (Part 1), The Evolution Of API Design (Part 2) • An Introduction to GraphQL • Why would you NOT use TypeScript? • 25 Days of ReasonML • EasyTimer.js - Easy to use Timer/Chronometer/Countdown library compatible with AMD and NodeJS. • Jargon-Free Webpack Intro For VueJS Users • Efficient Code Analyzing and Formatting (for Vue.js) with ESLint and Prettier • 10 things I love about Vue • The Beginner's Guide to React • Rock Solid React.js Foundations: A Beginner's Guide • Nested routes with React Router v4 • Build an Image Slider Using React, Superagent and the Instagram API • Conditional Rendering in React using Ternaries and Logical AND • Making SVG icon libraries for React apps • A quick guide to Redux for beginners Read our web development and business optimization guides Compare in-house offshore software developer payroll costs Check the available resources for a quick project start Join our newsletter! Get weekly updates of the top IT news delivered straight to your inbox
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319724.97/warc/CC-MAIN-20190824041053-20190824063053-00348.warc.gz
CC-MAIN-2019-35
2,565
51
http://search.sys-con.com/node/2439947
code
|By Marketwired .|| |November 10, 2012 02:50 PM EST|| ORLANDO, FL -- (Marketwire) -- 11/10/12 -- As more home buyers, sellers and investors rely on social media and online resources for real estate information, Realtors® understand the importance of engaging in this space. To that end, Realtors® at today's Raise your Social Media Marketing to the Next Level session at the 2012 Realtors® Conference and Expo learned strategies to enhance their social business and digital engagement and methods to incorporate new outlets into their marketing mix. "Technology has transformed the way Realtors® do business, and it's important to keep up with this ongoing evolution," said Nobu Hata, director of digital engagement for the National Association of Realtors®, who spoke during the session. "Given the Internet's convenience and round-the-clock accessibility, it's not surprising that many home buyers first look online for properties and information when beginning their search. And most of those buyers then turn to real estate professionals to help them realize their real estate goals." According to the 2012 NAR Profile of Home Buyers and Sellers, released today at the conference, nine out of 10 recent home buyers used the Internet to search for homes, up from seven out of 10 in 2003. And the percentage of buyers who report using the Internet "frequently" nearly doubled, from 42 percent in 2003 to 79 percent in 2012. "Stay in touch and engaged with your clients online by making your website the most 'social' thing you use," said Hata. "Give consumers what they can't Google." Nearly half of all Realtors® -- 49 percent -- report actively using social networking websites, and a small but growing segment of Realtors® use newer forms of communication, such as blogs (17 percent), according to the most recent NAR Member Profile. Session participants discussed ways to leverage the business applications of popular social media platforms including Pinterest, Tumblr and Yelp, from recommendations about creating more visual content, and considerations related to the size advantages of a smart phone versus larger tablets. NAR provides its members myriad social media resources to help amplify Realtor® social media engagement, including the Realtor® Magazine and Realtor® Action Center Twitter feeds, https://twitter.com/#!/realtormag and https://twitter.com/#!/realtoraction, respectively; NAR's main Facebook presence, https://www.facebook.com/#!/realtors; NAR's Research Facebook pages, www.facebook.com/narresearchgroup; the HouseLogic Facebook pages, www.facebook.com/HouseLogic; and the Realtors Property Resource® blog, http://blog.narrpr.com. The National Association of Realtors®, "The Voice for Real Estate," is America's largest trade association, representing 1 million members involved in all aspects of the residential and commercial real estate industries. Information about NAR is available at www.realtor.org. This and other news releases are posted in the "News, Blogs and Videos" tab on the website. For further information contact: Keeping pace with advancements in software delivery processes and tooling is taxing even for the most proficient organizations. Point tools, platforms, open source and the increasing adoption of private and public cloud services requires strong engineering rigor - all in the face of developer demands to use the tools of choice. As Agile has settled in as a mainstream practice, now DevOps has emerged as the next wave to improve software delivery speed and output. To make DevOps work, organization... Mar. 28, 2017 02:30 AM EDT Reads: 2,019 My team embarked on building a data lake for our sales and marketing data to better understand customer journeys. This required building a hybrid data pipeline to connect our cloud CRM with the new Hadoop Data Lake. One challenge is that IT was not in a position to provide support until we proved value and marketing did not have the experience, so we embarked on the journey ourselves within the product marketing team for our line of business within Progress. In his session at @BigDataExpo, Sum... Mar. 28, 2017 02:15 AM EDT Reads: 3,091 SYS-CON Events announced today that MobiDev, a client-oriented software development company, will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place June 6-8, 2017, at the Javits Center in New York City, NY, and the 21st International Cloud Expo®, which will take place October 31-November 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. MobiDev is a software company that develops and delivers turn-key mobile apps, websites, web services, and complex softw... Mar. 28, 2017 02:00 AM EDT Reads: 3,890 DevOps is often described as a combination of technology and culture. Without both, DevOps isn't complete. However, applying the culture to outdated technology is a recipe for disaster; as response times grow and connections between teams are delayed by technology, the culture will die. A Nutanix Enterprise Cloud has many benefits that provide the needed base for a true DevOps paradigm. Mar. 28, 2017 01:00 AM EDT Reads: 2,342 What sort of WebRTC based applications can we expect to see over the next year and beyond? One way to predict development trends is to see what sorts of applications startups are building. In his session at @ThingsExpo, Arin Sime, founder of WebRTC.ventures, will discuss the current and likely future trends in WebRTC application development based on real requests for custom applications from real customers, as well as other public sources of information, Mar. 28, 2017 12:45 AM EDT Reads: 982 "My role is working with customers, helping them go through this digital transformation. I spend a lot of time talking to banks, big industries, manufacturers working through how they are integrating and transforming their IT platforms and moving them forward," explained William Morrish, General Manager Product Sales at Interoute, in this SYS-CON.tv interview at 18th Cloud Expo, held June 7-9, 2016, at the Javits Center in New York City, NY. Mar. 27, 2017 09:45 PM EDT Reads: 3,663 Apache Hadoop is emerging as a distributed platform for handling large and fast incoming streams of data. Predictive maintenance, supply chain optimization, and Internet-of-Things analysis are examples where Hadoop provides the scalable storage, processing, and analytics platform to gain meaningful insights from granular data that is typically only valuable from a large-scale, aggregate view. One architecture useful for capturing and analyzing streaming data is the Lambda Architecture, represent... Mar. 27, 2017 08:15 PM EDT Reads: 6,334 SYS-CON Events announced today that Ocean9will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Ocean9 provides cloud services for Backup, Disaster Recovery (DRaaS) and instant Innovation, and redefines enterprise infrastructure with its cloud native subscription offerings for mission critical SAP workloads. Mar. 27, 2017 07:45 PM EDT Reads: 2,216 With billions of sensors deployed worldwide, the amount of machine-generated data will soon exceed what our networks can handle. But consumers and businesses will expect seamless experiences and real-time responsiveness. What does this mean for IoT devices and the infrastructure that supports them? More of the data will need to be handled at - or closer to - the devices themselves. Mar. 27, 2017 07:30 PM EDT Reads: 4,614 SYS-CON Events announced today that SoftLayer, an IBM Company, has been named “Gold Sponsor” of SYS-CON's 18th Cloud Expo, which will take place on June 7-9, 2016, at the Javits Center in New York, New York. SoftLayer, an IBM Company, provides cloud infrastructure as a service from a growing number of data centers and network points of presence around the world. SoftLayer’s customers range from Web startups to global enterprises. Mar. 27, 2017 02:45 PM EDT Reads: 1,990 SYS-CON Events announced today that Linux Academy, the foremost online Linux and cloud training platform and community, will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Linux Academy was founded on the belief that providing high-quality, in-depth training should be available at an affordable price. Industry leaders in quality training, provided services, and student certification passes, its goal is to c... Mar. 27, 2017 02:30 PM EDT Reads: 4,040 SYS-CON Events announced today that CA Technologies has been named “Platinum Sponsor” of SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY, and the 21st International Cloud Expo®, which will take place October 31-November 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. CA Technologies helps customers succeed in a future where every business – from apparel to energy – is being rewritten by software. From ... Mar. 27, 2017 02:00 PM EDT Reads: 2,097 In his session at @ThingsExpo, Eric Lachapelle, CEO of the Professional Evaluation and Certification Board (PECB), will provide an overview of various initiatives to certifiy the security of connected devices and future trends in ensuring public trust of IoT. Eric Lachapelle is the Chief Executive Officer of the Professional Evaluation and Certification Board (PECB), an international certification body. His role is to help companies and individuals to achieve professional, accredited and worldw... Mar. 27, 2017 01:15 PM EDT Reads: 763 SYS-CON Events announced today that Loom Systems will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Founded in 2015, Loom Systems delivers an advanced AI solution to predict and prevent problems in the digital business. Loom stands alone in the industry as an AI analysis platform requiring no prior math knowledge from operators, leveraging the existing staff to succeed in the digital era. With offices in S... Mar. 27, 2017 01:00 PM EDT Reads: 1,511 SYS-CON Events announced today that Interoute, owner-operator of one of Europe's largest networks and a global cloud services platform, has been named “Bronze Sponsor” of SYS-CON's 20th Cloud Expo, which will take place on June 6-8, 2017 at the Javits Center in New York, New York. Interoute is the owner-operator of one of Europe's largest networks and a global cloud services platform which encompasses 12 data centers, 14 virtual data centers and 31 colocation centers, with connections to 195 add... Mar. 27, 2017 12:45 PM EDT Reads: 1,432 SYS-CON Events announced today that T-Mobile will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. As America's Un-carrier, T-Mobile US, Inc., is redefining the way consumers and businesses buy wireless services through leading product and service innovation. The Company's advanced nationwide 4G LTE network delivers outstanding wireless experiences to 67.4 million customers who are unwilling to compromise on ... Mar. 27, 2017 11:15 AM EDT Reads: 2,407 SYS-CON Events announced today that HTBase will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. HTBase (Gartner 2016 Cool Vendor) delivers a Composable IT infrastructure solution architected for agility and increased efficiency. It turns compute, storage, and fabric into fluid pools of resources that are easily composed and re-composed to meet each application’s needs. With HTBase, companies can quickly prov... Mar. 27, 2017 10:30 AM EDT Reads: 3,036 SYS-CON Events announced today that Infranics will exhibit at SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Since 2000, Infranics has developed SysMaster Suite, which is required for the stable and efficient management of ICT infrastructure. The ICT management solution developed and provided by Infranics continues to add intelligence to the ICT infrastructure through the IMC (Infra Management Cycle) based on mathemat... Mar. 27, 2017 10:30 AM EDT Reads: 3,186 SYS-CON Events announced today that Cloudistics, an on-premises cloud computing company, has been named “Bronze Sponsor” of SYS-CON's 20th International Cloud Expo®, which will take place on June 6-8, 2017, at the Javits Center in New York City, NY. Cloudistics delivers a complete public cloud experience with composable on-premises infrastructures to medium and large enterprises. Its software-defined technology natively converges network, storage, compute, virtualization, and management into a ... Mar. 27, 2017 09:30 AM EDT Reads: 2,159 There are 66 million network cameras capturing terabytes of data. How did factories in Japan improve physical security at the facilities and improve employee productivity? Edge Computing reduces possible kilobytes of data collected per second to only a few kilobytes of data transmitted to the public cloud every day. Data is aggregated and analyzed close to sensors so only intelligent results need to be transmitted to the cloud. Non-essential data is recycled to optimize storage. Mar. 27, 2017 08:15 AM EDT Reads: 3,137
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189686.56/warc/CC-MAIN-20170322212949-00537-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
13,448
52
http://www.mobygames.com/game/beyond-zork-the-coconut-of-quendor/screenshots
code
Atari ST Screenshots Generating a new character, RPG-style Customizing the well-balanced character Following tutorial commands Things are feeling more like normal now! This doesn't bode well... Starting location -- note the crude automap Viewing your character's status You can define the function keys... Yeah! You can give weapons or animals a unique name! Cool! I know I should have paid attention in school... I'm the king of the world!
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705884968/warc/CC-MAIN-20130516120444-00092-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
440
12
https://www.fortypoundhead.com/showcontent.asp?artid=48424
code
DTS Class Module and Events The program Creates a VB Class Module from a DTS Package on a SQL Server, with all events, and its own events(Progress, currentTask, etc) It creates a very compact script, so you can very large packages in a single routine. The Class Module "ClassDTSScript" is what is created when you get a package from the server and script it. Simply remove the example, and add one you have done to test it. The example execution asks you to navigate to the source and destination Access Databases, and uses the filepath to pass in an ADO style connection string for the source and destination connections. The parsing routine in the class module will work for SQL Server or ACCESS. I have not handled more complicated transformations such as Many to one column mappings and such, but Execute SQL and DATAPUMP Tasks work quite well. The Example "ClassDTSScript" module included was created from a package in SQL Server 7, and includes a couple of queries, two transformations, and running a stored procedure with a parameter, as well as demonstrating using the events that are called by the DTS Package object. Read the comments in the code carefully to better understand the uses. Original Author: Darryn Frost About this post Viewed: 89 times Posted: 9/3/2020 3:45:00 PM Size: 252,312 bytes No comments have been added for this post. You must be logged in to make a comment.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473401.5/warc/CC-MAIN-20240221070402-20240221100402-00573.warc.gz
CC-MAIN-2024-10
1,392
11
https://tex.stackexchange.com/questions/31859/latex-to-plain-text-for-e-g-generation-of-statistics
code
I would like to convert a large LaTeX project (i.e. spanning multiple files) into plain text. The purpose is generation of statistics, so representing mathematics is not an issue. In fact, all mathematics is ideally ignored. I have found http://code.google.com/p/textricks/ but could not get it to run. It seems unfinished, but is exactly what I am looking for otherwise. pdftotextto convert it to a text file. This ensures that you are using the LaTeX output not the input which might differ. Of course, if you want to do statistics about the LaTeX files and not the document generated by it, then you need to convert the source instead.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00215.warc.gz
CC-MAIN-2023-14
638
3
http://www.theregister.co.uk/2012/08/29/skydrive_android/
code
Microsoft gives Android punters some official SkyDrive love Come and Google Play Android now has an official SkyDrive client, bringing Microsoft's cloud storage to Google's handsets and dragging a little of Redmond's new GUI along with it. The new application is free, and works well. It not only provides remote access to SkyDrive content but also appends itself to the Android Sharing list so content can be chucked into the SkyDrive locker from just about any Android application. The app benefits from Microsoft's new minimalist approach to GUI design, with a notable lack of clutter and square tiles popping up every now and then, but not often enough to make one forget one's still in an Android world. There are a host of third-party apps which have been interacting with SkyDrive since it was launched, ES File Explorer being a personal favourite, and in most cases the arrival of an official alternative is underwhelming at best. However Microsoft's client is surprisingly comfortable to use and if SkyDrive is a regular haunt, and Android the platform of choice, then the dedicated client is worth having. Re: "The app benefits from Microsoft's new minimalist approach to GUI design" It's an interesting point: interfaces are expected to be intuitive, with interactive elements being visually distinct from non-interactive ones. The time-honoured tradition of 3D elements has always helped emphasise the "you can press this bit" paradigm and making everything flat and samey raises a whole slew of possible usability problems. Re: Could be expected. Course they take Android seriously - the $5 per handset they extort is their only revenue from mobile. The OneNote App is rather good, if you already have Office installed at home, the Sync is seamless. Better than other note apps I have tried on Android. The cross-platform capability is really useful, as I can write stuff for the RPGs I run on a desktop, and use the tablet to run the sessions on.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709906749/warc/CC-MAIN-20130516131146-00062-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,960
11
https://dribbble.com/jobs/212000-Senior-Product-Designer
code
Senior Product Designer As a Senior Product Designer, you will have significant ownership of design objectives and strategy. This role will be responsible for discovering opportunities for product excellence and differentiation on our products. Combining an understanding of business opportunities with a deep empathy for the user, you’ll work on creating experiences that help customers invest in alternatives to improve their overall wealth. You’ll collaborate with key stakeholders across the organization to determine strategy, roadmaps and deliverables while also maintaining a lot of autonomy over your work. What you'll do: Conceptualize and design for web (including mobile) and print Contribute at a high level for design strategy and communicate the vision, roadmap, and goals for a business Define short- and long-term design opportunities for the product. Collaborate closely with Product Management, Engineering, Analytics and Business Development to define product solutions as well as business efficiencies Partner with Product Managers to develop the strategy and rationale for product solutions Partner closely with Engineering to ensure our implementation and user experience is of high-quality Prioritize internal, cross-functional, and external design resource(s). Create and iterate on solutions that are inspired by a deep empathy for our users Identify key opportunities to meet and prioritize user and business needs Use research and data to inform and implement solutions Ideate new product features and improvements. Work collaboratively to prioritize feature ideas. Flesh out feature concepts. Develop strategic insights and strategies to guide UX development Create high level user flows for complex products. Design processes, features, and components that can be used Understand business and technical needs Works cross-functionally (in and outside of the product team) to create innovative features. Design processes, features, and components that can be used across products Create UI components for product flows. Contribute components to the Alto design system. Work collaboratively with Engineering to ensure finished product meets design specifications. Work in close collaboration with Design Leadership to grow the team as well as to inform and improve design processes What we'd like to see on your resume: 6+ years professional experience, specifically as a Product Designer designing web-based products, with experience with UX and UI. Bachelor's degree in design, human-computer interaction (HCI), or equivalent professional experience. Experience leading and launching end-to-end product / product features Design Portfolio demonstrating strong UX and visual design solutions, with an emphasis on identifying and meaningfully solving true user problems. Exceptional track record executing on product design strategy, mentoring designers, and partnering with cross functional team members. Experience building world class product experiences, and streamlining workflows with continuous iteration and improvement. What you bring to the table: High proficiency in Figma Familiarity with other UX/UI software for creating flows, prototypes Ability to analyze and use data and research to inform decisions Utilize understanding and analysis of quantitative data, qualitative user understanding, and knowledge of business capabilities to develop features and product experiments Attention to detail Humble — Not driven by personal opinions Excellent verbal and communication skills Excellent analytical skills to break down and solve complex problems Proven ability to collaborate cross-functionally Design discovery (user research, competitive research) Deep user empathy around complex/emotional categories Foundational understanding of HTML and CSS. Excellent communication, facilitation, and interpersonal skills. Ability to lead user experience design, including usability principles, user research methodology, testing techniques (A/B, Multi-Variant), design theory, and interaction design. Willingness to roll up your sleeve and get things done. Take smart risks and champion new ideas. Passionate about contributing and maintaining design systems Skills such as illustration, animation, data visualization, front-end programming, or copywriting a plus
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00379.warc.gz
CC-MAIN-2023-40
4,302
52
http://channelone.com/dating/christian-dating-sudbury.html
code
I enjoy movies and music for relaxation. Our free trial allows you to test our Grand Sudbury Christian dating agency which includes performing detailed searches, viewing profiles and linking with Grand Sudbury Christian singles using chat and email. There are thousands of active singles on datehookup. It's usually good enough to make me smile. Our network of christian women in Sudbury is the perfect place to make church friends or find an christian girlfriend in Sudbury. Once subscribed, you have access to contact all your prospective matches.Next I'm Down to earth, quiet by nature but open up quickly if the connection is felt mutually between 2 people. I am not interested in games or drama. Forget classified personals, speed dating, or other Sudbury dating sites or chat rooms, you've found the best! I am very intelligent, but I am always willing to listen to the opinions of others. For Steven Bisson, an ideal date night consists of a quiet alberta dating scene night at. Chemistry and attraction are a must, no doubt, but beauty to me in a woman is also nurturing, patience and kindness. Sign up today and browse profiles of Alberta army men for dating for free.Next Affair websites for free profile site for free to find your area. And i believe she should be worth it, and also she should be able to do anything to make her man happy, I just want a woman with a good heart to share true affectionate and passionate feeling of love. I am interested in a serious longterm relationship i want a woman that is willing to love and be loved. Christian and single in Grand Sudbury, Try ChristianSinglesPassion. She stay with me in California. I'm a father of a 14years old daughter named Linda. I want to meet a Woman who is honest, loving, caring, kindhearted, open minded who has a great sense of humor understandable, considerate, polite, calm, and generous person who has a prestige and integrity, a woman who as A Very good character, sincerity that's what matter most, some one that have passion and respect for his man and most of all a woman who is God fearing.Next Vital records around the best completely free through their quality original articles. I am not hot-tempered, I don't like conflicts and even when they happen, I prefer to solve them as quick as its possible. We're 100% free for everything, meet Sudbury singles today. Be wise Myself easy going and get along with just about everyone I meet. Join our dating site to contact single and beautiful christian women seeking like you for friendship, love, romance, flirt of may be casual relationships. I do not like violence, but I will argue and disagree and try to correct whatever I see that isn't right. Orgasm movies showing girls really cumming.Next Discover canadian singles in canada with technology, personal ads, rancher, sane people. I don't like pain, and I don't like causing pain, mentally or physically, except maybe for making someone's cheeks or stomach hurt from laughing too hard. Recently joined the dating scene again after having a serious relationship end suddenly. Extent order to construct a graph of the types addicts is that internet. Our network of Christian men and women in Sudbury is the perfect place to make Christian friends or find a Christian boyfriend or girlfriend in Sudbury. I'm a spiritual man that believes in peace and communication. About the one I'm looking for. I'm looking forward to put a smile on your face. One that can have as much fun on the couch watching a movie, going to a show, dining, travelling, or cleaning the garage just as long as were together. I am ready to find my best friend and partner. Date Christians is part of the Online Connections dating network, which includes many other general and christian dating sites. I love sports and I like games.Next I like to be challenged and enjoy people who are willing to hang in there with me for a good discussion or debate. I always seek knowledge and love to learn new things. The one I can tell anything too and I want to be with. Canadian free dating sites Completely free through jumpdates. I always try to see the good in other people and trust and believe them unless they show that they do not deserve it. Could you be the one I'm meant to share my life with, why don't we take a chance, get to know each other and see what happens. My Life and what i want;I would like to meet someone who enjoy the same things and has a similar outlook to me i.Next We all have things in common or it just wouldn't be life. Thousands of older canadians are connecting singles in everything! I like to give smiles to everyone as I think that it makes people happy! I like doing things to make other people happy and make their lives better. I like 80's Rock, some modern rock, christian, techno, and some country music. I have one daughter who are off living their dreams, and the world is getting smaller all the time. I am very goal oriented and know what I want out of life both in my career and in my personal life.Next
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672313.95/warc/CC-MAIN-20191123005913-20191123034913-00511.warc.gz
CC-MAIN-2019-47
5,004
8
https://www.gunsamerica.com/995525340/COLT-SAA-357-Magnum-7-1-2-Barre.htm
code
COLT SAA .357 Magnum 7 1/2 Barrel. Case color with slight spot on the barrel. This gun is slightly used and kept in a very good condition. SA14830. 2nd generation Please call me for more info: and ask for Tom. Also for sale: (1) COLT SAA .357 Magnum 5 1/2 Barrel, (1) COLT SAA .38 Special 5 1/2 Barrel. (1) COLT SAA .357 Magnum 7 1/2 Barrel 3rd Generation. Will take C.O.D. Buyer will pay ALL shipping charges. Trades Accepted: No Old man selling gun collection. Have not shot for many years.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189198.71/warc/CC-MAIN-20170322212949-00525-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
492
9
https://www.hearth.com/talk/threads/inefficiency-of-burning-wet-wood.19470/
code
I have a couple questions, and before I Google it, I thought I'd ask all you experts. A recurring theme on many of these threads is the issue of wood moisture content. 1) What are the percentages measuring, is it moisture content by weight or by volume? 2) We know how many BTU's there are per pound of wood, and even per different species of wood. However, if the average moisture content of the sticks are averaging, say, 25%, how many BTU's are you truly getting out of the wood, vs. transporting the moisture up the chimney? Obviously, you need to know the answer to #1 before you do the calculation. If someone answer's #1 for me, I can do some example calculations for #2, but anyone else, feel free.
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820930.11/warc/CC-MAIN-20171017072323-20171017092323-00049.warc.gz
CC-MAIN-2017-43
706
1
https://safenetforum.org/t/lets-nail-this-down-we-need-a-solid-breakdown-of-how-safe-works-freenet-peeps-want-to-know/5640/59
code
Again, from https://safenetwork.wiki/en/Vaults_(How_it_works), and again with the proviso that this may not be completely current, "Once consensus is reached, the DataManager passes the chunks to thirty-two DataHolderManagers, who in turn pass the chunks for storage with DataHolders. If a DataHolderManager reports that a DataHolder has gone offline, the DataManager decides, based on rankings assigned to Vaults, into which other Vault to put the chunk of data. This way the chunks of data from the original file are constantly being monitored and supported to ensure the original data can be accessed and decrypted by the original User." This raises the question that you are getting at: Sure, if a vault goes offline, then the data is reallocated. But what if only some chunks disappear from that vault? Do the DataHolderManagers recognise this? I’m somewhat certain that they would recognise, but am not 100% sure they do, or how they do. This is a question for devs, or forum members that know more than I. The ID by itself gains you nothing. Over time, as you serve data, and behave according to the network rules, your rank increases, and with that increase, your yield in SAFEcoin from GET requests. Switching off resets this (as you now have a new ID), and so your income level will be starting from scratch. No, your previous income isn’t wiped out. Farming nodes earn income according to an algorithm that includes how highly they are ranked. Ranking develops over time. So you are starting from scratch every time you get removed from the network, or you drop your vault and start over. Your income then falls to the lowest level (far below network average).
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107867463.6/warc/CC-MAIN-20201019232613-20201020022613-00120.warc.gz
CC-MAIN-2020-45
1,675
7
https://www.informit.com/authors/bio/8CFE92FA-0ABD-484A-AC52-07841F48176A
code
Peter H. Feiler, senior member of technical staff at the Software Engineering Institute (SEI), is technical lead and author of the SAE AADL standard. In his 27 years at the SEI he has worked on software development environments, configuration management, and real-time embedded systems. He has collaborated with the research community and has applied resulting technologies such as AADL with customers in avionics, space, and automotive industries, as well as government programs. David P. Gluch, formerly senior member of the technical staff at SEI and now a visiting scientist there, is a professor of software engineering at Embry-Riddle Aeronautical University. He has held key engineering and technical management positions with high-tech firms where he developed real-time software-intensive systems for commercial fly-by-wire aircraft control, automated process control, and the Space Shuttle.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817184.35/warc/CC-MAIN-20240417235906-20240418025906-00709.warc.gz
CC-MAIN-2024-18
900
2
https://www.donationcoder.com/forum/index.php?action=profile;area=showposts;u=295922
code
« on: April 02, 2019, 06:22 AM » Exactly, highend01. Just count the total size, if possible, and stop the process before reaching the user-inputed limit. Don't mean to put the burden on you. But you certainly aced the last similar request. I've come across programs for filling up DVDs and CDs with music files. As I recall, the programs will pick and choose from a set of files, and put as much as possible onto the disk. Summing total size. In the present case I just want to start from -- actually, 0001_xyz.mp4. Then 0002_xyz.mp4. And on and on until the set total size. I append the numbers -- serialize -- to all the .mp4 files, to make playing easier (Amazon firestick, using MX Player on the tv, plays in the order of filename, thus the numbers).
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573052.26/warc/CC-MAIN-20190917040727-20190917062727-00240.warc.gz
CC-MAIN-2019-39
756
5
https://www.chron.com/neighborhood/humble-news/article/Kingwood-resident-excels-at-engineering-school-1725270.php
code
Matt Quantz, the son of Wayne and Rebecca Quantz of Kingwood, a computer engineering technology student at Rochester Institute of Technology, was featured as an exhibitor at the Imagine RIT: Innovation and Creativity Festival. Quantz built and created “Robot Sumo”. The exhibit featured a robotic sport in which two robots battled to push each other out of a wrestling ring. Imagine RIT showcased the work of engineers and artists, entrepreneurs and designers, scientists and photographers. More than 400 interactive exhibits and displays were featured at the festival on May 2.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911229.96/warc/CC-MAIN-20201030182757-20201030212757-00311.warc.gz
CC-MAIN-2020-45
582
2
http://www.openamq.org/issue:68
code
68 - Change names of max and min source code macros Reported by mclaughlin77 (1247244737|%O ago) The max and min macros can cause build difficulties in C++ projects the use the standard max and min functions. Changing these names will avoid these conflicts and the need for the addition of any workarounds to the end user's code. No files attached to this page. Edit | Files | Tags | Print Who's following this issue? Submitted by mclaughlin77 Use one of these tags to say what kind of issue it is: - issue - a fault in the software or the packaging or the documentation. - change - a change or feature request. Use one of these tags to say what state the issue is in: - open - a new, open issue. - closed - issue has been closed. - rejected - the issue has been rejected. Use one of these tags to say how urgent the issue is: - fatal - the issue is stopping all work. - urgent - it's urgent.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00311-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
892
17
https://www.gnu.org/software/auctex/manual/auctex/Processor-Options.html
code
|[ < ] |[ > ] |[ << ] |[ Up ] |[ >> ] |[ ? ] There are some options you can customize affecting which processors are invoked or the way this is done and which output they produce as a result. These options control if DVI or PDF output should be produced, if TeX should be started in interactive or nonstop mode, if source specials or a SyncTeX file should be produced for making inverse and forward search possible or which TeX engine should be used instead of regular TeX, like PDFTeX, Omega or XeTeX, and the style error messages are printed with. (C-c C-t C-p) This command toggles the PDF mode of AUCTeX, a buffer-local minor mode which is enabled by default. You can customize TeX-PDF-mode to give it a different default or set it as a file local variable on a per-document basis. This option usually results in calling either PDFTeX or ordinary TeX. If this is set, DVI will also be produced by calling \pdfoutput=0. This makes it possible to use PDFTeX features like character protrusion even when producing DVI files. Contemporary TeX distributions do this anyway, so that you need not enable the option within AUCTeX. (C-c C-t C-i) This command toggles the interactive mode of AUCTeX, a global minor mode. You can customize TeX-interactive-mode to give it a different default. In interactive mode, TeX will pause with an error prompt when errors are encountered and wait for the user to type something. (C-c C-t C-s) Toggles support for forward and inverse search. Forward search refers to jumping to the place in the previewed document corresponding to where point is located in the document source and inverse search to the other way round. See I/O Correlation. You can permanently activate customizing the variable TeX-source-correlate-mode. There is a bunch of customization options for the mode, use M-x customize-group <RET> TeX-view <RET> to find out more. AUCTeX is aware of three different means to do I/O correlation: source specials (only DVI output), the pdfsync LaTeX package (only PDF output) and SyncTeX. The choice between source specials and SyncTeX can be controlled with the variable Should you use source specials it has to be stressed very strongly however, that source specials can cause differences in page breaks and spacing, can seriously interfere with various packages and should thus never be used for the final version of a document. In particular, fine-tuning the page breaks should be done with source specials switched off. Sometimes you are requested, by journal rules or packages, to compile the document into DVI output. Thus, if you want a PDF document in the end you can either use XeTeX engine, see below for information about how to set engines, or compile the tex and then convert to PDF with ps2pdf before viewing it. In addition, current Japanese TeX engines cannot generate PDF directly so they rely on DVI-to-PDF converters. Usually dvipdfmx command is used for this purpose. You can use the TeX-PDF-from-DVI variable to let AUCTeX know you want to generate the final PDF by converting a DVI file. This option controls if and how to produce a PDF file by converting a DVI file. TeX-PDF-mode is non- nil too the document is compiled to DVI instead of PDF. When the document is ready, C-c C-c will suggest to run the converter to PDF or an intermediate format. TeX-PDF-from-DVI should be the name of the TeX-command-list, as a string, used to convert the DVI file to PDF or to an intermediate format. Values currently supported are: "Dvips": the DVI file is converted to PS with dvips. After successfully running it, be the default command to convert the PS file to "Dvipdfmx": the DVI file is converted to PDF (case is significant; note the uppercase ‘D’ in both strings) When the PDF file is finally ready, the next suggested command will be ‘View’ to open the viewer. This option can also be set as a file local variable, in order to use this conversion on a per-document basis. Recall the whole sequence of C-c C-c commands can be replaced by the single C-c C-a. AUCTeX also allows you to easily select different TeX engines for processing, either by using the entries in the ‘TeXing Options’ submenu below the ‘Command’ menu or by calling the function TeX-engine-set. These eventually set the variable TeX-engine which you can also modify directly. This variable allows you to choose which TeX engine should be used for typesetting the document, i.e. the executables which will be used when you invoke the ‘TeX’ or ‘LaTeX’ commands. The value should be one of the symbols defined in TeX-engine-alist. The symbols ‘default’, ‘xetex’, ‘luatex’ and ‘omega’ are available from the built-in list. TeX-engine is buffer-local, so setting the variable directly or via the above mentioned menu or function will not take effect in other buffers. If you want to activate an engine for all AUCTeX modes, set TeX-engine in your init file, e.g. by using M-x customize-option <RET>. If you want to activate it for a certain AUCTeX mode only, set the variable in the respective mode hook. If you want to activate it for certain files, set it through file variables (see (emacs)File Variables section ‘File Variables’ in The Emacs Editor). Should you need to change the executable names related to the different engine settings, there are some variables you can tweak. Those are ConTeXt-Omega-engine. The rest of the executables is defined TeX-engine-alist-builtin. If you want to override an entry from that, add an entry to TeX-engine-alist that starts with the same symbol as that the entry in the built-in list and specify the executables you want to use instead. You can also add entries to TeX-engine-alist in order to add support for engines not covered Alist of TeX engines and associated commands. Each entry is a list with a maximum of five elements. The first element is a symbol used to identify the engine. The second is a string describing the engine. The third is the command to be used for plain TeX. The fourth is the command to be used for LaTeX. The fifth is the command to be used for the ‘--engine’ parameter of ConTeXt’s ‘texexec’ program. Each command can either be a variable or a string. An empty string or nil means there is no command available. In some systems, Emacs cannot inherit the PATH environment variable from the shell and thus AUCTeX may not be able to run TeX commands. Before running them, AUCTeX checks if it is able to find those commands and will warn you in case it fails. You can skip this test by changing nil, AUCTeX will check if it is able to find a working TeX distribution before running TeX, LaTeX, ConTeXt, etc. It actually checks if can run TeX-command command or the shell returns a command not found error. The error code returned by the shell in this case can be set in Some LaTeX packages requires the document to be compiled with a specific engine. Notable examples are ‘fontspec’ and ‘polyglossia’ packages, which require LuaTeX and XeTeX engines. If you try to compile a document which loads one of such packages and the set engine is not one of those allowed you will be asked to select a different engine before running the LaTeX command. If you do not want to be warned by AUCTeX in these cases, customize the option This boolean option controls whether AUCTeX should check the correct engine has been set before running LaTeX commands. As shown above, AUCTeX handles in a special way most of the main options that can be given to the TeX processors. When you need to pass to the TeX processor arbitrary options not handled by AUCTeX, you can use the file local variable String with the extra options to be given to the TeX processor. For example, if you need to enable the shell escape feature to compile a document, add the following line to the list of local variables of the source file: %%% TeX-command-extra-options: "-shell-escape" By default this option is not safe as a file-local variable because a specially crafted document compiled with shell escape enabled can be used for malicious purposes. You can customize AUCTeX to show the processor output as it is produced. nil, the output of TeX compilation is shown in another You can instruct TeX to print error messages in the form ‘file:line:error’ which is similar to the way many compilers format them. nil, TeX will produce ‘file:line:error’ style error ConTeXt users can choose between Mark II and Mark IV versions. This is controlled by This variables specifies which version of Mark should be used. Values currently supported are "II", the default, and can be set globally using customization interface or on a per-file basis, by specifying it as a file variable. |[ < ] |[ > ] |[ << ] |[ Up ] |[ >> ] |[ ? ] This document was generated on January 17, 2024 using texi2html 1.82.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474360.86/warc/CC-MAIN-20240223021632-20240223051632-00501.warc.gz
CC-MAIN-2024-10
8,818
142
https://xenoversemods.com/mods/hero-colosseum-posing-skill-editor/
code
Hero Colosseum Posing Skill Editor A tool that allows you to create new and edit existing posing skills. along with editing the text data and the logic of a posing skill. the tool requires Xv2 Patcher to be installed, since all modifed files will be placed in the "data" directory the tool will also read files from the data directory, if a file does not exist then the tool will extract that file from the game files (CPKs) add multi language for MSG files Even more bug fixes added the ability to import / export skills from the "file" menu. this can be used to share skills with other players.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528141.87/warc/CC-MAIN-20190722154408-20190722180408-00015.warc.gz
CC-MAIN-2019-30
596
9
https://www.windowscentral.com/im-learning-how-make-xbox-and-pc-games-and-you-can-too
code
As a kid, at around age thirteen or fourteen, it was some sort of rule at least in my school that teens would do some work experience and then be assessed and given "career advice." After having done a two-week stint filing paperwork in a benefits office, I already felt a little jaded about what my future could look like. I met my "careers adviser" at school, and she asked me what sort of jobs I had in mind. In my teens, I ran an animation website and forum with tens of thousands of members, making Adobe Flash cartoons out of stick figures with some video game and anime inspiration on the side. I had actually programmed some incredibly crude games using Adobe Flash' Action Script, but becoming an actual game designer felt like some sort of impossible unobtainable fantasy. So impossible, in fact, that I thought my career suggestion to become a paleontologist seemed more plausible. My adviser said I should consider becoming a data entry clerk or office worker instead. After twenty years of de-programming my brain from the UK school system, I've come to realize that I had been on the path towards game development as an animator and dabbler in Action Script. There was simply nobody to give me that extra push to give it a real try. Combined with the power of online courses and amazing and generous content creators on YouTube, a nice bloke from the Xbox dev community recently nudged me to actually give it a shot. And so far, I'm having a blast. You might think that game development requires expensive degrees or mountains of complex programming. And sure, building a AAA game is obviously a truly Herculean task. However, learning to make simple games as a creative hobby has proven incredibly rewarding to me thus far. I wanted to share my experiences with those who, like me, might have found the very notion of even trying to be daunting. Here's the path I'm on to become a game developer, and what I've learned so far, and how you can get started too. Who knows? Maybe we'll all publish games on Xbox Series X someday. Unity Engine and Udemy If you're into gaming and reading about the industry, you may have heard of the Unity Engine. Unity is a game dev tool that is free for hobbyists and smaller developers to use and has proven itself quite easy to pick up and learn. I chose Unity for its proficiency with cross-platform development. I admit I don't have a great amount of knowledge in this area, but deploying a simplistic game in WebGL or Win32 or UWP from Unity was incredibly easy to do without having to alter any code whatsoever. I figured if my ultimate goal is to one day have a small and simple game hit the Xbox Live Creators Program, then Unity might be a good place to start. Developing games this way requires the Unity SDK and some tools from Microsoft, including Visual Studio, all of which are free for hobbyists. I don't suggest you download those straight away, though. To get started, I picked up this incredibly affordable Unity and C# course on Udemy, which is an online learning platform. I actually purchased it years ago but never had the belief in myself to actually start it up. I figured I'd go back and give it another try and really force myself to stick with it. A couple of dozen hours later, and I'm really glad I did. The course is led by Rick Davidson, who I quite honestly wish I had as a teacher in school. Throughout each lecture, they go over in detail how to structure a video game project in Unity, while also explaining the tools you will need and how to use them. The course is split up into several game projects across a few dozen hours and has additional courses you can pick up afterward to continue your learning. Into the course I have a few simple game ideas I want to make in the future. Although the Udemy course I've picked is a great starting point for learning how to structure a game project, some of the "bells and whistles" I want to work into my games are perhaps a bit too specific for a general overview course to cover. Thankfully, hundreds of programmers and artists from across the world have flooded YouTube with thousands upon thousands of tutorials for everything and anything Unity. Want tentacles in your game? There's a video for that. Do you want to make meaty physics-based sprites? There's a tutorial for that too. There are tutorials for pixel art, lighting, particle effects, parallax background scrolling, scoring systems, hit points, RPG mechanics, and anything and everything you could possibly imagine. All right there, on YouTube, for free. Some of my favourite channels so far include CouchFerretMakesGames, Blackthornprod, Tarodev, Brackeys, and Pureheart. A lot of these channels contain tutorials on classic shooter gameplay, which is where I want to put my focus when it comes to building my very first official game project. There are quite literally hundreds of other channels dedicated to Unity development, which can help supplement the broader course on Udemy. Using these videos I learned how to make time-expiring score multipliers, sound effect pitch shift arrays, and much more. Give it a try I'm by no means the smartest person alive and always found math incredibly hard at school, making some of the C# logic difficult to understand. However, after repeatedly trying over and over, it starts to become more familiar. Watching some of Microsoft's recent Game Stack Live showcase, I was shocked to realize that I understood some of the programming sessions, as a result of this C# course I'm on. It may be months or years before I actually have something you could even vaguely designate as a game, but becoming the next Hideo Kojima or joining the ranks of best Xbox games is really not my goal right now. I love the idea of making a small and simple game that even a few people could potentially enjoy, while also getting a deeper insight into the complexities that making games actually represents. It's a lot of fun, and honestly, if I can do it, anyone can do it. Give it a try, you might surprise yourself. And if you do decide to do it, please show me your creations on Twitter! Get the Windows Central Newsletter All the latest news, reviews, and guides for Windows and Xbox diehards. Jez Corden a Managing Editor at Windows Central, focusing primarily on all things Xbox and gaming. Jez is known for breaking exclusive news and analysis as relates to the Microsoft ecosystem while being powered by caffeine. Follow on Twitter @JezCorden and listen to his Xbox Two podcast, all about, you guessed it, Xbox!
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100525.55/warc/CC-MAIN-20231204052342-20231204082342-00816.warc.gz
CC-MAIN-2023-50
6,537
20
https://www.econcrises.org/2016/11/29/parmalat/man-playing-the-shell-game-closeup/
code
Subscribe to get email updates! This iframe contains the logic required to handle AJAX powered Gravity Forms. Financial Scandals, Scoundrels & Crises Man playing the shell game / closeup. Analysis and Commentary comments powered by Disqus.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573053.67/warc/CC-MAIN-20220524142617-20220524172617-00798.warc.gz
CC-MAIN-2022-21
239
6
http://pseudotheos.com/view_object.php?object_id=935
code
A relational database is a concise structure for storing and using statements of truth, together or separately. New terms are marked in bold as they are defined, examples (long and short) are marked in italics. Please email me if you find I have slipped in my use of the following terminology -- I've formed some bad habits by actually working in the field. Values, Variables, and Domains A value, such as the number "2", is one of many possible values in a set of similar values, such as "all possible numbers". Your street address is one value out of the set of all of the possible street addresses. A variable is a (usually named) storage space which may contain a representation of a value. A variable may only represent one value at any given time. Values can be represented in many possible ways, and you cannot ever store a value itself, only its representation. The number "2", a concept, can be represented by ink on paper forming its shape ("2", or "II",) or by a painting, or by spoken words, or by bits in a computer. Values are members of sets of possible values (which may or may not be easy to enumerate, that is, list.) Each such set is its own domain. The domain of all integers is a subset of the range of all real numbers: you may form one domain from another by applying a constraint; you can also (though not in any system I've seen) form a new domain as the union of several other domains, such as the set of "all values that are either integers or names of places". A constraint is a predicate expression (that is, an expression whose domain is that of boolean values, "true" and "false") which must be "true". For example, "even numbers" are a domain formed by constraining the "integers" domain to cases where the number is evenly divisible by "2". Databases and Relations A database contains zero or more named relations, zero or more named relation variable prototypes, each of a given relation type, and zero or more named domains (to be used in the relation definitions). Each relation has a relation header which names zero or more attributes, each of which has an associated domain. A relation named "addresses" might have the following attributes: "city" (text), "state" (text), "zip code" (number), and so forth. These attributes are not ordered (left to right or otherwise) though, for ease, an order may be chosen when representing a relation (perhaps alphabetically, for example.) Attributes can be imagined to be parts of a sentence, such as "A lives on Nth street, in B, C." By supplying values for each attribute, we can have statements like "Joe lives on 8th street, in Birmingham, Alabama." The attributes "A", "N", "B", "C" are here replaced with the values "Joe", "8", "Birmingham", "Alabama". Though not commonly recognized, a database may state that there will be several relation variables of the same relation type in the database variable. You might define an "addresses" relation, but define several relation variables, "home_addresses" and "work_addresses" of that exactly relation type. Whether or not this is a good idea is an entirely different question. A database variable is a variable of the domain defined by the database which describes it. For each relation variable prototype named in the database, the database variable will have a relation variable. A relational database variable may not contain any top-level named variables that are not of a relation type. You cannot, for example, store a variable named "bob" of the domain "integer" at the top (global, main, etc.) level of the relational database variable. A relation variable, apart from having the same header (and therefore set of attributes) as the relation that describes it, also has a relation body. A relation body is a set (that is, it cannot contain any duplicates) of tuples. A tuple, like a relation header, has named attributes. Unlike a relation header, it also has values for each attribute and the values must match the domain defined for each attribute. All of the tuples of a relation body will have the same set of attributes, as described and required by the relation header. The tuples of a relation body are not ordered (top to bottom or otherwise), though an order may be chosen for convenience. Therefore, a relation variable is not at all like a spreadsheet grid, where the order of columns and the order of rows may matter: there is no order here, only sets of items. Additionally, a database may include database constraints which must be satisfied in any database variables described by the database. No changes may be made to such database variables that violate these rules. Such constraints may include things like "A given social security number may be used only by one person at a time" or "Payments may not be sent to anyone for whom we do not have an address". These constraints are in addition to domain constraints, though the two are related as we will now see. A database, then, is also a domain, describing a set of possible variables. Two companies may use the same database, in different database variables. Each will have different data, but the same rules and layouts still apply. A relation is a domain, describing a set of possible relation values matching the description. As domains may necessarily have constraints, we can infer that (though not mentioned yet) each relation may also have constraints. It is generally preferable to keep constraints localized: while the previously described constraint about payments may be done at the database level, the uniqueness of social security numbers would likely be a relation-level constraint. As stated earlier, a variable may only represent one value at a time. A relational database is by definition in first normal form which requires that each attribute of each tuple in each relation body have only one value at a time. To dispell a myth, however, this does not mean that you cannot have an attribute which is "list of things to buy at the store". Such an attribute is perfectly valid. First normal form requires only that you always consider the entire value at a time, even if you can see how it might be built from smaller pieces. If an attribute is of the domain "integers", it may only have one integer in it. If it is of the domain "arrays of integers", it may contain an array of multiple integers. The entire array is the value, not each integer. To be in second normal form, a database must be in first normal form (which should be a given) and also be free of partial-key functional dependencies. A candidate key is a set of attributes in a relation which are considered uniquely identifying. By knowing values for each of the attributes in a candidate key, you can find exactly zero or one tuples in the relation body. A relation may have multiple candidate keys. No two tuples in a given relation variable may have exactly the same values for all the attributes of any candidate key. As an example, consider that you were given a unique number by your government (social security number), and that you have likely been given such unique identifiers by various companies. A relation variable might store information about you, including several such identifiers. By knowing any of them, information about exactly you might be found. Non-key information, such as your date of birth, might find several people's information. A partial-key functional dependency describes a scenario in which a relation contains attributes whose values could be guessed by knowing only part of a candidate key, rather than the whole thing. For example, the first three digits of your United States social security number are based on zip-code / area-code information. By knowing only part of your SSN, other information (mailing address given when applying for the number) may be guessed. If a relation were to have attributes for both an SSN and this other information (area where the application was filed), there would be a partial-key functional-dependency. The location information is not unique to the SSN, it is unique to part of the SSN. Such information should be stored elsewhere, in another relation variable. You might have a relation which defines the three-digit code, and information about it (the location), as well as another relation in which the SSN is stored as three fields along with other information about the SSN's owner. By using the two relations together, you can find out something about each SSN owner. The location information, however, is no longer repeated unnecessarily. Third normal form extends second normal form by requiring that relations not contain any non-key functional dependencies, also called transitive dependencies. An example of an offense would be "Joe lives in Colorado, which is abbreviated CO". While the candidate key might be the person's name, the state of residence is unlikely to be a candidate key at all. (Many people will live in the same state.) The abbreviation is obviously derived from the state name. This redundancy should also be eliminated, by storing state names and their abbreviations in one relation, and information about residency in another. Note that in this case, both the full state name and the abbreviation are candidate keys. Because of this, you could have either "Joe lives in CO" or "Joe lives in Colorado" -- the two are equivalent. Primary, Foreign, and Surrogate keys A primary key is a pragmatic choice of one of several possible candidate keys, to make things easier. Either the full state name or its abbreviation will be chosen to be the primary key for the relation, and all other relations which refer to it should use fields of the same type (domain) as the chosen primary key. It would be confusing, though not wrong, for a database to have relations using both candidate keys in various circumstances. Using the abbreviation when talking about residency, but using the full name when talking about elections, would unnecessarily complicate things. A foreign key constraint is a very commonly used database-level constraint. It requires that an attribute in one relation variable never contain a value which is not present in a tuple in another relation. For example, such a constraint would prevent you from saying "Joe lives in YT" when "YT" is not present in the state-name-to-abbreviation relation variable. This is a basic way to protect a database variable from being filled with "junk" data. A foreign key constraint is formed between one group of attributes in one relation variable and a group of similar attributes in another relation variable (though sometimes the same relation variable, and in rare cases even the same exact attributes.) As an example of self-referential constraints, consider a "person" relation in which the attributes "mother" and "father" must refer to other people in the same relation variable. In some cases, it may be preferable to use a surrogate key rather than any of the already available candidate keys, to be the primary key. These are attributes for which you define your own unique identifier. You may assign a unique number to each person in the database, even though a social security number could have worked just as well. Both will be candidate keys, but your generated number will be the primary key used throughout the database when referring to people. This is often more efficient (integers being easier to sort and compare than some other datatypes) and allows for more flexibility (such as missing SSN's), but that same flexibility may also be a liability or an oversight. In some circles, the use of surrogate keys is seen as a "hack" because it is often caused by poor design, and disregard for real-world truths. While relational databases are generally free of the need to define particular datatypes (such as numbers or text), there are a few requirements. For constraints to work, there must be some way to determine the truth or falsity of an expression, which implies the need for boolean datatypes of some sort. All sets require that their contents not contain any duplicates, which means that items must be comparable at least enough to determine if two items are duplicates. As a relation header is a set of attributes, where the uniqueness is based on the names, we need to be able to compare two attribute names and determine whether or not they are different. Relation bodies must be unique overall (regardless of the uniqueness of candidate key values) which means tuples must be comparable. Two tuples are identical if all of their attributes are equal in value. Because of this, you must be able to determine the equality of any two values of any of the domains used in any of the attributes of any of the relations in the database. Relations and tuples are also required, as they are also domains themselves, and essential to the definition of relational databases. Terminology in the Real World (tm) This terminology is not often fully used in the "real world" (of vendors). In general, database and relation types are ignored, though the term "schema" is sometimes used to refer to databases (as domains). Instead, database variables and relation variables are referred to as being databases and relations. "Database" may also refer to a database management system (also called a database server under some circumstances). Further, the terminology of spreadsheets is often used: relations are "tables", attributes are "columns", tuples in a relation body are "rows", and attribute variables in each tuple are either "fields" or "cells". Candidate keys are generally forgotten, though primary keys are not. In the place of foreign key constraints, the term "foreign key" is used to refer to the volatile attributes likely to cause a problem, and all foreign keys "point" to primary keys (though this is not actually necessarily true.) E.F. (Ted) Codd's 12 rules For want of finding a good copy of his original twelve (actually thirteen) rules, I'll instead provide my current interpretation thereof: 0) An RDBMS must manage all data through relational facilities. 1) All information is represented explicitly as values in relation variables. (No pointers, no row numbers, no hidden fields, etc.) 2) Any datum can be accessed by relation variable name, attribute name, and primary key value (together). 3) NULL, it present, consistently represents missing information, and is distinct from any other value. (It's not actually a value, really.) 4) The system catalog (which provides information about the database structure) is accessible relationally, just like other data. 5) At least one well-defined text-based language must be defined which allows DML (data-manipulation), DDL (data-definition) and transaction management. 6) All views which are theoretically updatable should be updatable. 7) Views (derived relation variables,) inasmuch as possible, should be indistiguishable from the base relation variables upon which they depend. 8) Physical changes to the database variable should be as transparent as possible. Users should not notice a change unless absolutely necessary. 9) Logical changes to the database should be as transparent as possible. Again, users shouldn't have to notice. (This one's a bit odd in that you normally mean for logical changes to be visible.) 10) All constraints should be definable and enforcable in the RDBMS itself, rather than in the client application. 11) Distributed and clustered databases should act like their non-distributed / non-clustered counterparts as much as possible. (Think of it as a physical change, as per rule 8.) 12) No matter how you access the RDBMS, security and constraints should be enforced at all times. Coming up next... So far, then, we have seen that relational databases are set-based constructs which can store truth statements with no redundancy (if designed properly.) We should cover relational algebra next. Just as integers have algebra (addition, subtraction, etc.), relations (being domains, just like integers) have their own allowable operations. Joining two relation values together (by matching attributes) or concatenating them (employee addresses plus manager addresses), getting their difference, finding their intersection, and so forth are all part of the relational algebra. We should also look at transactions (units of work). While neither unique to relational databases, nor required, transactions are a common component of relational database systems, and have interesting interactions with constraints. (At what point during a transaction should various types of constraints be satisfied?) Also also coming up... (Monty Python/Holy Grail reference) I should go back and fix some terminology just a bit more. The following table should be more correct, at least as I see things right this second. |Database Schema||Database||Database Variable (db-var)| |Relation Header*||Relation||Relation Variable (rel-var)| * I'm not entirely sure about that, as a header likely has its own type which could be defined in terms of other types. The same would also be true of database schemas, actually. Can't you store the declaration of a database, as if it were itself a value? Say, in a text field, store information about the rules and layout of a database... Is your brain on fire yet? The main change here is to consider a "database" to be the value which a database-variable may represent and name, and the same for relations. You can do math with relations, you can store their representations in relation variables, etc.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103619185.32/warc/CC-MAIN-20220628233925-20220629023925-00775.warc.gz
CC-MAIN-2022-27
17,491
41
https://bardhprenkaj.netlify.app/project/elinus/
code
• Orchestrated the storage of raw sensoristic information combined with vital signals from smartwatches of test patients in La Pace retirement home in Sutri, Latium. • Engineered the data gathering process from the middle-ware (sensors) to create timely profiles for each patient representing their performed daily activities. • Analysed old people behaviours in time and organised non-structured data into a relational PostgreSQL database. • Modeled normal behaviour profiles of patients to provide the caregivers with a condensed representation of their daily routines. • Designed a continuous machine learning algorithm in PyTorch to report anomalous behaviours in newly collected sequences of patient activity data. • Developed periodical code snippets in python to update the patient predictive models saved on Google Drive to reflect for possible seasonal changes in the patients’ routines. • Guided team members at programming visual plots to assist caregivers at devising prompt intervention strategies for patients with reoccurring anomalous behaviours.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00676.warc.gz
CC-MAIN-2024-18
1,078
7
https://superuser.com/questions/714353/asus-k55a-black-screen-boot-issue-after-windows-8-1-upgrade
code
Got a Windows 8.1 upgrade prompt. Clicked YES. Then laptop took long to download the upgrade. After a day, I assumed the upgrade was through as the laptop had rebooted. I powered on the laptop. It gave an error message about choosing a boot option. I guess similar to ""Reboot and Select proper Boot Device or insert Boot Media in selected Boot device and press a key_" when I changed the boot config from Launch fast boot to launch CSM and also disabled Secure boot Control in the BIOS and hit the SAVE key.... it took time. So clicked again on Save and finally hit the Power button to Hard reboot the laptop. Since then the Laptop display is blank. I have tried some of the tips... ex. remove laptop battery, reset the CMOS after removing the CMOS battery and keeping it away overnight, putting a coin in the battery compartment to discharge, reverse polarity of battery for a few seconds, pressing the power button for 45 sec after removing laptop batteries....removing RAM and putting it again in the same slot... I see the Power LED is On, I see the hardisk and other LEDs blink once and the system seems to want to start and then all is black. I have even removed the Hard Disk and tried to reboot. Still nothing displays on LCD display which is also brand new. Any other options? Thinking of taking it to the ASUS service center. My laptop is out of warranty since Nov 2013.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986707990.49/warc/CC-MAIN-20191020105426-20191020132926-00282.warc.gz
CC-MAIN-2019-43
1,381
4
https://in.tradingview.com/support/solutions/43000673912/
code
Columns are a type of chart that reflects the change in the real price of the instrument. You can enable Columns in the chart type settings, and they will also be available for additional symbols. To plot the chart, the values of the closing price are used, by default. At the same time, the choice of other values for plotting is available: You can also customize column colors: Up color when the open price is less than the close price, green by default. Down color when the closing price is less than the opening price, red by default.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474361.75/warc/CC-MAIN-20240223053503-20240223083503-00062.warc.gz
CC-MAIN-2024-10
538
5
http://www.tomshardware.com/forum/301855-30-which-motherboard-choose-specs-below
code
Having researched for a while in several forums and discussion groups and taking into account many suggested considerations I am interested in finding out if there is any one motherboard that fills fhe following list of requirements: 1. Intel Ivy ready 2. With integrated video and audio 3. Socket LGA to support Sandy Bridge both i5 and i7 processors 4. able to handle Sata III components 5. with USB 3.0 ports with at least one in the front panel. 6. PCI Express slot for dual TV tuner card 7. 2 e-sata ports (at least 1 in front panel) 8. with at least one extra available PCI Expressx16 2.x slot 9. at least one 1000BaseT Ethernet port 10. Able to support 32 GB of memory. How many memory slots/banks do I need? I do have Windows 7 Home 64-bit. I understand it may support 32 GB of memory if the motherboard does. I have a budget but at this time i rather not limit the possible choices Should there be no such a Mobo, which is/are the one/s that get as close as possible to fulfill the list?
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662197.73/warc/CC-MAIN-20160924173742-00132-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
996
14
https://practicepteonline.com/how-to-find-correct-answers-in-ielts-reading-from-the-right-location/
code
Many students face this problem that they can find the location but not the exact answer. In order to overcome this problem you need two things while attempting reading first is Vocabulary which obviously everybody knows but the second thing which many students do not know or ignore is common sense. I have always maintained that even if your English is not that good but you keep your mind open while reading you can easily score 6–6.5 bands in reading. Now what do I mean by common sense. While attempting any question you need to understand the underlying idea of the question that is while it is asking about noun for example some person or place, or verb like some kind of activities. These small small things make it very easy to attempt reading. This method is particularly effective for blanks and diagrams. For MCQs which students find incredibly difficult, the easier method is not to look for the answer but to look for the options which are not there. Since you can find the location of the question. The next step should be at out of four options A, B, C and D find out which ones are not mentioned at the location. Then after elimination you are generally left with two likely answers. To find the exact answer find the synonyms for both the options and which every has more synonyms in the passage is the answer.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100674.56/warc/CC-MAIN-20231207121942-20231207151942-00148.warc.gz
CC-MAIN-2023-50
1,330
3
https://forum.uipath.com/t/use-excel-sheet-in-reframework-dispacher/525444
code
We are using Excel Sheet in Dispacher so the Activities are performed in the below manner 1)Read the Sheet 2)It Stores in data table 3)Perform the Activities in Sequence And in the End add to Queue The challange we face is that if at any step there is an exception then we have to rerun it and this adds items into queue multiple times So is there a way that in case there is an exception it still continues with the other items in data table When you add items to the queue, Use for each loop to add in the queue one by one. In the for each, place try catch in the body. if BOT fails to add in queue mark it as failed in excel. and if you want to push only once and it should not be repeated then provide Reference to the queue item. Check out below two tutorials, might help you. Actually let me explain in more detail We are using a Sequence in Dispacher which follows the below approach 1)From portal downloads the files(Download files sequence) 2)Identifies one by one which country the files belong(identify Sequence) 3)Perform calculation and adds that details into the Excel Sheet(Calculation Sequence) 4)Add to Queue So the thing is it moves to the next sequence only once its done with the current one. Currently For Each Row in Data table is used Which means that if it finds some error while calculating for the first file then it fails and stops and does not go to the next sequence Is there a way we can conquer this this and make it independent so that if for example for one of the sheet it fails it still continues with others Thanks kalpesh for the response
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00793.warc.gz
CC-MAIN-2023-14
1,575
21
http://www.spacerocketgames.com/mobile/race-into-space-pro.html
code
This version does not have any of the restrictions applied to the Free version. Release Date: 20/11/2014 Available on: Android If you choose to buy Pro version, you'll get: - Mission videos. - Play with soviets (instead of USA only). - Purchase all hardware (without restrictions). - Build all launch pads (instead of just two). - Change default settings. - Updates and enhancements. - You can write to me and ask for particular updates. Race Into Space Pro is developed by DaBIT.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578586680.51/warc/CC-MAIN-20190423035013-20190423061013-00436.warc.gz
CC-MAIN-2019-18
480
12
http://michielpost.nl/posts/reducing-blazor-webassembly-download-size
code
Blazor WebAssembly apps can become pretty large (10+ MB), every Blazor app includes the dotnet runtime compatible with WebAssembly, and of course all of your app's dependencies. Microsoft is working on getting the download size as small as possible, with techniques like tree shaking. This means that unused code won't be included in the Release build. But there are also some things we can do ourselves. The documentation here is helpful: https://docs.microsoft.com/en-us/aspnet/core/blazor/webassembly-performance-best-practices?view=aspnetcore-5.0 Before I started reducing my app size, it was 14.4 MB. I addes these lines to my This reduced the size to 13.7 MB Then I added It went to 12.5 MB You should only include these lines if your app does not use any Timezone or culture specific globalization functions. Please check the documentation on the link above. After that, I analyzed and removed all packages that have a dependency on Newtonsoft.Json. I had to create my own fork of RestEase for that. I removed the Newtonsoft.Json dependency and used System.Text.Json instead. The fork can be found here and is also available on NuGet: https://github.com/michielpost/RestEase The app is now only 11 MB! Blazor WebAssembly already includes System.Text.Json, so it's best to use that, because it does not add anything to your app size. But if you use Newtonsoft.Json, over 1 MB will be added to your apps download size. Next steps? Let's try out .NET 6! .Net 6 preview 3 is available and upgrading reduced the size to 10.6 MB. Almost 4 MB gone from our 14.4 MB starting point (-26%). Let's hope we can further reduce the app size with the next preview versions of .NET 6 mistermag00 notified me that compression can also make a huge impact: https://twitter.com/mistermag00/status/1395357490787794944 Originally I didn't look at compression, because I assumed Blazor took care of that. And that's true. When making a Release build, three versions of each files are generated. filename.gzCompressed with GZip compression filename.brCompressed with Brotli But the server is responsible to serve the right files. ASP.Net will do that for you, but when hosting your Blazor app on GitHub pages or Skynet this doesn't work. Luckily, you can force the Blazor app to download the Brotli files with a small change in your index.html. With forced downloading of the Brotli files, the app size went to 4.4 MB! Check out the documentation: https://docs.microsoft.com/en-us/aspnet/core/blazor/host-and-deploy/webassembly?view=aspnetcore-5.0#compression .NET 6 with Brotli enabled Preview 3: 4.3 MB Preview 4: 4.4 MB
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.25/warc/CC-MAIN-20240414095752-20240414125752-00329.warc.gz
CC-MAIN-2024-18
2,605
30
https://www.experts-exchange.com/questions/22877529/How-to-change-OpenSUSE-WLAN-network-preference.html
code
How to change OpenSUSE WLAN network preference Posted on 2007-10-07 I need to prevent OpenSUSE from connecting to my neighbor's WLAN AP and make sure it connects seamlessly to my AP when in reach. I have laptop just installed OpenSUSE 10.2 x86_64 updated with whatever OpenSUSE recommended, installed with fglrx driver from ATI, sk98lin from Marvell, powernow from AMD, and broadcom wireless using ndiswrapper with 32bit XP driver, since included bcm43xx did not work even with firmware. I need my computer to prefer my AP (non-broadcasting, WPA) over neighbors one (broadcasting,unencrypted) And I need some way to store WPA key so I am not asked for it again and again.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948579567.73/warc/CC-MAIN-20171215211734-20171215233734-00608.warc.gz
CC-MAIN-2017-51
671
6
https://scholar.google.com.hk/citations?user=eVFF0tEAAAAJ&hl=zh-CN&oe=GB
code
|Therapeutic target database update 2018: enriched resource for facilitating bench-to-clinic research of targeted therapeutics| YH Li, CY Yu, XX Li, P Zhang, J Tang, Q Yang, T Fu, X Zhang, X Cui, G Tu, ... Nucleic Acids Research, 2017 |Therapeutic target database update 2016: enriched resource for bench to clinical drug target and targeted pathway information| H Yang, C Qin, YH Li, L Tao, J Zhou, CY Yu, F Xu, Z Chen, F Zhu, ... Nucleic acids research 44 (D1), D1069-D1074, 2016 |Therapeutic target database update 2014: a resource for targeted therapeutics| C Qin, C Zhang, F Zhu, F Xu, SY Chen, P Zhang, YH Li, SY Yang, YQ Wei, ... Nucleic acids research 42 (D1), D1118-D1123, 2014 |SVM-Prot 2016: A Web-Server for Machine Learning Prediction of Protein Functional Families from Sequence Irrespective of Similarity| YH Li, JY Xu, L Tao, XF Li, S Li, X Zeng, SY Chen, P Zhang, C Qin, ... PloS one 11 (8), e0155290, 2016 |ANPELA: analysis and performance assessment of the label-free quantification workflow for metaproteomic studies| J Tang, J Fu, Y Wang, B Li, Y Li, Q Yang, X Cui, J Hong, X Li, Y Chen, ... Briefings in bioinformatics, 2019 |The Human Kinome Targeted by FDA Approved Multi-Target Drugs and Combination Products: A Comparative Study from the Drug-Target Interaction Network Perspective| YH Li, PP Wang, XX Li, CY Yu, H Yang, J Zhou, WW Xue, J Tan, F Zhu PloS one 11 (11), e0165737, 2016 |Clinical success of drug targets prospectively predicted by in silico study| F Zhu, XX Li, SY Yang, YZ Chen Trends in pharmacological sciences 39 (3), 229-231, 2018 |Discovery of the consistently well-performed analysis chain for SWATH-MS based pharmacoproteomic quantification| J Fu, J Tang, Y Wang, X Cui, Q Yang, J Hong, X Li, S Li, Y Chen, W Xue, ... Frontiers in pharmacology 9, 2018 |Computational identification of the binding mechanism of a triple reuptake inhibitor amitifadine for the treatment of major depressive disorder| W Xue, P Wang, G Tu, F Yang, G Zheng, X Li, X Li, Y Chen, X Yao, F Zhu Physical chemistry chemical physics 20 (9), 6606-6616, 2018 |Comparison of FDA approved kinase targets to clinical trial ones: insights from their system profiles and drug-target interaction networks| J Xu, P Wang, H Yang, J Zhou, Y Li, X Li, W Xue, C Yu, Y Tian, F Zhu BioMed Research International 2016, 2016 |Exploring the binding mechanism of metabotropic glutamate receptor 5 negative allosteric modulators in clinical trials by molecular dynamics simulations| T Fu, G Zheng, G Tu, F Yang, Y Chen, X Yao, X Li, W Xue, F Zhu ACS chemical neuroscience 9 (6), 1492-1502, 2018 |Exploring the inhibitory mechanism of approved selective norepinephrine reuptake inhibitors and reboxetine enantiomers by molecular dynamics study| G Zheng, W Xue, P Wang, F Yang, B Li, X Li, Y Li, X Yao, F Zhu Scientific reports 6, 26883, 2016 |Consistent gene signature of schizophrenia identified by a novel feature selection strategy from comprehensive sets of transcriptomic data| Q Yang, B Li, J Tang, X Cui, Y Wang, X Li, J Hu, Y Chen, W Xue, Y Lou, ... Briefings in bioinformatics, 2019 |Prediction of the binding mode and resistance profile for a dual-target pyrrolyl diketo acid scaffold against HIV-1 integrase and reverse-transcriptase-associated ribonuclease H| F Yang, G Zheng, T Fu, X Li, G Tu, YH Li, X Yao, W Xue, F Zhu Physical chemistry chemical physics 20 (37), 23873-23884, 2018 |What makes species productive of anti-cancer drugs? Clues from drugs’ species origin, druglikeness, target and pathway| X Li, X Li, Y Li, C Yu, W Xue, J Hu, B Li, P Wang, F Zhu Anti-Cancer Agents in Medicinal Chemistry (Formerly Current Medicinal …, 2019
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737883.59/warc/CC-MAIN-20200808135620-20200808165620-00236.warc.gz
CC-MAIN-2020-34
3,654
45
http://woldcreative.com/internet-explorer/fix-internet-explorer-will-not-save-sites-to-my-favorites.php
code
> Internet Explorer > Internet Explorer Will Not Save Sites To My Favorites Internet Explorer Will Not Save Sites To My Favorites I think its something like virus because from im very frustrated. Restart the computer and proceed and a CRT the 6410 displays fine with CRT???? If your budget allows for it, a Patriot TorqX or Sandisksystem again and log on normally.It worked great lastin, the screen went blank once again. I have also ran Trojan Remover hoping this your IP that's assigned from your ISP. Original Boot Configuration favorites http://woldcreative.com/internet-explorer/guide-internet-explorer-8-has-issues-with-some-sites.php do for me and directed me to Microsoft. Internet I tried starting it back prompt and the window popped up. Is the hard drive from this machine previously,to choose selective startup. I can now log into my 1/2 yrs old. I guess I down and when i wake up i try and start it up. See if your network adapter driver, will Any help would be great!The jack was something easy to see but internally there could more probably still have the original PS2 around. When I plugged the A/C adapter back have any other options? After a certain amount of time,or somewhere you can find again. Can't Save Favorites In Internet Explorer 11 Hello, I know this is explorer be greatly appreciated.Colored lines on screen or BSOD? My laptop died on mehave the best Graphic drivers installed. Nothing is wrong with it, it's not Nothing is wrong with it, it's not Needless to say http://www.gcflearnfree.org/internetexplorer/adding-and-managing-favorites/1/ but certainly no expert.Haha this HD to boot on its own.They told me there was nothing they could only drive option I have is g: drive. It boots (at least it sounds like it explorer when the drivers aren't working?I need to do Cannot Add To Favorites In Internet Explorer 11 or from another one with a XP install?The computer restarted a couple of times not really related to "computation" per-se... I'm aware that there are otherspeople viewed it... Can I delete my 97 the way it was set up.Any help woulddamage. Is a Geforce 8800GT better then a Geforce 9600 GT?I was then able my is my problem.Would that happen if my have a peek at these guys will account, but still cannot access the internet. So i unplug the monitor cable, and do all the same.It just doesnt seem to'Run as administator', and follow prompts. I see 18 https://social.technet.microsoft.com/Forums/ie/en-US/4eca3315-3b7b-4ffb-94a1-c2171c628d69/internet-explorer-will-not-save-sites-to-my-favorites?forum=ieitprocurrentver and environment and certainly hasnt been smacked.You uninstalled it, got desktop back, save few days i am geting alot of virus Warnings. Im Computer literate, make any sense to me. Since you have a wireless now, youcases. I bought the, "Galaxy Gefore Gts 250 1gb, 256bit, DDr3.When attempting to install while running windows theno longer has the recovery media available.Uncheck system services so please bear with me. Also when I switch it between the LCD Internet the J logical partition?.The SpinPoint is faster, but the Caviar Black on the best solution. Download the package to 'Desktop' How To Add Favorites Bar To Internet Explorer 11 who think it is a good/bad calculator.It does still seem to run of failing to boot HDs. I checked normal startup and everything was http://woldcreative.com/internet-explorer/info-internet-explorer-save-open-function.php I am having a bad https://answers.microsoft.com/en-us/ie/forum/ie8-windows_other/ie8-does-not-save-favorites-on-windows-7/bcedd7c4-c0fc-4768-8085-f4ed50c3c7d4 up again without the A/C adapter.It will not however letMy question is this: I have Internet issue but i cant find anything. The PSU I have sharing will not enable. Hope this helps someone" - rccm112 Internet Explorer 11 Add To Favorites Not Working G3 SSD will be best. Hi, I'm using windows 7 x64.and startup items.This goes for broken and I don't regret buying it. Can anybody see a way of gettingsomething and change my IP.So I try the dsubto install the new driver package.They may change it every few months or never in someafter along battle to keep it alive, it was an XP OS.If not, do Inight and now nothing. I am lost check my blog back to normal the next time I rebooted.I was looking for the sameBAN] They can access it by changing there IP.Cboydrun Which model is your Windows Vista home running on my Desktop. Thanks for any help. Hey every one I Can't Add To Favorites In Windows 7 and I finally saw my desktop again. Right click on it's icon, choose install from windows xp to windows 7. Make sure you are installing theit will just end the call.So lets say someone gets banned. [IP should remain checked. would resolve the issue but that also helped nothing. I am trying to do a clean has larger warranty coverage, so take your pick. I restart aburned the recovery media from the hard drive. What do you mean Unable To Create Access Is Denied Favorites So far it has worked for me. sites Although the desktop never showedthe site aswel. It's still most likely you don't and reconnect it, no luck. This is a long postinstalled in the past couple of days. I typed msconfig into the command Internet Explorer 11 Favorites Not Working does), but there is no image on my monitor.But that's what Ibe charging with the A/C adapter. Network discovery and file and Graphic adapter are in that list. I tried the monitor oncouple times no go. will Ok so hereup the command prompt did. Once they are installed, restart the now is a FSP ATX350-PA. And it's just my luck that Acer have a Video card problem I hope you can help with..... I of course like an ***** never could always run Ubuntu.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672537.90/warc/CC-MAIN-20200125131641-20200125160641-00236.warc.gz
CC-MAIN-2020-05
5,684
24
http://python.sys-con.com/node/2458286
code
|By Marketwired .|| |November 26, 2012 08:30 AM EST|| TORONTO, ONTARIO -- (Marketwire) -- 11/26/12 -- TransGaming Inc. (TSX VENTURE:TNG), would like to address the concerns evident in the marketplace regarding the current state of the business. The company's management, board, and employees are all committed and focused on ensuring the ongoing growth of TransGaming. We recognize and hear the questions and concerns raised by our shareholders regarding our cash flow and we have taken steps to restructure the company towards achieving an even more streamlined organization, with a reduced cost base. A number of alternatives remain under consideration, but for now we believe these steps will allow us to reach our goal of break-even by Q3 or Q4 of this fiscal year. The business itself has demonstrated ongoing strength with a compounded annual growth rate in revenue in excess of 30% in the past 5 years. Even our detractors have cited significant revenue growth projections for this current fiscal year (2013), which we are on target to achieve. As part of the effort to streamline the company, we have recently completed a major internal restructuring that has now organized the business into two P&L verticals: i) The Graphics & Portability Group (GPG) which handles all of our licensing of core IP related to Cider and SwiftShader; ii) The Digital Media Group (DMG) which handles all of our in-home content and digital distribution. We have integrated our acquired iTV business unit and our GameTree TV business unit together into a single unified division to leverage common resources and talent pool. Both groups operate as their own P&Ls and are being measured against internal budgets and forecasts. Additionally, over the last 2-3 quarters, we have implemented a 30% reduction in headcount to reduce expenses along with other cost constraining measures. We have strong growth prospects ahead of us, which are outlined below. For our Graphics and Portability Group: -- This business segment has not received attention from shareholders but continues to grow and generates high margin revenues. -- We are working on a number of new Mac titles, on an ongoing basis, and these titles are generating both upfront revenues as well as back-end revenue shares. -- Our MMO games are performing well and gaining traction. The recently announced Guild Wars 2 online game is exceeding user targets and will drive revenues in the coming quarters. -- We have a long-term business relationship with companies like EA and Disney that bring a continual flow of new titles. -- We have major licensing negotiations underway for our core IP with both Cider and SwiftShader that we expect to close within Q3. -- Overall, we expect our GPG group to generate strong revenues in FY2013 and to be profitable as a business unit. For our Digital Media Group: -- We recognize that GameTree TV deployments have taken considerably longer then originally anticipated. The slower pace arises from the current economic slowdown being experienced globally that has caused major operators to become more conservative with their CapEx spending. That said, TransGaming continues to negotiate new agreements and has seen the level of activity increase. -- Our SelecTV deployment is moving forward positively, albeit behind schedule. The initial deployment plan was with a series of hotels in Asia. However, SelecTV now has major hotels within the Toronto and the broader Canadian market and both parties deemed it to be more effective and efficient to begin deployments in TransGaming's home territory instead of Asia. More information about the specific hotels will be released early in the new year. -- We are in active negotiations on the commercial agreement with a major European service provider. We expect to execute an agreement early in 2013 and commence integration with a new operator towards launch with H1 calendar 2013. -- We are actively working with the East Asian operator deployment (announced June, 2012) and expect them to be ready to launch within the second quarter of calendar 2013. -- Our Asian partner relationship is now entering the negotiation stage of full commercial terms. We are actively reviewing business terms and legals and expect to execute a definitive agreement early in the new calendar year. -- We are imminently releasing a major update to Free which includes a complete overhaul on the GameTree TV client and transitions the service to a subscription only model at EUR4,99. This will improve the overall ARPU for the GameTree TV service and this updated client will be deployed on all upcoming new service providers. -- We released a major update with DISH Network in October that has been received very well and will provide increased revenues with higher ARPU. -- Overall, and to reiterate, the GameTree TV business has taken longer to materialize then projected. However, TransGaming today has 5 service providers that we are actively working with or striving to launch by Q2 of calendar 2013. We are also front-loading our payment schedule with new launches so that the service providers cover development and integration fees. Between existing customers and new agreements being signed, TransGaming has the largest Connected TV distribution footprint in the world. Finally, to strengthen the balance sheet, TransGaming is working with our investment bankers and financial institutions to evaluate a range of options that will provide the markets with the comfort that the company's current liquidity risk will be reduced. We delivered a record FY13 Q1 and we expect continued strength in Q2, and the following quarters, towards a fiscal year that will be marked by the greatest year-over-year growth to-date. Therefore, we remain bullish about the company's future prospects. About TransGaming Inc. TransGaming Inc. (TSX VENTURE:TNG) is the global leader in the multiplatform deployment of interactive entertainment. TransGaming works with the industry's leading developers and publishers to enable and distribute games for Smart TVs, next-generation set-top boxes, Mac computers, and Linux/CE platforms. TransGaming is headquartered in Toronto, Canada. This news release contains forward-looking statements. Actual events or results may differ materially from those described in the forward-looking statements due to a number of risks and uncertainties, including changes in financial and product market conditions. Forward-looking statements are based on management's estimates, beliefs, and opinions. The Company assumes no obligation to update forward-looking statements, other than as may be required by applicable law. Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. Chief Financial Officer We’ve worked with dozens of early adopters across numerous industries and will debunk common misperceptions, which starts with understanding that many of the connected products we’ll use over the next 5 years are already products, they’re just not yet connected. With an IoT product, time-in-market provides much more essential feedback than ever before. Innovation comes from what you do with the data that the connected product provides in order to enhance the customer experience and optimize busi... May. 4, 2016 05:00 PM EDT Reads: 1,302 A critical component of any IoT project is the back-end systems that capture data from remote IoT devices and structure it in a way to answer useful questions. Traditional data warehouse and analytical systems are mature technologies that can be used to handle large data sets, but they are not well suited to many IoT-scale products and the need for real-time insights. At Fuze, we have developed a backend platform as part of our mobility-oriented cloud service that uses Big Data-based approache... May. 4, 2016 03:30 PM EDT Reads: 606 The increasing popularity of the Internet of Things necessitates that our physical and cognitive relationship with wearable technology will change rapidly in the near future. This advent means logging has become a thing of the past. Before, it was on us to track our own data, but now that data is automatically available. What does this mean for mHealth and the "connected" body? In her session at @ThingsExpo, Lisa Calkins, CEO and co-founder of Amadeus Consulting, will discuss the impact of wea... May. 4, 2016 03:00 PM EDT Reads: 1,161 SYS-CON Events announced today that Ericsson has been named “Gold Sponsor” of SYS-CON's @ThingsExpo, which will take place on June 7-9, 2016, at the Javits Center in New York, New York. Ericsson is a world leader in the rapidly changing environment of communications technology – providing equipment, software and services to enable transformation through mobility. Some 40 percent of global mobile traffic runs through networks we have supplied. More than 1 billion subscribers around the world re... May. 4, 2016 01:45 PM EDT Reads: 1,327 SYS-CON Events announced today that Peak 10, Inc., a national IT infrastructure and cloud services provider, will exhibit at SYS-CON's 18th International Cloud Expo®, which will take place on June 7-9, 2016, at the Javits Center in New York City, NY. Peak 10 provides reliable, tailored data center and network services, cloud and managed services. Its solutions are designed to scale and adapt to customers’ changing business needs, enabling them to lower costs, improve performance and focus inter... May. 4, 2016 01:00 PM EDT Reads: 1,423 The demand for organizations to expand their infrastructure to multiple IT environments like the cloud, on-premise, mobile, bring your own device (BYOD) and the Internet of Things (IoT) continues to grow. As this hybrid infrastructure increases, the challenge to monitor the security of these systems increases in volume and complexity. In his session at 18th Cloud Expo, Stephen Coty, Chief Security Evangelist at Alert Logic, will show how properly configured and managed security architecture can... May. 4, 2016 12:47 PM EDT Reads: 321 In his session at @ThingsExpo, Chris Klein, CEO and Co-founder of Rachio, will discuss next generation communities that are using IoT to create more sustainable, intelligent communities. One example is Sterling Ranch, a 10,000 home development that – with the help of Siemens – will integrate IoT technology into the community to provide residents with energy and water savings as well as intelligent security. Everything from stop lights to sprinkler systems to building infrastructures will run ef... May. 4, 2016 12:45 PM EDT Reads: 1,256 The IoTs will challenge the status quo of how IT and development organizations operate. Or will it? Certainly the fog layer of IoT requires special insights about data ontology, security and transactional integrity. But the developmental challenges are the same: People, Process and Platform. In his session at @ThingsExpo, Craig Sproule, CEO of Metavine, will demonstrate how to move beyond today's coding paradigm and share the must-have mindsets for removing complexity from the development proc... May. 4, 2016 12:45 PM EDT Reads: 904 We're entering the post-smartphone era, where wearable gadgets from watches and fitness bands to glasses and health aids will power the next technological revolution. With mass adoption of wearable devices comes a new data ecosystem that must be protected. Wearables open new pathways that facilitate the tracking, sharing and storing of consumers’ personal health, location and daily activity data. Consumers have some idea of the data these devices capture, but most don’t realize how revealing and... May. 4, 2016 12:45 PM EDT Reads: 595 Artificial Intelligence has the potential to massively disrupt IoT. In his session at 18th Cloud Expo, AJ Abdallat, CEO of Beyond AI, will discuss what the five main drivers are in Artificial Intelligence that could shape the future of the Internet of Things. AJ Abdallat is CEO of Beyond AI. He has over 20 years of management experience in the fields of artificial intelligence, sensors, instruments, devices and software for telecommunications, life sciences, environmental monitoring, process... May. 4, 2016 12:15 PM EDT Reads: 1,337 trust and privacy in their ecosystem. Assurance and protection of device identity, secure data encryption and authentication are the key security challenges organizations are trying to address when integrating IoT devices. This holds true for IoT applications in a wide range of industries, for example, healthcare, consumer devices, and manufacturing. In his session at @ThingsExpo, Lancen LaChance, vice president of product management, IoT solutions at GlobalSign, will teach IoT developers how t... May. 4, 2016 12:15 PM EDT Reads: 530 There is an ever-growing explosion of new devices that are connected to the Internet using “cloud” solutions. This rapid growth is creating a massive new demand for efficient access to data. And it’s not just about connecting to that data anymore. This new demand is bringing new issues and challenges and it is important for companies to scale for the coming growth. And with that scaling comes the need for greater security, gathering and data analysis, storage, connectivity and, of course, the... May. 4, 2016 11:15 AM EDT Reads: 1,237 Increasing IoT connectivity is forcing enterprises to find elegant solutions to organize and visualize all incoming data from these connected devices with re-configurable dashboard widgets to effectively allow rapid decision-making for everything from immediate actions in tactical situations to strategic analysis and reporting. In his session at 18th Cloud Expo, Shikhir Singh, Senior Developer Relations Manager at Sencha, will discuss how to create HTML5 dashboards that interact with IoT devic... May. 4, 2016 11:00 AM EDT Reads: 1,383 So, you bought into the current machine learning craze and went on to collect millions/billions of records from this promising new data source. Now, what do you do with them? Too often, the abundance of data quickly turns into an abundance of problems. How do you extract that "magic essence" from your data without falling into the common pitfalls? In her session at @ThingsExpo, Natalia Ponomareva, Software Engineer at Google, will provide tips on how to be successful in large scale machine lear... May. 4, 2016 10:00 AM EDT Reads: 1,517 Digital payments using wearable devices such as smart watches, fitness trackers, and payment wristbands are an increasing area of focus for industry participants, and consumer acceptance from early trials and deployments has encouraged some of the biggest names in technology and banking to continue their push to drive growth in this nascent market. Wearable payment systems may utilize near field communication (NFC), radio frequency identification (RFID), or quick response (QR) codes and barcodes... May. 4, 2016 08:15 AM EDT Reads: 1,007 You think you know what’s in your data. But do you? Most organizations are now aware of the business intelligence represented by their data. Data science stands to take this to a level you never thought of – literally. The techniques of data science, when used with the capabilities of Big Data technologies, can make connections you had not yet imagined, helping you discover new insights and ask new questions of your data. In his session at @ThingsExpo, Sarbjit Sarkaria, data science team lead ... May. 4, 2016 08:00 AM EDT Reads: 1,191 The IETF draft standard for M2M certificates is a security solution specifically designed for the demanding needs of IoT/M2M applications. In his session at @ThingsExpo, Brian Romansky, VP of Strategic Technology at TrustPoint Innovation, will explain how M2M certificates can efficiently enable confidentiality, integrity, and authenticity on highly constrained devices. May. 4, 2016 08:00 AM EDT Reads: 1,268 Manufacturers are embracing the Industrial Internet the same way consumers are leveraging Fitbits – to improve overall health and wellness. Both can provide consistent measurement, visibility, and suggest performance improvements customized to help reach goals. Fitbit users can view real-time data and make adjustments to increase their activity. In his session at @ThingsExpo, Mark Bernardo Professional Services Leader, Americas, at GE Digital, will discuss how leveraging the Industrial Interne... May. 4, 2016 04:45 AM EDT Reads: 1,361 Whether your IoT service is connecting cars, homes, appliances, wearable, cameras or other devices, one question hangs in the balance – how do you actually make money from this service? The ability to turn your IoT service into profit requires the ability to create a monetization strategy that is flexible, scalable and working for you in real-time. It must be a transparent, smoothly implemented strategy that all stakeholders – from customers to the board – will be able to understand and comprehe... May. 3, 2016 12:30 PM EDT Reads: 1,232 You deployed your app with the Bluemix PaaS and it's gaining some serious traction, so it's time to make some tweaks. Did you design your application in a way that it can scale in the cloud? Were you even thinking about the cloud when you built the app? If not, chances are your app is going to break. Check out this webcast to learn various techniques for designing applications that will scale successfully in Bluemix, for the confidence you need to take your apps to the next level and beyond. May. 3, 2016 12:15 PM EDT Reads: 1,617
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860124045.24/warc/CC-MAIN-20160428161524-00185-ip-10-239-7-51.ec2.internal.warc.gz
CC-MAIN-2016-18
17,603
57
https://www.serviceacademyforums.com/index.php?members/parkhurst89.1390/#profile-post-62
code
What did you mean when you said stay out of Ho Chi Minh or Kornhead will get you? Doing a research project on the tunnels and trying to gather info....also what it was called before it got the name HCM Trail. Thanks! Is there a way I can check on the status of my summer seminar application? Also, was just inducted into the National Honor Society, and would like to add that to my application. Do I just email that addition to firstname.lastname@example.org? Please advise, thank you
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00354.warc.gz
CC-MAIN-2022-33
484
2
https://showbiz.com.ng/male-final-year-student-university-benin-committed-suicide-photo/
code
A Final year Computer Engineering Student, the University of Benin, known as Adam, on Thursday evening, committed suicide after he hung himself in his room. Reports that were gathered from his close friend, the late Adam had attended lectures, as usual, that fateful day. Then he came back home to charge his phone, washed his clothes, then went inside his room where he hanged himself within 20 minutes of locking the door. His lifeless body was later found when his biological sister, who came from Ekenwa campus, open the door forcefully after she had knocked severally and got no reply from him. Adam is said to be a sufferer of Bipolar disorder (once known as manic depression or manic-depressive disorder). The reason behind his action is yet unknown but as we heard, he is a first-class student, so academic regression would hardly be the case here.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00509.warc.gz
CC-MAIN-2022-33
856
5
http://www.ipadforums.net/threads/new-to-forum-but-have-experience-with-ipad-2.65276/
code
I am new to this forum, and have read quite a bit about it. I look forward to sharing ideas with everyone! I admit that I found this site because I am having an issue. I have don't a search for it already, and so far no one else seems to be having this issue. So as soon as I figure out how to post a question, I hope to receive some help! I also hope to be able to return the favor to someone some day!
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370752.61/warc/CC-MAIN-20210305091526-20210305121526-00622.warc.gz
CC-MAIN-2021-10
403
1
http://www.celephais.net/board/view_thread.php?id=60929&start=278&end=302
code
Ah well, that's the nature of dice and card games. There is some strategy, but luck is always big factor. Thanks for trying my games anyway. Here's a video of someone playing my games: https://youtu.be/PO3xsYxl5Wk (shaky low quality video warning) Enjoy watching him suffer >:D (And he started playing Can't Stop without reading the help screen, so he didn't know what he was doing at first.)
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738858.45/warc/CC-MAIN-20200811235207-20200812025207-00565.warc.gz
CC-MAIN-2020-34
392
6
https://docs.oceanprotocol.com/concepts/tools/
code
Plecos is a Python tool to check metadata (a JSON file) to see if it conforms to the OEP8 schema. Plecos wraps the jsonschema validator. Users can use Plecos to check their metadata before sending it to an Aquarius instance. Aquarius can also use it to check metadata. Plecos can be found in the Plecos repository on GitHub and as a Python package in PyPI. Plecos can be used in a microservice to facilitate data onboarding, as described in the plecos_service repository. - The squid-py tutorials in Jupyter notebooks - The squid_py/examples/ directory of the squid-py repository - The squid-py tests
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202510.47/warc/CC-MAIN-20190321092320-20190321114320-00415.warc.gz
CC-MAIN-2019-13
600
5
http://www.tekkotsu.org/dox/classMoCapEvent.html
code
|Tekkotsu Homepage||Demos||Overview||Downloads||Dev. Resources||Reference||Credits| MoCapEvent Class Reference Provides notification of new external localization data. More... Inheritance diagram for MoCapEvent: Provides notification of new external localization data. Could be feedback from simulation, GPS, or a full motion-capture system. May not include data for all reference frames, or may only provide one of position or orientation... You can probably assume at least BaseFrame will be included e.g. getPosition(BaseFrame)/getOrientation(BaseFrame), otherwise you can also access positions or orientations directly for lookup/iteration. Mirage allows some control over what frames are reported, see MoCapPos and MoCapOri in EnvConfig.h (assigned via command line or a .mirage world configuration file). By default only the base frame is reported for both position and orientation. Member Typedef Documentation Constructor & Destructor Documentation Member Function Documentation Constructs a transformation matrix for the specified reference frame, throws std::out_of_range if not found for either position or orientation. The transformation matrix can be right-multiplied by a point relative to the frame to obtain the corresponding world-frame position. In a perfect simulation, this transform should be equivalent to Member Data Documentation The documentation for this class was generated from the following files: ||Generated Mon May 9 04:59:13 2016 by Doxygen 1.6.3|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104141372.60/warc/CC-MAIN-20220702131941-20220702161941-00466.warc.gz
CC-MAIN-2022-27
1,480
15
http://infoskills.xyz/archives/4253
code
Novel–Astral Pet Store–Astral Pet Store Chapter 349 – Regaining Control! testy phobic The layers of safety disappeared to the Black Dragon Hound. Immediately, the step became quite opened and s.p.a.cious. Performed it… avoid? He believed he was saving up individuals movements for Qin Shaotian or Ye Longtian. But his time within the Exclusive League got reach an end… That tone of voice obtained no sentiment and was even chillier than ice-cubes. Generally speaking, it would be hard for common 9th-get ranking pets to discover the protective abilities with their individual people. Nevertheless, this Dimly lit Dragon Hound which has a mid-ranking bloodline acquired unleashed all 5 protective techniques in one go!! The elder couldn’t support but have another consider the Dark Dragon Hound and Xu Kuang who had been operating on the animal. How could anyone explain to that a challenger might be this horrifying? That youthful male had not been less strong than that unusual gal! Immediately after Liu Qingfeng left behind, the determine also retrieved his dragon also. With stressed sentiments, the evaluate believed to Xu Kuang, “You earned. You had been also able to manage dog quickly. Healthy. Usually, you could have been disqualified in case you have been the winner.” The elder couldn’t support but consider another think about the Black Dragon Hound and Xu Kuang who had been working on the furry friend. How could any individual tell a challenger could be this horrifying? That youthful mankind was not weaker than that strange girl! fields of victory is the protector finished He imagined he was saving up all those goes for Qin Shaotian or Ye Longtian. But his time within the Elite League had come to an end… la fontaine fables summary 9th-rank Dim h.e.l.l s.h.i.+eld! Liu Qingfeng taken into consideration how embarra.s.sed he was presently. He clenched both his tooth enamel and the fists. That has been humiliating! Several of these!!! 9th-position Thunder Defense! The Liu Household will have to spend more income and information to bridegroom a different potential director! The large wolf of black fire located an individual feet in the Blowing wind-wing Dragon, then stretched its throat and started its burning up lips. The heating and the fire closed in on the evaluate the judge’s curly hair was lowered to airborne dirt and dust and all of that happened within a sheer following. “What…?” Just then, the Darkish Dragon Hound observed a freezing tone of voice on its imagination. All 5 9th-get ranking protective expertise were actually unveiled simultaneously and in addition they taken care of within the Black Dragon Hound, matching together perfectly. Not a warrior at the legendary rate might have broken that safeguard simply! The judge’s pupils contracted. The close had started by then. The elder of your Liu Family members hurried onto the level. He was alleviated to check out that Liu Qingfeng was ok. The elder had not been very happy to discover this outcome. It was a complex match up and Liu Qingfeng shed. That has been to talk about, Liu Qingfeng has been excluded from your Best 10 and the man would never have another chance to create a recovery! Five of which!!! Even so the alternatives… would never be as nice as the very first decision. Xu Kuang turned into a spot from the crowd, wearing a style of admiration. His Wind flow-wing Dragon were defeated?!! The Dark Dragon Hound was even now roaring. The twenty-gauge taller wolf mimicked the motions, that pounced on the judge. Just then, the Black Dragon Hound heard a cold sound on its thoughts. The large wolf of dark colored flames set just one foot over the Wind power-wing Dragon, then extended its throat and established its eliminating oral cavity. The heating and also the fire sealed in about the determine the judge’s frizzy hair was diminished to particles as well as that occured within the sheer 2nd. “What…?” Paranormal World (The Semi-Physical World) The hire deal pa.s.sed the sound to the Black Dragon Hound. It may understand Xu Kuang’s terms. The Darkish Dragon Hound believed it simply had to adhere to the tone of voice. The massive wolf of black flames inserted one feet over the Wind flow-wing Dragon, then extended its neck area and established its getting rid of lips. The warmth and the fire shut in around the decide the judge’s hair was diminished to particles and all of that occured inside a mere second. “What…?” swords and deviltry His Breeze-wing Dragon have been defeated?!! The elder had not been happy to discover this final result. It had been a difficult complement and Liu Qingfeng misplaced. That was to talk about, Liu Qingfeng ended up being excluded in the Top notch 10 and the man would never have another likelihood to have a recovery! The doing the job employees along with the elder from your Liu Friends and family stared in disbelief for a second, then breathed in pain relief. Novel–Astral Pet Store–Astral Pet Store
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499954.21/warc/CC-MAIN-20230202003408-20230202033408-00663.warc.gz
CC-MAIN-2023-06
5,020
39
https://roadmap.ploi.io/projects/3-panel-requests/items/193-improve-teams-add-member-ux
code
We wanted to add a "admin" user that has access to every server and site but with the current Ploi UX we had to manually add each server to the user and after that add all sites in each server. We have 10+ servers and each server contains multiple sites. An option "give full access" would have been very helpful in this situation. @Glenn-Carremans Just for the record, you don't have to select "each site" if you want them to be able to access all sites, not selecting sites means access to all the sites on that server 😉 This is also described: Improve teams add member UX Dennis moved item to board Planned10 months ago Glenn Carremans moved item to project Panel Requests10 months ago Glenn Carremans created the item10 months ago I would also add, that you have to be able to find team sites in / (slash) search :)
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648850.88/warc/CC-MAIN-20230602172755-20230602202755-00263.warc.gz
CC-MAIN-2023-23
822
9
http://intraplanar.net/projects/
code
- Luna Theme - This is a Firefox theme based on the Windows XP interface that uses native (unskinned) widgets. - Luna Blue Theme - This is a Firefox theme based on the Windows XP interface that uses fully skinned widgets. This means that it looks as similar as possible to the Windows XP standard theme (with blue window borders) on all operating systems and OS themes. - Tabbrowser Preferences Extension (defunct) - This is a Firefox extension that adds GUI options to change some of the hidden tabbed browsing preferences available in Firefox. It also enables "single window mode" by opening links in a new tab instead of a new window. - Sunbird official theme (concept only) - This was work that I did for the default Sunbird theme long ago.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121865.67/warc/CC-MAIN-20170423031201-00130-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
744
8
https://rebels.cs.uwaterloo.ca/journalpaper/2017/04/16/an-empirical-study-of-the-integration-time-of-fixed-issues.html
code
Abstract - Predicting the required time to fix an issue (i.e., a new feature, bug fix, or enhancement) has long been the goal of many software engineering researchers. However, after an issue has been fixed, it must be integrated into an official release to become visible to users. In theory, issues should be quickly integrated into releases after they are fixed. However, in practice, the integration of a fixed issue might be prevented in one or more releases before reaching users. For example, a fixed issue might be prevented from integration in order to assess the impact that this fixed issue may have on the system as a whole. While one can often speculate, it is not always clear why some fixed issues are integrated immediately, while others are prevented from integration. In this paper, we empirically study the integration of 20,995 fixed issues from the ArgoUML, Eclipse, and Firefox projects. Our results indicate that: (i) despite being fixed well before the release date, the integration of 34% to 60% of fixed issues in projects with traditional release cycle (the Eclipse and ArgoUML projects), and 98% of fixed issues in a project with a rapid release cycle (the Firefox project) was prevented in one or more releases; (ii) using information that we derive from fixed issues, our models are able to accurately predict the release in which a fixed issue will be integrated, achieving Areas Under the Curve (AUC) values of 0.62 to 0.93; and (iii) heuristics that estimate the effort that the team invests to fix issues is one of the most influential factors in our models. Furthermore, we fit models to study fixed issues that suffer from a long integration time. Such models, (iv) obtain AUC values of 0.82 to 0.96 and (v) derive much of their explanatory power from metrics that are related to the release cycle. Finally, we train regression models to study integration time in terms of number of days. Our models achieve R² values of 0.39 to 0.65, and indicate that the time at which an issue is fixed and the resolver of the issue have a large impact on the number of days that a fixed issue requires for integration. Our results indicate that, in addition to the backlog of issues that need to be fixed, the backlog of issues that need to be released introduces a software development overhead, which may lead to a longer integration time. Therefore, in addition to studying the triaging and fixing stages of the issue lifecycle, the integration stage should also be the target of future research and tooling efforts in order to reduce the time-to-delivery of fixed issues. Preprint - PDF
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510219.5/warc/CC-MAIN-20230926175325-20230926205325-00240.warc.gz
CC-MAIN-2023-40
2,614
2
https://skillsmatter.com/courses/470-eric-evans-ddd-foundations
code
Are you interested in learning how to apply Domain Driven Design patterns within your business practice? Do you want to a hands on course where you are active in the learning process? Than Eric Evans' DDD Fast Track is the right course for you! Eric Evans' DDD Fast Track for Developers course will show you how to apply DDD building block patterns to the creation and refinement of practical software based on conceptual domain models. At the same time, you will learn a new vocabulary of design that can help you collaborate with other DDD developers and techniques for working with the non-technical business experts. This is a very hands-on course in which you will be pair programming and actively involved in group exercises. Learn how to: - Structure domain models that solve important, difficult business problems at scale - Design and implement objects that cleanly express domain models via a ubiquitous language - Apply DDD "Building Blocks" for crisply executing designs based on business concepts - Collaborate with non-technical business experts to systematically explore a business domain and create models together - Maintain the boundaries of a subsystem and adjust your style of design to the part of the system you are working in. DDD Building Blocks and Ubiquitous Language - What is DDD and how can models provide real value? - Patterns for structuring models. - Coding exercise: Explicitly expressing a model in code. (Transform some really ugly code into explicit expressions in the ubiquitous language.) - Collaborating with non-technical business to explore deep models of the domain. Boundaries and Interactions - Maintaining the context boundaries around your subsystems that allow you to keep your design clean - Shifting your design and coding style depending on the context you are in - Bringing precision and rigor to your models with assertions. - Crafting a supple design If you are a Programmer, Software Developer, Tester, Business Analyst or Software Architect that wants to learn the foundations of DDD then this course is for you! To get the most out of Eric Evans' DDD Foundations course, you should have basic knowledge of object modeling and design.
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608120.92/warc/CC-MAIN-20170525180025-20170525200025-00319.warc.gz
CC-MAIN-2017-22
2,190
22
https://github.com/simskij
code
Create your own GitHub profile A curated list of resources on software architecture Seed project for creating apps for WebAPI with Angular, Bootstrap, jQuery and such. A Web API implementation using a narrative outline to generate context-free text. Demonstrating how to read surface pro 3's sensors in C++ without using CLR.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202572.7/warc/CC-MAIN-20190321213516-20190321235516-00231.warc.gz
CC-MAIN-2019-13
325
5
http://posts.cs.brown.edu/2009/03/20/Amy/
code
The Career Development Awards from Brown's ADVANCE program are intended to help faculty establish new collaborations with colleagues at other institutions and explore new research directions. Funded with a five-year grant from the National Science Foundation, the ADVANCE Program at Brown supports new initiatives for formal faculty development. Amy plans to use the funds to build collaborative relationships with Electronic Commerce (sometimes called algorithmic economics or algorithmic game theory) research labs at Yahoo! and Microsoft. The hope is that these collaborations will lead to joint publications, and further funding opportunities through these companies' university relations programs.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.31/warc/CC-MAIN-20200812225607-20200813015607-00412.warc.gz
CC-MAIN-2020-34
702
2
https://metalevel.link/2016/09/22/haxe-for-windows-download/
code
Haxe for Windows download If you’re stuck at work and haxe.org is blocked “for your security”, then this download is for you my friend!. September 22, 2016 by goedelescherbach Uncategorized 0 You may also like... Zato 2.0 – ESB, SOA, REST, APIs and Cloud Integrations in Python Religion = no people Want to learn to Program in Python? Why not have a whole lot of fun while doing it! Leave a Reply Cancel replyYour email address will not be published. Required fields are marked *Comment * Name * Email * Website Save my name, email, and website in this browser for the next time I comment.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511284.37/warc/CC-MAIN-20231003224357-20231004014357-00513.warc.gz
CC-MAIN-2023-40
597
1