In 1.2 I have several ImageButtons for which I created custom touch listeners by inheriting from View.IOnTouchListener. I then instantiate an instance of this class and pass it into SetOnTouchListener; I do all of this in the OnCreate call of the Activity. I haven't had a chance to pull this behavior out into an empty project and isolate the issue, but when I reverted back to 1.2 and made no code changes, it all started working again. The TouchListener's OnTouch event never fires; I set a breakpoint on it and at no point does it drop into this code. It seems to ignore it completely...

Could I please get a sample for testing? Writing reproduction code greatly slows things down, and that time is better spent investigating and fixing the dozens of other filed bugs... :-) What I do have is a test which does:

var textview = new TextView (this);
textview.Touch += OnTouch;

and my OnTouch method is invoked.

Ok, I will get a sample to you when I get a chance to isolate the issue...
Created attachment 832 [details] Attached an example where the issue is isolated.

The attached example is very simple: it uses an ImageButton and attaches a View.IOnTouchListener which fires off a couple of toast messages. I would encourage you to run it using 1.2 first to see the correct behavior, and then run it in 1.9.2 to see the incorrect behavior (it does nothing).

I don't know why it even worked in 1.2, but it certainly shouldn't have. :-) When implementing IJavaObject, you must inherit from Java.Lang.Object. If we fix MyTouchListener to inherit from Java.Lang.Object, the sample works:

public class MyTouchListener : Java.Lang.Object, View.IOnTouchListener
{
    Context context;

    public event EventHandler<EventArgs> OnClick;

    public MyTouchListener(Context ctx)
    {
        context = ctx;
    }

    public bool OnTouch(View v, MotionEvent e)
    {
        // excerpt: shows "Down"/"Up" toasts for the touch events
        Toast.MakeText(context, "Down", ToastLength.Short).Show();
        Toast.MakeText(context, "Up", ToastLength.Short).Show();
        // ...
    }

    private void Clicked()
    {
        if (OnClick != null)
            OnClick(this, EventArgs.Empty);
    }
}
I mostly use Markdown, but I have also been considering learning Org-mode. Other than Markdown, Org-mode, and good old plaintext—which is not a markup language per se—there are also AsciiDoc and LaTeX, to mention two. I was curious whether it is worth learning a new markup language, and what other people use. Was it easy for them to learn a new markup language? Are they using it for writing research papers, blogging, emails, and taking down notes? No better way to find out than by making a poll and asking the fediverse community!

- Duration: 7 days
- Date: 2023-04-02 to 2023-04-09
- Total votes: 83
- Total voters: 53
- The poll

What are people saying? For Fell's thesis, LaTeX is the markup language of choice, but in all other cases good old plaintext is more than enough because it is compatible with everything. I agree. Plaintext is plaintext. It is the easiest and fastest way to take down notes. You do not have to worry about converting a particular file from one markup language to another. However, for Nick Anderson, Org-mode is life. Nick uses Org-mode for everything: knowledge management, writing email, blogging, presentations, tracking time, Jira ticketing, and more. Throughout Nick's entire workflow and daily routine, you can guarantee Org-mode is there. I checked the articles Nick shared, and after going through Nick's experience and process, I can imagine how it made Nick's daily workflow faster and easier. This is a good thing, as I have mentioned I have been considering learning Org-mode and integrating it into my daily routine. For Evan Keeton, the answers are the LaTeX and Markdown markup languages. Evan shared that for anything mathematics related, LaTeX is the one to use. This is true; LaTeX is the markup language of choice in publishing scientific documents. If you plan to publish a research paper, it is a good idea to master LaTeX. However, for things like note-taking, Evan said that Markdown fits perfectly. For Evan, Markdown is easy to read even without a proper renderer.
I can attest to that. As a long-time Markdown user, my mind processes a Markdown document as if my brain were a Markdown renderer. It flows naturally; I do not have to consciously think about the markup in the document. Potung Thul shares the same view as Fell: plaintext is the most compatible format of all the choices in the poll. Vim is also Potung's go-to cross-platform software for editing text. While not conclusive, the poll gives a general overview of which format, or markup language, fediverse citizens use. The top two are Markdown and Org-mode, followed by plaintext, while LaTeX is the de facto choice for scientific publications and in other fields. I have heard good things about AsciiDoc; though it only received a few votes in this poll, it is a markup language that one should consider. It may be easier for you compared to the other popular choices. Should you try one of these markup languages? Yes, you definitely should. Markdown is common in software development; for example, it is the markup language used by web repository services like Codeberg. It is also the default in many static-site generators like Hugo. There is also forum software which allows Markdown editing in addition to the old BBCode format. And as was mentioned, LaTeX is the de facto standard in STEM (Science, Technology, Engineering, and Mathematics) publishing. Authors can submit their papers as a LaTeX document, and the publisher converts it to their own preferred format without losing any text information, like bold, italics, images, and footnotes. Compare that to submitting PDF, ODF, or DOCX documents: more likely than not, the publisher will have to check whether the conversion was accurate, or the authors have to spend more time on these things when they could better spend it doing research and experiments. Of course, plaintext is as good as any.
If presentation, like bold and italics, is not important, plaintext can fill our everyday needs in documentation and note-taking. Even before the age of computers, that is what we were already doing… writing things down in plaintext.

Links and more information

For more information about the various markup languages mentioned, check these useful links:
Testing React Components using Storybook and Cypress

I’m a big fan of Storybook and I’ve written in the past about how to combine Storybook, Jest and TypeScript to test React components with “Storyshots”. I’ve got a couple of component libraries that, in turn, extensively leverage different visualization libraries. However, many of the visualization libraries I’m leveraging (either directly or via React wrappers around them) are framework agnostic and build visuals via direct manipulation of the DOM. In trying to test these, I’ve had all kinds of problems because while you can wrap them as a React component and implement the various lifecycle hooks, when you try to test them with something like Jest, you aren’t really testing them against a real DOM. For this reason, I wanted to build tests that leverage a real DOM. In looking around, I heard lots of good things about Cypress, so I thought I’d give it a try. The first issue I ran into was how to construct the tests that I wanted. Generally speaking, I’m dealing with libraries of components here. Cypress is largely an integration testing tool built for testing a single app. I don’t have a single app in these cases; I have lots of components. Storybook is a wonderful way to interactively “play” with components. But if you think about it, Storybook is an app too. So, I thought, could I combine Cypress with Storybook as a way to do “integration testing” on my Storybook stories? I Googled around but really didn’t find any discussion of this topic, so I’m writing this to document my experience. What I found, with this experiment, is that you really can treat Storybook as an app to perform “integration testing” on. Just launch Storybook like you normally would (e.g., start-storybook -p 6006) and then write your Cypress tests.
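Such a spec can be sketched roughly as follows. This is a reconstruction, not the original script: the selectors, story names, and iframe id are assumptions based on how older Storybook versions label their UI, and the file only runs inside the Cypress runner.

```javascript
// cypress/integration/storybook_spec.js — reconstruction sketch, names are assumptions
describe('react-dates Storybook', () => {
  beforeEach(() => {
    // Step 1: visit the Storybook app itself.
    cy.visit('http://airbnb.io/react-dates');
  });

  context('DateRangePicker (DRP)', () => {
    beforeEach(() => {
      // Navigate to the story collection in the Storybook sidebar.
      // (This step differs between Storybook versions.)
      cy.contains('DateRangePicker (DRP)').click();
    });

    it('renders the default story', () => {
      // Select the individual story...
      cy.contains('default').click();
      // ...then continue testing inside the preview iframe.
      cy.get('#storybook-preview-iframe').then(($iframe) => {
        const body = $iframe.contents().find('body');
        // Assertions run against the real DOM of the rendered component:
        cy.wrap(body).find('.DateRangePickerInput').should('be.visible');
      });
    });
  });
});
```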
The main steps, which I’ll demonstrate below, are visiting your app, finding the story you are interested in, getting the iframe associated with your components, and then writing your assertions. I’ve done this for my own stories, but showing tests in the context of my stories probably isn’t that useful. So I’m including a Cypress script that tests a public site so you can really understand what it is doing. The following script tests the react-dates components hosted at http://airbnb.io/react-dates. I’ve added comments so you can understand what is going on. As I said, the first step is to visit the site. We do this in the beforeEach function associated with the describe of our whole test suite. I then chose to use a context to describe each collection of stories. In all but the default case, you need to also add an additional beforeEach to navigate to the story collection in the UI (and note, as indicated in the script, this is done differently for different versions of Storybook). I use each it to test a particular component. First we need to select the story for that component, and then we grab the iframe associated with that story and continue testing in the context of that iframe. Note that in Chrome you can get pretty good selectors for most elements by inspecting the element, clicking on it in the DOM, and doing Copy > Copy selector. But, as shown above, using an attribute selector works quite well with Storybook, since it actually injects labels as attributes in many cases (N.B., Chrome’s Copy selector won’t give you attribute selectors, so you have to build those yourself). That is pretty much it. What I miss is the ability to do snapshot testing of the DOM for the components. That would be really nice for detecting UI regressions. I was wondering whether Cypress had plans to add that, but when asked about this, their response was that since Cypress is mainly for end-to-end testing, they didn’t think it was that useful.
I can’t say I blame them, but I’m not sure they considered a (crazy?) use case like this. There is one last “wrinkle” here. Normally when doing testing I run Storybook at a shell prompt and just let it run. I then run Cypress (i.e., cypress open) and keep Cypress running so it reruns my tests (against the running Storybook) each time I update my tests. But in order to automate this process for continuous integration, we need to do a few simple additional things. The setup I just described is mainly for interactive use. What I want is to run a script that finishes with an exit status. To do that, I need a setup that starts Storybook, then runs Cypress non-interactively (i.e., cypress run) and then shuts everything down once I get an exit status from Cypress. I was able to achieve all of this pretty easily using the concurrently and wait-on packages; the scripts section of my package.json file looks like this:

"test": "jest && concurrently 'npm run storybook:run' 'npm run cypress:test' -k -s first",
"storybook:run": "start-storybook -p 6006",
"cypress:test": "wait-on http://localhost:6006 && cypress run",
"cypress:run": "cypress run",
"cypress:open": "cypress open"

Two things to note here. First, the wait-on command is there to give Storybook time to get up and running. Since it is running webpack behind the scenes, this can take a while. Also, note that I included jest at the start of my test script. That is because I may very well have additional tests that do not require a browser to perform. In this way, it will perform all the Jest tests first and then run Cypress only if no failures were found. It is also worth noting that Cypress produces a video of the tests as they are run. I was using CircleCI and was able to easily configure it to store the video as an artifact of the build. This allows me to go back and view the video if there are errors. I can imagine that when there are failures this could be quite handy.
The basic idea here is to use Storybook to package up a component gallery into a single application and then use Cypress’ integration testing capability to recast unit/functional testing as a bunch of integration tests. It is, admittedly, a rather odd use case. But it has the advantage that if you are already using Storybook, you can leverage it to do all the packaging work you need in order to jump right in and use Cypress to interact with your components in a real DOM. Honestly, I’m much happier writing tests with Jest that do snapshots of (shallow) rendering. But I’ve run into a few use cases where that doesn’t work very well because you need the real DOM, which is what led me here. I still think that writing these kinds of browser/DOM based integration tests is a bit tedious, but Cypress and Storybook at least made it pretty easy to get things set up. I’m writing this mainly as a reference for myself, but hopefully other people will find it useful as well.
Although during the planning and construction of a visitor center there is sometimes a feeling that the accompanying consultant is a kind of nuisance for the implementation company, at the end of the process the implementation company also benefits. When the developer behind a visitor center (a local authority, KKL-JNF, the Nature and Parks Authority, or any other body) decides to hire a content consultant and planner to accompany the project, there is sometimes a feeling on the part of the executing company that here comes the man who is going to bother them, make them work harder, and earn them less money. On the one hand, the concern is understandable. It is clear that working with a client who does not understand exactly the meaning of various details in the script, or the implications of using this or that technology for projecting video or playing back sound, sometimes "shortens processes" and increases the execution budgets. But on the other hand, in the direct work of a content and production company with the client, many challenges arise from the client's lack of confidence in the professional parties, and from the client's lack of understanding of what it means to make decisions or change plans on the fly. Let me give an example: in one of the centers where I worked for the client, the performing company submitted for the client's approval the script of the main show planned for the auditorium. The script was fine, and the client even happily approved it. After the client's approval in principle, I asked the company to provide us with a completely different script, unrelated to the first draft. The reason for the request was that the first script was not exciting. The company cooperated with my request, and after two weeks we received a completely different script proposal: a different story, a different energy. Everything was different.
And this time the client was enthusiastic about the story. Although the second script was more complex and expensive in terms of production, the result was excellent and all parties benefited. From experience working with execution companies on large projects for large clients (such as KKL-JNF), even when there were disagreements, and even when the execution company had to work harder to solve problems whose significance the client himself did not understand, at the end of the process the owner of the executing company thanked me profusely! I can give many examples of this (and I will expand on some shockingly spicy examples in the following posts), but in general it must be said that the work of the supervising consultant first of all serves the developer, but it also helps the executing company achieve better results. And when the results are better, the visitor center is more successful, and the performing company gets all the credit (and the next job, of course...).
Among players and developers, the market for multiplayer video games is broadly popular. There are quite a few games that will be popular throughout the year 2010. There is a wide assortment of multiplayer games with various price ranges, themes, and types. The most popular multiplayer games span several genres, including first-person shooters (FPS), Massively Multiplayer Online Role-Playing Games (MMORPGs), and Real-Time Strategy (RTS). Each genre has several popular titles, and multiplayer games based on these genres can be found not only for the PC but for gaming consoles as well. The genre with the most popular multiplayer games is the Massively Multiplayer Online Role-Playing Game, and it is the most lucrative genre for gaming companies. Titles like EverQuest, World of Warcraft, Star Wars Galaxies, Guild Wars, and The Matrix Online are the popular multiplayer games that belong to this genre. These games attract a large number of gamers throughout the world, and gamers usually pay a monthly fee in order to play them online. The one game which does not charge any fee is Guild Wars, but it also has a lower population of gamers compared to the other games. There are more than 400,000 players who play EverQuest, and World of Warcraft claims to have six million players. Another genre which can be played on the PC as well as on a console is the first-person shooter. The games considered popular in this genre are Battlefield 2, Counter-Strike, Quake 4, and Halo 2. In this type of multiplayer game, players compete with each other in modes such as capture the flag, and so on. These games are usually set in a futuristic setting. The multiplayer games which are popular in the Real-Time Strategy genre are Warcraft III, The Lord of the Rings: The Battle for Middle-earth, and StarCraft.
In these kinds of games, players usually control an entire army in order to compete against each other. There is a variety of troops and settings available in the Warhammer titles, ranging from orcs to elves to space marines and alien monsters. These games have been around for quite a few years and continue to be among the most popular titles in 2010. Apart from these bigger games there are smaller multiplayer games too, like casino games, puzzle games, role-playing games, text-based games, and so forth. By paying a fee, gamers are able to play these games. Some online casino games allow free play, but there are some games that offer real money betting. The above-mentioned games are the ones considered to be popular in 2010 and even in the coming years. Some are free, while for others you have to pay a fee. These are just some of a number of different games available for both the console and the personal computer. New games are also coming out on a regular basis.
In the beginning, working with git can be quite daunting: things get lost and can’t seem to be found in the depths of the terminal, and all this talk about rewriting history and every developer's personal git horror stories are not making things easier. That being said, git is also a super powerful tool. It’s absolutely essential to everyday work as a developer, and while it might cause you some headaches, it’s also very likely to save your butt at some point. For me personally, working with git brings up a whole mix of emotions: sometimes it feels like having superpowers, sometimes like going a little too fast without the headlights on. To make this experience a little less scary, I have a list of basic terminal commands I come back to almost every day. If you are a beginner like me, maybe these are a good starting point.

git status - gives you an overview of your file status: are there modified changes? Is there anything you didn’t commit yet? Also very useful to check whether there are changes that you do not want to push.

git log - returns all commits made to your repository, useful for example if you want to get the commit ID for a rebase. To get out of the pager (a whole other story) where these logs are shown, press q to get back to your console.

pwd - not exclusively git related, but helpful either way. Prints your working directory, that is, the path of the directory you are in.

git checkout -b <branch-name> - creates a new branch for you to add your changes to. If you are contributing to an existing project, check out their naming conventions before going wild with your branch name. See below for how you can check out already existing branches.

git add . - adds all the changes you made to the stage. You can also add specific files by using git add <filename>.

git commit -m “commit message” - commits your staged changes and adds a new commit message.
If you just made minor changes, you might want to use git commit --amend instead.

git commit --amend - adds your staged changes to your most recent commit without adding an additional commit message. (If that commit was already pushed, you will need to force-push afterwards.)

git push - pushes your local changes to a remote repository.

git fetch - retrieves changes and additions from a remote repository, but doesn’t change any of your local branches.

git pull - pulls changes from a remote repository into your current branch; basically a git fetch followed by a git merge.

git checkout <branch-name> - checks out an existing branch from a remote repository by name. You can also check out specific commits with git checkout <SHA>.

Ctrl + C - stops whatever process is currently running in your terminal.

git reset --hard origin/<branch-name> - resets your current branch to the state of a remote branch by specifying the branch name.

This, of course, is a very basic and not at all exhaustive list - I hope it helps you either way. Is there anything essential missing from your point of view? Feel free to share it in the comments - same goes for git horror and love stories or helpful resources! <3
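Tied together, the everyday cycle with these commands looks something like this. It is a minimal sketch run in a throwaway repository; it assumes git is installed, and the file, branch, and identity names are made up:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q demo && cd demo
git config user.email "you@example.com"   # identity needed in a fresh environment
git config user.name "You"

git checkout -q -b my-feature      # new branch for the change
echo "hello" > notes.txt
git status --short                 # shows notes.txt as untracked (??)
git add notes.txt                  # stage the file
git commit -q -m "add notes"       # commit with a message
git log --oneline                  # the new commit shows up here
```

From here, git push -u origin my-feature would publish the branch once a remote is configured.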
Deploying 2 Rails applications on EC2. I'm using Ubuntu 16.04, installed Docker on my machine, and created 2 "Hello World" Ruby on Rails web application images. The first one says "Hello World", and the second says "Howdy World". I ran both containers in parallel on my local machine, on different ports on localhost. I created a free AWS account, set up a VPC with a public subnet, spun up an EC2 instance with a public security group, and created the relevant SSH credentials. When I try to deploy the images to GitHub by pushing, the first one pushed fine; the second one, however, doesn't seem to work because I keep getting this error: "Updates were rejected because the tip of your current branch is behind it's remote counterpart. integrate the remote changes (git pull...) before pushing again". After that I need to somehow run these 2 applications on the EC2 machine (I succeeded in connecting to the machine via the Ubuntu terminal but got stuck since then) so I could give each a public IP and port and see them from any device. Any help will be appreciated! As I understand it, you want to run 2 applications on one VPS (EC2)? Unrelated, but neither you nor Amazon use "it's" correctly :p Yes, I want to run the 2 applications on one EC2; for example, the first is <IP_ADDRESS>:3000 (for hello world) and the second is <IP_ADDRESS>:4000 (for howdy world). @Liro, are your apps dockerized already? Take a look at https://rancher.com. I could help you, but it just takes time to do. Try to learn more about Rancher; you can run as many images as you want on one server. If the two applications are different repos on your local machine, you cannot push them to the same GitHub repository (hence the error about your branch being behind the remote: the histories are unrelated). You should create a new GitHub repository and push to it instead.
I created two different repos on GitHub, but when I force the push, it always goes to the first repository instead of the new one, and I can't figure out why. You can resolve this by two methods.

Method 1: push your content forcefully. Run the command below:
git push -f origin <branch-name>

Method 2: first update your local repo (in the case of using a single repo for all files):
git pull
then push your content without using -f (force):
git push origin <branch-name>

I understood that from the first time I got the error, so I created another repository. I tried to push the second container (even forced it) but I'm still getting this error.
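The "always pushes to the first repository" symptom usually means origin in the second project still points at the first repo's URL. Here is a minimal sketch of diagnosing and fixing that, using local bare repositories to stand in for the two GitHub repos (all paths and names are illustrative):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Two bare repositories standing in for the two GitHub repos:
git init -q --bare hello-world.git
git init -q --bare howdy-world.git

# The second app's repo was accidentally configured against the first remote:
git init -q howdy-app && cd howdy-app
git config user.email "you@example.com" && git config user.name "You"
echo "Howdy World" > README && git add README && git commit -q -m "howdy"
git remote add origin "$work/hello-world.git"

git remote -v                                        # where does origin point?
git remote set-url origin "$work/howdy-world.git"    # repoint to the right repo
git push -q -u origin HEAD                           # now lands in howdy-world.git
```

With real GitHub repos the set-url argument would be the second repo's clone URL instead of a local path.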
Implement iPadOS 13.4 Pointer Interaction for UI and VM

This PR fixes #176 and contains parts of #160.

Adds pointer interaction support to:

UI:
- VM list cells
- VM view toolbar buttons

VM:
- Cursor movement
- Auto-hiding the iPadOS cursor when it is inside the VM display area

Missing: configuration views, and any other views not mentioned.

Is there any reason why we need to manually code cursor support for UIButton? In #192 I just set the pointerInteraction attribute in the storyboard.

> Is there any reason why we need to manually code cursor support for UIButton? In #192 I just set the pointerInteraction attribute in the storyboard.

Good catch, I hadn't thought of doing it in the storyboard. Just tried it. That unfortunately results in the "preview" effect applied to the button, which is incompatible with the visual effect view (causes the button to render completely black). It seems like the CI that auto-builds here is still on Xcode 13.3.1 and therefore fails to build the project with the reference to the new API. Not sure if I can fix that on my end?

> That unfortunately results in the "preview" effect applied to the button, which is incompatible with the visual effect view (causes the button to render completely black).

Right, that's why I also changed the contrast so the buttons are white (which frankly I like more). I think I might merge that and then rework this so it just handles the cursor in the VM. What do you think?

> It seems like the CI that auto-builds here is still on Xcode 13.3.1 and therefore fails to build the project with the reference to the new API. Not sure if I can fix that on my end?

That's fine.

> Right, that's why I also changed the contrast so the buttons are white (which frankly I like more). I think I might merge that and then rework this so it just handles the cursor in the VM. What do you think?

Sure! I can do that if you want, it's just a matter of rolling back some of the changes I made. Should I keep the effect for the VM list? It magnifies the cell slightly on hover.

> Sure! I can do that if you want, it's just a matter of rolling back some of the changes I made. Should I keep the effect for the VM list? It magnifies the cell slightly on hover.

I think it's a bit confusing for the user because it lifts the entire cell but the whole cell is not clickable (plus the Edit button is not differentiated, so it's not clear it's clickable). In my version, the run button and the edit button are highlighted. It is also more consistent with the pause screen, where the resume button is highlighted. Also, I merged #192 so you can rebase from it. Also, I guess because of how the release pipeline is set up, we won't be able to make a release until https://github.com/actions/virtual-environments/issues/620 is resolved.

> Also, I merged #192 so you can rebase from it.

The rebase worked as far as I can tell, nice button hover effects everywhere! And the cursor is still working in the VM.

Sorry, can you git rebase master and git push -f so it's easier to see the changes? Also, I see that sometimes if you move the cursor too fast, you'll hit an edge or the toolbar and the cursor will stop. Is there any way to "capture" the cursor (in the SPICE GTK client, the cursor is forced to (0,0) every time to prevent it from leaving the window)?

> Sorry, can you git rebase master and git push -f so it's easier to see the changes?

I will clean up the history tomorrow.

> Also, I see that sometimes if you move the cursor too fast, you'll hit an edge or the toolbar and the cursor will stop. Is there any way to "capture" the cursor (in the SPICE GTK client, the cursor is forced to (0,0) every time to prevent it from leaving the window)?

The cursor is controlled by the system and can’t be captured like on a desktop OS, meaning I can’t set the system cursor position, only read it.
🤷‍♂ And iOS stops sending cursor updates once it reaches the edge of the screen. A possible solution to the toolbar issue would be to overlay a view on top of everything that only handles cursor inputs. The way it works now is similar to the continuity cursor mode in many desktop VMs, where you can seamlessly mouse away from the VM. Another approach could be to use UIApplicationSupportsIndirectInputEvents, but it seems to only help distinguish gestures, not remove the pointer limitations.

I see, I think I have another idea which I'll work on in another branch. Basically, for #122 I'm implementing qemu's usb-tablet option, which emulates a USB tablet device. This allows SPICE to send an absolute position instead of a motion vector. I'll change it to allow the user to choose between mouse and tablet for touch, pencil, and touchpad.

@osy86 Here you are, it's much easier to see the changes now. 😆 single commit

What if we make guest additions for OSes like Windows, capture the cursors, and carry that data to the VM? Wouldn't we be able to put it in a specific address in memory and have the host read it? From what I can see from [this guide](https://developer.apple.com/documentation/uikit/pointer_interactions), it seems like we can make the guest OS use the Windows pointer to reduce lag, and report the cursor location to the guest server?

That's already what we're doing? Unless I'm misunderstanding?

Looks like I misunderstood the thread. Sorry about that.
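For reference, the storyboard attribute discussed above also has a short equivalent in code. This is a sketch only (iOS 13.4+); the function and parameter names are illustrative, not UTM's actual implementation:

```swift
import UIKit

// Opt a toolbar button into the system pointer hover effect in code,
// equivalent to ticking the pointer interaction checkbox in the storyboard.
func enablePointer(for button: UIButton) {
    if #available(iOS 13.4, *) {
        button.isPointerInteractionEnabled = true
    }
}
```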
Modeling as a Way of Learning About Complexity of Earth Systems
Dave Bice, Professor of Geoscience, Penn State University

For the last two decades, I've been fascinated by the complexity of Earth systems and have made teaching about these systems one of the main foci of my career. I have found that my own personal understanding, as well as student learning, has been greatly enhanced through the use of models. To my mind, the beauty of models is multifaceted: 1) they allow for experimentation as a way of learning; 2) they force us to think about quantifying things; 3) they provide a unifying framework for discussing processes; and 4) they force us to look for and describe relationships and feedbacks. Models alone are not enough for understanding all of the complexity of Earth systems, but I think they are essential tools. How did I get to this point? I went to graduate school at Berkeley in the mid-eighties thinking that I was going to become an all-purpose, field-based stratigrapher/structural geologist/tectonicist, like my mentor Walter Alvarez. To a certain extent, this is what I still do, but I also spend a great deal of time working with and teaching with numerical models of all kinds of Earth systems. This evolution was a natural outgrowth of my environment: Walter filled our office with computers, my classmate Lung Chang taught me how to program, and Walter had this unique way of looking at geology as the result of a vast array of processes with complex causes and effects. Before long, we began to realize the potential power of the computers to help us explore numerically the ideas we were always talking about. Thus, my slide into modeling began. As I was about to leave graduate school for my first faculty position at Carleton College, Walter excitedly called me into his office to show me this new program he had learned about that made numerical experimentation with systems so easy; this was my introduction to STELLA.
When I got to Carleton, I bought a bunch of computers and started to find ways of including STELLA modeling in class and lab exercises in many of my classes. I found it to be an effective and stimulating vehicle for getting students to think about how earth systems work and how complex the dynamics of these systems can be. For the most part, though, I was just tinkering with modeling these systems, because I was busy with the task of teaching a crowd of wonderfully curious and fantastically talented students. I finally got serious about developing a more comprehensive set of earth systems models during a sabbatical leave, which gave me the time to learn how to represent some of the key features of the climate system in the form of simple models. I created a web page presenting these materials in the hope that they would be useful to like-minded educators (Exploring the Dynamics of Earth Systems (more info)), and have been pleased to see that many people have made use of these resources. My interest in these models is now in a sort of renaissance period due to interactions with my colleagues at Penn State, many of whom are catching the STELLA bug.
How to properly calculate averages in SQL

For things such as revenue per employee, or selling price per product. Which method would be correct, and are there cases where one will work and the other won't?

SUM(revenue)/SUM(employee) /* OR */ AVG(revenue/employee)

I have been getting different answers with the above.

How much is the difference? It may be due to rounding, I assume.

Please share table structures, sample data and desired output.

@KaziMohammadAliNur's suggestion is generally in line with what - let's say - "regular" users might expect. That is, a ratio of the two separately totaled values. That's not to say that the other way is incorrect (it's more an average of averages, though) - and in certain business cases you may be asked to calculate in that way.

The SUM of employee 1 and employee 9 is 10. I don't see how that's useful. sum(employee) seems either a strange number to calculate or a strange name for the column.

Most people would compare AVG(revenue) with SUM(revenue) / COUNT(employee). In which case the main difference would be that the first would effectively ignore rows with NULL revenue, while the second would effectively treat such rows as having 0 revenue... (The average of NULL, 1, 2, 3 is 2, but the sum/count would be 6/4 = 1.5, unless the employee value is also NULL.) Without knowing your table structure it's difficult to know what's appropriate for you.

SUM(revenue)/SUM(employee) will just divide the sum of all revenues by the sum of employees (or it can be a count). Whereas AVG(revenue/employee) will sum all the revenue/employee ratios and then divide the result by the number of rows. For your purpose I think the first method will be appropriate.

The two are doing two very different things. One is calculating an overall average (the first one). The second is calculating an average of per-row ratios, treating each row separately. As a simple example, consider a small table:

employees  revenue
       99       99
        1      100

The first returns (99 + 100) / (99 + 1) = 1.99.
This would be the same average if you had a table with 100 rows, one per "employee", and the revenue were revenue / employee (i.e. 1 on 99 rows and 100 on 1 row). The second returns ((99 / 99) + (100 / 1)) / 2 = 50.5. This would be treating each row "equally". That is, it is a biased average. Sometimes this is useful, but not usually for counts. For instance, it might be useful for rates of some sort. Or it might be useful if the sizes of the rows are roughly the same. In general, you would want the first format under most circumstances, based on the limited information in your question.
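The difference can be checked directly. Here is a small sketch using Python's built-in sqlite3 module with a hypothetical two-row table matching the example above (table and column names are made up for illustration):

```python
import sqlite3

# In-memory database with the two example rows:
# 99 employees generating 99 revenue, and 1 employee generating 100 revenue.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (employees INTEGER, revenue REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(99, 99.0), (1, 100.0)])

# Overall average: total revenue divided by total head count.
overall = conn.execute("SELECT SUM(revenue) / SUM(employees) FROM t").fetchone()[0]

# Row-wise average: mean of the per-row revenue/employee ratios.
rowwise = conn.execute("SELECT AVG(revenue / employees) FROM t").fetchone()[0]

print(overall)  # 1.99
print(rowwise)  # 50.5
```

Note that revenue is declared REAL so the division is floating-point; with two INTEGER columns, many engines (including SQLite) would perform integer division instead.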
cannot access com.trueaccord.lenses.Updateable in java When I try to build the scala project to a jar for my java application. It shows that "cannot access com.trueaccord.lenses.Updateable " in my Intellij IDE. But there is actually no problem on compile & run time. Did you know what is the root cause of this problem? I'm unable to reproduce this. There's nothing special about this class. Maybe you can try to invalidate IntelliJ's cache? Tried to invalidate the intellij cache but with no luck. Can you help me to take a look is there any problems if I defined the class like this? final case class BasicPortfolios( val accountId : scala.Option[scala.Int] = { /* compiled code */ }, val accountType : scala.Option[scala.Predef.String] = { /* compiled code */ }, val balance : scala.Option[scala.Double] = { /* compiled code */ }, val equity : scala.Option[scala.Double] = { /* compiled code */ }, val unrealizedPl : scala.Option[scala.Double] = { /* compiled code */ }, val unrealizedPlPips : scala.Option[scala.Double] = { /* compiled code */ } ) extends scala.AnyRef with com.trueaccord.scalapb.GeneratedMessage with com.trueaccord.scalapb.Message[com.plugin.model.protobuf.portfolio.BasicPortfolios] with com.trueaccord.lenses.Updatable[com.plugin.model.protobuf.portfolio.BasicPortfolios] with scala.Product with scala.Serializable { val serializedSize : scala.Int = { /* compiled code */ } def writeTo(output : com.google.protobuf.CodedOutputStream) : scala.Unit = { /* compiled code */ } def mergeFrom(__input : com.google.protobuf.CodedInputStream) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def getAccountId : scala.Int = { /* compiled code */ } def clearAccountId : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def withAccountId(__v : scala.Int) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def getAccountType : scala.Predef.String = { /* compiled code */ } def clearAccountType : 
com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def withAccountType(__v : scala.Predef.String) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def getBalance : scala.Double = { /* compiled code */ } def clearBalance : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def withBalance(__v : scala.Double) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def getEquity : scala.Double = { /* compiled code */ } def clearEquity : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def withEquity(__v : scala.Double) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def getUnrealizedPl : scala.Double = { /* compiled code */ } def clearUnrealizedPl : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def withUnrealizedPl(__v : scala.Double) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def getUnrealizedPlPips : scala.Double = { /* compiled code */ } def clearUnrealizedPlPips : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def withUnrealizedPlPips(__v : scala.Double) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } def getField(__field : com.trueaccord.scalapb.Descriptors.FieldDescriptor) : scala.Any = { /* compiled code */ } def companion : com.plugin.model.protobuf.portfolio.BasicPortfolios.type = { /* compiled code */ } } object BasicPortfolios extends scala.AnyRef with com.trueaccord.scalapb.GeneratedMessageCompanion[com.plugin.model.protobuf.portfolio.BasicPortfolios] with scala.Serializable { implicit def messageCompanion : com.trueaccord.scalapb.GeneratedMessageCompanion[com.plugin.model.protobuf.portfolio.BasicPortfolios] = { /* compiled code */ } def fromFieldsMap(fieldsMap : scala.Predef.Map[scala.Int, scala.Any]) : com.plugin.model.protobuf.portfolio.BasicPortfolios = { 
/* compiled code */ } val descriptor : com.trueaccord.scalapb.Descriptors.MessageDescriptor = { /* compiled code */ } val defaultInstance : com.plugin.model.protobuf.portfolio.BasicPortfolios = { /* compiled code */ } implicit class BasicPortfoliosLens[UpperPB](_l : com.trueaccord.lenses.Lens[UpperPB, com.plugin.model.protobuf.portfolio.BasicPortfolios]) extends com.trueaccord.lenses.ObjectLens[UpperPB, com.plugin.model.protobuf.portfolio.BasicPortfolios] { def accountId : com.trueaccord.lenses.Lens[UpperPB, scala.Int] = { /* compiled code */ } def optionalAccountId : com.trueaccord.lenses.Lens[UpperPB, scala.Option[scala.Int]] = { /* compiled code */ } def accountType : com.trueaccord.lenses.Lens[UpperPB, scala.Predef.String] = { /* compiled code */ } def optionalAccountType : com.trueaccord.lenses.Lens[UpperPB, scala.Option[scala.Predef.String]] = { /* compiled code */ } def balance : com.trueaccord.lenses.Lens[UpperPB, scala.Double] = { /* compiled code */ } def optionalBalance : com.trueaccord.lenses.Lens[UpperPB, scala.Option[scala.Double]] = { /* compiled code */ } def equity : com.trueaccord.lenses.Lens[UpperPB, scala.Double] = { /* compiled code */ } def optionalEquity : com.trueaccord.lenses.Lens[UpperPB, scala.Option[scala.Double]] = { /* compiled code */ } def unrealizedPl : com.trueaccord.lenses.Lens[UpperPB, scala.Double] = { /* compiled code */ } def optionalUnrealizedPl : com.trueaccord.lenses.Lens[UpperPB, scala.Option[scala.Double]] = { /* compiled code */ } def unrealizedPlPips : com.trueaccord.lenses.Lens[UpperPB, scala.Double] = { /* compiled code */ } def optionalUnrealizedPlPips : com.trueaccord.lenses.Lens[UpperPB, scala.Option[scala.Double]] = { /* compiled code */ } } final val ACCOUNT_ID_FIELD_NUMBER : scala.Int = { /* compiled code */ } final val ACCOUNT_TYPE_FIELD_NUMBER : scala.Int = { /* compiled code */ } final val BALANCE_FIELD_NUMBER : scala.Int = { /* compiled code */ } final val EQUITY_FIELD_NUMBER : scala.Int = { /* compiled code 
*/ } final val UNREALIZED_PL_FIELD_NUMBER : scala.Int = { /* compiled code */ } final val UNREALIZED_PL_PIPS_FIELD_NUMBER : scala.Int = { /* compiled code */ } } portfolios.getAccountId(); // intellij will prompt "cannot access com.trueaccord.lenses.Updateable" but it can still compile and run the java application. It sounds like the way you produce the jar doesn't make intellij fetch the sources for Lenses. Maybe you can manually add lenses as a dependency of your project? I found that the root cause is Intellij can't access scala package object, would you have any suggestion? i.e. can't access com.trueaccord.lenses.Updatable normal in any JAVA file. Hi Gary, I am unable to reproduce this. Can you share complete instructions on how to trigger this issue? It sounds like an IntelliJ bug, not a Lenses bug. On Wed, Jun 3, 2015 at 8:16 PM, Gary Lo<EMAIL_ADDRESS>wrote: I found that the root cause is Intellij can't access scala package object, would you have any suggestion? i.e. can't access com.trueaccord.lenses.Updatable normal in any JAVA file. — Reply to this email directly or view it on GitHub https://github.com/trueaccord/Lenses/issues/2#issuecomment-108701612. -- -Nadav Closing due to inactivity. I have the same problem - also tested with a minimal configuration. Intellij says: cannot access com.trueaccord.lenses.Updateable in java, but everything compiles just fine. I'm using "com.trueaccord.scalapb" %% "scalapb-runtime" % "0.5.42" % "protobuf" Can you send step-by-step instructions on how to get this problem? 
new scala-sbt project in Intellij (2016.2.4), with build.sbt:

name := "CloudTest"

version := "1.0"

scalaVersion := "2.11.8"

PB.targets in Compile := Seq(
  scalapb.gen() -> (sourceManaged in Compile)(_ / "generated-proto").value
)

libraryDependencies ++= Seq(
  // For finding google/protobuf/descriptor.proto
  "com.trueaccord.scalapb" %% "scalapb-runtime" % "0.5.42" % "protobuf"
)

and scalapb.sbt:

addSbtPlugin("com.thesamet" % "sbt-protoc" % "0.99.1")

libraryDependencies += "com.trueaccord.scalapb" %% "compilerplugin" % "0.5.42"

and Person.proto:

syntax = "proto3";
package test;

message Person {
    string name = 1;
    int32 age = 2;
}

and the java class:

import test.Person.Person;

/**
 * Created by qux on 9/26/16.
 */
public class Main {
    public static void main(String[] args) {
        Person a = new Person("7", 7);
        int age = a.age();
    }
}

Thanks! Was able to reproduce this. I think it's because everything in the lenses project is inside a package object. I've just released a new version of it which moves Updatable and other classes outside the package object. Can you add this to your libraryDependencies in build.sbt?

"com.trueaccord.lenses" %% "lenses" % "0.4.7"

If this works, I'll release ScalaPB with a dependency on it, so you'll get the new version of lenses transitively (and this line could be removed).

Around 8 hours ago it worked perfectly with this additional dependency. But now I get compile errors with the additional dependency (without it, it works, but IntelliJ errors):

error: cannot access Updatable ... class file for com.trueaccord.lenses.package$Updatable not found

Maybe it's because I'm using a map in a message:

syntax = "proto3";
package test;

message Person {
    string name = 1;
    int32 age = 2;
    map<string, string> foobar = 3;
}

So perhaps with a new release of scalapb with the dependency on it all is fine.. Ok, remove the dependency, update ScalaPB to 0.5.43, clean in sbt, and let's see if it helps. Works like a charm as far as I see. No compile or Intellij errors :) Thanks!
Awesome!
I blogged a few days ago about a study by Kiju Jung that suggested that implicit bias leads people to underestimate the danger of female-named hurricanes. The study used historical data to demonstrate a correlation between femininity and death toll, and subsequent experiments seemed to show that people indeed estimate hurricanes to be less dangerous (all else equal) if they have more feminine names. As you can read in my previous post, the study seemed rather convincing to me, if a bit surprising in its findings. But there was correlation in the regression, and there were the experiments which show why this correlation would occur. Plus, PNAS is a large journal, they would have this properly vetted, right? Anyway, I consider myself a rather cynical person when it comes to trust in statistical analyses, but my own “bullshit detectors were not twitching madly”, and apparently neither were those of other people (see further critiques here and here, and feel free to add links, I will surely have missed some). Particularly the reanalysis of the regression on historical data by Bob O’Hara was sobering, as it suggested that the study failed to notice that hurricane damage affects death toll nonlinearly, and if one does include this in the models, femininity drops out as a predictor. I trust that Bob knows what he’s doing, so I would have given him the benefit of the doubt, but in the end I thought I’d have a look at the data myself. The data is open, and Bob has his analysis commendably on GitHub, which I forked to my account. Bob’s arguments were mainly based on looking at the residuals, and choosing that as a guideline for adding predictors. What I did is basically back this up with a model selection, to see how the different possible models compare, and whether there are reasonable models that still have femininity of the name in there as a predictor.
My full knitr report is here; sorry, it's easier to leave everything on GitHub, but in a nutshell:

- I guess Bob is right, there should be a nonlinear term for damage added. There is consistently higher AICc for models that have such a term included. Btw., I get more support for a sqrt than for a quadratic term.
- If we do this, femininity as a predictor is never in the best model, but models that include femininity stay within Delta AICc of 2. I would conclude from that that we have no smoking gun for an effect of femininity, but it's also not completely excluded.

## Global model call: gam(formula = alldeaths ~ MasFem * (Minpressure_Updated.2014 +
##     sqrt(NDAM) + NDAM + I(NDAM^2)), family = negbin(theta = c(0.2,
##     10)), data = Data[sqrt(Data$NDAM) < 200, ], na.action = "na.fail")
## ---
## Model selection table
##     (Int)  MsF  Mnp_Upd.2014  NDA  NDA^2  sqr(NDA)  MsF:NDA
## 21  0.3439  -0.0001522  0.04794
## 86  0.3341  -0.08935  -0.0001610  0.04874  2.861e-05
## 25  0.5271  -3.468e-09  0.03524
## 282 0.5197  -0.03425  -3.757e-09  0.03556
## 150 0.3177  -0.16580  -0.0001650  0.04941
## 22  0.3470  0.08015  -0.0001536  0.04795
## 154 0.5172  -0.15170  -3.772e-09  0.03565
## 23  3.3220  -0.003029  -0.0001482  0.04659
## 29  0.3052  -0.0001907  9.577e-10  0.05092
## 27  5.6880  -0.005271  -3.464e-09  0.03390
##     MsF:sqr(NDA)  I(NDA^2):MsF  df  logLik  AICc   delta  weight
## 21                              3   -293.0  592.2  0.00   0.172
## 86                              5   -290.9  592.4  0.24   0.153
## 25                              3   -293.3  592.8  0.65   0.124
## 282 1.358e-09                   5   -291.1  592.9  0.69   0.122
## 150 0.004055                    5   -291.2  593.1  0.95   0.107
## 22                              4   -292.7  593.9  1.69   0.074
## 154 0.003908                    5   -291.7  594.1  1.86   0.068
## 23                              4   -292.9  594.2  1.99   0.064
## 29                              4   -292.9  594.4  2.16   0.058
## 27                              4   -293.0  594.4  2.21   0.057

Note that three outliers (hurricanes with extremely large damages) were removed for these results, which seemed sensible (because more conservative) to me, but which works out in favor of femininity. Details in the report.
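For readers unfamiliar with the last columns of the selection table: delta is the difference in AICc from the best model, and weight is the Akaike weight derived from it. A minimal sketch of that computation in Python, using the df and logLik values of three models from the table (the formulas are the standard ones; the sample size n is a hypothetical stand-in, not taken from the report's code, so the resulting weights will not match the table exactly):

```python
import math

# (df, logLik) pairs for three of the models in the selection table.
models = {21: (3, -293.0), 86: (5, -290.9), 25: (3, -293.3)}
n = 92  # hypothetical sample size; the real n comes from the hurricane data set

def aicc(loglik, k, n):
    """Small-sample corrected AIC: AIC + 2k(k+1)/(n-k-1)."""
    aic = -2.0 * loglik + 2.0 * k
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

scores = {m: aicc(ll, k, n) for m, (k, ll) in models.items()}
best = min(scores.values())

# "delta" column: AICc difference from the best model.
deltas = {m: s - best for m, s in scores.items()}

# "weight" column: Akaike weights, i.e. relative model likelihoods
# exp(-delta/2), normalized so they sum to one over the candidate set.
rel = {m: math.exp(-d / 2.0) for m, d in deltas.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}
```

The rule of thumb used in the post — models within Delta AICc of 2 are not clearly worse than the best one — falls directly out of the exp(-delta/2) term: at delta = 2 a model still has about 37% of the best model's relative likelihood.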
The figure below shows the predictions of the best model that included femininity – the model predicts a practically relevant effect of femininity for high damage values. But as I noted in my report, we have very few data points for the high-damage region, so I think this is a VERY fragile result and I wouldn’t bet my money on it. All in all, my conclusions from the statistical analysis based on the data in study is that I wouldn’t exclude the possibility that femininity could have an effect of a size that would make it relevant for policy, but there is no certainty about it at all. I have read other comments that claimed that the data is wrong. I can’t say anything about that, but if that is so, it should be possible to establish that. In general, assuming that the data was correct, I think the study isn’t quite as ridiculous as some portrayed it. I find it quite conceivable that there is an implicit bias that will affect our estimates of the severity of storms. The experiments seem to support that hypothesis, and they are easy enough to replicate, so people, please go out and do so. The more shaky question is how relevant this bias is for fatalities. Here, I would say the study is clearly overconfident in its analysis. I would conclude that the uncertainty range clearly includes zero, and I’m not quite sure what my best estimate would be … gut feeling: lower than the plot that I present above. To end this on a general observation: what makes me a bit sad is knowing that, as for another recent study in PNAS where we wrote a reply, the authors would have probably found it much more challenging to place this study in PNAS if they would have done a more careful and conservative statistical analysis. 
I’m not saying that the competition for space in high-impact journals directly or indirectly encourages presenting results in an overconfident way – there are many examples in the history of science of overconfidence and wishful thinking from when the fight was not yet about journal space and tenure. But let’s say that there is a certain amount of noise in the analyses that are on the marketplace where studies are traded. High-impact journals are looking for the unexpected, and sometimes they may find it, but this also means that they will attract the outliers of this noise. As a consequence, at equal review quality, we can expect these journals to have more outliers (wrong results), and that I think is a problem, because the publications in these journals decide careers and influence policy. I’m quite sure that the review in PNAS or Nature is as good as in any other journal, but given the previous arguments, it needs to be better than in other journals, much better, especially on the side of the methods. A few practical ideas: maybe it's worth thinking about adjusting p-values or alpha values to journal impact, knowing that high-ranked journals implicitly get more studies offered than they publish, and thus implicitly test multiple hypotheses (implicitly, all studies that are done, because if someone finds a large effect he/she sends it to a high-impact journal). Also, the big journals should definitely find or hire statisticians to independently confirm the analysis of all papers they publish. And in general, it would be good to remember that extraordinary claims require extraordinary evidence.

7 thoughts on “Female hurricanes reloaded – another reanalysis of Jung et al.”

Jeremy Freese has the best takedown I’ve seen yet, at http://scatter.wordpress.com/2014/06/03/my-thoughts-on-that-hurricane-study/ See also Andrew Gelman at http://andrewgelman.com/2014/06/06/hurricanes-vs-himmicanes/

Thanks for the links.
Brought me to Andrew’s post at http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/06/05/hurricanes-vs-himmicanes/ which I found worth reading, especially the comments by the anonymous “colleague who is interested in risk perception”. I read the post by Jeremy Freese but I’m afraid that I find a lot of his criticism derived. For example, I don’t see what’s the problem with the effect being mediated by an interaction. Seems perfectly possible to me that carelessness only becomes deadly for larger storms. I think they should have better tested their models (see above), but if I have done this and I find a significant and strong interaction, why wouldn’t I report it?
#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <regex>
#include <vector>

#include "coordinate.h"
#include "date.h"
#include "observer.h"
#include "radian.h"
#include "test.h"

using namespace PA;

void ephemeris(Date date)
{
    double jd = date.GetJulianDate();
    {
        std::ios_base::fmtflags fmtflags = std::cout.flags();
        std::cout << "Date & Time:" << std::endl;
        std::cout << " Julian Date: " << std::fixed << std::setprecision(6)
                  << jd << std::endl;
        std::cout << " Date (TT): " << date.GetTTString() << std::endl;
        std::cout << " Delta-T: " << std::showpos << std::fixed
                  << std::setprecision(2) << date.GetDeltaT() << "s" << std::endl;
        std::cout.flags(fmtflags);
    }
    {
        std::ios_base::fmtflags fmtflags = std::cout.flags();
        Observer observer{jd};
        std::cout << "Earth:" << std::endl;
        std::cout << " Nutation Lon.: " << std::setw(16)
                  << RadToArcSecStr(observer.GetNutationLongitude(), 3) << std::endl;
        std::cout << " Nutation Obliq.: " << std::setw(16)
                  << RadToArcSecStr(observer.GetNutationObliquity(), 3) << std::endl;
        std::cout << " Mean Obliq.: " << std::setw(19)
                  << RadToDMSStr(observer.GetObliquityMean(), 3) << std::endl;
        std::cout << " Obliquity.: " << std::setw(19)
                  << RadToDMSStr(observer.GetObliquity(), 3) << " ("
                  << RadToDegStr(observer.GetObliquity(), 6) << ")" << std::endl;
        Observer::Body bodies[]{Observer::Body::kSun, Observer::Body::kMoon};
        for (auto body : bodies) {
            std::cout << Observer::BodyName(body) << ":" << std::endl;
            std::cout << " Geocentric Lon.: " << std::setw(19)
                      << RadToDMSStr(observer.GetGeocentricLongitude(body), 2) << " ("
                      << RadToDegStr(observer.GetGeocentricLongitude(body), 6) << ")"
                      << std::endl;
            std::cout << " Geocentric Lat.: " << std::setw(19)
                      << RadToDMSStr(observer.GetGeocentricLatitude(body), 2) << " ("
                      << RadToDegStr(observer.GetGeocentricLatitude(body), 6) << ")"
                      << std::endl;
            std::cout << std::setprecision(8);
            std::cout << " Radius Vector: " << std::setw(11)
                      << observer.GetRadiusVectorAU(body) << " AU";
            std::cout << std::fixed << std::setprecision(1) << " ("
                      << observer.GetRadiusVectorAU(body) * 149597870.7 << " km)"
                      << std::endl;
            std::cout << " Aberration Lon.: " << std::setw(16)
                      << RadToArcSecStr(observer.GetAberrationLongitude(body), 3)
                      << std::endl;
            std::cout << " Aberration Lat.: " << std::setw(16)
                      << RadToArcSecStr(observer.GetAberrationLatitude(body), 3)
                      << std::endl;
            std::cout << " Apparent Lon.: " << std::setw(19)
                      << RadToDMSStr(observer.GetApparentLongitude(body), 2) << " ("
                      << RadToDegStr(observer.GetApparentLongitude(body), 6) << ")"
                      << std::endl;
            std::cout << " Apparent Lat.: " << std::setw(19)
                      << RadToDMSStr(observer.GetApparentLatitude(body), 2) << " ("
                      << RadToDegStr(observer.GetApparentLatitude(body), 6) << ")"
                      << std::endl;
            std::cout << " Apparent R.A.: " << std::setw(14)
                      << RadToHMSStr(observer.GetApparentRightAscension(body), 3)
                      << " (" << RadToHourStr(observer.GetApparentRightAscension(body), 6)
                      << " = " << RadToDegStr(observer.GetApparentRightAscension(body), 6)
                      << ")" << std::endl;
            std::cout << " Apparent Decl.: " << std::setw(19)
                      << RadToDMSStr(observer.GetApparentDeclination(body), 2) << " ("
                      << RadToDegStr(observer.GetApparentDeclination(body), 6) << ")"
                      << std::endl;
        }
        std::cout.flags(fmtflags);
    }
    {
        std::ios_base::fmtflags fmtflags = std::cout.flags();
#if 0
        std::cout << " Mean Longitude.: " << std::setw(19)
                  << RadToDMSStr(sun_mean_longitude, 2) << " ("
                  << RadToDegStr(sun_mean_longitude, 6) << ")" << std::endl;
        std::cout << std::fixed << std::setprecision(8);
#endif
        /* Equation of Time */
        // For comparison: http://mb-soft.com/public3/equatime.html
        // double eot{RadNormalize(sun_mean_longitude - 20.49552_arcsec -
        //                         0.09033_arcsec - sun_apparent_ra +
        //                         nutation_longitude * cos(obliquity))};
        // std::cout << "Equation of Time: " << std::setw(16) << RadToHMSStr(eot)
        //           << " (" << RadToDegStr(eot) << ")" << std::endl;
        std::cout.flags(fmtflags);
    }
}

int main(void)
{
    test_internal();
    std::string line;
    std::cout << "Date (y-m-d-hh:mm:ss or y-m-d), or (n)ow? ";
    std::cin >> line;
    if (line == "n") {
        PA::Date date;
        std::time_t now;
        now = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
        struct tm* ptm;
        ptm = std::gmtime(&now);
        date.SetCalendarTT(1900 + ptm->tm_year, ptm->tm_mon + 1, ptm->tm_mday,
                           ptm->tm_hour, ptm->tm_min, ptm->tm_sec);
        ephemeris(date);
    } else {
        std::smatch match;
        if (std::regex_match(line, match,
                             std::regex("(\\d+)\\-(\\d+)\\-(\\d+)-"
                                        "(\\d+):(\\d+):(\\d+(?:\\.\\d+)?)",
                                        std::regex::ECMAScript))) {
            PA::Date date;
            date.SetCalendarTT(std::stoi(match.str(1)), std::stoi(match.str(2)),
                               std::stoi(match.str(3)), std::stoi(match.str(4)),
                               std::stoi(match.str(5)), std::stof(match.str(6)));
            ephemeris(date);
        } else if (std::regex_match(
                       line, match,
                       std::regex("^(\\d+)\\-(\\d+)\\-(\\d+(?:\\.\\d+)?)$",
                                  std::regex::ECMAScript))) {
            PA::Date date;
            date.SetCalendarTT(std::stoi(match.str(1)), std::stoi(match.str(2)),
                               std::stof(match.str(3)));
            ephemeris(date);
        }
    }
#if 0
    {
        // [Peter11] p.166
        // 214.8675028
        // 1.716074358
        // RA: 14h 12m 10s
        // Decl: -11d 34' 52"
        date.SetCalendarTT(2003, 9, 1);
        ELP82JM elp{date.GetJulianDate()};
        elp.GetPosition();
    }
#endif
    return 0;
}
User talk:Dan Bron/Snippets/PrettyPictures - Errors represented with [\] -- Oleg Kobchenko <<DateTime(2008-12-16T03:12:11-0500)>> - Consider adding glwh to standardize picture size for screenshots. -- Oleg Kobchenko <<DateTime(2006-12-13T01:37:50Z)>> - As samples, pictures for smallest nice rectangular size of viewmat: 121, i.e. n=60 -- Oleg Kobchenko <<DateTime(2006-12-13T06:35:40Z)>> - Here is more compact thumbnail layout: 1. filled in circle 2. hollow circle 3. Bullseye, delta-circumference = 1 4. Bullseye, delta-circumference = m m ([ * [: <. 0.5 + %~) ] -- Oleg Kobchenko <<DateTime(2006-12-13T06:50:08Z)>> Oleg provided a mechanism to save viewmat images to PNG files, so I feel that the following should generate all the image files for this table. However, it does not; I just get a bunch of empty white images. Debugging indicates that it is probably a repaint/selected window issue. I do not know how to fix it. load 'viewmat media/platimg debug' coinsert 'jgl2' PP =: 1 : 'viewmat@:(u [: | j./~@:i:)' glqall =: (|. $ [:glqpixels 0 0&,)@glqwh glwh =: 3 : 'wd''pmovex '',(+0 0,y-glqwh_jgl2_@$@#)&.".wd''qformx''' ppp =: dyad define NB. Pretty picture print wdreset'' ". '(',y,') PP 60' NB. jvm_g_paint_jviewmat_ :: 0: '' glwh 4 3*30 NB. jvm_g_paint_jviewmat_ :: 0: '' 6!:3]0.5 (glqall'') writeimg jpath '~temp\pp\',x,'.png' 6!:3]0.5 wdreset'' ) A =: '>' ; '= <.' ; '<.@:]' NB. Etc B =: 'filled_in_circle' ; 'hollow_circle' ; 'bullseye' NB. Etc B ppp&.> A -- Dan Bron <<DateTime(2007-03-16T15:42:32Z)>> Well this steps into the uncharted treacherous territory of sync programmatic execution and async GUI. So I guess to do it like that, the use of timer might help. But that's an overkill. I recommend using something that is proven and supported like the animate addon. You can designate each picture as a step in the animation. And all will be saved nicely in a series of files. 
I guess you could take it further by making code that will convert your boxed list of picture verbs into a script for animate automatically. Or have just one new animate script that will accept a list of picture verbs in some form and derive number of steps, etc. Maybe an Options dialog. -- Oleg Kobchenko <<DateTime(2007-03-16T16:35:59Z)>>
I woke up, fell out of bed, dragged a comb across my head, and checked the statistics generated by one of my mail servers during the past 24 hours. The day before, I wrote a Sendmail milter in Perl to match every inbound mail relay against three of the most popular DNS blacklists: spamhaus.org, sorbs.net and spamcop.net. No actual blocking took place, as I was just interested in collecting numbers. (A milter is an extension to Sendmail’s mail transfer agent; the code for my milter is freely available on my blog). After the inbound e-mail was catalogued in the database, it was passed on to a trio of e-mail filters. First, it hit the greylisting milter, which uses a heavily customized version of Evan Harris’s relaydelay code. If it passed that filter, it was checked by ClamAV for viruses and phishing scams, then finally passed to SpamAssassin for spam checking. As you can see, the results are impressive. Of the 122,865 connections seen, spamcop.net matched on 45,829 IP addresses, sorbs.net matched 59,010, and spamhaus.org’s sbl-xbl list matched 57,881. Beyond the DNS blacklist matches, we see that the greylisting filter is working overtime: 120,571 messages were seen by the greylisting code, with only 87 matching manual whitelists. Of those, only 2,515 messages were retried and successfully passed through the filter. Of that number, ClamAV discarded seven worms and 23 phishing scams, and SpamAssassin pulled out 64 confirmed spams, although 308 suspected spams were passed through. This filtering resulted in 2,113 messages actually delivered to e-mail inboxes in that 24-hour period, or just less than 2 percent of the overall mail volume. If the DNS blacklist checks were in place and refusing e-mail based on the lookups to sorbs.net and so on, the number of e-mails hitting the filter chain would be nearly halved, although at least 60,000 unwanted e-mails would still hit the filters. 
Looking through the logs during the past few weeks, I saw that this was not an anomalous event. These numbers crop up nearly every single day. The MySQL database running as the relaydelay back-end has seen more than 43 million e-mails since I implemented it in its current form almost exactly one year ago. If you think that this filter chain is rather absurd, take it as an indication of the general state of e-mail traffic today. Without these filters, e-mail through this server would be completely unusable due to the crushing spam volume. That’s the truly absurd part.
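The greylisting step in that chain reduces to a small piece of logic: the first delivery attempt from an unknown (relay IP, sender, recipient) triplet is temporarily rejected, and the message is accepted only once the sending server retries after a minimum delay. This is not the relaydelay code itself (which is Perl and database-backed); it is just a hypothetical in-memory illustration of the idea, sketched in Python:

```python
import time

# Hypothetical in-memory greylist: triplet -> timestamp of first attempt.
# Real implementations (like relaydelay) keep this in a database and
# also expire old entries and maintain whitelists.
MIN_DELAY = 300  # seconds a sender must wait before a retry is honored
first_seen = {}

def greylist_check(relay_ip, sender, recipient, now=None):
    """Return 'accept' or 'defer' for one delivery attempt."""
    now = time.time() if now is None else now
    triplet = (relay_ip, sender, recipient)
    if triplet not in first_seen:
        first_seen[triplet] = now
        return "defer"   # unknown triplet: temporary failure (SMTP 4xx)
    if now - first_seen[triplet] >= MIN_DELAY:
        return "accept"  # legitimate MTAs retry; most spamware does not
    return "defer"       # retried too soon

# First attempt is deferred; a retry 10 minutes later is accepted.
print(greylist_check("203.0.113.5", "a@example.com", "b@example.org", now=0))    # defer
print(greylist_check("203.0.113.5", "a@example.com", "b@example.org", now=600))  # accept
```

The effectiveness seen in the numbers above comes from that single asymmetry: a real mail transfer agent treats the 4xx response as "try again later", while most spam engines fire once and move on.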
Cast float to decimal in C#. Why is (decimal)0.1F == 0.1M not false because of rounding?

If I evaluate the following in C#, it yields true:

(decimal)0.1F == 0.1M

Why doesn't the conversion to float and back to decimal introduce any rounding errors?

1 Answer

The cause of the observed behavior is that Microsoft’s C# implementation converts float to decimal using only seven significant decimal digits.

Microsoft’s implementation of C# uses .NET. When .NET converts a single-precision floating-point number to decimal, it produces at most seven significant digits, rounding any residue using round-to-nearest. The source text 0.1F becomes the single-precision value 0.100000001490116119384765625. When this is converted to decimal with seven significant digits, the result is exactly 0.1. Thus, in Microsoft’s C#, (decimal) 0.1F produces 0.1, so (decimal) 0.1F == 0.1M is true.

We can compare this with a non-Microsoft implementation, Mono C#. An online compiler for this is available here. In it, Console.WriteLine((decimal)0.1F); prints “0.100000001490116”, and (decimal)0.1F == 0.1M evaluates to false. Mono C# appears to produce more than seven digits when converting float to decimal.

Microsoft’s C# documentation for explicit conversions says “When you convert float or double to decimal, the source value is converted to decimal representation and rounded to the nearest number after the 28th decimal place if required.” I would have interpreted this to mean that the true value of the float, 0.100000001490116119384765625, is exactly converted to decimal (since it requires fewer than 28 digits), but apparently this is not the case.

We can further confirm this and illustrate what is happening by converting float to double and then to decimal. Microsoft’s C# converts double to decimal using 15 significant digits. If we convert float to double, the value does not change, because double can exactly represent each float value.
So (double) 0.1F has exactly the same value as 0.1F, 0.100000001490116119384765625. However, now, when it is converted to decimal, 15 digits are produced. In a Microsoft C# implementation, Console.WriteLine((decimal)(double) 0.1F); prints “0.100000001490116”, and (decimal)(double) 0.1F == 0.1M evaluates to false.
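The two rounding behaviors described above can be reproduced outside of C#. Here is a sketch in Python (an illustration, not .NET itself) that recovers the exact single-precision value of 0.1F by round-tripping through IEEE-754 single precision, then rounds it to 7 and to 15 significant digits with the decimal module:

```python
from decimal import Decimal, Context
from struct import pack, unpack

# Round-trip 0.1 through IEEE-754 single precision to get the exact
# value that C#'s 0.1F denotes. Every float is exactly representable
# as a double, so no further error is introduced by the round-trip.
f = unpack("f", pack("f", 0.1))[0]

exact = Decimal(f)  # exact decimal expansion of the single-precision value
print(exact)        # 0.100000001490116119384765625

seven = Context(prec=7).plus(exact)     # mimics .NET's float -> decimal
fifteen = Context(prec=15).plus(exact)  # mimics .NET's double -> decimal

print(seven)                    # 0.1000000
print(seven == Decimal("0.1"))  # True
print(fifteen)                  # 0.100000001490116
```

The seven-digit result compares equal to 0.1, while the fifteen-digit result matches the "0.100000001490116" output quoted in the answer.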
OPCFW_CODE
4. In Scientific Communication, Social Media Took over Cell/Nature/Science In August 2012, we wrote a commentary titled - At that time, both concepts promoted in the commentary were ridiculous. Everyone told us that 'biologists would not post preprints in arxiv.org' and 'dominance of Nature/Science/Cell would never go away'. Fast forward by sixteen months and you find - (i) NIH and CSHL establish bioRxiv, (ii) a Nobel laureate utilized the publicity of the award ceremony to talk against the Cell/Nature/Science culture. Here is the real evidence that the so-called 'high-visibility' journals are losing their dominance. During early 2013, K. R. Bradnam and collaborators submitted their Assemblathon 2 paper to arXiv and GigaScience. The Assemblathon 2 paper had been a colossal failure by the accepted standards of biology, because it was not published in a 'high-visibility journal'. GigaScience, being an upstart journal, is fairly low in the pecking order. On the other hand, by the measure of actual high visibility, the Assemblathon 2 paper was immensely successful. The paper was widely discussed in the community, and its author used social media (twitter - @assemblathon and blog) to generate and maintain interest in the work. Moreover, the open review model of GigaScience made the process transparent, constructive and beneficial for the scientists involved. We are definitely seeing a major change in trend in the overall publication and scientific communication process. The last point is best understood by comparing the open review model of GigaScience with the closed review process experienced by Heng Li for his BWA-MEM paper. An anonymous reviewer called the author 'scientifically dishonest' for no good reason. As a result, a frustrated Heng Li posted his preprint in arXiv and got done with it. More details are available in the following commentary. Overall, 2013 saw an explosion in the use of social media (e.g.
blog, github, twitter, slideshare, biostar, etc.), through which scientists from all over the globe started to communicate directly with each other (for example, here is a Cuban blog on bioinformatics and proteomics). In earlier years, publishing a paper in a high-profile journal or presenting at a high-visibility conference were the only two modes of communication, but now 'publish or perish' is being replaced by 'tweet or perish'. The biggest benefit of internet communication is the creation of new services and ways of sharing, whose equivalents did not exist in the earlier era. Three examples are shown below. Rosalind and Stepic Rosalind has been a very popular bioinformatics teaching tool, and its creators went on to build another useful service called Stepic. The relevant commentaries are shown below. In a series of commentaries, we discussed various aspects of Biostar - an online question-answer forum for bioinformaticians. In mid-2012, we received early access to the SOAPdenovo2 executable code and found it useful. When the SOAPdenovo2 paper was published six months later, we decided to understand the 40,000-line C code, and shared the insights with our readers. We used a blog and wiki for this effort. Coming up next -
OPCFW_CODE
""" create many bots with the same functionality """ import asyncio from vkwave.bots import SimpleLongPollBot, TaskManager, ClonesBot bot = SimpleLongPollBot(tokens=["Bot0TOKEN"], group_id=444,) @bot.message_handler(bot.text_filter("123")) async def simple(event: bot.SimpleBotEvent): await event.answer("HELLO") @bot.message_handler() async def any_(event: bot.SimpleBotEvent): await event.answer("any") clones = ClonesBot( bot, SimpleLongPollBot("Bot1TOKEN", 192868628), SimpleLongPollBot("Bot2TOKEN", 172702125) ) async def clone_request(): # clones.clones - тупл с клонами (SimpleLongPollUserBot или SimpleLongPollBot) # Запрос с первого клона print(await clones.clones[0].api_context.users.get()) def add_clone(clone): # Добавляет клона в общий список clones.add_clone(SimpleLongPollBot("Bot3Token", 122134648)) asyncio.get_event_loop().run_until_complete(clone_request()) clones.run_all_bots(last_handler=any_) # В клонах хендлеры могут перемешаться, так что ставим хендлер который реагирует # на все подряд последним # or # task_manager = TaskManager() # task_manager.add_task(bot.run) # task_manager.run()
STACK_EDU
TekkitRestrict 1.20 Dev 1
Uploaded Nov 3, 2013
Supported Bukkit Versions

[TekkitRestrict v1.20 Dev 1 Changes]
- Fixed bug with quarries not selecting the proper area.
- Fixed all other landmark problems related to Tekkit Restrict.

[TekkitRestrict v1.19 Release Changes]
- Fixed bug where certain preset modgroups were not interpreted correctly.
- Fixed metrics not sending version correctly.
- Changed updater to comply with new policy.
- Added better crafting handler. This one also handles crafting on automatic crafting tables, project tables etc., and will give you back the items you used for crafting that banned item.

[TekkitRestrict v1.18 Release Changes]
- Removed most cache interactions and replaced them with a more efficient system.
- Improved efficiency and performance.
- Added messages to DisabledItems, Limiter, NoClick and LWCProtect. You can now set messages that will be displayed when a user tries to obtain a banned item, places more than the limit of a block, etc. See: Ways to describe an Item
- Added TMetrics. Lately, MCStats hasn't been very reliable, so I decided to make my own metrics system. It reports the same things as MCStats does, except that it also sends the memory allocated to the server.
- Added a notice that appears after all plugins have loaded if there were any warnings/errors thrown while loading TekkitRestrict.
- Added debug command.
- Added /tr admin reload subcommands to only reload certain parts of TekkitRestrict.
- You can now check someone's limits if he is offline.
- You can now clear someone's limits if he is offline.
- Use /tr warnings <load|config|other> to view warnings that occurred since TekkitRestrict has been running.

Dupes And Hacks
- Added fix for Mining Laser + Automatic Crafting Table MK II dupe.
- Improved hack handling.
- Added Use-Command-On-Hack and Use-Command-On-Dupe.
- Configs now update a lot better and keep your settings. If settings were readable in the last version, they will be in this and the next version.
- Changed Hack.config.yml to HackDupe.config.yml.
- Improved the layout of the HackDupe config.
- Fixed a bug where TekkitRestrict ignored the settings in the Hack.config.yml.
- Removed MicroPermissions and replaced them with GroupPermissions.
- Limiter now works as it should and interprets configlimit correctly.
- Limiter now supports group permissions (tekkitrestrict.limiter.ee.2).
- Limiter now uses a permission cache. If you change someone's limiter permissions, use /tr admin reload limiter for it to update.
- Fixed MySQL database issues.
- Fixed /tr emc commands.
- Entity remover now removes entities per chunk, lowering the chance of errors and problems.
- Fixed Gem Armor blocker.
- Fixed bug where NoClick doesn't work properly when you click on the air. (ticket 279)
- Added potential fix for ComputerCraft turtles + Automatic Crafting Tables MK2 dupe. (ticket 286)
- Added fix for Automatic Crafting Table MK2 and Block Breaker dupe. (ticket 282)
- Fixed typos in error messages.
- Fixed openalc bug when the viewer of the bag is also the owner of that bag.
- Fixed bugs with RPTimerSetter. (It now works perfectly.)
- Fixed bug where the forcefield anti-hack triggered when you shot someone with an arrow.
- Fixed a lot of crashes when putting certain items in the Deployer.
- Improved SafeZone efficiency so that threads can check safezones faster.
- Generalized warning messages.

[TekkitRestrict v1.17 Release Changes]
- Fixed critical bug where openalc could cause clients to freeze and servers to crash.

[TekkitRestrict v1.16 Release Changes]
Note: The config files have changed again, but this time they should automatically be updated.
- Added option to patch ComputerCraft to prevent certain ComputerCraft scripts from crashing your server.
- Added MySQL support.
- Added more error messages and better error reporting.
- Fixed a lot of bugs I found while enabling more error messages.
- Possibly fixed quarries not working/not quarrying the proper area.
- Fixed safezones not storing locations.
- SafeZones now cause less lag.
- /tr admin safezone check now reports correct information.
- Fixed specific GriefPrevention safezones.
- Fixed banned items not being removed all at once from someone's inventory.
- Fixed critical bug where players with similar names could share limits.
- Fixed bug where limiter permissions didn't work.
- Fixed limiter not always checking the config for limits.
- Fixed bug where limiter sometimes didn't remove limits when someone else removed a player's blocks.
- Removed some inventory checks for banned items in creative to prevent players from crashing.
- Made noitem, limiter and limitedcreative faster in checking for banned items.
- Cleaned up code.
- Fixed the chunkunloader.
- Added more options for the chunkunloader: per-world max chunks and the order in which to unload chunks.
- Fixed /openalc.
- Fixed /openalc randomly closing.
- Added some missing help information.
- Added /tr about.
- Added /tr warnings - displays warnings that were thrown during load (they often get missed in the long list of log messages).
- Fixed updater bug.
- Updated metrics to the new version.
- Cleaned up database code.
- Added more and more informative database messages.
- Fixed RPTimer not setting timers to the correct time.
- Wrote some implementation for my upcoming EEPatch release. Note: Unless you have my EEPatch release, you will not notice any changes.
- Forcefield Anti-Hack sometimes triggers when you mine nether ores.
OPCFW_CODE
I’m trying to get real-time subscription and bits info from a channel. Kinda stuck with the pubsub topic in that it isn’t accepting my OAuth token. I’m confused about whether this token is supposed to be a user access token or an app access token. I read here that if I need to consume topics from another channel, then that channel will have to go through my authentication process? How will that work out? “Also keep in mind that you need a token of the channel you wish to receive events from.” How do I get a token for any channel to get that channel’s real-time events? Is that even possible?

For channel subscriptions on pubsub, you will need a User Access Token with the channel_subscriptions scope. App access tokens aren’t for accessing anything that requires user authorization. To get a User Access Token you can follow the documentation https://dev.twitch.tv/docs/authentication/ You need the owner of the channel you want to get events for to go through the authentication flow for your app, and that will provide you with a User Token that you can use to get subscriber events for their channel. For bits, any scope will work, you don’t need a specific one, but it does have to be from the channel owner of the channel you want events for. Alternatively you can use an IRC or WebSocket connection to Twitch Chat and listen to events through that https://dev.twitch.tv/docs/irc/ but again you will need permission from the channel owner to do so or you could be breaking the Developer Agreement.

Will individual chat messages have any information that tells me that they are cheers? I just tried connecting to a channel and got its chat stream. I can see that some of these messages have the cheer icon with them but they don’t appear to have anything particular in them that identifies that.

You need to enable the TAGS capability and consume PRIVMSG. The cheer icon/badge tells you that a person has cheered on that channel.
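To illustrate what the TAGS capability adds: with it enabled, each PRIVMSG arrives with an IRCv3 `@key=value;...` prefix, and cheer messages carry a `bits` tag. Here is a minimal Python sketch (the raw line below is an invented example, and `parse_tags` is a hypothetical helper, not part of any Twitch library) of splitting the tags off a raw line and detecting a cheer:

```python
def parse_tags(raw_line):
    """Split the IRCv3 '@tag1=v1;tag2=v2' prefix off a raw IRC line.

    Returns (tags_dict, rest_of_line); lines without a tags prefix
    return an empty dict.
    """
    if not raw_line.startswith("@"):
        return {}, raw_line
    tag_part, rest = raw_line[1:].split(" ", 1)
    tags = {}
    for item in tag_part.split(";"):
        key, _, value = item.partition("=")
        tags[key] = value
    return tags, rest


# Invented example of a cheer message as delivered with TAGS enabled
raw = ("@badges=subscriber/1;bits=100;display-name=Someone "
       ":someone!someone@someone.tmi.twitch.tv PRIVMSG #channel :cheer100 hi")

tags, rest = parse_tags(raw)

# Cheer messages carry a "bits" tag; ordinary chat messages do not
if "bits" in tags:
    print(f'{tags.get("display-name")} cheered {tags["bits"]} bits')
```

A real client would feed every line received on the socket through this kind of parser after sending `CAP REQ :twitch.tv/tags`.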
Would I need to fetch my auth token with a specific permission to enable the TAGS capability? I tried passing the USERNOTICE command after connecting but I get a 421 message saying it’s an invalid command. Here’s the list of commands I’m sending now:

s = socket.socket()
s.send("CAP REQ :twitch.tv/tags".encode("utf-8"))

I’m still only getting the message in PRIVMSG. This is the first time I’m using the IRC protocol so the questions are a bit noobish. I’m using Python btw.

Check the raw line. Sounds like your Python IRC lib is extracting the tags from the message and putting them somewhere else for referring to, I think.

I think you might be missing a

I needed to send s.send("CAP REQ :twitch.tv/tags".encode("utf-8")) before all the other commands. I’ve received the ACK now and I’m getting all the badges. Thank you all for your responses.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.
OPCFW_CODE
Install-Module should be installing to parent of $PSHOME for PSCore6

Currently, Install-Module puts modules into a Modules folder under $PSHOME. The problem is that when 6.0.0-beta.7 is upgraded to 6.0.0-beta.8, all the modules are "gone" since they are in the 6.0.0-beta.7 folder. They should probably be installed in ${env:programfiles}\PowerShell\Modules which is already in $env:PSModulePath

There are two options, I think. My preferred option is at the bottom. I think changing one line could achieve this but I have a question in respect of x86 versus x64 versions of PowerShell. Does the x86 version of PowerShell Core install to C:\Program Files (x86)\PowerShell\ on an x64 machine and C:\Program Files\PowerShell on an x86 machine? If so, what would the use of $env:ProgramFiles return in either case? If a user on an x64 machine installed x86 PowerShell (possible?), any use of $env:ProgramFiles would result in issues for them, wouldn't it? I know these are questions I could answer myself by JFDI but if someone can answer without me building an example, that would be preferable.

I've commented the original line and added in the new code underneath it (in the else) but if processor architecture and the version of PowerShell installed are going to play a part here, it probably needs to be accommodated. I expect some additional if, else within the else using $env:PROCESSOR_ARCHITECTURE if so.

if($script:IsInbox) {
    $script:ProgramFilesPSPath = Microsoft.PowerShell.Management\Join-Path -Path $env:ProgramFiles -ChildPath "WindowsPowerShell"
} else {
    #$script:ProgramFilesPSPath = $PSHome
    $script:ProgramFilesPSPath = Microsoft.PowerShell.Management\Join-Path -Path $env:ProgramFiles -ChildPath "PowerShell"
}

The alternative is that $PSHOME is still used and its parent calculated instead. The reason this is preferred is because if the user were to install PowerShell to some other location on disk (assuming they get that option?)
then the use of $env:ProgramFiles for PSCore might not be where the install is or where the user wants Modules to go.

if($script:IsInbox) {
    $script:ProgramFilesPSPath = Microsoft.PowerShell.Management\Join-Path -Path $env:ProgramFiles -ChildPath "WindowsPowerShell"
} else {
    #$script:ProgramFilesPSPath = $PSHome
    $script:ProgramFilesPSPath = Microsoft.PowerShell.Management\Split-Path -Path $PSHOME
}

These are at line 29 in the module, so very early on. Any comments welcome.

Answering my own question without giving anyone enough time to respond... Installing PowerShell x86 on an x64 machine puts PowerShell, by default, in C:\Program Files (x86)\PowerShell; however, both $env:ProgramFiles AND ${env:ProgramFiles(x86)} from the PSCore6 shell return C:\Program Files (x86), so that issue goes away. However, the user does have the option to install PowerShell wherever they please; as a result, any use of $env:ProgramFiles to define the location of the modules could result in them being installed to a folder that is outside of the control of the installer.

A further question is whether it will ever be possible to install versions of PowerShell 6 side-by-side? If so, wouldn't having a Modules folder for each one be preferable rather than a shared set of Modules? This is what WindowsPowerShell does with PSCore now - presumably to be a differentiator. The more I think about it, the more I think using $PSHOME is the wisest choice.

@lwsrbrts thanks for looking into this! Once 6.0.0 is final, my thinking is that 6.0.x versions are upgrades to 6.0.0 while 6.1.0 would have an option in the installer to be side-by-side (need to see if this is actually possible). Alternatively, all msi installed versions are upgrades and side-by-side is only supported via the .zip which means it could be put anywhere. However, most modules shouldn't care if they are running 6.0.0, 6.1.0, or 6.x.0 so I think having them in the shared Modules by default should work for most users.
Perhaps we should add more options to -Scope so the user can override: -SharedSystem

Updated AllUsers scope install location on PWSH to the parent of $PSHOME. https://github.com/PowerShell/PowerShellGet/pull/196
GITHUB_ARCHIVE
Selecting nodes in between comments with xpath from xmldocument

I'm trying to get nodes in between comments. Example:

<Name>
  <First>a</First>
  <Last>b</Last>
</Name>
<!-- family names -->
<Name>
  <First>c</First>
  <Last>d</Last>
</Name>
<Name>
  <First>e</First>
  <Last>f</Last>
</Name>
<Name>
  <First>g</First>
  <Last>h</Last>
</Name>
<!-- family ends -->
<!-- other names -->
<Name>
  <First>i</First>
  <Last>j</Last>
</Name>
<Name>
  <First>k</First>
  <Last>l</Last>
</Name>
<!-- other ends -->

I'd like to be able to select the nodes in between the comments "family names" and "family ends". I tried several ways with XPath, but I can't get further than selecting all comment nodes. When I want to select comment nodes containing value x, I do not get any result. So I'm not sure how to continue. For example:

var x = xml.SelectSingleNode("//comment()[contains('family names')]");

Thanks in advance.

I think between [], you are expected to specify an index, not a predicate.

@GáborBakos No, that's not correct. Using [ and ] is the only way to use a predicate.

@MathiasMüller Sorry, I remembered wrong. (Even in JSONPath the predicates should be within [ and ], but with a further ?( and ).) Thanks for the correction.

@GáborBakos No reason to apologize! I know people are just trying to help - and sometimes they themselves profit from it ;)

What's wrong with your attempt? An expression like //comment()[contains('family names')] is not valid XPath. The contains() function expects two arguments: a first argument that is a string (or can be coerced into a string by computing the string value of a node) and a second one that is also a string. The following would have worked:

//comment()[contains(.,'family names')]

But that does not get you far yet, because once you've identified the starting comment, you need to find what comes after it.
A correct XPath expression

Use the following expression:

//comment()[contains(.,'family names')]/following::*[not(preceding::comment()[contains(.,'family ends')])]

which translates to:

//comment()                      Find comment nodes anywhere in the document
[contains(.,'family names')]     but only select them if they contain the text "family names"
/following::*                    Select all element nodes that follow those comments
[not(preceding::comment()        but only return them if they are not preceded by a comment node...
[contains(.,'family ends')])]    ...that contains the text "family ends".

Applied to a well-formed and more sensible input XML document:

Input XML

<root>
  <Name><First>NO</First><Last>NO</Last></Name>
  <!-- family names -->
  <Name><First>YES</First><Last>YES</Last></Name>
  <Name><First>YES</First><Last>YES</Last></Name>
  <Name><First>YES</First><Last>YES</Last></Name>
  <!-- family ends -->
  <!-- other names -->
  <Name><First>NO</First><Last>NO</Last></Name>
  <Name><First>NO</First><Last>NO</Last></Name>
</root>

The result will be (individual results separated by -------):

Output

<Name><First>YES</First><Last>YES</Last></Name>
-----------------------
<First>YES</First>
-----------------------
<Last>YES</Last>
-----------------------
<Name><First>YES</First><Last>YES</Last></Name>
-----------------------
<First>YES</First>
-----------------------
<Last>YES</Last>
-----------------------
<Name><First>YES</First><Last>YES</Last></Name>
-----------------------
<First>YES</First>
-----------------------
<Last>YES</Last>

Whoever designed this XML document did not design it very cleverly, if you pardon my French. Relying on comments with a specific text in a specific position is very dangerous.

I know the design of the xml is ugly, though I'm going to have to use it. So thank you very much for making this possible! This is a great and complete answer! Is it possible to get only the name nodes with the first and last inside.
(So not the individual ones.) Only the ones that are like YES YES. So from your result, excluding the single results separated by -----------------------

@Rob You can use //comment()[contains(.,'family names')]/following::Name[not(preceding::comment()[contains(.,'family ends')])] to only get Name elements in the results. But they are likely sent to output including all their descendants (see here).

I found it eventually last night. This is exactly what I needed. Thanks again for your wonderful help!
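For anyone who needs the same between-comments selection outside of XPath, the walk can also be done procedurally. Here is a sketch in Python's standard library (an illustration only; the question itself is about .NET's SelectSingleNode), using ElementTree's insert_comments option (Python 3.8+) to keep comments in the parsed tree:

```python
import xml.etree.ElementTree as ET

XML = """<root>
  <Name><First>NO</First><Last>NO</Last></Name>
  <!-- family names -->
  <Name><First>YES</First><Last>YES</Last></Name>
  <Name><First>YES</First><Last>YES</Last></Name>
  <!-- family ends -->
  <Name><First>NO</First><Last>NO</Last></Name>
</root>"""

# Comments are normally discarded while parsing; insert_comments=True
# keeps them as nodes whose tag is ET.Comment.
parser = ET.XMLParser(target=ET.TreeBuilder(insert_comments=True))
root = ET.fromstring(XML, parser=parser)

selected, inside = [], False
for child in root:
    if child.tag is ET.Comment:
        text = child.text or ""
        if "family names" in text:
            inside = True   # start collecting after the opening comment
        elif "family ends" in text:
            inside = False  # stop collecting at the closing comment
        continue
    if inside:
        selected.append(child)

names = [(n.findtext("First"), n.findtext("Last")) for n in selected]
print(names)  # [('YES', 'YES'), ('YES', 'YES')]
```

Like the `following::Name` variant in the comments above, this returns whole Name elements rather than their individual children.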
STACK_EXCHANGE
Schrodinger's cat experiment in a black hole

Imagine Alice falling into a black hole with a Schrodinger's cat experiment setup. After passing the event horizon, on her way towards the singularity, she performs an observation to see if the cat is dead or alive. Bob floats just above the event horizon of the black hole. Will he ever know the result of Alice's observation? Was this information lost? If so, it contradicts the quantum-mechanical conservation of information and the second law of thermodynamics, under which entropy should always increase.

In Bob's world, Alice never crosses the horizon, so no information is lost.

The vast, vast majority of physics interpretations say that the results of experiments that collapse spatially dispersed entangled states do not transmit any information, and can only be evaluated in any way by bringing the two results together and observing the correlations. The Alice and Bob in your experiment, therefore, exchange no information, and there is no contradiction with classical General Relativity.

Information cannot reach Bob from Alice. Her information about the cat is as lost to him as she is. Information is not destroyed simply because it becomes unavailable to Bob. Information about Alice and the cat experiment remains in existence and contributes to the entropy of the black hole, which is proportional to the size of its event horizon. Ultimately the black hole will evaporate. The information it contains will be transformed into Hawking radiation and carried back into the purview of Bob's descendants. Sadly, the information defining Alice and the cat will be so transformed as to be no longer recognisable.
(I play a little fast and loose here with correspondences between mass-energy, information and entropy; not all physicists agree it is that simple, but it does make my answer a lot less like a three-hour lecture.)

Since the outcome of Alice's cat experiment was known only after passing the event horizon, it cannot be part of the information encoded on the event horizon surface, and it cannot be transmitted back by Hawking radiation from the outer surface. That is why it seems that this information will be lost, assuming that one day the black hole will fully evaporate into space.

@EranSinbar You misunderstand the nature of Hawking radiation; over time the entire black hole evaporates, everything in it. You also misunderstand the distinction between losing and transforming information; in the sense you are using it, a transformation "loses" specific facts but creates new ones in their place. The net sum is not decreased, which is what physicists mean by "losing" information.

To my best understanding, Alice knew whether her cat was dead or alive before they reached the singularity. But even after the entire black hole evaporates due to Hawking radiation, this information is lost forever, and we cannot reconstruct, even theoretically, what she knew as she was falling towards the singularity. This, to my understanding, is a loss of information.

@EranSinbar I am sure it is your understanding. What physicists describe it as is "transformation".
STACK_EXCHANGE
Blazor: In-memory state container as cascading parameter

The "In-memory state container service" section shows a state container as a service. For simple app-wide stuff, it is simpler to use a cascading value. I posted a related question (with working code) to StackOverflow. (UPDATE: "MrC" posted another very nice solution.)

For example, working code:

AppState.razor

<CascadingValue Value="this">
    @ChildContent
</CascadingValue>

@code {
    private bool _isDirty;

    [Parameter] public RenderFragment ChildContent { get; set; }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        await base.OnAfterRenderAsync(firstRender);
        if (firstRender)
        {
            await LoadStateFromLocalStorage();
        }
        if (!firstRender && _isDirty)
        {
            await SaveStateToLocalStorage();
            _isDirty = false;
        }
    }

    private bool _isDarkMode;
    public bool IsDarkMode
    {
        get { return _isDarkMode; }
        set
        {
            if (value == _isDarkMode) return;
            _isDarkMode = value;
            StateHasChanged();
            _isDirty = true;
        }
    }

    //other properties...

    private async Task LoadStateFromLocalStorage()
    {
        Console.WriteLine("LOADED!");
        await Task.CompletedTask;
    }

    private async Task SaveStateToLocalStorage()
    {
        Console.WriteLine("SAVED!");
        await Task.CompletedTask;
    }
}

App.razor

<AppState>
    <Router>
        ...
    </Router>
</AppState>

MyComponent.razor

<!--- markup --->

@code {
    [CascadingParameter] public AppState AppState { get; set; }
    //...
}

It would be nice to have that as another section, e.g. "In-memory state container as cascading parameter".

Document Details

⚠ Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.

ID: ae14b3e3-3271-b919-9e0a-6f317a6b2d2d
Version Independent ID: e5e1273b-195e-5da1-b4aa-66bbcff1425b
Content: ASP.NET Core Blazor state management
Content Source: aspnetcore/blazor/state-management.md
Product: aspnet-core
Technology: aspnetcore-blazor
GitHub Login: @guardrex
Microsoft Alias: riande

Thanks for sending this in, @lonix1. We will take a look at it.
However, we're booked with a lot of high priority work right through the end of the year, so we won't be able to reach this until early 2023. It won't get lost tho! It will be on a tracking issue.

Here's the @ShaunCurtis approach posted at SO. Reference: https://stackoverflow.com/a/74080605

First separate out the data from the component. Note the Action delegate that is raised whenever a parameter change takes place. This is basically the StateContainer in the linked MSDocs article.

public class SPAStateContext
{
    private bool _darkMode;
    public bool DarkMode
    {
        get => _darkMode;
        set
        {
            if (value != _darkMode)
            {
                _darkMode = value;
                this.NotifyStateChanged();
            }
        }
    }

    public Action? StateChanged;

    private void NotifyStateChanged() => this.StateChanged?.Invoke();
}

Now the State Manager Component. We cascade the SPAStateContext, not the component itself, which is far safer (and cheaper). We register a fire-and-forget handler on StateChanged. This can be async as the invocation is fire and forget.

@implements IDisposable

<CascadingValue Value=this.data>
    @ChildContent
</CascadingValue>

@code {
    private readonly SPAStateContext data = new SPAStateContext();

    [Parameter] public RenderFragment? ChildContent { get; set; }

    protected override void OnInitialized() => data.StateChanged += OnStateChanged;

    // This implements the async void pattern.
    // It should only be used in specific circumstances, such as here in a
    // fire and forget event handler.
    private async void OnStateChanged()
    {
        // Do your async work.
        // In your case do your state management saving.
        await SaveStateToLocalStorage();
    }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
            await LoadStateFromLocalStorage();
    }

    public void Dispose() => data.StateChanged -= OnStateChanged;
}

Additional context is in the discussion on the SO comment Q&A.

@lonix1 ... AFAICT, we already have this approach covered (generally anyway ... i.e., a state-providing cascaded component) at ...
https://learn.microsoft.com/en-us/aspnet/core/blazor/state-management?view=aspnetcore-6.0&pivots=server#factor-out-the-state-preservation-to-a-common-location Yeah 🤔 ... That's close enough conceptually that we don't need to take action on this. This issue will remain associated with the topic on GitHub, so if devs select the View all page feedback link, they'll see this issue and can see the other example approaches/discussion from there. Yeah ... I don't like what they did with that because it would have helped us probably HUNDREDS of times both with readers looking for things here and even US ... the doc 🐈🐈🐈 ... I've forgotten about it myself tucked way down there. Anyway ... that's where they want it, so that's where it will stay. Hummmmmmmm 🤔 ... Perhaps SO! 😄 I'll take a look. The specific use case for that section uses ... Microsoft.AspNetCore.Components.Server.ProtectedBrowserStorage ... for Blazor Server apps (which is now baked into the framework). In this case, it basically just falls back to the main topic on the approach for WASM at ... ASP.NET Core Blazor cascading values and parameters https://learn.microsoft.com/aspnet/core/blazor/components/cascading-values-and-parameters For the WASM scenario, the PU ... Steve IIRC ... only says the following in the topic ... localStorage and sessionStorage can be used in Blazor WebAssembly apps but only by writing custom code or using a third-party package. ... because he didn't want to show an opinionated, general approach. I'll add a cross-link there to the cascading values/params topic, but we're still going to leave it up to the dev to implement the code on their own. I had never seen that section either. Having studied it I have the same issue as with the initial code in the SO question. It's cascading this - CounterStateProvider - a great big fat class with lots of functionality that should be not exposed in sub-components. 
The point of my code was to cascade a lean object with just the functionality required? Actually, it was Pranav who directly signed off on the changes, which were kind'a minimal really ... https://github.com/dotnet/AspNetCore.Docs/pull/20351 We can't ask Pranav btw ... he's not on this PU team any longer. Looking back further, it was Mackinnion who provided the major review feedback on ... https://github.com/dotnet/AspNetCore.Docs/pull/19287 ... from the content that IIRC Steve provided earlier. Still tho, I don't think I want to ping on this. If the PU is getting a number of PU requests for help on the matter, they'll let me know. For now, I'd like to stick with the View all page feedback for the discussion and approach concepts. We're crazy busy 🏃😅 with the .NET 7 release, then I have a huge block of work to do on the Blazor security node for all of EOY right thru 🎅 Day and around 🎆 Day, then ... major topic overhauls and updates for 23Q1 ⛰️⛏️😅. I'd like to have a lean backlog going forward. I'm 👂 from the PU if they want to provide more coverage on this particular aspect. I'll tell you what ... I'll compromise right now. Let's cross-link THIS ISSUE right in the topic section. That way, it will surface better for devs. At some point in the future ... next year! 😆 lol ... I'll ask Dan and Artak about it.
GITHUB_ARCHIVE
Microsoft Corporation (NASDAQ:MSFT) Q4 2019 Earnings Conference Call - Final Transcript Jul 18, 2019 • 05:30 pm ET by the tremendous opportunity ahead. Every day, we work alongside our customers to help them build their own digital capability, creating new businesses with them, innovating with them, and earning their trust. This commitment to our customers' success is resulting in deeper partnerships, larger, multi-year cloud agreements and growing momentum across every layer of our differentiated technology stack, from application infrastructure, to data and AI, to business process, to productivity and collaboration. Now, I'll briefly highlight our innovation and momentum. In a world where every company is a software company, developers will play an increasingly vital role in value creation across every organization, and GitHub is their home. GitHub is used by more than 36 million developers as well as the largest enterprises, including the majority of the Fortune 50. And we are investing to build the complete toolchain for developers, independent of language, framework and cloud. Visual Studio and Visual Studio Code are the most popular code editing tools in the world. With Azure DevOps you can build, test and deploy code to any platform. And with Azure PlayFab, we have LiveOps, a complete back-end platform to optimize engagement and interaction in real-time. We are building Azure as the world's computer, addressing customers' real-world operational sovereignty and regulatory needs. We have 54 data center regions, more than any other cloud provider, and we were the first in the Middle East and in Africa. Azure is the only cloud that extends to the edge, spanning identity, management, security and infrastructure. This year, we introduced new cloud-to-the-edge services and devices from Azure Data Box Edge, to Azure Stack HCI, to Azure Kinect, bringing the full power of Azure to where data is generated. 
Azure Sphere is a first-of-a-kind edge solution to secure the more than 9 billion MCU-powered endpoints coming online each year. And now IoT Plug and Play seamlessly connects IoT devices to the cloud without having to write a single line of embedded code. Azure is the most open cloud, and this quarter we expanded our partnerships with Oracle, Red Hat and VMware to make the technologies and tools customers already have first-class on Azure. Azure is the only cloud with limitless data and analytics capabilities across the customers' entire data estate. The variety, velocity and volume of data is increasing, and we are bringing hyper-scale capabilities to relational database services with Azure SQL Database. New analytics support in Cosmos DB enables customers to build and manage analytics workloads that run real-time over globally distributed data and we offer the most comprehensive cloud analytics from Azure Data Factory to Azure SQL Data Warehouse to Power BI. The quintessential characteristic of any application being built in 2019 and beyond will be AI. We are democratizing AI infrastructure, tools and services with Azure Cognitive Services, the most comprehensive portfolio of AI tools, so developers can embed the ability to see, hear, respond, translate, reason and more into their applications. And this quarter we introduced new speech-to-text, search, vision and decision capabilities.
My solution was to buy a NAS box which can hold up to 4 drives. I only bought two hard drives (each 2TB) to go inside the NAS, but the box itself can support up to 12TB. I decided to set this up in a mirroring setup, meaning whatever is added to one drive is automatically written to the other. Obviously then, my 2x2TB hard drives only allow for 2TB storage (and not 4TB) but with the huge and potentially life-saving advantage that if any of the drives failed, there would be a perfect mirror still with all my data intact. For the NAS drive, I opted for the Synology DS411J (read details on Amazon) and for the hard drives themselves, I opted for Samsung HD204UI Spinpoint 2TB SATA 3.5" Hard Drives (read details on Amazon). Or scroll to the bottom for the Amazon links widget. |Not the ugliest box| |Samsung 2TB Hard Drive (1 of 2)| One thing to note, you will be prompted to supply a DSM file (which has a .PAT filetype) during installation, which can be obtained in one of two ways: from the CD or from the Synology website. I do not claim to know the difference but figure the internet will supply the newest version so I got mine from there; put it on to download and grab a cuppa as it is a slow server! You will want to get this stuff installing and downloading, and whilst that is happening, you can grab yourself a screwdriver and get installing the hardware. The box itself has 4 screws on the rear plate, which can be removed by hand. |Remove the 4 screws| |Screws keeping the rear plate attached| Removing these allows the back plate to swing down, revealing the 4 bays which lie inside. The roof of the box will still be in your way though; pull slightly towards the rear of the box, and upwards, and this will dislodge.
|Box, with roof removed| |Rear plate swung open to reveal bays| |Plastic mount to house the hard drive| |Screws for securing hard drive to plastic mount| |Secure the hard drive into the mount| With the screws in place, slide the mount into the bay until there is a little resistance. Press in just a little bit harder and then stop. Your drive is now connected. Do the same for any/all other drives you are wishing to install. |Slide the mount back into the box| Reverse the process to re-assemble your box. Connect the LAN cable using the provided cable; one end in the rear of the box, and the other directly into your router. Connect the power cable and power up the device. If your download has finished, move on. If not, re-boil the kettle and stick your feet up for a bit. Open "Synology Assistant" and wait for the software to locate your device. It should detect it on your network, and upon clicking on it, allow you to "Install" it. |Installing DSM to hard drive| There aren't too many options to choose from and there won't be any difficult questions. The default is for the box to take an automatic IP address, but I opted to disable DHCP and specify a manual IP instead. The process took about 10-15 minutes for my configuration (2x2TB hard drives). Once you are prompted that it has finished, you can again use "Synology Assistant" software to configure your device for use. Now, a "Connect" option is available instead of "Install". Pressing this launches a very nice looking GUI for configuration to take place. (Note, this actually just launches your browser at a certain URL; you can save this as a bookmark to avoid using "Synology Assistant" again). Much has been said about Synology's GUI on various other reviews, and everything I read has been positive. I concur. It's really rather nice and takes a lot of the guesswork out of configuring this box. It looks more like a proper GUI operating system than a website, and puts a lot of websites to shame! 
Resizeable windows, support for multiple windows to be open on screen or minimized, and even customisation of the desktop icons!! Oh come on Synology, you're just spoiling me here! Click the first button, "Set up a volume and create a shared folder". From here, you'll be prompted to launch "Storage Manager", and will be provided a link to do so. Click the link! Hurray! See, this is easy! Hmm, ok, tough decision time here then. Less easy. You can choose one of two options: Quick, or Custom. Quick, as the description will inform you, allows you to use a custom RAID type developed by Synology. It is flexible and should allow for good future alteration. I am choosing Quick configuration here. You can, if you so desire, manually specify your RAID type if you have a good reason and know what you are doing. The next few screens are fairly basic; ensure all your drives are checked, accept that the next process will format your drives and select the recommended option about disc checking. Finally, hit "Apply" and thumb-twiddle for a bit. After the process has finished, it will move onto "Verifying hard disks" which runs in the background (and takes ages) so feel free to consider it up and running and ready for experimenting with. The first thing you'll probably want to do is set up a "Shared Folder", to allow you to store files on the drive. Store files? Crazy idea, but stay with me. Control Panel is a good place to start so open your Control Panel desktop icon. Select the "Shared Folder" option. Press the "Create" button, which will launch the following screen. |Keep my backed up movies accessible from every device on the network| |Admins get read/write access; guests get read-only| This has created a shared folder available from the network. How to access? The usual way you access shared folders on the network is the simple answer. For Windows 7, open "Network", typically available from the Start Button. You should be able to see your DiskStation device.
|Network - Windows 7| When you attempt to access it for the first time, you will be prompted for credentials. If you haven't configured multiple users yet, enter "admin" as the username, and the admin password that you gave when you first installed the device. See your shared folder? Hooha! Can you read from and write to it? Double Hooha! I'll leave this post now as it has become a tad longer than I first expected. I'm pretty excited to see what this NAS drive can offer me. I have absolutely no idea just what it can do though, as I was setting this up as I wrote this. Play time now. Hope this helped anyone struggling to get to this stage, and keep reading if you are interested in my experiences using the device and doing more advanced things with it. Want to follow my lead? Use the links below to buy yourself the same products I bought. Synology DS411J (read details on Amazon) Samsung HD204UI Spinpoint 2TB SATA 3.5" Hard Drive (read details on Amazon)
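A quick footnote on the mirroring trade-off mentioned at the top of this post. The sketch below is just an illustration of the arithmetic; the function name is made up and this is not Synology code:

```python
def usable_capacity_tb(drive_sizes_tb, mirrored=True):
    """Rough usable space for a simple two-mode NAS layout.

    Mirrored (RAID 1): every write goes to both drives, so usable
    space is limited to the smallest drive. Unmirrored pooling:
    drive sizes add up, but a single failure loses data.
    """
    if mirrored:
        return min(drive_sizes_tb)
    return sum(drive_sizes_tb)

# Two 2 TB drives, as in this build:
print(usable_capacity_tb([2, 2]))                  # mirrored: 2 TB usable
print(usable_capacity_tb([2, 2], mirrored=False))  # pooled: 4 TB, no safety net
```

With more drives or other RAID levels the arithmetic differs, but for a two-drive mirror it really is this simple: you pay half your raw capacity for a full copy of everything.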
Threat and risk modelling

The core steps in a risk assessment are the creation of threat and risk models. These are essential to understand how the system may be attacked, how likely that is to occur, and what the cost might be. A mobile device may contain many different assets that the owner may consider valuable and that are wanted by an attacker. These can be divided into two main categories:

Information assets: these include passwords, credit card details, emails, business information etc. and can be subdivided into:
- information assets managed by the device
- information assets managed by applications

Function assets: these represent functions or actions that can be performed that may have value, such as making calls, sending SMS, taking photos, recording sound etc.

Controlling access to assets is therefore very important. In particular, in the case of BYOD, any attack on the assets controlled by a device could have a high impact, as the device contains not only personal information, but business information as well.

What is risk?

The word risk has lots of different meanings in everyday use, but we need to be more precise. The question we are trying to address when performing a risk assessment is "how much should I spend (time, money, effort) to protect my assets?". Or put another way, "how much will it cost me if I am attacked, how likely is that to happen, and so how much should I spend to prevent that happening?". The concept of risk therefore relates to probability (of an attack occurring) and impact (the cost to me of the attack). (Recall the definitions of threats, vulnerabilities, and impact.) Risk is defined as follows:

Risk = Probability x Impact

The probability of an attack occurring depends upon the threats and the vulnerabilities, and therefore the above is often written as:

Risk = (Threats + Vulnerabilities) x Impact

Technically the above equation does not make sense mathematically.
What it is meant to convey is that the probability is a function of the threats and vulnerabilities.

Note for Nerds: the correct mathematical definition of risk is as the expected value of the impact (represented as a random variable):

Risk = E[I] = Σ_i p_i × I_i

Here p_i is the probability of attack i occurring, and I_i is the impact of that attack.

How do you evaluate the impact of a threat/risk/attack? There is a range of different schemes that can be used to classify the impact of known threats/attacks/risks:

STRIDE for threat modelling and DREAD for risk modelling
The Microsoft Threat Modeling Process uses an aggregated model. STRIDE helps you to identify and categorise threats from the attacker: Spoofing Identity, Tampering with Data, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. DREAD helps you to determine the security risk for each threat using a value-based risk model: Damage, Reproducibility, Exploitability, Affected Users, and Discoverability.

Trike: an open source threat modeling methodology and tool
Trike uses a risk-based approach and has distinct implementation, threat, and risk models, instead of using the STRIDE/DREAD aggregated threat model (attacks, threats, and weaknesses).

CVSS: Common Vulnerability Scoring System
CVSS provides an open framework for communicating the characteristics and impacts of IT vulnerabilities. Its quantitative model ensures repeatable, accurate measurement while enabling users to see the underlying vulnerability characteristics that were used to generate the scores. The NVD CVSS V2 calculator provides vulnerability severity ratings of ‘Low’, ‘Medium’ and ‘High’.

OCTAVE: Operationally Critical Threat, Asset, and Vulnerability Evaluation
OCTAVE is a heavyweight risk methodology approach originating from Carnegie Mellon University’s Software Engineering Institute (SEI) in collaboration with CERT. OCTAVE focuses on organisational risk, not technical risk.
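To make the expected-value definition of risk concrete, here is a small illustrative calculation. The attack names and all probability/impact numbers are invented for the example; they are not drawn from any of the methodologies above:

```python
def expected_risk(attacks):
    """Risk as the expected value of impact: the sum of p_i * I_i."""
    return sum(p * impact for p, impact in attacks.values())

# Hypothetical threat model for a mobile app. Each entry is
# (probability the attack occurs in a year, impact in pounds).
threats = {
    "credential theft":  (0.10, 50_000),
    "SMS fraud":         (0.30, 2_000),
    "data interception": (0.05, 20_000),
}

print(expected_risk(threats))  # roughly 6600: a rough annual loss expectancy
```

The same shape extends to DREAD-style scoring: replace each (probability, impact) pair with the five DREAD ratings and average them per threat to rank the threats against each other.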
There are links to more detailed information about mobile threat modelling available from the bottom of this page. Now that we have the concepts required to model a threat on an Android mobile device, the OWASP Top Ten Mobile Risks is a good reference to use to start building a model for each threat that may face your app. © University of Southampton 2017
'use strict';

// Keeps track of wallet status results
var wallet = {};

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Lock Icon ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

// Helper function for the lock-icon to make sure its classes are cleared
function setLockIcon(lockStatus, lockAction, iconClass) {
    $('#status span').text(lockStatus);
    $('#lock.button span').text(lockAction);
    $('#lock.button .fa').get(0).className = 'fa ' + iconClass;
}

// Markup changes to reflect state
function setLocked() {
    setLockIcon('Locked', 'Unlock Wallet', 'fa-lock');
}
function setUnlocked() {
    setLockIcon('Unlocked', 'Lock Wallet', 'fa-unlock');
}
function setUnlocking() {
    setLockIcon('Unlocking', 'Unlocking', 'fa-cog fa-spin');
}
function setUnencrypted() {
    setLockIcon('New Wallet', 'Create Wallet', 'fa-plus');
}

// Update wallet summary in header capsule
function updateStatus(result) {
    wallet = result;

    // Show correct lock status.
    if (!wallet.encrypted) {
        setUnencrypted();
    } else if (!wallet.unlocked) {
        setLocked();
    } else if (wallet.unlocked) {
        setUnlocked();
    }

    // Update confirmed and unconfirmed balances
    var bal = convertSiacoin(wallet.confirmedsiacoinbalance);
    var pend = convertSiacoin(wallet.unconfirmedincomingsiacoins) - convertSiacoin(wallet.unconfirmedoutgoingsiacoins);
    if (wallet.unlocked && wallet.encrypted) {
        // TODO: Janky fix for graphical difficulty where a 2px border line appears when 1px is expected
        $('#status.pod').css('border-left', '1px solid #00CBA0');
        $('#confirmed').show();
        $('#unconfirmed').show();
        $('#confirmed').html('Balance: ' + bal + ' S');
        $('#unconfirmed').html('Pending: ' + pend + ' S');
    } else {
        // TODO: Janky fix for graphical difficulty where a 2px border line appears when 1px is expected
        $('#status.pod').css('border-left', 'none');
        $('#confirmed').hide();
        $('#unconfirmed').hide();
    }
}

// Make wallet api call
function getStatus() {
    Siad.apiCall('/wallet', updateStatus);
}

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Locking ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

// Lock the wallet
function lock() {
    Siad.apiCall({
        url: '/wallet/lock',
        method: 'POST',
    }, function(result) {
        notify('Wallet locked', 'locked');
        update();
    });
}

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Unlocking ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

// Unlock the wallet
function unlock(password) {
    // Password attempted, show responsive processing icon
    setUnlocking();
    Siad.call({
        url: '/wallet/unlock',
        method: 'POST',
        qs: {
            encryptionpassword: password,
        },
    }, function(err, result) {
        if (err) {
            notify('Wrong password', 'error');
            $('#request-password').show();
        } else {
            notify('Wallet unlocked', 'unlocked');
        }
        update();
    });
}

// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Encrypting ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

// Encrypt the wallet (only applies to first time opening)
function encrypt() {
    Siad.apiCall({
        url: '/wallet/init',
        method: 'POST',
        qs: {
            dictionary: 'english',
        },
    }, function(result) {
        setLocked();
        var popup = $('#show-password');
        popup.show();

        // Clear old password in config if there is one
        var settings = IPCRenderer.sendSync('config', 'wallet');
        if (settings) {
            settings.password = null;
        } else {
            settings = {password: null};
        }
        IPCRenderer.sendSync('config', 'wallet', settings);

        // Show password in the popup
        $('#generated-password').text(result.primaryseed);
        update();
    });
}
[ale] FW: ATAPI woes (Slackware 3.0)
corbie at infinet.com
Sun Oct 20 13:58:59 EDT 1996

The following is a transcript of a reply I received from another mailing list regarding an install problem I'm having, as well as my follow-up. Any ideas out there in Georgia?

> You're very close. The Boca IDE controller contains the primary (1F0)
> and secondary (170) ATA interfaces. The ATA interface on the sound
> card is the 3rd ATA interface (1e8). So try:
> boot: ramdisk ide2=0x1e8,0x3ee,11
> Only the first two ATA interfaces are probed automatically, so that's
> why you need this kernel parameter.

I re-checked, and I've already tried that with no change in the problems; but thanks. I've tried ide0-3, actually.

In summary, /dev/hda is a Conner 1.275GB drive, and the boot sector is 'infected' with their overlay software since I don't have LBA in my motherboard nor the Boca hard drive controller I have installed. /dev/hdb is a WD 420MB drive, which is where I'm trying to put Linux. Ideally, I want to be able to boot the Linux drive without resorting to a boot floppy, and to be able to mount the DOS/Win95 partition that comprises /dev/hda from within Linux so I can share files when desired. And since my controller only supports two IDE devices, the CDROM (Creative Labs X4) is linked into the IDE/ATAPI interface of a Genius PnP soundcard.

Unfortunately, Linux calls /dev/hda's boot sector corrupted, so LILO won't even think about installing, nor can any part of the partition be seen from Linux. And using a boot floppy and every boot parameter I've been able to find on scads of how-to files, in a seemingly infinite array of combinations, gives me (at best) errors such as unable to read CDROM format, or no CDROM present, or whatever. At worst, the boot-up goes into an infinite loop of trying to find the CDROM and timing out, or I get

I've tried using just the A, AP, K and Q disk suites (can't do fewer and still be able to compile one of the kernels) and re-compiling the kernel. I've tried loading from LOADLIN and using one of the boot images from the Slackware 3.0 CD. LOADLIN won't recognize any of them. I've tried re-naming zImage to vmlinuz, copying it to a dinky DOS partition on the 420MB drive (which Linux /can/ see) and trying to boot from Win95 (in DOS

The latest version of the kernel available on the CD is 1.3.20. I've downloaded 2.0.23 but it won't even compile.

Is there some way, any way, to use a large IDE drive with the overlay software intact, and to use the CDROM over the soundcard socket? If so, are these solutions actually documented somewhere? (Try even finding a LILO or LOADLIN how-to that's not a couple of years old, and/or has complete information. I've been unable to. Nor do either of Welsh's books help, nor the Volkerding book, nor the huge QUE book. I've bought all four.)

To tell the truth, after many late nights reading unhelpful how-to files and trying endless combinations of failed configurations based on those files, I'm beginning to wonder why I bothered. Thanks in advance.

Mark Dyson -- mdyson at poboxes.com
"Modern morality and manners suppress all natural instincts, keep people ignorant of the facts of nature and make them fighting drunk on bogey
-- Aleister Crowley

More information about the Ale
Expose C# class through COM Interop

I have a C# class library and also have a PowerBuilder application. I want to expose the C# class and use it in the PowerBuilder application. I used the following blogs to expose the functions so they can be accessed by the PowerBuilder application:

https://whoisburiedhere.wordpress.com/2011/07/12/creating-a-com-object-from-scratch-with-c/
http://jumbloid.blogspot.com/2009/12/making-net-dll-com-visible.html

So I exposed the COM object and made it accessible in PowerBuilder, but I still have some fundamental questions to make sure I follow the best guidelines. Before converting to COM, the class looks like

class classname
{
    public void function1() { /* do something */ }
    public void function2() { /* do something */ }
    public void function3() { /* do something */ }
    public void function4() { /* do something */ }
}

To convert to COM I created an interface, and I wanted to expose only function1 and function2. So I modified the class as

[ComVisible(true)]
[Guid("03S3233DS-EBS2-5574-825F-EERSDG8999"), InterfaceType(ComInterfaceType.InterfaceIsDual)]
interface Iinterface
{
    void function1();
    void function2();
}

In the main class I made the following modifications:
1. I set the ComVisible property to false in AssemblyInfo, as I do not want to expose all the public methods.
2. The class looks like

[ComVisible(true)]
[Guid("2FD1574DS4-3FEA-455e-EW60A-EC1DFS54542D"), ClassInterface(ClassInterfaceType.None)]
class class1 : Iinterface
{
    public void function1() { /* do something */ }
    public void function2() { /* do something */ }

    [ComVisible(false)] // I don't want the method to be exposed
    public void function3() { /* do something */ }

    [ComVisible(false)]
    public void function4() { /* do something */ }
}

I have the following questions to understand this better:
1. Do I need to explicitly set ComVisible to false on the methods that I do not want to expose, given that I set the visible property of the class to true and the default ComVisible property (in AssemblyInfo) to false?
My understanding is that I will only have the functions I want to expose in the interface, so irrespective of the visible property, if I don't have the function in the interface then it won't be visible?

I did understand how to deploy using regasm on a client computer by copying the DLL and using regasm.exe; my question is how to deploy to non-development machines with no .NET installed?

I don't think you can deploy to a machine with no .NET. Even exposed to COM, it's still a .NET assembly and requires the framework to be installed so that the runtime can load and execute the assembly.

Perhaps the OP meant with no Visual Studio or Windows SDK installed.

[ClassInterface(ClassInterfaceType.None)]

That means that none of the class implementation details are visible, the proper and pure COM way. So it is not necessary to apply [ComVisible(false)] on methods you don't want to expose. Only the Iinterface methods are visible. Using, say, ClassInterfaceType.AutoDual is a convenience in .NET: the CLR will synthesize an interface automatically. It matches the behavior of old versions of Visual Basic (VBA, VB6), which did not support interfaces yet. It does however expose too much; the methods inherited from System.Object (like GetHashCode etc.) will be visible as well, without a decent way to hide them. You also get a dependency on the mscorlib.tlb type library. So declaring the interface explicitly, like you did, is certainly the better way. The target machine must have .NET installed, a rock-hard requirement.

Thanks, that explains my questions. So what would be the best way to deploy the DLL to client machines? I read that one of the ways is to copy the DLL to other machines and use regasm.exe to register the DLL on the client machines. Is this the best way to do it?

Using Regasm is not a "best way" by a stretch; COM servers ought to have an installer so they are dead-simple to deploy. Pretty easy to create one with, say, this VS add-on.
Bill Segall wrote:
> > # emailing your commits to the mailing list
> > git send-email origin/master..
> > # ..or pushing to github
> > git push github local_branch_name
> You're right. There is effectively no difference between these two except
> 99% of developers know about the push and have never used send-email.

I think the onus is on developers to educate themselves about the tools they use, rather than on projects to conform to what people are used to consume from GitHub Inc. Now don't get me wrong, some things on github.com are nice, but I still wouldn't go near them as infrastructure provider. It's nowhere near worth it for me.

> After the send-email some fraction of the developers engaged enough to be
> on a mailing list might be motivated enough to download and have a look at
> a patch but we've already presented a barrier to entry, cos it's not just
> click to look.

I don't know about your mail program, but mine shows patches sent to the mailing list right then and there - because they double as emails. That seems much less of a barrier; no need to click on anything.

> And once we look at the patch, we get a lovely color encoded web view,

Indeed a mailer doesn't really do that.

> we get to see the developer stats and their activity.

Who cares? I think the patch is more important.

> If I'm doing especially well, the CI system might have shown me that the
> patch compiles, that the tests passed and that it even fixed an existing

CI is nice, but by no means exclusive to github.com. In fact, you would probably have to rely on a different service provider. Or a volunteer could set up our own.

> To adopt the patch one of the chosen few might be able to click and accept
> it so we don't get emails that complain they posted a patch to the mailing
> list 6 months ago etc.

I like piping emails to git, but my experience is that very few patches can actually be applied without further work, so some back-and-forth communication is necessary. E.g. email. Or I guess one could choose to use Facebook.

> And it's there for people to find, adopt, fork, improve, contribute
> further to at lower cost.

I disagree that only github.com provides that, or provides it best.

> The point I'm making is that

..this is much easier said than done. Regardless of github.com or libssh2.org, actual humans need to engage and manage contributions. libssh2, like pretty much every other project, is understaffed, and no colourful web page will change that. The self-hosted process with patches on mailing lists works really well for a large number of projects, including the Linux kernel.

> and visibility (the UI thing)

Being understaffed, it's important for the processes to be efficient, and email is both efficient and visible. The list and its archives are public, the bug tracker is public, the repo is public.

> People these days have an expectation

Maybe they should have fewer expectations and make more contributions?

> if you're not meeting and hopefully exceeding those expectations
> you're losing the one currency that matters which is developer

We can only speak about "losing" if we have had something, and as I wrote, this project, like all others, is understaffed. Using github.com (or any other!) services isn't magically bringing qualified contributors into the project. Your reasoning makes perfect sense for a startup trying to be the hippest in order to attract and keep developers who might care more about appearances (visibility) than about actual code. That's not really what libssh2 needs.

Received on 2015-03-10
Where is template.sty? (Required in xfrac)

Having just updated to TL2011, I find that the xfrac package fails with the error:

! LaTeX Error: File `template.sty' not found.

Sure enough, neither kpsewhich nor find can locate it. I can find it in my TL2010 tree, so I could copy it over from there, I guess. The location there is:

tex/latex/xpackages/xbase/template.sty

But there's not even a corresponding xpackages directory in the TL2011 tree! My (limited) poking around in the directory structure makes it look as though the x-stuff has been rearranged, and several things renamed. There is a suspicious looking xtemplate.sty file in the new TL2011 tree, but it is sufficiently different to template.sty that I can't be sure that they do the same thing. So ...

Is xtemplate.sty the new template.sty, and should I just change the line in xfrac to require the x-rated version? If not, should I just copy template.sty over from TL2010 (into my local texmf tree rather than TL2011, of course), and what else should I copy? Or is there a package on CTAN that covers this (I couldn't find it via a search)? Is there a new super all-singing all-dancing all-reciprocating package that replaces xfrac that I should use instead?

(I've tagged this [tag:latex3] because it looks like that's what got reorganised.)

If I load xfrac.sty (using TL 2011), it autoloads xtemplate.sty -- but this shouldn't require any setup by hand. Try updating your packages: tlmgr update -self -all. If that doesn't work, add \listfiles to your code and post the file list, maybe. xtemplate and xfrac itself are now l3packages.

For me (TL 2011), xfrac is failing with a completely different error, which makes things even more interesting.

Yes, it seems that the LaTeX3 packages got recently reorganized, at least in TeX Live. Some packages are now installed under different names in TeX Live. This led to the automatic uninstallation of some packages and the installation of some new ones.

Aaagh! @frabjous is completely correct.
I had downloaded xfrac once long ago and stashed it somewhere in my TEXMF tree (before it made it in to TL, presumably). That was being loaded, not the TL version. My apologies, one and all. (Close as "too-localised"?)
Google Home v1.30 is rolling out today with a minor rebranding and some internal changes that point to continued development of Google's household features, possibly including support for multiple homes. There are also more functions for remote control of the smart home, and Smart Displays will use the Google Home application to configure Duo calls. As always, the download links are at the bottom.

Unofficial changelog: (things we found)

- Backdrop renamed to Ambient mode

Left: v1.29. Center: v1.30. Right: looks the same in both versions.

It's a purely cosmetic change, at least for now, but we'll have to get used to a new name. The feature that had Cast devices cycling through galleries of varied images is now called Ambient mode. Nothing seems to have changed beyond the name in most of the places where it is referenced, but a name change like this is usually a sign that something else will happen in the near future. There is a small oversight in which the old Backdrop name still appears in the title of the screen used to change the settings.

The features described below are probably not yet live, or may only be live for a small percentage of users. Unless stated otherwise, do not expect to see these features if you install the apk. All screenshots and images are real unless otherwise indicated, and images are only modified to remove personal information.

Follow-up: Google Assistant for Households

A few months ago, the Google app included some references to something called Google Assistant for Households, which seems designed to carefully track the differences between the members of a household. We've seen continued growth since: Google may want to know their birthdays and their relationships with each other, and give them common titles describing how they are linked to you. Now the Google Home application is entering the game.
Some new placeholder strings have been added that will be used to manage household members and their status, so to speak. All the strings are empty, but their names reveal the intention quite clearly. The basic summary is that people can be invited to join a group, and the invitation can be accepted or rejected. There are also strings related to the "managers" and how they are invited. It seems more or less the same situation, but presumably managers get the ability to invite or remove members of the group.

<string name="add_household_member_label" />
<string name="delete_invitee_failure" />
<string name="delete_invitee_success" />
<string name="cell_label_unassigned" />
<string name="candidate_header" />
<string name="candidate_message" />
<string name="accept_applicant" />
<string name="accept_applicant_failure" />
<string name="invitee_header" />
<string name="invitee_message" />
<string name="join_this_home" />
<string name="join_this_home_desc" />
<string name="reject_applicant" />
<string name="reject_applicant_failure" />
<string name="reject_button_text" />
<string name="request_failed" />
<string name="request_sent" />
<string name="send_request" />
<string name="invite_manager_failure" />
<string name="invite_manager_success" />
<string name="message_managers_only_you" />
<string name="new_manager_invite_message" />
<string name="new_manager_summary_header" />
<string name="new_manager_services_message" />
<string name="new_manager_settings_message" />
<string name="confirm_manager_message" />
<string name="confirm_manager_title" />
<string name="resend_manager_invite_failure" />
<string name="resend_manager_invite_success" />
<string name="structure_invite_accepted_message" />
<string name="structure_invite_declined_message" />
<string name="structure_invite_device_migration_message" />
<string name="structure_invite_device_migration_title" />
<string name="structure_invite_message" />
<string name="structure_invite_nickname_hint" />
<string name="structure_invite_nickname_message" />
<string name="structure_invite_nickname_title" />
<string name="structure_invite_personal_results_message" />
<string name="structure_invite_response_title" />
<string name="structure_invite_response_title_default_home" />
<string name="structure_invite_services_message" />
<string name="structure_invite_settings_message" />
<string name="structure_invite_summary_header" />
<string name="accept_button_text" />
<string name="confirm_button_text" />
<string name="decline_button_text" />

It should be noted that there are also "applicant" strings, which means that someone can ask for some kind of permission. It is not clear if this is for people who apply to join a home or who request to become a manager. Either way, it's unusual to see both an invitation and an application workflow, but it might make more sense when we get some more details.

Support for several homes, maybe?

I will be the first to say that I am skeptical about framing this section as support for multiple homes, because it can easily be interpreted differently, but I also recognize that this is a scenario Google really should support. If you live in more than one place, as many college students and regular travelers often do, you can be part of more than one family group. There are now placeholder strings that seem to allow users to add multiple homes. They describe only some basic functions, such as adding new homes, giving them names, and opening device settings for them. Without more text or meaningful references in the source code, that is as much as can be seen from here.
< string name = " add_another_home_label " /> < string name = " home_name_hint " /> < string name = " home_naming_page_body " /> < string name = " home_naming_page_title " /> < string name = ] "
OPCFW_CODE
Make tex4ebook support memoir's \book
Running tex4ebook on the following example

\documentclass{memoir}
\begin{document}
Some dedication text
\book{Hello}
\part{World}
\chapter{How are you}
\end{document}

produces an .epub where the \book title is formatted like a section, but without any prior page breaks or any corresponding bookmarks. What configuration is needed for tex4ebook (or tex4ht) to support memoir's \book construct, or would this require more extensive changes upstream?

There is no support for the \book command in tex4ht, so what you see is just formatting based on the fonts used by this command. Try the following configuration code, which redefines it as a sectioning command recognized by tex4ht:

\Preamble{xhtml}
\NewSection\book{\thebook}
\Configure{book}
  {\ifvmode\IgnorePar\fi\EndP
   \HCode{<h2 class="bookHead"><span class="booknumber">}%
   \bookname\refstepcounter{book}\space\thebook\HtmlParOff
   \HCode{</span><span class="booktitle">}}
  {\HCode{</span></h2>}\HtmlParOn}
  {}{}
\Css{.booknumber{display:block;}}
\Configure{toToc}{book}{part}
\CutAt{book}
\begin{document}
\EndPreamble

The \NewSection command redefines \book as a sectioning command recognized by tex4ht. The second argument should contain the counter command used to print the section number. It is then necessary to configure the HTML code using \Configure{book}. This configuration is declared by \NewSection. It is a bit complicated, so I will try to describe it in more detail:

{\ifvmode\IgnorePar\fi\EndP\HCode{<h2 class="bookHead"><span class="booknumber">}\bookname\refstepcounter{book}\space\thebook\HtmlParOff\HCode{</span><span class="booktitle">}}

The \ifvmode\IgnorePar\fi\EndP closes the current paragraph; this code is necessary for all block-level elements in tex4ht. The \HCode{<h2 class="bookHead"><span class="booknumber">} starts the HTML code; the span.booknumber element is used to style the book number, as it should stay on a separate line from the book title.
\bookname is memoir's command that contains the "Book" string. We need to manually increment the book counter using the \refstepcounter command. \HtmlParOff disables HTML paragraphs, because some spurious paragraphs were produced in my tests. The \HCode{</span><span class="booktitle">} closes the book number and opens span.booktitle for the book title.

{\HCode{</span></h2>}\HtmlParOn}

This code just closes all opened HTML elements and re-enables paragraphs.

\Css{.booknumber{display:block;}}

This rule styles the book number so that it is printed on a separate line.

\Configure{toToc}{book}{part}

This requests that \book be included in the TOC, at the same level as \part. I am not sure whether it is possible to put it on a higher level without needing to redefine a lot of internals.

\CutAt{book}

This will open a new HTML page for each book. This is the result in ebook-viewer:

Thanks for the solution and the very clear explanation! This certainly works for me, and I'm happy enough to have the \book and \part bookmarks on the same level.
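For reference, a configuration like this is normally saved in its own file and passed to tex4ebook on the command line; tex4ebook shares make4ht's -c option for this. A minimal sketch of the file layout (the file name book.cfg is just an example, not prescribed by the tools):

```latex
% book.cfg -- example name; build the .epub with:
%   tex4ebook -c book.cfg main.tex
\Preamble{xhtml}
% ... the \NewSection, \Configure{book}, \Css, \Configure{toToc},
% and \CutAt lines from the answer go here ...
\begin{document}
\EndPreamble
```

The \begin{document} ... \EndPreamble pair at the end is the standard closing of a tex4ht config file, as shown in the answer.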
STACK_EXCHANGE
Losing communication with the server even for a moment will cause all uncommitted changes to be lost. I've had this happen. Best to go to browse mode after a small amount of work if you're modifying a hosted file. Rick, this is not the case. We are talking about changes that have been committed to the layout over a period of an hour of work or more. Saving the layout, going back into layout mode, back and forth for an hour, and then all of a sudden the layout is gone, as if it was never saved to the server any of the times it said it was saved previously. I am a senior FileMaker engineer for Richard Carlton Consulting and this is not standard behavior. Thank you for your posts. Is this occurring with all files or just a specific file? I'm trying to determine if this is a damaged file or a FileMaker/OS issue. If just a specific file, is the issue occurring for one layout or all layouts? If just one layout, try creating a new layout (do not duplicate the bad layout) and move the desired fields and objects onto the new layout. If you were performing these steps on a hosted file, does the issue occur when making the changes locally and then uploading the file to the server? Any other information you can provide may be helpful. Although I don't have a solution to your issue, I've never had FileMaker tell me a layout, or changes to a layout, was saved. I do have it set to save layout changes automatically, though. Do you? And you're working on a live hosted file, yes? If you're not automatically saving layout changes, this still seems to me to be a network problem. Even if "save layout changes" is set to automatic, the changes don't seem to "commit", in my experience, until exiting layout mode. I've lost hours of work on a layout when a communication problem occurs and I've neglected to exit layout mode from time to time. My two cents. The alternative is to download the file and work on it locally. Rick, yes it is hosted.
Yes I am saving changes automatically and did so multiple times during the time I was working. TSGal, this has happened on more than one file and more than one layout, but it only happens intermittently. (A total of 4 very obvious instances now within the last month.) I have not tried to make the changes locally (as I generally work on live files) but I doubt that it would have the same problem. What it feels like is that the FileMaker client thinks that the save was successful, but FileMaker Server, or its cache, or something, is not retaining the save. I recently uninstalled 13 and another possibly conflicting version of FileMaker and re-installed only 13, and I will see if it happens again. This is the second such report of this issue. Other users and FileMaker Inc. have been unable to replicate this behavior, but it has happened to at least one other person (Howard?) from what I can recall... I have also experienced this issue. It is on a hosted file, and has happened on many different layouts (of one file), randomly over the span of several months. I was able to catch it in the "unsaved state" where my client shows the new layout changes, but when anyone else logs in, they see the unedited version of the layout. I can navigate through the database using scripts, editing other layouts, and come back to the layout in question and it still shows the changed version of the layout. In the unsaved state, I can: - change and save multiple layouts, none of which anyone but the editor can see.
- create a new layout, which no one else can see - create a script, which everyone else CAN see - create a table, which everyone else CAN see - create a new style, which everyone else CAN see - the layout automatically created by the creation of the new table can be seen by me, but no one else - when I create a go to layout script step, I can select the layout I created, but then it shows as Go to Layout [<unknown>] in the script - I can see my new layout in security, but no one else sees it in security - If anyone else edits any layout and tries to save it, their filemaker client locks up This behavior can persist indefinitely (at least several hours), until the "editor" closes the database, at which point everyone can save layout changes again. When the editor logs back in they can also make changes again. Much of what you report sounds like a caching issue. Although generally not necessary, have you tried executing the script step "Flush Cache to Disk" on your machine and the other user's machines? It's also not clear in your post if all users have [Full Access] privileges. If the other users have custom privileges for specific layouts, I could understand why the other users would be unable to see a new layout. Any other information you can provide may be helpful in replicating this issue. I have not tried flushing the cache, I will try it if I can catch it in that state again. - All users testing had [Full Access]. And by users, I mean me connecting from several different computers. - The file is connected to a MySQL database via ODBC, and has many ESS tables. - There is high latency between the server and the developer. - There are multiple developers working on this file simultaneously. Let me know if there's any other information I can provide I found myself in that state with the inability to save layouts again. I tried "Flush Cache to Disk" on both machines. It had no effect. 
This doesn't have anything to do with custom privileges, as it affects layout changes as well. They can see the layout but not the new changes. In this case I had added a new object to a layout. The user could not see it, even though I had added it an hour ago and had navigated to several different layouts in the meantime. When they reported they couldn't see it, I went to that layout, went into layout mode, and copied the object. Then I proceeded to restart my FileMaker, paste the object on the layout, and save the layout, and they could now see it. Sorry, I don't have a solution, but I can confirm the behavior you described. I have a client with the same problem - she is using FileMaker Pro Advanced 13.0v9 on Windows accessing a remotely-hosted database (at FMPHost/FileMaker Hosting Pros) running FMS 13.0v9. We regularly conduct training over the phone while both accessing the database at the same time from separate locations. I am on a Mac running OS X 10.10.5 and FMPA 13.0v9, and we are careful not to both edit the same layout at the same time, or to both go into Manage Database at the same time. We have had several occasions where she makes a change to a layout, goes to Browse mode and can see the changes and enter data, but I do not see her changes. We have taken screenshots of that layout at the same time, and hers shows the changes while mine does not. We both log out and back in, even quitting the app, and sometimes her changes disappear and sometimes they do not. Up until yesterday, my theory was that the remote server was not writing cached changes to disk. Here's where it gets more bizarre - a few days ago, she made some changes that she could see, but her colleagues at her office could not. A day later, the problem persisted. Two days later, her changes were gone from her computer, too. Same problem here.
I'm running older versions of FileMaker, though (if it ain't broke...), so I thought it would be helpful for everyone to know the problem isn't limited to the versions listed above. We have server version 9. Users are using FMP version 9. I use FMP Advanced version 11. Everything has worked flawlessly with this combination of apps for several years now. However, the problem described above popped up with one particular layout. I made a change to some text on a layout, and have been using the changed version for weeks, but users only see the previous (unchanged) version. This is what solved it for me:
- Created a new layout (did NOT use the duplicate feature)
- Deleted the header and footer (the old layout had just one part: body)
- Used Select All to copy all objects from the original layout
- Pasted all objects into the new layout
- Changed scripts so they point to the new layout
Ok, this is embarrassing... I should get a DORK award! The whole problem we were having boiled down to me having two layouts in the system, and only making changes to one of them. I was also using button A to reach layout A, whether I was on my machine or one of theirs, while some users were using button B and reaching layout B. I would make a change to layout A, test it on mine and theirs using button A, declare it updated, then get confused when they couldn't produce the same results (because some of them were clicking button B). Why two layouts? I did it in a rush one day as a quick solution in two different files, so they could each print a certain form we needed. I should have taken the time to do it right, using only one instance of that form (layout), because I sure paid the price in time and frustration while tracking this down.
OPCFW_CODE
Learning Python for Data Analysis and Visualization is the name of a video training course in the field of programming languages, in the data analysis branch. In this course you learn the Python programming language with a focus on precisely these topics: the course shows you how to analyze, visualize, and work with data. The course also includes tens of thousands of lines of code and hours of video training that put all the topics and content at your disposal in the best possible form. At the end of this course, having learned the concepts and skills presented, you will understand how to plan in Python to get where you want to go. You will also learn how to create and modify arrays in NumPy and Python. In another part of the course you will learn how to use pandas to create and analyze data sets. This course is designed and published specifically for Python programmers interested in data science.
Topics taught in the course:
- Achieve an intermediate level in Python programming
- Learn the correct use of the Jupyter notebook environment
- Learn to create and manipulate arrays with the NumPy library
- Learn how to build and organize a variety of data with pandas
- Learn how to work with string formatting and templates in Python

Profile of the course Learning Python for Data Analysis and Visualization:
- Language: English
- Duration: 21:05:06
- Number of lectures: 110
- Instructor: Jose Portilla
- File format: mp4

Curriculum of Learning Python for Data Analysis and Visualization (110 lectures, 21:05:06):
- Intro to Course and Python: 2 lectures, 07:04
- 3 lectures, 33:09
- 8 lectures, 01:06:49
- Intro to Pandas: 11 lectures, 02:12:16
- Working with Data: Part 1: 4 lectures, 22:42
- Working with Data: Part 2: 13 lectures, 01:43:53
- Working with Data: Part 3: 5 lectures, 58:52
- 7 lectures, 01:27:35
- 17 lectures, 03:38:17
- 20 lectures, 05:51:06
- Appendix: Statistics Overview: 11 lectures, 01:28:58
- Appendix: SQL and Python: 3 lectures, 28:22
- Appendix: Web Scraping with Python: 2 lectures, 24:28
- Appendix: Python Special Offers: 3 lectures, 41:23
- BONUS SECTION: THANK YOU!: 1 lecture, 00:10

Prerequisites for the course Learning Python for Data Analysis and Visualization:
- Basic math skills
- Basic to intermediate Python skills
- A computer (Mac, Windows, or Linux)
- Desire to learn!

After extracting, view with your preferred player. On 9 Jan 98 a new version with English subtitles replaced the old one. Version 2019/9, compared to 2018/11, has no change in the number of lectures; the running time increased by a few seconds and the changes are in the section headings.
Learning_Python for Data Analysis and Visualization 2019-9
Learning Python for Data Analysis and Visualization 2018-11 Fixed
Password file(s): www.downloadly.ir
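As a flavor of what the NumPy portion of the course covers, here is a short illustrative sketch (not taken from the course materials) of creating and manipulating an array:

```python
import numpy as np

# Create a 1-D array of 0..11 and reshape it into a 3x4 matrix,
# the kind of array manipulation the NumPy sections teach.
arr = np.arange(12)
matrix = arr.reshape(3, 4)

# Basic operations: slicing, axis-wise aggregation, broadcasting.
first_row = matrix[0]            # array([0, 1, 2, 3])
col_sums = matrix.sum(axis=0)    # array([12, 15, 18, 21])
scaled = matrix * 2              # broadcasting a scalar over the matrix

print(matrix.shape)              # (3, 4)
print(col_sums)
```

pandas, covered later in the course, builds on exactly these array operations, wrapping them in labeled rows and columns.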
OPCFW_CODE
Dear Reader (which doesn’t include David Brock, who apparently is buried deep in a concrete bunker, receiving messages only by Gunga Din-like messengers, for fear that Roger Ailes or SkyNet will triangulate the frequency of the fillings in his teeth), What speaks barely any French, is sitting by a pool in Hawaii, and has two thumbs? That’s right. I’m in Hawaii. In what has become something close to an annual tradition, my father-in-law has rented a house here on the big island as part of a fiendish plot to con his children and grandchildren into visiting him. It’s also pretty much the closest warm beach to Fairbanks, Alaska, from whence my in-laws hail. I was in Fairbanks a few weeks ago. It dipped to a frosty minus-51 degrees while I was there. Winter is better here. As this “news” letter demonstrates, this is a working vacation. Indeed, in an effort to dispel prevailing myths about people with names like “Goldberg” I am eager to find a way to write off this trip on my taxes (“I don’t think you know what the word ‘dispel’ means” – The Couch). I know what you’re thinking, “Good lord, how much does it cost to fly a couch to Hawaii?” Rest assured, if I can hallucinate a wise-ass couch that always criticizes me, I can imagine him into my checked luggage as well. Personal demons don’t even need a companion ticket, and neither does my dyspeptic sofa. Anyway, as some of you may recall, I recently wrote about the grave problem of illegal-immigrant snakes from Myanmar (formerly Burma) waging an unchecked killing spree in the Florida Everglades. I wrote, in part: Invasive Burmese pythons have nearly wiped out populations of white-tail deer, raccoons, and other mammals in the Florida Everglades. Now I am not an absolutist when it comes to invasive species. I like wild horses and tumbleweeds, for instance. But I am biased against giant frick’n snakes that can eat small children and large dogs illegally sneaking into our country. That’s just me.
(Oh and my one word response to the objection that there are no reports of feral Burmese pythons eating children: “Yet.”). I’ll go one further: I think it is the right and proper role of government to protect us from giant alien snakes that are destroying our environment, threatening our children and pets. If you want to call me a RINO for that, go for it. I can do without the cowboy poetry festivals, but invasive giant snake genocide: mark me down for a yes. Or as Homer Simpson might have said, “People, giant foreign snakes are eating American dogs with impunity, did we lose a war?” Among the more asinine responses to this post was the charge that I am a hypocrite for endorsing government action to deal with huge rib-crushing snakes, but not when it comes to, say, the mohair subsidy. I loathe this sort of argument because it contains so much stupidity compacted under a nougat-y layer of ignorance. Conservatives and most libertarians acknowledge that there are times when government must take action. The Constitution begins: “We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common Defense, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.” One can forgive the Founders their love of concision and brevity that they did not explicitly state “this includes eliminating the threat of an invading army of 400 pound, 30-foot long, serpents from the lands of the dictatorial junta of Myanmar.” But I don’t think I’m going down the road of Roe v. Wade or genuflecting at the altar of the living constitution when I assert that it is implied there. And I ain’t talking about no emanations of no penumbras. 
The upshot of this sort of argument is that if I am to avoid the charges of hypocrisy or inconsistency and I am in favor of government officials taking seriously the threat of man-eating snakes, I must therefore also endorse the full scope of the New Deal and FDR’s economic bill of rights. And while I don’t think we necessarily need the federal government to get involved, for the sake of argument, let’s imagine the Burmese pythons mutated in Floridian climes and sprouted hands with opposable thumbs and brains sufficient to the task of constructing rudimentary laser weapons. Would I still be a hypocrite if I had no principled objection to the federal government getting involved? More realistically, I think that if the feds got out of the way, the state and local governments could do a lot to eliminate the Burmese menace pending such aggressive mutations. As I argued in the Corner, funding bounties and other incentives to hunters should be sufficient to the task for a good old-fashioned “whacking day.” At least that is what I believe. I’ve proposed to Rich Lowry that I go down to the Everglades to report on the situation as part of a larger piece on invasive species. You’re Reading This to Help Me Avoid Taxes Which brings me back to my tax write-off for my working vacation in Hawaii. The Hawaiian Islands are a perfect place to research invasive species. There are no native non-marine mammals or reptiles here (with the exception of one ludicrous bat). Everything that walks or crawls here was brought either by the original Polynesian settlers or by Whitey®. In the late 1800s, sugar-plantation owners imported mongooses (not mongeese, alas) in the hope that they’d kill the invasive population of rats. Unfortunately, this was what biologists call “a really stupid idea” (the phrase sounds smarter in Latin). The rats are mostly nocturnal.
The mongooses are diurnal (which is not to be confused with “die urinal!” – something Gary Busey screams when trying to punish bathroom fixtures with his highly acidic pee). The result is that the two mammals basically pass each other like the sheepdogs punching in and out of work in the old Wile E. Coyote cartoons. The islands have a huge population of lizards – all of them were introduced as well by the tag-team of Polynesians and the Man (not to be confused with the short-lived 1970s NBC crime-fighting “dramedy” Polly Nezian & the Man). I like having the lizards around because they eat bugs and I don’t like bugs. The downside is they crap all over the place like Keith Moon in a hotel room. There’s a lot of hoopla over the horrors of invasive species, but my view is that they are a mixed blessing. They do less damage than people usually claim. My friend Ronald Bailey has written a lot on the subject and notes that invasive species increase biodiversity and do not lead to extinctions. Indeed, Macalester College biologist Mark A. Davis wrote in the journal BioScience in 2003 that “there is no evidence that even a single long term resident species has been driven to extinction, or even extirpated within a single U.S. state, because of competition from an introduced plant species.” But that doesn’t mean that invasive species are always a net good. I’m not a biological egalitarian. I think some species are better than other species. Better how? Well, for lots of reasons, but most of them boil down to “Better because I like them more.” I would be heartbroken to see the tiger go. The loss of a species of dung beetle wouldn’t bother me too much, if there were no significant larger ramifications. Anyway, I’ll be researching these ideas intensely between tropical drinks because Goldberg never takes a vacation, Mr. Taxman.
Various & Sundry Fascinating examination of whether this gun should come with the advisory “WARNING: DO NOT POINT GUN AT FACE.” Behold: Lego Jonah Goldberg! My regular column today Oh, and lastly, if you pre-order my new book. Nay – when! – you preorder my new book, save your e-mail receipts. We are working on a Goldberg File subscriber-only premium giveaway. Sort of like a right-wing Happy Meal toy. Haven’t figured out what it is yet, though. But that shouldn’t stop you from pre-ordering now.
OPCFW_CODE
Windows 98 kernel32 dll
I used to use Skype for a long time without problems. You need to update to XP SP3 from Microsoft, then possibly reinstall Skype again. Only a notification "Fatal Error" with the short description "Failed to get proc address for SetDllDirectoryW (Kernel32.dll)" appears. Windows NT base API Client DLL. The most common error messages caused by problems with kernel32.dll:
"Explorer caused an invalid page fault in module Kernel32.dll."
"Iexplore caused an invalid page fault in module kernel32.dll."
"Commgr32 caused an invalid page fault in module kernel32.dll."
"Error in Kernel32.dll."
"[program name] has caused..."
But recently I have received a notification about updating the program. This is not the issue. Do note: Microsoft themselves will stop supporting XP from April 2014 onwards, and so will lots of other software vendors. In almost all cases, when you get an error message about kernel32.dll it is because of system incompatibilities with the application you are trying to run. If you're getting an error that you don't have the kernel32.dll file, that probably means you don't have at least the XP service pack. MS Support says: Lots of people are having issues with the latest Skype update these days. For this reason, we advise everyone to never attempt replacement of this file. Nothing works in Windows if the kernel32.dll file is damaged, moved or deleted. Comments made by users: I have got the problem "MakeCriticalSectionGlobal could not be found in kernel32.dll". These kinds of errors can occur in all Microsoft Windows operating systems from Windows 95 to the new Windows 7. We only have it available so that those few who *really* know what they are doing have a chance to get. Commgr32 caused an invalid page fault in module Kernel32.dll. The error message concerns Kernel32.dll. Note: Do NOT attempt to change this .dll on your computer.
The problem is that Skype has stopped supporting Win XP SP2 and older, so after updating Skype, the latter doesn't run. It is one of the primary files that are needed in order for Windows to function properly. When you start your PC, kernel32.dll is loaded into a protected space in memory, and thereafter it locates other applications that want to be loaded into memory. Errors arise when other applications in Windows try to access the protected memory space that kernel32.dll is using. Figure out why it is incompatible with your system first; it might be as simple as locating a different version of that program. The file lives in:
(Windows 95/98/Me) - C:\Windows\System
(Windows NT/2000) - C:\winnt\system32
(Windows XP, Vista, 7) - C:\Windows\System32
( Please set your.
OPCFW_CODE
Essentia Code Review of Data Interoperability ICO
OK, what are we doing today? Quick Essentia code review; it’s a data interoperability ICO. Some funny terminology here – “Essences” are data owners, private or corporate, and “Synergies” are the links designed to help data services operate. Sure, why not. But hey, they call their robot assistant Jarvis, so it can’t be all bad. Jumping in. Essentia, the next generation (what generation are we currently at?) layer of interoperability and data. Essentia is a masternoded (this is a verb now?) multi-chained set of protocols connecting centralized and decentralized resources to create new powerful interactions and experiences. Essentia is a modular decentralised interoperability and data management framework. Their masternodes are just regular nodes. Started playing with their app; I like the interaction and design, seems pretty cool. The wallet has full functionality and can easily store files into IPFS, Swarm or Storj. Impressive so far. eLogin uses your own public and private key to sign login information to prove authenticity. This is very cool. I’m seeing a generalized trend of plug-and-play services starting to appear; I’ve also started toying with the idea and implemented some features, but these guys are far ahead of me, and it’s nice to see. EtherDelta integration. I’m impressed. So, all the jargon aside, these guys are an integrator, but they are pretty far ahead in the game. Why they need their own network, I don’t know, but I like what I’m seeing, so let’s head over to the repo. Bit disappointed with the repos; I can already see the real meat isn’t here. What we do see is bitcoinlib (presumably used for the wallet and sending/receiving funds), blockcypher-python (used for block exploration on the BTC-based networks), omise-python and pywallet. Still, they could have implemented it all in an innovative way, so let’s have a look at their modifications. I really appreciate that they did forks instead of clones.
It makes it significantly easier to see what they are up to. Thanks guys. pywallet: even with upstream master, no changes; moving on. omise-python: even with master. blockcypher-python: even with master. No meat in essentia-sia-api either, unfortunately. Essentia Code Review Conclusion: No code yet, but the webapp prototype looks promising. I’ll keep an eye out if they release something new. Disclaimer: Crypto Briefing code reviews are performed by auditing what is on display in the master branch of the repos made available. This was performed as an educational review and any comments in the article are the opinion of the writer. It is normal for code to change rapidly, hence we timestamp our code reviews so that they present a snapshot at a moment in time. Information contained herein should not be used as any comment or advice on the project as a whole. Essentia Code Review Timestamp: May 8th, 2018 at 19:03 GMT
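The eLogin idea described above, proving who you are by signing a login challenge with your own key pair, can be illustrated with a deliberately tiny textbook-RSA sketch. Everything here (the key sizes, the nonce, the function names) is made up for illustration only; real wallets use proper elliptic-curve signatures with padded, audited implementations:

```python
import hashlib

# Toy RSA parameters (tiny primes, illustration only -- real systems
# use 2048+ bit keys and a padded scheme such as RSA-PSS).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def sign(challenge: bytes) -> int:
    """The wallet signs a server-provided challenge with the private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """The service checks the signature using only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"login-nonce-123")
print(verify(b"login-nonce-123", sig))   # True
print(verify(b"tampered-nonce", sig))    # False
```

The service only ever sees the public key and the signature, which is the whole point of this style of login: no password has to be stored server-side.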
OPCFW_CODE
I’d like to have the Omniverse render viewport on a second screen (e.g. on a projector) with specific FPS, resolution, etc. The position of the camera view is live-updated via a Python script. At the moment I simply move the Create window to the second screen, specify the resolution in the viewport settings and press F11. Is this the preferred way to go? Or how could I create a little Python Kit app that live-renders the viewport with certain settings specified in the script? Thank you and best regards
+1 I would also like to know this
How are you controlling your camera in the Python script? Are you responding to keyboard/mouse events? Or is it updating with parameters to a function call?
You can script all the steps that set FPS, resolution, and switching to full screen. I am not sure you can use scripts to move the Kit window to the other monitor itself; I will need to check. Pseudo code:
settings = carb.settings.get_settings()
The full screen menu actions and steps are in. If you are going to use F11 mode, then that is really enough. You could also look at the omni.create.kit file and see how you could deactivate some of the extensions you don’t need. But as discussed, if you go full screen anyway you won’t gain much. As for controlling the camera from scripts, you simply need to update the USD prim that is the active camera. Any more questions, please let me know.
Thank you for the code snippet, that works perfectly! Some other questions for the fullscreen view: How would I turn off the grid, axis frame and light symbol via script? I assume with omni.ui? In fullscreen mode, I’d also like to show text at the bottom, as an overlay on the image. I’d also like to update this text. I actually use VR controllers as input (with OpenVR); the render view is projected with a projector onto a canvas. In a separate process I calculate a new camera frame and put it onto a queue.
The script for updating the camera looks like this:

camera = stage.GetPrimAtPath('/Root/Camera')
frame = await queue.get()
translate, rotateZYX = translate_and_rotateZYX_from_frame(frame)

However, I am struggling to get the camera movement smooth. It seems as if too many frames are streamed and the render view does not get updated after every frame. Input and output need to be better synchronized. Maybe the process needs to be reversed: get a new frame only once the renderer is finished, or create a camera animation to tween between frames on the fly so that it is correctly buffered? Any input on this is welcome ;)

Ok cool. I am interested in VR integration, but I have to admit my strategy is to wait for support from the Omniverse team :) Waiting for the rendered frame sounds sensible. You might find inspiration in the View XR extension ($HOME\AppData\Local\ov\pkg\view 2020.3.31_exts\omni.kit.xr\omni\kit\xr), which integrates CloudXR. CloudXR is a streaming service that acts as a broker between an HMD over the network and OpenVR.

This is the grid, some of the menu; some are more tricky, as the ones below are bitwise and you need to assemble them:

static const ShowFlags kShowFlagNone = 0;
static const ShowFlags kShowFlagFps = 1 << 0;
static const ShowFlags kShowFlagAxis = 1 << 1;
static const ShowFlags kShowFlagLayer = 1 << 2;
static const ShowFlags kShowFlagResolution = 1 << 3;
static const ShowFlags kShowFlagTimeline = 1 << 4;
static const ShowFlags kShowFlagCamera = 1 << 5;
static const ShowFlags kShowFlagGrid = 1 << 6;
static const ShowFlags kShowFlagSelectionOutline = 1 << 7;
static const ShowFlags kShowFlagLight = 1 << 8;
static const ShowFlags kShowFlagSkeleton = 1 << 9;
static const ShowFlags kShowFlagMesh = 1 << 10;
static const ShowFlags kShowFlagPathTracingResults = 1 << 11;

Note: the /persistent/* settings mean they will persist across sessions.

Hi @dfagnou, can you confirm that "/app/viewport/grid/enabled": False still works?
I can change it (and it gets changed) but that does not produce the desired effect! I have just checked and you are correct that changing the value from the settings is not hiding the grid, even though the value is correct! Sorry about that; I will make an internal ticket and we will try to get it fixed for the next version. Anyone having the same problem: setting the line width to zero has the same effect ( Also, I have to mention that what is rendered by the ROS cameras is what is being shown by the viewport WITH selections/grids/anything else. It’s been a pain disabling everything and remembering this… unless I am not aware of some hidden options.
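Since the show-flag constants quoted above are plain bit flags, assembling or clearing them is ordinary bitwise arithmetic. A minimal Python sketch of the idea (the bit values are copied from the C++ snippet in this thread; how Kit actually consumes the resulting mask may differ):

```python
# Bit values copied from the ShowFlags snippet quoted in this thread.
SHOW_FLAG_FPS   = 1 << 0
SHOW_FLAG_AXIS  = 1 << 1
SHOW_FLAG_GRID  = 1 << 6
SHOW_FLAG_LIGHT = 1 << 8

def hide(mask: int, flag: int) -> int:
    """Clear one flag bit from the current show-flags mask."""
    return mask & ~flag

# Start from a viewport that shows the grid, the axis and the light icons,
# then hide the grid and the light symbols for a clean projector view.
mask = SHOW_FLAG_GRID | SHOW_FLAG_AXIS | SHOW_FLAG_LIGHT
mask = hide(mask, SHOW_FLAG_GRID)
mask = hide(mask, SHOW_FLAG_LIGHT)
print(mask == SHOW_FLAG_AXIS)  # True: only the axis bit remains set
```

The same mask arithmetic applies whichever of the kShowFlag* bits you want to toggle.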
Theme Home Page: http://www.presscoders.com/themes/twist-of-ten-theme/ Live Demo: http://www.presscoders.com/themes/twist-of-ten/ Theme License: GPL PressCoders.com is pleased to announce the release of 'Twist of Ten', a fully compliant WordPress 3.0 theme. It supports all of the new features such as custom headers, custom background, custom menus etc., all of which are fully configurable in the admin control panel. The theme includes 8 brand new headers you can use in your theme, or you can upload your own! Twist of Ten focuses more on page-based navigation to create a simple, clean, CMS-type website. It features two menus: a primary menu for the main drop-down pages, and a secondary menu located at the top right of each page. Both menus can be fully configured in the WordPress admin control panel. You can choose whether to include a blog page on your site or not; it's completely your choice. To include a blog, just set up a new page called 'Blog' or similar, and set the page template to 'Blog', and that’s it - you now have a blog page! If you don't specify a blog page, your site will just show page-based content via the drop-down main menu and top-right menu. The front page shows a custom page to showcase any items you wish. At the top is a page excerpt and associated featured image, in the middle is a welcome message, and at the bottom the latest three posts are shown. Every time the home page is loaded or refreshed, a random page excerpt and associated featured image is displayed. You can choose to show all child pages under a specific parent, or if no parent is defined then a random page (from all pages) is shown. If no featured image is associated with a page, then a default image (default.jpg) is shown instead as a placeholder. Also, the background of the latest three posts at the bottom of the home page is rotated every time the page is loaded/refreshed. For all pages and posts (if you choose to show a blog) there is a unique sidebar for each type.
The page sidebar shows adverts or anything else you wish, whilst the blog sidebar shows categories and posts related to your blog. There is also an ad-spot in the header area that can be used to display banners, adverts, etc. Summary of theme features: - Refresh home page to rotate featured page - Refresh home page to rotate latest-post background color - 8 brand new header designs - Customise the site background via admin CP - Custom home page - 2 unique sidebars (one for pages, one for posts) - Banner/advert support in header and page sidebar - Search box shown on all pages in main menu bar
zBrac: A Multilanguage Tool for z-Tree. Created by Ali Seyhun Saral and Anna Schroeter. Released under the GNU General Public License v3.0. Note: Here are the slides of the presentation that I gave at the ESA European Meeting in Dijon: Slides on Google Slides. About the project: zBrac is a tool to facilitate text modification of z-Tree treatment files. With zBrac, you can export specified text into a language file and import it back after text modification or translation. The main advantage of using zBrac is that the coding and the text editing can be done independently. zBrac's design is specifically tailored for cross-cultural studies: you can code your experiment once, send out Excel sheets to translators, and later implement those translations into your code directly from the file at any time. zBrac also tries to tackle the issues with special characters in z-Tree by offering different encoding options. zBrac is particularly useful when the treatment file contains the same piece of text several times. Such duplicated pieces of text are very common in z-Tree programming, as it is often necessary to copy-paste stage tree elements. zBrac recognizes each unique key as a single key, so it is enough to provide the replacement text for each key once. For an example, see the Holt and Laury measure example below. zBrac is free/open-source software (see the GNU GPL-3 license). You can use, modify and distribute it. It is not obligatory to cite the software, although it is highly appreciated (see below for citation information). Citing the software: Our paper introducing zBrac and discussing its use is published in the Journal of Behavioral and Experimental Finance, and the full text can be found here. The citation information is as follows: Saral, A. S., & Schröter, A. M. (2019). zBrac—A multilanguage tool for z-Tree. Journal of Behavioral and Experimental Finance. Please feel free to get in touch with the authors if your institute does not have access to the paper.
zBrac is cross-platform software, meaning that it can be run under all major operating systems. For now, we have an installation package available for Windows. On other platforms, it can be installed via pip. Installation with Windows Installer: You can get the installer from the releases page. Installation with pip (Windows, GNU/Linux, MacOS): If you have Python (>=3.6) and pip on your computer, you can install zBrac from the command line with the following command: pip3 install zbrac (or pip install zbrac if pip is for Python 3). Please note that you might need to update your pip version before being able to download the dependencies properly. You can update your pip version with the following command: pip3 install --upgrade pip. Then, if the Python binary folder is set up properly, the zbrac command opens the software. zBrac can also be run from a Python interpreter: import zbrac; zbrac.interface.startgui(). zBrac recognizes text that is enclosed in double brackets: [[This is a text]]. Each piece of text marked this way is called a "key". Each key acts as a placeholder and can later be replaced by another text. To give an example, if you'd like to add a welcome message to your z-Tree file but you are not sure about the exact message at that point, you can just put [[welcome message]] in the desired place. If you would like to use zBrac on your own code and your treatment file is already written, the text in your file should be enclosed in double brackets. If you will write a z-Tree treatment from scratch, it is more efficient to write the desired text in double brackets while programming. Afterwards the brackets can easily be deleted all at once by using the Strip Brackets function of zBrac. Language files (xlsx): A language file is an Excel file which in each row contains a key in the first column and the text to replace the key with in the second column.
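As a rough illustration of the key idea (this regex sketch is ours for exposition, not zBrac's actual parser), collecting the unique [[...]] keys from a treatment file could look like this:

```python
import re

# Non-greedy match of text between [[ and ]] (illustrative pattern only).
KEY_PATTERN = re.compile(r"\[\[(.+?)\]\]")

def extract_keys(treatment_text: str) -> list:
    """Return the unique [[...]] keys in order of first appearance."""
    seen, keys = set(), []
    for m in KEY_PATTERN.finditer(treatment_text):
        if m.group(1) not in seen:
            seen.add(m.group(1))
            keys.append(m.group(1))
    return keys

text = "[[welcome message]] ... [[chance of winning]] ... [[chance of winning]]"
print(extract_keys(text))  # ['welcome message', 'chance of winning']
```

Note how a key that occurs several times is reported only once, which is exactly why duplicated copy-pasted text is cheap to handle.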
For instance, once we have our welcome message, an Excel file that follows the structure below can be used to replace the key with the text in the second column:

| [[welcome message]] | Welcome to our experiment |

If you would like to have your experiment in different languages, ideally each one of them should have its own language file. Treatment files (txt): Treatment files are simply z-Tree treatment files in TXT format. They can be exported/imported using z-Tree. To prepare your treatment file (ZTT) to work with zBrac, you should: - Define the text by adding double brackets at the beginning and at the end of the text (i.e. [[this is my text]]) - Export your file as a text file by using z-Tree. Example: Translating a Holt and Laury Measure. Here we demonstrate how to use zBrac by translating a Holt & Laury measure of risk aversion from English to German. This is the English version we started with: and this is how the file looks on the client screen: First we enclose all the text we want to modify in double brackets. The file now looks like this: Then we exported the file to a text file by clicking: The exported TXT file with brackets acts as our master file. We can replace the values of the keys by using a language file. We can either prepare a language file in the format described above, or we can generate a language file template by using the relevant function of zBrac. We will do the second. We choose the Create Language File Template function and save the language file. This language file template now looks like this. In the first column it contains keys; in the second column it contains values that were generated by removing the brackets from each key: We create a German language file by translating the second column. Notice that the original treatment file contains 40 duplicates of the text "chance of winning", and with zBrac we modify that text only once.
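Conceptually, implementing a language file is just applying a key-to-replacement mapping to the treatment text. A minimal sketch of the idea (not zBrac's code; the xlsx rows are represented here as a plain dict):

```python
def implement_language(treatment_text: str, language_map: dict) -> str:
    """Replace every [[key]] occurrence with its translation.

    A key duplicated 40 times in the treatment (like "chance of winning")
    still needs only one row in the language file.
    """
    for key, value in language_map.items():
        treatment_text = treatment_text.replace(f"[[{key}]]", value)
    return treatment_text

german = {"chance of winning": "Gewinnchance"}
text = "A: [[chance of winning]]; B: [[chance of winning]]"
print(implement_language(text, german))  # A: Gewinnchance; B: Gewinnchance
```

Keys absent from the mapping are simply left in place, which makes partially translated language files easy to spot.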
To replace the keys with the German text, we go to the Implement Language File tab and specify the treatment file and the language file that contains our translation. Save treatment file as... creates a translated version of our treatment file and saves it. Finally we go back to z-Tree, choose Import, and select our translated treatment file. This is what our final result looks like: Guidelines for contributing will be available soon, but feel free to create an issue or a pull request if you have suggestions. Will the brackets be visible when I add them to my code? - Yes, until you replace them by using a language file, they will be visible. However, the Strip Brackets function allows you to delete them all at once, so that during the testing process you can check how everything looks without the brackets. Why double brackets? A single bracket or ____ would be better, wouldn't it? - As zBrac modifies the exported text file created by z-Tree in a certain structure, we had to pick an operator according to these criteria: No interference with the z-Tree code itself: we had to pick an operator that rarely exists in typical z-Tree code. For that reason we scanned several z-Tree files we collected and chose double brackets accordingly, as they are highly unlikely to interfere with your code. Easy to type: double brackets are relatively easy to type. With a US keyboard layout it takes just a pair of double keystrokes to write them. In most European layouts, an additional AltGr must accompany those keystrokes. Easy to read: double brackets were much easier to read than the other candidates that qualified according to the criteria above. I would like to automate my workflow. For that reason I want to use the command line to generate my treatment files with zBrac. Is that possible? - This feature is on our list. We have already designed zBrac functions to work independently, and they are accessible in our Python package as zBrac.functions.*functionname*.
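The Strip Brackets step mentioned above amounts to removing the [[ ]] markers while keeping the enclosed text. A hedged one-function sketch of that idea (again ours, not zBrac's implementation):

```python
import re

def strip_brackets(text: str) -> str:
    """Drop the [[ ]] markers but keep the text inside them."""
    return re.sub(r"\[\[(.+?)\]\]", r"\1", text)

print(strip_brackets("[[welcome message]] and [[ok button]]"))
# welcome message and ok button
```

This is handy for previewing how screens will read before any language file exists.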
But documentation of these features and implementation of the command-line functionality is still in progress. Why is citing zBrac not a requirement? - In a truly free-software spirit, we wanted to use the GNU GPL-3 license, and a citation requirement would not be compatible with that license. We do not think using GPL-3 should be the norm; it is just a matter of choice for the authors. We still believe that citing open-science tools is beneficial for increasing the visibility of those tools, and it is highly appreciated. Why are language files in Microsoft Excel format? Doesn't that contradict your free-software spirit? - For the language files, we needed a format that allows multiple encodings while being widespread and easy to work with. Unfortunately the CSV format does not satisfy these criteria. Moreover, there are fantastic free-software tools such as LibreOffice that allow you to create and edit XLSX files. Can I Google-translate my text with zBrac? - Not directly, but it just takes a few seconds to copy and paste the relevant column into the Google Translate interface and then put it back into a language file. How can I contribute? - Currently we do not have to pay any costs related to the project, and we do not expect to in the future either. Therefore we do not expect financial contributions to the project. Contributing to the source code, reporting bugs, requesting features, and contributing to the documentation are more than we can hope for. Why did you make it a desktop application? Why didn't you just create a web page with the same functionality? - The first reason is that maintaining a web server is both costly and time-consuming, and we are not able to handle it at the moment. The second reason is that, for users, downloading the Python source code and making modifications to it is much easier if the software comes in a self-contained package. For these reasons we believe that the current form is an optimal way to publish the software.
zBrac was designed and built at the Max Planck Institute for Research on Collective Goods in our free time. We used Python 3.6 and Qt5 to build it. We would like to thank those who attended the internal presentation of the project and provided their comments and suggestions. We would specifically like to thank Zvonimir Bašić, Philip Brookins, Brian Cooper, Andreas Drichoutis, Christoph Engel, Zwetelina Iliewa, Matteo Ploner, Piero Ronzani, Marco Tecilla and Fabian Winter for their valuable comments and/or for their help. References and Footnotes: The name zBrac is a portmanteau of the words z-Tree and brackets. It is pronounced ˈzibrək, like zebra (that's where the logo comes from). Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171-178. Holt, C. A., & Laury, S. K. (2002). Risk aversion and incentive effects. American Economic Review, 92(5), 1644-1655. We used a sample sent to the z-Tree mailing list. Credits: Andreas Drichoutis. There is one case in which you might run into trouble with double brackets, that is, when you need nested arrays, for instance myarray[anotherarray]. Unless there is an already open double bracket, zBrac will work fine. Otherwise, adding spaces inside those double brackets, such as myarray[ anotherarray ], will avoid a potential issue.
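The nested-array caveat in the footnote can be demonstrated with the same kind of non-greedy [[...]] pattern used for illustration earlier (again a sketch, not zBrac's parser): an inner index that happens to end in ]] closes the key too early, and the suggested extra spaces avoid it.

```python
import re

PAT = re.compile(r"\[\[(.+?)\]\]")

# An inner array access ending in ]] terminates the key prematurely:
print(PAT.search("[[score a[i]] points]]").group(1))   # score a[i
# Spacing out the inner brackets, as the footnote suggests, avoids this:
print(PAT.search("[[score a[ i ] points]]").group(1))  # score a[ i ] points
```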
Informatica PowerExchange post-SQL Greenplum writer. I am facing an issue with the Informatica target Greenplum writer post-SQL. I have a query that runs fine on the database, but when I place the query in post-SQL it does not work; there are a lot of CASE statements present in the query. When I tried removing a couple of CASE statements it worked fine in post-SQL; otherwise it throws a fatal error. Has someone faced a similar scenario in the past and can provide a solution for how to deal with it? Are there any limitations with the Informatica Greenplum writer?

Some things to consider - Is the user that you're using to run SQL in the database the same as the Informatica user? Is the SQL you're running in the DB the same as the Infa post-process? When you run the SQL in the DB, does the target table have all the indexes etc.? Does Informatica load a large amount of data? Does Infa drop indexes before loading? Now, please note that post-load SQL is just a simple SQL that runs all at once. So post-SQL can be slow if you do not have an index on the target, if you have a large amount of data, if the table size grows a lot because of the Infa load, if the table has stale statistics, or if a table involved in the SQL CASE statements is large. So I recommend moving the post-SQL into an Informatica mapping, because Informatica inserts or updates in batches and does not apply the changes all together like a post-SQL does. Your performance will be much better.
Hi @Koushik - Thank you for your reply. While running the post-SQL using the relational writer it finishes in 1 min, but we need to move away from the relational writer to the Greenplum writer, and with the Greenplum writer the same query started throwing issues. We are using 25 CASE statements in the SQL query. If we run with 21 CASE statements it works fine, but if we use more than that it starts throwing fatal errors in Informatica. I ran the SQL manually in the DB and it runs fine; not sure if the Greenplum writer has some limitation like handling only 21 CASE statements. Also, UNION ALL is not working fine.

Is it the exact 4 CASE statements that you need to remove to make it work? Or can you remove any other CASE statements and it works too? If the former, perhaps there is something odd with one of the 4 statements, like a missing schema reference, or use of a function that is not available to the user, etc.

Agree with @Maciejg, can you test with just those 4 statements? That way you can eliminate specific issues. Now, if the 4 statements don't work, you know what the issue is. If they work, then I think you're right: some limitation or performance problem when it has to process a lot of logic. In such a case, when the logic would be too much to handle, perhaps you could split the statement into two?

@KoushikRoy, @Maciejg - I have tried both cases: if we split and run then it works, but if we consolidate it and run it all together it does not work; not sure what kind of limitation it is. If I remove the other 4 statements and keep these 4 statements then it also works fine. Surely it looks like some sort of limitation within the Greenplum writer.

@KoushikRoy, @Maciejg - Just FYI, the same query works fine when the relational writer is used but not with the Greenplum writer, so I am fairly sure it is a limitation issue; however if you have any thoughts please let me know.
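Since splitting the statement is what worked here, one way to automate the workaround is to generate several shorter post-SQL statements instead of one giant one. A sketch in Python (the table and column names are made up for illustration; it simply chunks a list of CASE expressions so no single statement exceeds the writer's apparent limit):

```python
def split_post_sql(table: str, case_exprs: list, chunk_size: int = 10) -> list:
    """Turn one UPDATE with many CASE columns into several shorter
    UPDATE statements of at most chunk_size SET clauses each."""
    stmts = []
    for i in range(0, len(case_exprs), chunk_size):
        chunk = case_exprs[i:i + chunk_size]
        stmts.append(f"UPDATE {table} SET {', '.join(chunk)};")
    return stmts

# 25 CASE expressions, as in the failing query (names are hypothetical).
cases = [f"col{n} = CASE WHEN col{n} IS NULL THEN 0 ELSE col{n} END"
         for n in range(25)]
stmts = split_post_sql("sales_fact", cases)
print(len(stmts))  # 3 statements of at most 10 SET clauses each
```

Each generated statement could then be placed in its own post-SQL entry, mirroring the manual split that worked.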
I attempted to make a Tic Tac Toe in C# using Winforms. Now I want to check if either the play or the computer has one. But I don't know where to put my while(true) block. help. Jesus Christ what the fuck is wrong with me. I meant, I'm trying to make a Tic Tac Toe using C# in WPF. Now, I have made 9 buttons and an enum that incorporates idle, player-set and computer-set. Now I don't know where to put my while(true) block that checks whether the player or the computer has *won. Cmon /g/uys I need an idea for a program. Give me inspiration by telling me cool things you have made. Screenshots appreciated. What do you use to fetch email? Browser plugin? Standalone plugin? Using web interface like a pleb? why is it that every tutorial out there assumes you know prior knowledge I want a tutorial that assumes you know absolutely NOTHING a tutorial for the easiest programming language out there can you /g/uys help me with that? Wanna test your DoS skills? I have set up a network server specifically for DoS testing! Heres the IP; How do I solve this error? Argument 1 in this case is [object WebGLRenderingContext] well i know nuthin about this framework, but i just took a look at the online doco for the function compileShader. the error message seems quite clear, compileShader takes one argument. that one argument must implement the WebGLShader interface and whatever you're trying to pass in doesn't. therefore prove this to yourself, pass in something else which you know does. the page i'm looking at has this example... var shader = gl.createShader(gl.VERTEX_SHADER); does that help? >want to buy iphone >/g/ mocks me "ishill, shillphone, apple" >want to buy s7 >/g/ mocks me "samsung, lagdroid" >want to buy redmi note 3 >/g/ mocks me " china, china garbage, china trash" fuck all of you answer a simple question I got a Note 4 around a year ago and bought a spare battery a few weeks after in case I'd ever need it.
The spare seemed to last a tad little bit longer so I just left that one in. My phone would get hot during use but I thought nothing of it thinking it was normal as nearly every phone I've owned has done that A few days ago I was talking to a friend who told me that wasn't normal and he told me my battery was probably fucked. I put my original battery back in and sure enough my phone doesn't get hot at all. Could having used a defect battery for... DWM Thread. Do you use it? If so, what do you like? What do you hate? Would you recommend? I use it. It provides a fast, simple interface that usually stays out of my way. I'm looking for a wayland alternative, and so far, Sway works most of the time. money isn't a factor choose 1 My screen blew so I am looking for a new gaming monitor. Budget is around ~$500, $600 tops Any recommendations ? So I recently found a website with nude images of my girlfriend. Is there any way in hell I can get these gone? Hacking/whatever I need to do to get them off? It's not cool and she's going to kill me if she ever found out. If you do you are my hero bro for real. Theres a ton of other girls on the site but I need my girlfriend off. If you could take it all down even better for all the females on it. These are uploaded and she was even underage at the time. Idk how they could even get them? Find your number using www.random.org DO NOT USE YOUR COMMENT ID. THIS IS NOT A GET THREAD. The rules are pretty simple. Get to work /g/ Hey /g/ im not to sure where to turn with this but it seemed to make the most sense. im trying to find a .onion service that allows you to search inmate records because all of the regular .com crap is just bullshit to get money out of you and throw ads in your face. if im not at the right board feel free to call me a fag, i normally lurk on /k/
The Mech Touch, Chapter 2989: Add-ons

One of his greatest strengths was that he was able to adapt to various mechs, especially when they were designed by Ves. There was never a case in which a mech would be adjusted right away at the design stage. Every other goal faded from his mind. The integration of the new Swordmaidens and the former Lifers, the coming establishment of the Ylvainan mech force, the crewing problems of the recently acquired Graveyard and the Dragon's Den, the difficult funding and purchase situation of the Larkinson Clan's next banner ship, the preparation of the new treatment variants of the Sanctuary, the search for more MTA benefits, the acquisition of mutated beasts, the continuation of his experiments and so on no longer mattered as much to him anymore.

"So are you saying I should be satisfied with a simple expert mech?"

It was just how he worked. As a passionate mech designer, he performed at his peak when he became fully engaged in a mech design or experiment. If he had to do both of them at the same time, his mind would easily become jumbled, thereby scattering his attention.

"Let me tell you what I think," Ves said. "The Chimera Project can go both ways, but I believe that too much choice and too many compromises is not a good thing. The reason why the base form of this project is a hero mech is that you can already do a lot with just two weapons.
There will be situations that allow you to play a greater part if your expert mech becomes a lancer mech or a cannoneer mech or something, but in general you can already perform similar jobs by sticking with a hero mech type."

"Patience." Ves waved his hand at him. "This product is still in development. The ones you see before you are still experimental. I want to do my research and watch them over time in order to check that they are really safe. Once I am finished with that, I may be open to bestowing these new mind pets on the people who have contributed a lot to our clan."

Venerable Joshua looked envious at what was going on. "Can I have one of these pets as well? They're amazing, and I think it would be invaluable to have something like that which I can carry around in my head."

Since Sharpie and Blinky were the same type of existences, they had struck up an immediate friendship. The two were already swapping thoughts on how to fulfill their jobs better and how much they liked their respective partners.

Blinky didn't consume normal food, but Ketis had a solution for that as well. She pulled out a pocket knife from her toolbelt and lazily slashed in the companion spirit's direction.
"As I have said before, the Chimera Project is multi-functional, much like the Bright Warrior. The difference is that the former is currently set up as a hero mech while the latter is a modular mech platform. This means that mechs such as the Quint can only fulfill one role at a time. In order to change it from a swordsman mech into a rifleman mech, it has to spend at least half an hour or more under the care of a maintenance crew in order to swap out the modular mech parts. Do you want to go through this trouble whenever you move on to piloting your expert mech?"

Ves looked up from his desk terminal and eyed his companion spirit with a friendly look.

On the day he wanted to devote attention to the Chimera Project, he invited Ketis and Venerable Joshua over to his little corner of the design lab. In fact, Ves had just come up with an idea for how he might tie a companion spirit to mechs. If he could flesh out this idea, he might be able to empower Joshua further and give him yet another tool that could help him achieve victory!

Mech design was his chosen vocation, so he ought to be doing it whenever possible. It was just that recent events had caused him to get distracted by all sorts of goals.

Venerable Joshua looked interested. "When I piloted the Quint, there were numerous occasions where I would have liked to wield a rifle while I was wielding a lance. When I wielded a lance, I sometimes wanted to wield a sturdy shield too. I could have done a lot more in battle if I had access to a variety of tools on the battlefield."
"I understand your desire, but it is not really practical for the Bright Warriors." Ves sighed. "The modular mech platforms are already marked by lots of compromises, and if I try to turn them into hybrid mechs that aim to fulfill every role at once, they would become a bloated mess that can only deliver mediocre results. There is virtue in specialization. The Valkyrie Prime that you are currently piloting is noticeably stronger because it is designed to be good in its chosen role."

Even though he piloted mechs for a living, he simply did not possess enough comprehension to make informed decisions on difficult technical matters.

Of course he was excited. He was Ves' biggest supporter. Not only that, but he also possessed a great sensitivity towards life. Gifting him with a living companion that he could bring everywhere, even into battle, would surely do wonders for his disposition!

It didn't matter if the attack he absorbed was sharp and damaging. As long as it was made of spiritual energy, it instantly turned into his food!
Once the crescent-shaped wave came close, the Star Cat opened his maw and produced a suction force that quickly shrank and devoured the incoming sword-energy attack.
A typical export will output some data from each record in the current found set (or, as discussed in the section "Exporting Related Fields," you may sometimes get multiple sets of information per current record, if you export related fields). But what if you don't want data for each and every record? What if you want to export only data that summarizes information from the current found set, such as you might see in a subsummary report? FileMaker makes this possible as well.

Consider the example of a system that tracks sales and salespeople. Each salesperson has a country and many associated transactions. You'd like to export a data set that contains one row per salesperson, with the following data: salesperson name, country, and total transaction volume. Assume that the initial database structure is as shown in Figure 20.7.

Figure 20.7. You might want to export summary data from a database of sales transactions.

To output summary data, it's necessary to have one or more summary fields defined. In this case what's desired is a count of transactions per salesperson. Here, you could define a summary field called, say, TransactionCount, defined as shown in Figure 20.8.

Figure 20.8. To export summarized data, you need to define one or more summary fields.

The field is defined as a Count. The count is performed against a field that is known to always contain data, such as a primary key field. For more information on summary fields and summary reporting, see "Working with Field Types," 69, and "Summarized Reports," 287.

It now just remains to use this summary field in an export. The process is similar to that required for preparing a subsummary report for display. First, isolate the transactions to be summarized (for example, to summarize across all transactions, you would perform Show All Records). Next, sort by the field that would be the break field if you were displaying the data in a subsummary report.
Here you want to group by salesperson, so you would sort based on _kf_SalespersonID. Finally, you'd begin the export and set your export options as shown in Figure 20.9.

Figure 20.9. It's necessary to choose grouping options when exporting summarized data.

This export is set to group by the salesperson ID. The export contains some related fields from the Salesperson table, as well as the summary TransactionCount field, and an entry called "TransactionCount by _kf_SalespersonID." That extra entry, rather than the raw TransactionCount field, is the one you want; it appears when you add TransactionCount to the export order, after which the plain TransactionCount field can be removed from the export order, leaving the group count field behind. If you were then to export this data to Excel, the result would look something like what's shown in Figure 20.10.

Figure 20.10. When you export summarized data, the output contains one row per summary group.

Using more complex sorts and summary field choices, more complex summarized exports are possible.
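For readers more at home in SQL, the grouped export behaves much like a GROUP BY over the same two tables. The sketch below is only an analogy, not FileMaker itself; the table and column names are invented for illustration, mirroring the Salesperson/Transaction structure described above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE salesperson (id INTEGER PRIMARY KEY, name TEXT, country TEXT);
CREATE TABLE txn (id INTEGER PRIMARY KEY, _kf_salesperson_id INTEGER, amount REAL);
INSERT INTO salesperson VALUES (1, 'Avery', 'US'), (2, 'Blake', 'UK');
INSERT INTO txn (_kf_salesperson_id, amount) VALUES (1, 10), (1, 25), (2, 40);
""")

# One row per salesperson: name, country, transaction count, and total
# volume -- the same shape as the summarized export in Figure 20.10.
rows = conn.execute("""
    SELECT s.name, s.country, COUNT(t.id), SUM(t.amount)
    FROM salesperson s JOIN txn t ON t._kf_salesperson_id = s.id
    GROUP BY s.id
    ORDER BY s.name
""").fetchall()
print(rows)
```

The sort-then-group step in FileMaker plays the role that GROUP BY plays here: without sorting on the break field, the grouped export entry produces no meaningful groups.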
Transaction IDs can include numbers, letters, and special characters like dashes or spaces, with a limit of 64 characters. They must be unique for each transaction.

The transaction MIF Convention — if it is approved — is however the generalised adoption of the ISO standards (IBAN — International Bank Account Number and BIC — Bank ...).

Each CURO transaction is uniquely identified by a transaction ID (transaction_id): T YYM ...

Client and server transaction IDs: with each EPP command you specify a client transaction ID; in the result that you receive from the server, you will find the ...

iDEAL/Mollie problem (transaction id): error [could not find transaction id]. "Apart from entering my Mollie ID, I changed nothing in the script, so it ..." (translated from Dutch).

12 Aug 2017: It would be useful one day if the transaction ID of each payment could be seen in the transaction details when the transaction is touched/clicked.

2 Jan 2020: Envato currently accepts payments via PayPal, credit card, and Skrill. Follow the steps below to find your transaction ID for PayPal and ...

MariaDB has supported global transaction IDs (GTIDs) for replication since version 10.0.2. Contents: Overview; Benefits; Implementation.

I found some Java implementations of signing transactions offline, but as I can see ...

A transaction ID is composed of the payer account ID and the timestamp in seconds.nanoseconds format (firstname.lastname@example.org).

For example, if you sell something, your seller transaction ID will be different from your buyer's. Cause: PayPal intentionally generates two unique transaction IDs, ...

We noticed that when a Nordic app (nRF Mesh - Android) sends a generic level set command to the generic level server, it uses the sequence number as the transaction ID.
Get transaction: retrieve a single transaction by its identifier. What are the different kinds of transaction histories I can see? What does the status indicate?

When you make a purchase on Humble Bundle, your order is given a unique ID which allows us to find and verify your order within our system. If you ever have trouble with your purchase, our Support Ninjas will need your transaction ID! The directions below will help you locate your transaction ID so that it can be shared with our team.

The Set Transaction Id component enables you to set an identifier for all tracked events so that meaningful information, such as an order number, is displayed for a transaction when analyzing tracked events at runtime, whether using Anypoint Runtime Manager or CloudHub.

You must pass the unique transaction ID that you generate in the merchantTransactionId key. This field is required.

Transactions are enabled by providing the DefaultKafkaProducerFactory with a transactionIdPrefix. In that case, instead of managing a single shared Producer, the factory maintains a cache of transactional producers. When the user calls close() on a producer, it is returned to the cache for reuse instead of actually being closed.

Read-only: var://service/transaction-audit-trail or serviceVars. Return value (bigint): the transaction ID of the current transaction in the current session, taken from sys.dm_tran_current_transaction (Transact-SQL). Permissions: any user can return the transaction ID of the current session.

Exercise: on the following transaction database with minimum support equal to 1 transaction, explain the execution step by step. | Transaction id | Items |

The transaction-deletion program, deleteTransaction.php, takes a transaction id number as an argument in an HTTP GET statement.

26 Oct 2020: I found a way to get the ID when getting a transaction using TransactionItem.ItemKey.ToString(), which doesn't really cater to my situation.

27 May 2013: Each user performs transactions, and each transaction is given its own ID. The TIDs (transaction IDs) are numbered sequentially, i.e. transaction ...

The transactionID variable uniquely identifies a transaction so that the hit can be tied to data uploaded through data sources (translated from Swedish).

The PM00400 (PM Keys) table still had records in it for the vendor with the Document Number set to the financial transaction ID from OCM. There are 2 records ...

Transactions, blocks and votes are represented using JSON documents. A transaction is an operation between the `current_owner` ...

See Write-set Cache. Global Transaction ID: to keep the state identical on all nodes, the wsrep API uses global transaction IDs (GTIDs), which ...

What is the transaction ID? This is a unique number assigned to each transaction. The transaction ID or reference ID is shown in the confirmation screen of your payment app or on your bank statement after you have completed the transaction. Copy the transaction/reference ID and enter it in the Reference ID field by clicking the "Enter Reference ID" button. The process is similar for UPI or IMPS/NEFT/RTGS transactions. The most common reason for deposits not reflecting in your ...

A blockchain transaction ID can be used to retrieve information such as the sending/receiving address, how many confirmations your transaction has, and when the funds were sent. Follow these steps to find a deposit/withdrawal blockchain transaction ID on your Kraken account: 1.
Sign in to your Kraken account and go to "Funding".
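Several of the snippets above repeat the same two requirements: a transaction ID must be unique per transaction, and for merchant-generated IDs (like the merchantTransactionId key mentioned earlier) it must stay within the gateway's character limit. A minimal sketch of generating such IDs; the 64-character limit comes from the text above, while the "txn-" prefix scheme is an invented example:

```python
import uuid

def new_transaction_id(prefix: str = "txn") -> str:
    """Generate a merchant-side transaction ID that is unique and
    stays within a 64-character limit."""
    txn_id = f"{prefix}-{uuid.uuid4()}"  # e.g. "txn-" + 36-char UUID
    assert len(txn_id) <= 64
    return txn_id

# Uniqueness in practice: UUID4 collisions are astronomically unlikely,
# so 10,000 generated IDs should all be distinct.
ids = {new_transaction_id() for _ in range(10_000)}
print(len(ids))
```

Whether a random UUID is acceptable depends on the system: some of the schemes quoted above instead derive the ID from payer account plus timestamp, or from a sequential counter, precisely so the ID carries meaning.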
GRAILS HSQLDB DRIVER INFO:
File Size: 4.3 MB
Supported systems: Windows 2K, Windows XP, Windows Vista, Windows Vista 64 bit, Windows 7, Windows 7 64 bit, Windows 8, Windows 8 64 bit, Windows 10
Price: Free* (*Registration Required)
GRAILS HSQLDB DRIVER (grails_hsqldb_6525.zip)

Installing a simple Grails application in Tomcat on Linux should really have been the most straightforward of tasks. But as I felt stability in the project, I wanted to move the project to the next level, where data gets persisted. Configuring iceScrum on a Linux machine, for operations such as the datastore. This tutorial shows how to use Grails to quickly build a ...

- Grails brings Ruby on Rails style productivity to the Java platform.
- Below I'm describing the layout and how to build software together.
- You may choose to use H2 (default), HSQLDB, MySQL, etc.

Configuring iceScrum on Apache Tomcat in Linux: this post will guide you through configuring iceScrum on Apache Tomcat in Linux, based on the official guide. Add search to your Grails application using the Searchable plugin: if you're building a web application, you're most certainly going to need some search capabilities in your application. Modify the Grails URL from localhost to the FQDN in order to access iceScrum from the network. I was using the default in-memory HSQLDB, as it is wonderful for changing the database design frequently. Learn how to integrate Spymemcached into your Grails-built contact-management application, then try caching individual request results with memcached. When you reproduce the problem and the application throws an OOM, it will generate a heap dump file. I was creating a Grails application using Grails 2.0.3, but I want to use HSQLDB as the database repository.
How to add jars to GRAILS_HOME\lib, and Lucene. Since I'm guessing this is related to your previous question about how to specify dependencies, and I suspect that you care more about the contents of the pom file than actually figuring out what's going on with the archetype, I'll post the file that ... Grails brings Ruby on Rails style productivity to the Java platform, built on the Groovy language and fully integrated with Java. On Mac or Linux, you can install it using SDKMAN. The Java Virtual Machine (JVM) running IntelliJ IDEA allocates some predefined amount of memory. I'm now trying to create an environment that does not depend on the Oracle database, so the front-end team can run the app from outside the company. If you cannot add a jar, you cannot use, for example, a JDBC driver. I am currently running IntelliJ IDEA 7. Asked 9 years, 2 months ago. Grails creates a main folder and numerous sub-folders for each application. James Goodwill completes his two-part introduction to integrating memcached and Grails with a sample Grails application and a Java-based memcached client. Well, if you cannot reproduce the problem in dev, you may have to use the production environment. If you're using the default HSQLDB engine that comes with Grails, one of the ways to achieve this is to launch the Database Manager application from your BootStrap file, giving you a GUI which you can use to explore the database. We will use HSQLDB or H2 in-memory databases as the datastore. Grails Parte 01, Introducción y cómo iniciar (Grails Part 01: introduction and getting started). native2ascii: set this to false if you do not require native2ascii conversion of Grails i18n properties files (default: true). Configuring Grails to use an embedded H2 database (Mark Woodford): when developing a Grails application, there are a number of database options available. This appears to have corrected the problem.
We will be using the default generated Grails 3 in-memory database, using HSQLDB or H2.

- It is incredibly useful and provides easy access to attributes of a persisted class.
- It can be accessed by both Grails 3 ...
- In IntelliJ IDEA, you can define the following data sources: a database data source, i.e. operational databases that contain data (see Connecting to a database).
- This is a very basic plugin that so far encapsulates JAI calls for operations such as image loading, saving, cropping, masking, and thumbnail creation.
- HSQLDB is included with OOo and LibreOffice and has been downloaded over 100 million times.
- Grails is composed of a stack I like, a combination of technologies I've enjoyed using over the last few years: Groovy, HSQLDB, Spring, Hibernate, etc.

Below I'm describing the steps to follow for deploying a Grails web application in a Tomcat server instead of the default server that comes with the Grails package. In a plugin's Grails web app, you cannot add a jar/folder. The default value depends on the platform. But as I felt stability in the project, I wanted to move the project to the next level, where data gets persisted. Instead, I spent a progressively more frustrating morning chasing down a helpful feature of HSQLDB that was causing the startup to fail with HsqlException: The database is already in use by another process, [email protected] file =/. See alternatives: Grails, the search is over. The Searchable plugin is an amazing plugin built on the Compass search engine and Lucene. It is easy to call from the command line.
I don't know what's going on with your particular configuration, but I ran the exact command you listed there and it built successfully. A frustrating morning chasing down a Grails 3 ... The only problem is when you try to do the same thing on a hasMany relationship of that class. Grails, MySQL, Cannot create PoolableConnectionFactory (December 5, 2009, Diaa Kasem). Configuring iceScrum on Linux: 1. Create a Grails app, let's say Gregister, which I used. You may choose which database to use when building the Grails application. In my project, we have some dependencies on libraries stored in our internal Maven repository. First you need to set up your SQL instance. Version 2.5.1 is the latest release of the all-new version 2 code.
Website development is the work involved in creating a website for the Internet (World Wide Web) or an intranet (a private network). It can range from creating a simple static page of plain web content to complex web applications, electronic businesses, and social network services. A fuller list of the tasks that website development commonly covers includes web engineering, web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development. Among web professionals, website development usually refers to the main non-design aspects of building web sites: writing markup and coding. With content management systems (CMS), content changes become easy and accessible to those with only basic technical skills. In large organizations and businesses, a website development team can consist of many people (web developers) and follow standard methods such as agile methodologies when creating web sites. Smaller organizations may only require a single permanent or contract developer, or secondary assignment to related job positions, e.g. a graphic designer or an information systems technician. Web development may be a collaborative effort between departments rather than the domain of a dedicated department. There are three kinds of web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for the behavior and visual elements that run in the user's browser, while back-end developers deal with the servers. Website development is the building and maintenance of websites; it is the work that happens behind the scenes to make a site look good, work fast, and perform well with a seamless user experience. Web developers, or "devs," do this by using a variety of coding languages.
The languages they use depend on the types of tasks they are performing and the platforms they are working on. Web development skills are in high demand worldwide and well paid, which makes development a great career option. You don't need a traditional university degree to become qualified; it is one of the most accessible higher-paid fields. The field of website development is generally broken down into front-end (client-side) and back-end (server-side) work. Let's dive into the details. Front-end versus back-end development: everything you see and use, such as the visual aspects of the website, drop-down menus, and text, is brought together by the front-end developer, who writes a series of programs to bind and structure the elements, make them look good, and add interactivity. These programs are run through a browser. What goes on behind the scenes is the domain of the back-end developer: this is where the data is stored, and without this data no front end would be possible. The back end of the web consists of the server that hosts the website, an application for running it, and a database to store the data. The back-end developer uses computer programs to ensure that the server, the application, and the database work together smoothly. Such a dev must analyze the organization's requirements and provide efficient programming solutions. They use a number of server-side languages, like PHP, Ruby, Python, and Java, to do all of this.
Ontology-based data access (OBDA, for short) aims to facilitate the access to inherently incomplete and heterogeneous data sources and to retrieve more complete answers by means of an intermediate layer that conceptualizes the domain of interest, known as an ontology. The OBDA paradigm is regarded as a key ingredient in modern information management systems, receiving tremendous attention over the past decade. Crucial to OBDA is the problem of answering user queries by taking into account the background knowledge provided by the ontology, viewing the query and the ontology as part of a composite query, called an ontology-mediated query (OMQ). We refer to this problem as ontology-mediated query answering (OMQA). To make OMQA scale to large amounts of data and to be useful in practice, OMQA relies on converting the problem of answering OMQs to the problem of evaluating the query directly over the data. This is referred to as OMQ rewriting and is considered to be one of the most promising approaches for OMQA, as it allows for the exploitation of standard database management systems. Despite the fact that OMQA has been a subject of in-depth study in the database and KR research communities over the last decade, there are many challenges that remain to be addressed. In particular, several extensions of the components of the OMQA framework have been considered in order to enhance its expressive power and the application domain. We study three particular ways to enrich OMQA: the database is viewed as partially complete through closed predicates, the ontology languages are given in terms of expressive description logics (DLs) and guarded (disjunctive) tuple-generating dependencies (DTGDs), and the query language is given in terms of a fragment of SPARQL, the standard query language for the Semantic Web. The goal of this thesis is to investigate OMQA in the presence of such extensions and to explore novel rewriting techniques, with the emphasis on polynomial time rewritings.
First, we develop a novel and versatile rewriting technique, which can be suitably extended and adapted for various OMQ languages, enabling polynomial time rewritings into variants of Datalog. By employing this technique we present a polynomial time rewriting for the DL ALCHOI, in the presence of partial completeness of the database and a restricted class of conjunctive queries (CQs), into Datalog with stable negation. We then adapt it to support guarded DTGDs and a restricted class of CQs. We show that every such OMQ can be translated into a polynomially-sized Datalog program with disjunction (without negation) if the arity of the predicates and the number of variables in the DTGDs are bounded by a constant; for non-disjunctive TGDs the rewriting is a plain Datalog program. To the best of our knowledge, these are the first such rewritings that are polynomial. We then study a fragment of SPARQL, called well-designed SPARQL, which extends conjunctive queries with a special operator. In recent years, SPARQL has been widely adopted as a query language for OMQA. However, the semantics of query answering under SPARQL entailment regimes is defined in a more naive and much less expressive way than the certain answer semantics usually adopted in OMQA. To bridge this gap, we introduce an intuitive certain answer semantics for SPARQL and present two rewriting approaches that allow us to obtain the certain answers for this fragment and OWL 2 QL, a standardized ontology language based on a lightweight DL.
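As a toy illustration of the certain-answer idea behind OMQA (this is not any of the rewriting algorithms from the thesis; the data, rules, and predicate names are invented), a tiny forward-chaining evaluator over Datalog-style inclusion rules shows how an ontology yields answers that are not asserted in the raw data:

```python
# Facts are (predicate, constant) atoms; each ontology rule (body, head)
# says "everything satisfying `body` also satisfies `head`".
facts = {("Professor", "ada"), ("Student", "bob")}
rules = [("Professor", "Person"),  # Professor is subsumed by Person
         ("Student", "Person")]    # Student is subsumed by Person

changed = True
while changed:  # saturate: apply rules until a fixpoint is reached
    changed = False
    for body, head in rules:
        for pred, const in list(facts):
            if pred == body and (head, const) not in facts:
                facts.add((head, const))
                changed = True

# The ontology-mediated query "Person(x)?" returns both constants as
# certain answers, even though neither is asserted as a Person directly.
answers = sorted(c for p, c in facts if p == "Person")
print(answers)
```

Rewriting-based approaches avoid materializing the saturated fact set: they compile the ontology into the query itself (here, roughly, "Person(x) OR Professor(x) OR Student(x)") so a plain database engine can evaluate it over the original data.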
How to login to multiple hosts. My hostList file has multiple entries like below; it is hard-coded and I don't want to change anything in the hostList file:

abc13bc1a abc13bc2a abc13bc4a abc15bc3a abc15bc4a abc15bc5a abc19bc6a abc19fe1 abc20fe ... etc.

My script usage is given below:

Enter the hostname: abc13
Enter the hosttype: bc

My script is able to login to abc13bc with the ssh command after providing the input. Now I want my script to login to multiple hosts if I give usage like below:

Enter the hostname: abc13,abc15,abc19
Enter the hosttype: bc

i.e. I want to login to the abc13, abc15 and abc19 "bc" hosts and fire some output. Is there any possible way to login to multiple hosts in my script with the above usage?

Refer https://unix.stackexchange.com/questions/107800/using-while-loop-to-ssh-to-multiple-servers

@KaushikNayak I am not getting my answer for comma-separated hostnames by referring to the link ... My hostname input should be like abc13,abc15,abc19 so that it will ssh to all the boxes and fire some output. That is what I need.

Read it into a variable host_list and loop through the comma-separated values using for:

read host_list
read host_type
for i in $(echo $host_list | sed "s/,/ /g")
do
  # call your ssh command here
  echo "$i"
done

After using the above for loop I am using the ssh command ssh -l ser $i "cd; ls -rlt" > output.txt [Note: "ser" is the login] and I am not getting any output. Could you please help me out?

You need to configure ssh to run non-interactively without password input. Refer to this link to configure it: http://www.thegeekstuff.com/2008/11/3-steps-to-perform-ssh-login-without-password-using-ssh-keygen-ssh-copy-id

I am using host=$(cat hostList.txt), and in my script I am reading "host_name" as abc13,abc15 and "host_type" as bc or gc, and my hostList.txt file contains abc13bc1a abc13bc2a abc13bc4a abc15bc3a abc15bc4a abc15bc5a abc19bc6a abc19fe1 abc20fe ...
So how do I use a for loop or an if condition so that I will be able to login to multiple boxes after giving host_name and host_type, and the script will grep hostList.txt for the matching boxes itself and fire the output? I am drawing a total blank. Could you please help me out in resolving this @KaushikNayak
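The selection logic the asker wants (split the comma-separated names, then pick every hostList entry that starts with name + type) can be sketched outside the shell; in the sketch below the actual ssh call is replaced by a print so the matching logic itself is testable, and the inventory is the hostList from the question:

```python
host_list_file = """abc13bc1a
abc13bc2a
abc13bc4a
abc15bc3a
abc15bc4a
abc15bc5a
abc19bc6a
abc19fe1
abc20fe""".splitlines()

def select_hosts(host_names: str, host_type: str, inventory) -> list:
    """Expand 'abc13,abc15,abc19' plus type 'bc' into matching entries."""
    wanted = host_names.split(",")
    return [h for h in inventory
            if any(h.startswith(name + host_type) for name in wanted)]

targets = select_hosts("abc13,abc15,abc19", "bc", host_list_file)
for host in targets:
    # In the real script this line would be:
    #   ssh -l ser <host> "cd; ls -rlt" >> output.txt
    print("would ssh to", host)
```

Note the append redirect (>>) in the comment: with a plain > each iteration of the loop overwrites output.txt, so only the last host's listing survives.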
RSS bot asplode. Fixed now. Got this email today. Not 100% sure it's legit, but it seems to be. Can anyone spot the problem? Borderline miss at SIGMETRICS (A x2, WA, WR x2) with a paper on the difficulties of measuring database performance on smartphones. Admissions were tight: ~15% accept rate. Could make a good ICDE paper, maybe. No luck with DAMON this year. It's a little bit sad, since we hit some surprises from the Android governor while benchmarking SQLite on mobile phones, and it would have been nice to share those. Most of the feedback seemed to suggest that the work was incidental to other stuff we were doing, which is not entirely off-base. But that opens the question: If you can't publish surprising one-off findings at workshops, what do you do with them? Blog posts? 2 accepts (HILDA, TKDE), 1 reject (HILDA) in the last few days. Let's see how that compares to next week (DAMON, SIGMETRICS). A gorgeous April day in sunny Buffalo. Fingers crossed for #HILDA2018 and #DAMON2018 this year. Some great submissions: > JSON schema exploration (https://odin.cse.buffalo.edu/papers/2018/submitted/HILDA-JSON.pdf) > CSV header discovery (https://odin.cse.buffalo.edu/papers/2018/submitted/HILDA-LOKI.pdf) > Android Governors' impact on SQLite (https://odin.cse.buffalo.edu/papers/2018/submitted/DAMON-Governors.pdf) No longer British for tax reasons... My CAREER on Declarative Uncertainty Management was just awarded! Application recommended for acceptance :D Got to start them early. 6) The tabbed interface sucks, particularly when you've got an app that tries to do as much as TBird. I like having separate windows for calendars, contacts, chat, and messages. 7) Creating calendar entries is really really awkward. Selecting times by dragging is way faster. 3) Preferences live in at least 3 entirely disconnected places (server settings, app settings, extension settings). 4) 2 search bars.
One is super slow, and the other opens up a new search window instead of filtering the current view. 5) Searching defaults to "relevance" rather than "date". Ok, except that Thunderbird's relevance measure is garbage. 1) You get threads *or* unified mailboxes. Not both... or at least not as far as I was able to tell. 2) Threading support is awkward: (a) threads are shown top-to-bottom in time (no way to reverse that as far as I can tell), (b) There doesn't seem to be a thread view that shows all messages in a thread in a single window (i.e., what Mail.app and Google do).
Lesson 4: Documenting Work

Here we'll introduce some functions from the package rmarkdown, which is integrated in RStudio and greatly simplifies documenting your work. We'll be using Chapter 4 of Essential R.

- create and save an R script file (.R file) with code, comments, and figures
- compile this file to produce an HTML notebook which includes all of these elements
- recognize common errors in compilation of HTML notebooks

The R code file for this chapter can be found in the "Code Files" folder. This chapter does not require external data.

4.1 - Approaches to Documenting your Work/Thinking

Good documentation has 2 purposes. The first is to help you think clearly about what you are doing. The second is to communicate your results to others. (If you can't communicate them to yourself, what hope do you have of communicating them to others?) The left side of this diagram embodies the "2 document" way of documenting work: an R script file that contains data and code and comments, and a word processor file that contains all justification, description of methods, output, figures, and commentary. Material is copied between the two documents as needed. This system seems easier, as most people already know the word processor side of it, and most new R users are busy figuring out R. The right side of the diagram shows the second way: everything is in one text file, which can be compiled to create output in various formats. During compilation, all the code is evaluated, and output and figures are generated and placed in the document. This way of documenting work can seem very odd, since you don't see the final product while you are working on the document. However, the payoff in simplicity (no copying and pasting; changes in the data automatically show up in the new document) is well worth it.
In this set of videos, we'll show you how it is done, using the package rmarkdown (integrated in RStudio). We'll also demonstrate how to do this from the console.

4.2 - Installation of rmarkdown

In order to use the package rmarkdown to compile documents, you need to install the package. This video walks through the package installation process, beginning with setting a CRAN mirror. In the next video I'll show you how to use this package to compile an R script (.R file) to HTML.

4.3 - Creating html from R script

Here we'll open a new R script file (File > New > R Script in RStudio), paste some simple code into it from the Chapter4.R file, and compile the document using RStudio's "compile" integration.

4.4 - Creating html from R script in the console

You can use rmarkdown without using RStudio; here I'll show you how. This will require that you have rmarkdown installed (if you followed along with the last video, you already do) and loaded (you did not need to do this for the last video).

4.5 - Common Problems While Using Knitr

Errors can pop up with surprising frequency when compiling documents. Here we explore the most common cause of these errors (calling variables that haven't been defined) and explain how to avoid them: (spoiler alert) writing all your code in the editor and running it from there before compiling will prevent almost all errors. Note that for this and all subsequent assignments, only HTML and PDF documents will be accepted for homework.
[Xen-devel] [PATCH 0/5] take 2: PCIe IO space multiplexing for bootable This patch series is for PCIe IO space multiplexing, take 2. Now PCI hotplug is supported and the patches are ready to commit, I think. The commit might be after the 3.4.0 release, though. It is not uncommon that a big iron for server consolidation has many (e.g. > 16) PCIe slots. It will hold many domains, and the administrator wants them to boot from pass-through devices. But currently only up to 16 HVM domains can boot from pass-through devices. This patch series addresses it by multiplexing PCI IO space access. Add the following options to the dom0 kernel command line (in this case, don't forget to add the related options: pciback.hide, reassign_resources or guestdev). Then dom0 Linux will allocate IO ports which are shared by the specified devices, and ioemu will automatically recognize IO-port-shared devices. The unspecified devices will be treated the same as before. Note: the unit for sharing IO ports is the PCI slot (device), whereas the unit of guestdev is the function. If you specify a function to guestdev with "+iomul", all the functions of the given slot will share the IO port, even if you specify only some of the functions. This patch series addresses the issue by multiplexing IO space access. The patches are composed of:
- Linux part: backport: preliminary patch
- Linux part: IO space reassignment code and multiplexing driver
- Linux part: guestdev kernel parameter support
- Xen part: udev script for the driver
- ioemu part: make use of the PCIe IO space multiplexing driver
PCI expansion ROM BIOS often uses IO port access to boot from its device, and Linux as dom0 exclusively assigns IO space to downstream PCI bridges, where the assignment unit of PCI bridge IO space is 4K. So only up to 16 PCIe devices can be accessed via IO space within 64K IO ports. On a virtualized environment, that means only up to 16 guest domains can boot from such pass-through devices.
The solution is to assign the same IO port region to PCI devices under the same PCIe switch and disable the IO bit in their command registers. When accessing one of the IO-port-shared devices, the IO bit of that device is enabled first, and then the IO access is issued. PCI devices and root complex integrated endpoints aren't supported. The IO ports of IO-shared devices can't be accessed from a dom0 Linux device driver. But those shouldn't be big issues, because the PCIe specification discourages the use of IO space and recommends that IO space be used only for bootable devices with ROM code; OS device drivers should work without IO space access.

At present I have tested with only a single multifunction PCIe card, because I don't have a machine with a complicated PCIe topology nor many PCIe cards. PCI hotplug was tested with Linux fakephp. Only PCI device (not bridge) hot plug/remove was tested.

- support PCI hotplug
- guestdev kernel parameter

[Xen-devel] [PATCH 0/5] take 2: PCIe IO space multiplexing for bootable pass through HVM domain, Isaku Yamahata
Is ASP.net 5 too much?

I've been pretty busy as of late on a number of projects and so I've not been paying as much attention as I'd like to the development of ASP.net vNext, or as it is now called, ASP.net 5. If you haven't been watching the development I can tell you it is a very impressive operation. I watched two days' worth of presentations on it at the MVP Summit and pretty much every part of ASP.net 5 is brand new. The project has adopted a lot of ideas from the OWIN project to specify a more general interface to serving web pages built in .net technologies. They've also pulled in a huge number of ideas from the node community. Build tools such as grunt and gulp have been integrated into Visual Studio 2015. At the same time the need for Visual Studio has been deprecated. Coupled with the open sourcing of the .net framework, developing .net applications on OSX or Linux is perfectly possible.

I don't think it is any secret that the vision of people like Scott Hanselman is that ASP.net will be a small 8 or 10 meg download that fits in with the culture being taught at coding schools. Frankly this is needed because those schools put a lot of stress on platforms like Ruby, Python or node. They're pumping out developers at an alarming rate. Dropping the expense of Visual Studio makes the teaching of .net a whole lot more realistic. ASP.net 5 is moving the platform away from proprietary technologies to open source tools and technologies. If you thought it was revolutionary when jQuery was included in Visual Studio out of the box, you ain't seen nothing yet. The thought around the summit was that with node mired in the whole Node Forward controversy there was a great opportunity for a platform with real enterprise support, like ASP.net, to gain big market share. Basically ASP.net 5 is ASP.net with everything done right. Roslyn is great, the project file structure is clean and clear, and even packaging, the bane of our existence, is vastly improved.
But are we moving too fast? For the average ASP.net developer we're introducing at least:
- json project files
- dependency injection as a first class citizen
- different directory structure
- fragmented .net framework

That's a lot of newish stuff to learn. If you're a polyglot developer then you're probably already familiar with many of these things through working in other languages. The average, monolingual, developer is going to have a lot of trouble with this. Folks I've talked to at Microsoft have likened this change to the migration from classic ASP to ASP.net and from WebForms to MVC. I think it is a bigger change than either of those. With each of those transitions there were really only one or two things to learn. Classic ASP to ASP.net brought a new language on the server (C# or VB.net) and the integration of WebForms. Honestly, though, you could still pretty much write classic ASP in ASP.net without too much change. MVC was a bit of a transition too, but you could still write using Response and all the other things with which you had built up comfort in WebForms.

ASP.net 5 is a whole lot of moving parts built on a raft of technologies. To use a Hanselman term, it is a lot of lego bricks. A lot of lego can either make a great model or it can make a mess. ASP.net 5 is great for expert developers, but we're not all expert developers. In fact the vast majority of developers are just average. So what can be done to bring the power of ASP.net 5 to the masses and still save them from the mess?

Tooling. I've seen some sneak peeks at where the tooling is going and the team is great. The WebEssentials team is hard at work fleshing out helper tools for integration into Visual Studio.

Training. I run a .net group in Calgary and I can tell you that I'm already planning hackathons on ASP.net 5 for the summer of 2015. It sure would be great if Microsoft could throw me a couple hundred bucks to buy pizza and the such.
We provide a lot of training and discussion opportunity and Microsoft does fund us, but this is a once in a decade sort of thing.

Document everything, like crazy. There is limited budget inside Microsoft to do technical writing. You can see this in the general decline in the quality of documentation as of late. Everybody is on a budget, but good documentation is really what made .net accessible in the first place. Documentation isn't just a cost center; it drives adoption of your technology. Do it.

Target the node people. If ASP.net can successfully pull developers from node projects onto existing ASP.net teams then they'll bring with them all sorts of knowledge about npm and other chunks of the tool chain. Having just one person on the team with that experience will be a boon.

The success of ASP.net 5 is dependent on how quickly average developers can be brought up to speed. In a world where a discussion of dependency injection gets blank stares I'm, frankly, worried. Many of the developers with whom I talk are pigeonholed into a single language or technology. They will need to become polyglots. It is going to be a heck of a ride. Now, if you'll excuse me, I have to go learn about something called "gulp".
Where and how to store players' collections in a computer card game?

I am developing a trading card game (something like Hearthstone but not as complex) and I am faced with the following problem: I don't know the optimal way to store players' collections and decks (the cards that they have available). I thought of storing their collections in a local file on their device, but that seems bad, as they could probably modify that file and get themselves cards that they shouldn't have. The second idea is that I could save all their decks and collections in a database. But having one table for each player isn't possible. Having one table with all that data, something like PlayerCards(id, cardId, playerName, deckName), seems like it might work, but the table would be huge, and contain quite a bit of redundant data (cards that are in the same collection but are in multiple decks). What would be the proper way to do it?

Did you run the numbers on this? You are very likely overestimating how much data you need. Let's see: playerId -> 64 bit, cardId -> 32 bit, playerName -> 8 bit * characters, deckName -> 8 bit * characters. If we give a player an average name length of 15, an average deck name length of 20, and 100 cards, then we get approx. 443 bytes per player. For a worst case scenario let's round that up to 512 bytes (half a KB); this way you can store 2,097,152 players in a gigabyte.

The relational-database-by-the-book solution would have a table players, a table cards_owned_by_players, a table decks, a table cards_in_deck and a table cards. Here is an entity-relationship diagram of the whole schema. If you are wondering where cards_in_deck and cards_owned_by_players went: note that an N:M relationship needs to be represented with a separate relation-table.
- A player owns m cards
- A card is owned by n players
- A player has n decks
- A deck is always owned by one player
- A deck has n cards
- A card is in m decks

The table players would have the primary key playerId.
It contains all the information about the player themselves (like the player name).

The table cards_owned_by_players manages the ownership relation of individual cards. Its primary key would be cardId and playerId. If a player can own more than one copy of a card, it would have a value-field count. To get all cards owned by a player, you can do

SELECT cardId FROM cards_owned_by_players WHERE playerId = [id]

If you also need additional information about these cards, like their artwork or name, you would add a JOIN with cards to this query. More about the table cards later.

The table decks would have the primary key deckId. It includes all the information about the deck itself (name of the deck and the Id of the player who owns it).

The table cards would have the primary key cardId and include the information about the cards themselves (name, description, artwork, functionality). You have one entry per type of card. I.e. if you have a card "Goblin Warrior" which is owned by 1752125 players and is in 2357689 decks, you would still only have one row for it in cards.

The table cards_in_deck would have a compound primary key of deckId and cardId. If your game allows multiple copies of a card in one deck, the value field would be count. When it doesn't, that table might not actually need any fields at all except the primary key.

As an example query, let's say you want the deck names and card names of all cards in all decks by a specific player whom you only know by the name "Bob". You would then do the query:

SELECT decks.name, cards.name, cards_in_deck.count
FROM players
JOIN decks ON players.playerId = decks.playerId
JOIN cards_in_deck ON decks.deckId = cards_in_deck.deckId
JOIN cards ON cards_in_deck.cardId = cards.cardId
WHERE players.name = "Bob"

A JOIN over 4 tables looks like it could be a lot of work for the database, but notice that they are all JOINs on primary keys. Most database management systems optimize heavily for primary key access.
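As a quick sanity check, the schema and the 4-table JOIN above can be exercised end-to-end with SQLite. This is a minimal sketch; the sample player, card, and deck data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# The normalized schema described in the answer: players, cards,
# decks, plus the two N:M relation tables.
c.executescript("""
CREATE TABLE players (playerId INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE cards   (cardId   INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE decks   (deckId   INTEGER PRIMARY KEY,
                      playerId INTEGER REFERENCES players(playerId),
                      name TEXT);
CREATE TABLE cards_owned_by_players (
    cardId   INTEGER REFERENCES cards(cardId),
    playerId INTEGER REFERENCES players(playerId),
    count    INTEGER,
    PRIMARY KEY (cardId, playerId));
CREATE TABLE cards_in_deck (
    deckId INTEGER REFERENCES decks(deckId),
    cardId INTEGER REFERENCES cards(cardId),
    count  INTEGER,
    PRIMARY KEY (deckId, cardId));
""")

# Invented sample data: one player, one card, one deck.
c.execute("INSERT INTO players VALUES (1, 'Bob')")
c.execute("INSERT INTO cards VALUES (10, 'Goblin Warrior')")
c.execute("INSERT INTO decks VALUES (100, 1, 'Aggro')")
c.execute("INSERT INTO cards_in_deck VALUES (100, 10, 2)")

# The 4-table JOIN from the answer: deck names and card names of
# all cards in all decks owned by the player named 'Bob'.
rows = c.execute("""
    SELECT decks.name, cards.name, cards_in_deck.count
    FROM players
    JOIN decks ON players.playerId = decks.playerId
    JOIN cards_in_deck ON decks.deckId = cards_in_deck.deckId
    JOIN cards ON cards_in_deck.cardId = cards.cardId
    WHERE players.name = 'Bob'
""").fetchall()
print(rows)  # [('Aggro', 'Goblin Warrior', 2)]
```

All four JOINs land on primary keys, so even with millions of rows in cards_in_deck the query stays cheap; only the lookup by player name benefits from an extra index.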
The slowest part of this query will likely be WHERE players.name = "Bob", because that will require a full table scan of the players table, unless you have an index on the name field. Also, don't be afraid of your cards_in_deck table growing too large. We are living in the age of big data. Many database management systems are capable of handling tables with billions of rows and terabytes of data... as long as all queries on them use primary key or index access.

An optimized solution: if you ever meet the ghost of Edgar F. Codd, please don't tell him I wrote this. If you are sure you will always query only for the complete content of a deck and never query for individual cards in a deck, you can remove the table cards_in_deck and instead serialize the deck content into a binary representation and put it into one BLOB field of the table decks. You will no longer be able to do queries like "all players which have card X in their deck". Also, making a change to a deck will now require getting the whole BLOB, deserializing it, changing it, serializing it, and writing it back. But it will be a lot faster to get the whole deck of a specific player.

You can now of course no longer do a JOIN with cards. But you might not have to do that. You won't have that many different cards (even Hearthstone only has about 2000), your game mechanics will constantly need them, and your card information will only change when you make a major update to your game. So it might be better to keep the card information constantly in the game server's memory instead of re-reading it from the database all the time.

Oh no, an answer battle between Philipp and Josh Petrie by 20 seconds :) Thanks a lot for the detailed answer. Do you think it's a bad idea to store some of the cards' information (such as description and artwork) on the client to reduce the amount of data I send from the server to the client? Also, what about the cards that are part of a player's collection, but aren't part of any deck?
Should I have a table collections(collection, userId) for all the cards? @Andy Description and artwork are very likely irrelevant for the server, so you should store these only on the client. @Andy I updated the answer to account for the player's card collection.
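The BLOB-based optimization discussed in the answer above could be sketched like this. The binary format, fixed-width (cardId, count) pairs, is an invented example for illustration, not a prescribed one:

```python
import struct

def serialize_deck(deck):
    """Pack a {cardId: count} dict into a compact binary blob.

    Each entry becomes two little-endian unsigned 32-bit ints,
    so a deck of N distinct cards takes exactly 8 * N bytes.
    """
    blob = bytearray()
    for card_id, count in sorted(deck.items()):
        blob += struct.pack("<II", card_id, count)
    return bytes(blob)

def deserialize_deck(blob):
    """Unpack the blob back into a {cardId: count} dict."""
    deck = {}
    for offset in range(0, len(blob), 8):
        card_id, count = struct.unpack_from("<II", blob, offset)
        deck[card_id] = count
    return deck

deck = {10: 2, 42: 1}
blob = serialize_deck(deck)   # 16 bytes: two (cardId, count) pairs
assert deserialize_deck(blob) == deck
```

The resulting bytes would be stored in a single BLOB column of the decks table; any edit to a deck means the full read-modify-write cycle the answer describes, which is the trade-off for the fast single-row fetch.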
As we prepare to discontinue support for PBXs connecting to Exchange Online Unified Messaging, we will remove the ability for tenant admins to configure new PBX connections on or after October 8, 2018. The ability to integrate approved third-party vendor PBX systems located on-premises is also provided.

Last year at Microsoft Ignite 2019 we announced we had plans to add Plus Addressing to Exchange Online, and today we confirmed it is now available worldwide. Or is this a totally different product pathway now? This is what MSKB 931747 says. Did you listen to the recorded session linked to above?

In a normal Microsoft Teams deployment, you should not have to configure any Exchange Online functionality. They can create and join teams and channels, add and configure tabs and bots, and make use of the chat and calling features. Users hosted on Exchange Online have access to all of the features within Teams: they can create and join teams and channels, create and view meetings, call and chat, modify user profile pictures, and add and configure connectors, tabs, and bots.

For this reason, an Azure Key Vault subscription is required on the target tenant to perform cross-tenant mailbox migrations.

Resolution. With a guaranteed 99.9% uptime, financially-backed service level agreement, you can count on your email always being up and running. Updated March 10, 2020. The new cross-tenant mailbox migration service eliminates the need to offboard and onboard the mailbox, resulting in a faster and lower-cost migration. There are several kinds of mailbox permissions that can be granted. @Kiran2150, yes an Azure subscription is required in the target tenant.
Under More Settings on the Microsoft Exchange Security tab, the dropdown for Logon network security displays a value other than Anonymous Authentication, and is disabled. But also realize that, depending on your "flavor" of hybrid, you may have limited capabilities in Teams. The on-premises footprint is reduced to a handful of servers, and one has the Hybrid Configuration Wizard run on it. For channel conversations, messages are journaled to the group mailbox in Exchange Online, where they're available for eDiscovery.

In terms of the licensing, obviously it's more complicated: if users are looking to purchase on-premises Exchange 2019 now to move from Exchange 2010, is that investment protected with these new announcements? I.e., will SA ensure they still get the new version, or with buying Exchange 2019 is that investment still tied to that product's end of life in 2025?

When you send email messages to an internal user in Microsoft Office 365, you receive an IMCEAEX non-delivery report (NDR) because of a bad LegacyExchangeDN reference.

1 eDiscovery and Legal Hold for compliance on channel messages is supported for all hosting options.

Azure Key Vault is used to securely store and access the certificate/secret used to authorize and authenticate mailbox migration. For calendar app support and the Teams Outlook Add-In for Mac, Exchange Web Service URLs must be configured as SPNs in the tenant Azure AD for the Exchange Service Principal. Microsoft Teams works with several Microsoft 365 and Office 365 services to provide users with a rich experience. Is UM still being deprecated in these new versions following Exchange 2016? The Exchange Team is only focusing on Exchange Online, and only small improvements or bug fixes go to on-premises Exchange Server. The Exchange Team said in a blog post in 2019: Director of Product Marketing - Exchange Server and Online.
Thanks :). Seems to be deprecating on-premises Exchange Server. @Thomas Juhl Olesen - the link got messed up in pasting. To support this experience, you need to enable certain features or services and assign licenses. @Daniel Niccoli - We will not be providing a free license for 2019, we've said that multiple times in multiple places.

To create an X500 proxy address for the old LegacyExchangeDN attribute for the user, make the following changes based on the recipient address in the NDR. After you make these changes, the proxy address for the example in the "Symptoms" section resembles the following:

X500:/O=MMS/OU=EXCHANGE ADMINISTRATIVE GROUP (FYDIBOHF23SPDLT)/CN=RECIPIENTS/CN=User-addd-4b03-95f5-b9c9a421957358d

@Greg Taylor - EXCHANGE It doesn't, but since you decided to evade I won't bother with asking again. If you're looking to get the most out of Teams, my advice is to migrate to Exchange Online. Users hosted on Exchange Online Dedicated (Legacy) must be synchronized to Azure Active Directory on Microsoft 365 or Office 365.

During the Microsoft 365 Friday event in Utah last month, I had a great conversation with a couple of attendees about their organization's rollout of Microsoft Teams, and their slow-but-steady progress in moving toward the cloud. For example, if a user uploads a profile picture that's approved by your organization's IT or HR department, no action is needed. SharePoint provides the underlying data and file repository, Exchange powers the conversations and meetings, and Groups are the common denominator for connecting all of the related people, conversations, and content within Azure AD.

6 Only contacts in default contacts folder.

Meaning, when you configure ABPs etc. on-prem, vNext will work just the same. There is no roadmap for the DKIM and DMARC features to be included with Exchange on-premises.
The Exchange Team said in a blog post in 2019: https://techcommunity.microsoft.com/t5/exchange-team-blog/faqs-from-exchange-and-outlook-booths-at-2... Office 365 is our focus for features. Customers with Exchange Server 2013, 2016 or 2019 can install the next version of Exchange Server into their existing Exchange organization. The problem is the design of how a legacy Public Folder migration works. However, if a user uploads an inappropriate picture, change it according to your organization's internal policies. The checkbox for the Exchange Hybrid Deployment feature in Azure AD Connect is set. That's a great question, and I thought I'd expand on my response here in a blog post.

I've got migrated users which don't have access to the on-prem public folders anymore. But for Exchange hybrid scenarios, there are required steps to ensure Group memberships are synchronized between Exchange Server (on-premises) and Exchange Online, including enablement of the Group Writeback functionality in Azure AD Connect along with various initialization scripts; some features require a hybrid deployment to be in place with your Office 365 tenant. You can also get more information about this change in our dedicated Exchange Admin News blog post here. Legacy PF. Also, all the blog posts say that improvements and new features go only to Exchange Online.

This is particularly beneficial for organizations undergoing mergers, acquisitions, divestitures, or splits. Here are some extra things to think about as you implement Microsoft Teams in your organization. Try it today either by visiting https://admin.exchange.microsoft.com or by opting into it from the legacy portal.

For the full Teams experience, every user should be enabled for Exchange Online, SharePoint Online, and Microsoft 365 Group creation.

2 Teams private chat messages are not yet supported for Legal Hold for this hosting option.

We ship features to Office 365 first and may deliver a subset of those features that make sense for on-premises. After you select this option, close and re-open all Office apps, including Outlook. Is there any information about upgrade paths for Exchange hybrid customers who use Exchange on-premises for management only? Try this: https://aka.ms/OD251. Can we use the vNext Exchange version when we have multi-tenancy? "For this reason, we want to make our recommendation for this scenario clear."

Foundations – Core Components of Microsoft Teams, How Exchange and Microsoft Teams interact, Configure Office 365 Groups with on-premises Exchange hybrid. As discussed at length in two recent posts (Transitioning to a Hybrid SharePoint Environment, and What Kind of Capacity Planning is Needed for SharePoint Online?), there are many considerations as you move forward and deploy these technologies.

3 Retention will use a shadow mailbox for the online user to store messages.

Additionally, if you automate the Teams provisioning process, you can make these additions part of your Teams templates. This feature must be enabled by a tenant admin, and you can read more about it in our dedicated Exchange Transport blog post here. They claimed new mailboxes were shown as "Legacy Mailbox" in the Exchange Management Console. Microsoft Teams doesn't support SharePoint on-premises. For more information see How do Conditional Access policies work for Teams? OAuth authentication is configured preferably via the Exchange Hybrid Configuration Wizard running a full hybrid configuration (Classic or Modern).
The first question to answer is probably "Is Teams available for on-premises environments?", to which my answer is "No." If SharePoint Online and OneDrive for Business (using a work or school account) are enabled across the organization and for users, these compliance features are available for all files within Teams as well. Generally, any character pattern of "+##" must be replaced with the corresponding ASCII symbol. Check out all the details in this blog post. Our broad recommendation is to keep Exchange Server 2016 in production use until such point as we release a solution that allows those servers to be removed. Exchange admins can now opt in to the new and modern Exchange admin center simply by using a new toggle switch control in the top right corner of the legacy Exchange admin center.
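As an illustration of that "+##" replacement rule, here is a short sketch (generic Python, not an official Microsoft tool; the sample address is invented) that rewrites an IMCEAEX NDR address into an X500 proxy address:

```python
import re

def imceaex_to_x500(ndr_address):
    """Convert an IMCEAEX NDR address into an X500 proxy address.

    Applies the substitutions described above: '_' becomes '/',
    and each '+##' hex pattern becomes the corresponding ASCII
    character (e.g. '+20' -> space, '+28' -> '(').
    """
    # Drop the IMCEAEX- prefix and the trailing @domain part.
    body = ndr_address.removeprefix("IMCEAEX-").split("@")[0]
    body = body.replace("_", "/")
    body = re.sub(r"\+([0-9A-Fa-f]{2})",
                  lambda m: chr(int(m.group(1), 16)),
                  body)
    return "X500:" + body

# Invented example address:
addr = "IMCEAEX-_O=MMS_OU=EXCHANGE+20ADMINISTRATIVE+20GROUP@contoso.com"
x500 = imceaex_to_x500(addr)
print(x500)  # X500:/O=MMS/OU=EXCHANGE ADMINISTRATIVE GROUP
```

The resulting X500: string is what you would add as a proxy address on the target mailbox so that replies to old entries resolve correctly.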
VSTS Release Management is a large system with a lot of concepts. It's wonderful to work with, but there's a lot of stuff there to understand. While trying to clarify my thinking on the topic, I generated this diagram. Maybe it will be of use to someone. Note that this is mostly just stream-of-consciousness thinking…there's nothing immediately valuable/learnable here.

I'm currently transitioning from a pure software-development position to a DevOps position. From the research/practice I've been doing, a big part of DevOps is defining your infrastructure as code. So rather than buying a physical server, putting in a Windows Server USB stick, clicking through the installer, and then manually installing services/applications, you just write down the stuff that you want in a text file. Then a program analyzes that file and "makes it so". As a result, you can easily create an unlimited number of machines with the same configuration. The system has several dependencies (such as SQL Server, IIS, etc). By making those dependencies explicit in a file, a whole new range of capabilities opens up – no longer do you have to click-click-wait 5 minutes, etc. in order to construct the system. It's all automatic.

In software development, dependency injection is a really useful technique. It helps on the path to making a software system automatically testable. It allows the application to be configured in one place. Combining dependency injection with making your code depend on interfaces, it's easy to swap in/out different components in your system, such as mock objects. Ultimately, this means that the system is much easier to test by a 3rd party. Injecting dependencies throughout the application exposes several "test points" that can be used to modify components of the system without having to rewrite it.

I've never worked in project management, but projects do have dependencies.
“For task X to be complete, task Y has to be completed first.” What would centralized management of a project's dependencies look like? So all this brings to mind a few thoughts/questions…is there any kind of "dependency theory" in the world? Clearly dependencies are important when producing things. If there existed a general theory of dependencies, could we create tools that help us manage dependencies across all levels of a project, rather than keeping infrastructure dependencies in one place, project dependencies in another, and code dependencies in another? The pattern I'm observing so far is that (at least across devops and software dev) it's a Good Thing to centralize your dependencies in a single location. Doing so makes your application/server much more easily configurable. I don't have any answers…interesting to think on, though. Maybe I'll write a followup later on after some more time stewing on the topic.

For the last couple years, I've been using my second monitor as basically a second place to throw code files in Visual Studio, if I want to view files side-by-side for example. However, over the last few months, I've been adopting a different workflow which offers some nice advantages. Basically the idea is to throw all IDE/property windows on the right monitor, and a full-screen code window on the left monitor. Benefits of this layout:
- Less window-juggling. Greatly reduces the need to resize tool windows in order to make more space for code, or resize code to make more space for tool windows.
- No more guessing where your code is. With two monitors, code's always on the left monitor, and options are always on the right monitor. With three monitors, tools are on the right monitor, and the left two monitors are used for code.
- More space for code on-screen. This isn't a huge deal, but having 10 or 12 extra lines of code on-screen is handy.
- It's a system.
With free-floating tool windows, there's a lot of ad-hoc moving stuff around, reshuffling windows, etc. There's no ambiguity with this setup – really simple.

I've been doing a good amount of reading and learning recently about some new (to me) programming techniques: dependency injection and the Repository pattern. A thought I had today is that these techniques could be combined to create a great way of testing different versions of a database-backed application. The idea is this:
- Dependency injection makes code externally configurable. Meaning, if DI is used throughout a code base, then there is only one spot in the code base where dependencies are defined. For example: Amazon needs a ProductList in order to display products on its site. If that ProductList is injected to various points in the application from the root level, then the ProductList can be swapped out for any number of other ProductLists.
- The Repository pattern lets us abstract away the notion of interacting with a database. Instead, with a Repository, the application works directly with instances of objects – where those objects came from doesn't matter.
- Combining these two, it would be possible to inject Repositories throughout an entire application, so that the data sources the application uses can be configured in a single place.

The cool thing about this, IMO, is that a huge number of Repositories could be created for testing. For Amazon, an InternationalProductRepository could be created, or an AustralianProductRepository, or an ExpensiveProductRepository, or an AutomotiveProductRepository. So a tester could:
1. Define a series of Repositories which exhibit characteristics that they care about.
2. Mix and match different Repositories in different combinations.
3. Test how the application behaves in response to those different Repositories/Repository combinations.

Even more cool, I could see a workflow develop as such:
1. Programmer defines a series of IRepositories which need to be used by the application.
2. Manual testers use some type of visual tool (does this exist? I've never heard of such a thing) to create their own repositories of objects for testing.
3. Testers define various scenarios, which consist of a set of related Repositories.

Further, automated testing could be set up so that unit tests are run over every combination of every defined Repository, for example.

This is a quick guide to a huge source of frustration over the last few months, with a way to solve it. This assumes that your Angular project is based on the angular quickstart project, that your Angular project is hosted inside an ASP.NET web app, and that you're working in Visual Studio. To skip the exposition, head to the "How it works" section.

The JS libraries are referenced in two places:
- src/index.html, in the <head> section. See: https://github.com/angular/quickstart/blob/master/src/index.html
- src/systemjs.config.js, in the map section. See: https://github.com/angular/quickstart/blob/master/src/systemjs.config.js

In the angular quickstart project, the libraries in these sections are by default located in the node_modules/ folder. However, the node_modules/ folder is gigantic (like 120MB and 17,000 files in my case), and all of those files don't need to be deployed to the production server to run the angular app. To publish the site to my IIS server, I'm using Visual Studio's built-in publishing support. Thus, to push the site to a server, you right-click the ASP.NET website project, click "Publish", and go from there. Now, clicking "Publish" will only publish the files that exist in the project. Since node_modules/ is humongous, we don't track it in our VS project. Therefore, the node_modules/ folder isn't published to the web server. At the beginning of our project, we manually copied the node_modules/ folder over to the web server to fix this. But no longer!

How it works
- Create a dist/ folder somewhere in the website project.
- After every dependency has been copied into the dist/ folder, update all references to JS dependencies in the project to point to the dist/ folder, rather than node_modules.
- In Visual Studio, add all of the dependencies in the dist/ folder to the solution, under a corresponding dist/ folder.

Why do this?
- You don't have to track the entire node_modules/ folder in the solution.
- You don't have to publish the entire node_modules/ folder to the web server.
- Since updating the dist/ folder happens after every build, anytime you update your npm modules, the updated libraries will be copied to the dist/ folder.

Create dist/ folder

Create a folder structure which is something like this:

Setup MSBuild tasks

Here's the set of MSBuild tasks I'm using [this should look much nicer pasted into a text editor…darn blog formatting]: This should, after every build, copy all JS dependencies to the dist/ folder.

Update JS dependency references

Something like this: default.aspx (or index.html, etc) Something like this:

Add dist/ folder to solution

And that's it! Now if you Publish the site, the dist/ folder should be copied to the IIS server, and the site should use the files in that folder.
- dist/ folder: 4.08MB, with 356 files
- node_modules/ folder: 118MB, with 13,015 files

Currently, we write programs by typing text into files, and running a compiler over those files to interpret the text we typed. This seems like it should be a historical accident. A programming language is structured. Writing invalid code is rejected by the compiler. However, text files are inherently unstructured. Structure has to be imposed on the text file by external tools – compilers, IDEs, etc. Why not represent programs as databases of statements/functions/etc? This would lead to a ton of benefits. For one, invalid programs would be impossible.
This saves a huge amount of time where programmers currently fix typos in code, saves resources where compilers/static analysis tools read those files to find errors, etc. It seems like we’re approaching the problem from the wrong angle. Right now, we write down a program in an unstructured format, and build tools to see if the thing we wrote down is valid. Instead of all of that, we should just write the programs in a structured format (read: a database/something else which isn’t a text file) in the first place. This is akin to writing a bunch of instructions in a Word document, then writing tools to parse that Word document and produce a program, rather than just designing a program which lets you directly generate programs.

People often mention that it’s good to be grateful for what you have in life, etc. This isn’t a topic I spend much time thinking about. What use is there in noting what you think is good about the world, when that time could be spent experiencing the world/making it better? However, today I had a thought: instead of appealing to vague statements like “Be happy you’re alive”, etc., I think the notion of gratitude is a lot more impactful when you consider the size of the universe. The universe is ridiculously large. In the universe, there is a gigantic number of atoms. My body represents an extremely small fraction of all atoms in the universe. Based on my (layman’s) understanding of current science, it should be assumed that intelligent life (read: life which can perceive the universe, reflect on it, etc.) also represents a tiny fraction of all atoms in the universe. My atoms could just as easily be a blade of grass, a speck of dirt, or a chip of porcelain on a toilet. The odds that any given atom is going to be part of a being which can perceive the universe are (I’d assume – I haven’t done an actual calculation) very, very low. The odds of being a perceiving entity are overwhelmingly low.
Any day in which a person is alive, sensing, and perceiving the universe, is a day that their atoms are not hanging around as a motionless clump of stone. And that’s something to be very grateful for.
I use Ubuntu 12.04 from time to time, but I often forget that Ubuntu 12.04 has a really cool feature known as Search Videos. The Search Videos feature resides within Ubuntu Dash Home. Recently, I’ve been toying around with this particular feature, and I think it’s a really cool feature for Ubuntu. Cool enough that I made a video to ramble on about it. In my opinion, Google, as the current leader in the web search business, has a lot to worry about from Ubuntu Dash Home’s Search Videos feature, because this feature can be a model for other operating systems to implement their own unique search implementations. When more operating systems begin to implement unique search implementations, Google’s web search dominance might not be so dominant, if people begin to see that unique search implementations can yield better search results right on the desktop. For example, within Ubuntu Dash Home, when Linux users use the Search Videos feature, they don’t have to be bothered by irrelevant search results of other implementation types, such as article search. To put this another way, Linux users won’t have to worry about clicking on links that will lead them to anything else (e.g., articles, websites, etc…) but video/movie web links when they’re using unique search implementations on a desktop. In addition to returning web links, Ubuntu Dash Home’s Search Videos feature also allows Linux users to search for videos and movies that reside locally (i.e., videos and movies that can be found on the computer itself). Web search engines such as Google cannot do the same in this regard. With that being said, major search companies such as Google could totally roll out a desktop app that allows computer users to use unique search implementations. At the moment, it seems there is a drawback to using Ubuntu 12.04 Dash Home’s Search Videos feature.
The drawback I’m talking about is that you can’t actually add your own video sources. This limits the number of videos that can be presented within the Search Videos feature’s results at any one time. Nonetheless, I guess this limitation can also be a good thing, because reckless Linux users won’t be able to add malicious video sources to their desktop. It would be a nightmare for desktop security, and computer security in general, if malicious video sources spread viruses and malware. So, I guess in the end, it’s still about choosing security over usability, or vice versa. Anyhow, if you’re curious about Ubuntu Dash Home’s Search Videos implementation, why not check out the video that I made about it right after the break. Enjoy!!!
How do I write a method that calculates the sum of the integers between 1 and n? I thought of using n + sum(n-1), but I need to use the recursive definition that the sum of 1 to n is the sum of 1 to n/2 plus the sum of (n/2+1) to n. Assume that n is a positive integer.

4. Find f(2), f(3), f(4), and f(5) if f is defined recursively by f(0) = f(1) = 1 and for n = 1, 2, ...
a) f(n + 1) = f(n) - f(n - 1).
b) f(n + 1) = f(n)f(n - 1).
c) f(n + 1) = f(n)^2 + f(n - 1)^3.
d) f(n + 1) = f(n)/f(n - 1).

Need help finishing a program. It's the Hanoi tower recursion program. In addition to what I have, I also need a function that: 1. tells how long my computer takes to move the disks (in seconds) 2. if someone can move 1 disk per second, how long would it take them to move 100 discs? Both functions should be recursive and I'm

Determine whether each of these proposed definitions is a valid recursive definition of a function f from the set of non-negative integers to the set of integers. If f is well defined, find a formula for f(n) when n is a non-negative integer and prove that your formula is valid.
a) f(0) = 1, f(n) = -f(n - 1) for n >= 1

Please provide a little guidance on how to solve the following problem using recursion. I can understand solving it using iteration, but not recursion. Design a game called Jump It. It consists of a board of n rows of integers, all containing positive integers except the first one, which always contains 0. The object is to mo

I need a recursive function that accepts an integer and returns its reverse. int reverse( int n ); n = reverse(123); // this returns 321 I can do this easily without recursion but I have to use recursion.

(a) Use this recursion formula, c_(j+1) = (2(j+l+1-n)*c_j)/((j+1)(j+2l+2)), to confirm that when l = n-1 the radial wave function takes the form: R_(n,n-1) = (N_n)*r^(n-1)*e^(-r/(na)) (b) Calculate <r> and <r^2> for states psi_(n,n-1,m).

A. What is direct recursion? b. What is tail recursion? c.
Suppose that intArray is an array of integers, and length specifies the number of elements in intArray. Also suppose that low and high are two integers such that 0 <= low < length, 0 <= high < length, and low < high. That is, low and high are two indices in intArray
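The first question's halving recursion, the Hanoi move count, and the digit-reversal exercise from this thread can all be sketched in a few lines of Python (function names are mine, not from any poster's code):

```python
def sum_range(lo, hi):
    """Required definition: sum(lo..hi) = sum(lo..mid) + sum(mid+1..hi)."""
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return sum_range(lo, mid) + sum_range(mid + 1, hi)

def sum_to(n):
    """Sum of the integers between 1 and n, for positive n."""
    return sum_range(1, n)

def hanoi_moves(n):
    """Minimum moves for an n-disc Tower of Hanoi: 2**n - 1.
    At one disc per second, 100 discs take 2**100 - 1 seconds."""
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

def reverse(n):
    """Recursively reverse the digits of a non-negative integer."""
    def go(rest, acc):
        if rest == 0:
            return acc
        return go(rest // 10, acc * 10 + rest % 10)
    return go(n, 0)
```

For example, sum_to(10) returns 55, hanoi_moves(3) returns 7, and reverse(123) returns 321.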
The base class for both Trees is the PTNode abstract class. The class diagram looks like the following: The lines represent inheritance. All classes inherit from PTNode, and QTBag and QTAssoc inherit from QTOpNode. This design makes it easy to convert PTNodes to QTNodes. This conversion is needed for the inefficient OR treatment (see Section 6.3). The OpNodes have an array of children (of PTNode type) and the LeafNodes have a "content" data member. There is a cross-reference to the IndexMap class and the Intersection class discussed below. The IndexMap class incorporates the Sky Index, all Flux Indices, the Partition Map and all the necessary masks for the Sky Bits. We have 3 data members: The Intersection class has 4 IndexISect classes as data members, as discussed in Section 6.2. The IndexISect class consists of There are well-defined logical operations on Intersection and IndexISect. To represent queries geometrically in the multidimensional flux space, a linear combination query can be viewed as a hyperplane in the flux space. A hyperplane can be represented with an sxFluxConstraint object. The logical AND combination of hyperplanes results in a convex polyhedron, represented by the sxFluxConvex class. The logical OR combination of convexes gives a so-called domain, which is a union of convexes; it is represented by an sxFluxDomain. The most important functionality of these classes is to detect an intersection with a hypercube (a bounding box of the sxFluxIndex). sxFluxConstraint represents a linear combination query in the form: aCoeff[1]*aFlux[1] + aCoeff[2]*aFlux[2] + ... + aCoeff[gNumFluxes]*aFlux[gNumFluxes] + aCoeff[gNumFluxes+1] > 0 An exact equality is not allowed, since, because of the errors, it has zero measure. The observed objects' distribution mostly covers a compact region of the flux space. These are the regular objects. Due to measurement error, or because some objects are not ordinary stars or galaxies (like meteor traces, asteroids, UFOs), there are measured objects outside of this region. These are outliers.
Since they span a large region of the flux space, they can make bounding boxes useless. That's why they are stored separately. The query can be run on either of, or both of, these object sets. It is like a new dimension (but only with values 0 and 1) added to the flux dimensions. The BitList class is an array of bits with efficient logical operations defined on it. There are compress(stream) and decompress(stream) methods to compress itself out to a stream. A PCX-like compression is used. The scheme is the following: we do a byte-encoding, and write those bytes out in hex form (2 letters for each byte). The first bit (128) of each byte denotes whether the byte contains actual data or a count of bits. If the first bit is 0, the next 7 bits are data bits. If the first bit is 1, the next bit (64) is the value and the last 6 bits give the count (0-63). It does not make sense to count 0 times, so bits are only counted if there are at least 8 bits with the same value. So the count is 8-71. For more than 71 bits, the next byte is taken. The byte is then written out to the stream as a hex number. This compression results in a 14% blowup in size for the worst case (i.e. the BitList consists of 01010101...) and in a compression factor of 71 for the best case (all bits alike). Example: a series of 0101's of size 92 compresses into a series of 1111's of size 92 compresses into The dot and the value following it indicate how many bits are in the last byte, if any, and then the last byte follows in hex. In the above example, .101 means only a single bit=true. In this way the exact size of the BitList can be restored, not just chunks of 7. The BitListIterator class provides an efficient way to iterate through the BitList; see its methods for details.
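The byte-encoding scheme above can be sketched roughly in Python. This is my own reconstruction from the prose, not the actual BitList code, and it omits the trailing ".count" marker for the final partial byte:

```python
def compress_bits(bits):
    """Encode a list of 0/1 bits into a hex string, per the scheme above:
    0ddddddd  = 7 literal data bits (high bit clear),
    1vcccccc  = a run of (c + 8) copies of bit value v (high bit set)."""
    out = []
    i = 0
    n = len(bits)
    while i < n:
        # Length of the run of identical bits starting at position i.
        run = 1
        while i + run < n and bits[i + run] == bits[i]:
            run += 1
        if run >= 8:
            # Counted byte: runs of 8..71 bits; longer runs use more bytes.
            take = min(run, 71)
            out.append(0x80 | (0x40 if bits[i] else 0) | (take - 8))
            i += take
        else:
            # Literal byte: pack up to 7 bits, most significant first.
            chunk = bits[i:i + 7]
            val = 0
            for j, b in enumerate(chunk):
                if b:
                    val |= 1 << (6 - j)
            out.append(val)
            i += len(chunk)
    return "".join("{:02x}".format(b) for b in out)
```

For instance, 71 identical set bits compress into the single byte "ff" (the best case mentioned above), while alternating bits fall back to literal bytes.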
Fix assignments in array literals

When assigning inside an array literal, the compiler issues a weird error that doesn't make sense from the code presented to the user:

[c = 3]
c # Semantic Error: read before definition of 'c'

The reason for this is that the above code actually expands to this:

::Array(typeof(c = 3)).build(1) do |__temp|
  __temp[0] = c = 3
  1
end
c

So c is declared inside the build block and is not visible in the original scope. This patch changes array literal expansion to unwrap the behaviour of Array.build into the current scope. Our code example now expands to this:

__temp = ::Array(typeof(c = 3)).unsafe_build(1)
__temp.to_unsafe[0] = c = 3
c

A necessary change to stdlib's Array class is the new undocumented constructor unsafe_build, which initializes the buffer and sets size to capacity. To enable a smooth progression between Crystal releases, Array.unsafe_build should be merged alone, and the compiler change can follow after the next release.

Also adds codegen specs for assigning in array-like, hash-like and hash literals. They are already working correctly because there's no additional scope introduced.

Resolves #3195

EDIT: The original proposal was this; it was changed to the above during the course of the discussion.

__temp = ::Array(typeof(c = 3)).new(1)
__temp.to_unsafe[0] = c = 3
__temp.size = 1
c

We only need a tiny change in stdlib, because Array#size= is protected but we need to call it from outside (Array.build calls it internally). The method is undocumented, but to avoid accidental calls I named the public method unsafe_size=. size= can probably be removed, but that's left for later to ensure backwards compatibility.

I kind of dislike exposing setting the size of an array, even if it's undocumented. That said, anyone could reopen Array and do the same, so maybe this is fine.

Maybe a different expansion would be another alternative for this:

a = [1, b = 2, c = 3, 4]
c

Introducing temp variables for each element.
%e0 = 1
%e1 = b = 2
%e2 = c = 3
%e3 = 4
a = ::Array(typeof(c = 3)).build(4) do |__temp|
  __temp[0] = %e0
  __temp[1] = %e1
  __temp[2] = %e2
  __temp[3] = %e3
  4
end
c

This won't work if the assignment is nested even deeper.

I thought we could keep it simple and avoid all those additional temp variables and the block. This works:

__temp_2 = ::Array(typeof(c = 3)).new(1, uninitialized typeof(c = 3))
__temp_2.to_unsafe[0] = c = 3
__temp_2

But it seems I've hit some unrelated compiler bug with this: #10018

We could skip that uninitialized default value and either use an actual value or add an undocumented Array constructor that sets @size = @capacity. The latter is probably best because it avoids unnecessary initialization of the buffer.

This should be good now 👍 The literal now expands to:

__temp = ::Array(typeof(c = 3)).unsafe_build(1)
__temp.to_unsafe[0] = c = 3
c

I've updated the original description.

I just realized yesterday that instead of introducing Array.unsafe_build we could have just used Array.build(size) { size } 🤦 We could probably refactor that and remove unsafe_build. It's not public API, so it can be done quickly.
DevOps is to SDLC as MLOps is to Machine Learning Applications

If you have read the previous post about Security along the Container-based SDLC, then you have noted that DevOps and Security practices should be applied and embedded along the SDLC. Before, we had to understand the entire software production process and its sub-processes in order to apply these DevOps and Security practices. Well, in this post I’ll explain how to apply DevOps practices along the Machine Learning Software Applications Development Life Cycle (ML-SDLC), and I’ll share a set of tools focused on implementing MLOps.

Concepts and definitions

Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. https://en.wikipedia.org/wiki/Machine_learning

Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to “carry forward”. Inference is theoretically traditionally divided into deduction and induction. https://en.wikipedia.org/wiki/Inference

Data science is an inter-disciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. Data science is related to data mining and big data. Data science is a “concept to unify statistics, data analysis, machine learning and their related methods” in order to “understand and analyze actual phenomena” with data.
Data Science (& ML) challenges

Undoubtedly this era belongs to Artificial Intelligence (AI), and this results in the use of Machine Learning in almost every field, trying to solve different kinds of problems from healthcare to business and technical spaces; Machine Learning is everywhere. That, together with Open Source Software (OSS) and cloud-based distributed computing, has caused the appearance of many tools, techniques, and algorithms, so developing a Machine Learning model to solve a problem is not the challenge; the real challenge lies in the management of these models and their data at a massive scale. The Data Science (& ML) development process needs to learn from the SDLC (Software Engineering) in order to face these challenges. And what are these challenges? The answer is: they are the same challenges that the SDLC (Software Engineering) is facing by adopting DevOps practices, for example:

1. Data challenges

Dataset dependencies. Data in the training and evaluation stages can vary in real-world scenarios.

2. Model challenges

ML models are built in a data scientist's sandbox. A model is typically not developed with scalability in mind; rather, it is just developed to get good accuracy and the right algorithm. Scale ML Applications. Training a simple model, putting it into inference, and generating predictions is a simple and manual task. In real-world cases (at scale), everything must be automated, everywhere. Automation is the only way to achieve scalability in the different stages of the ML-SDLC. Monitoring, Alerting, Visualization and Metrics.

5. The MLOps Tool Scene

The effort involved in solving MLOps challenges can be reduced by leveraging a platform and applying it to the particular case. The AI (& ML) tool landscape is complex, with different tools specialising in different niches, and in some cases there are competing tools approaching similar problems in different ways (see the Linux Foundation's AI project below for a categorised list of tools).
What is MLOps?

MLOps (a compound of “machine learning” and “operations”) is a practice for collaboration and communication between data scientists and operations professionals to help manage the production ML (or deep learning) lifecycle. Similar to the DevOps or DataOps approaches, MLOps looks to increase automation and improve the quality of production ML while also focusing on business and regulatory requirements. While MLOps started as a set of best practices, it is slowly evolving into an independent approach to ML lifecycle management. MLOps applies to the entire lifecycle - from integrating with model generation (software development lifecycle, continuous integration/continuous delivery), orchestration, and deployment, to health, diagnostics, governance, and business metrics. https://en.wikipedia.org/wiki/MLOps

Selected tools to support MLOps and criteria used

If you have reviewed the Linux Foundation's AI project landscape above (a categorised list of tools), you have realized that there are plenty of different tools, commercial and open source. Well, the next list of products is a subset that follows these criteria:

1. Open Source. Perfect for early adopters, and also suitable for easily implementing proofs-of-concept or starting your own personal project.

Kubernetes and containers are the new platform where our applications are going to run and live, even ML applications. I don't want to waste effort integrating heterogeneous tools; I want a stack or platform with mature tools already integrated seamlessly.

- What would machine learning look like if you mixed in DevOps?
Wonder no more, we lift the lid on MLOps - By Ryan Dawson, 7 Mar 2020
- MLOps Platform: Productionizing Machine Learning Models - By Navdeep Singh Gill, 2 Sep 2019
- ML Ops: Machine Learning as an Engineering Discipline - By Cristiano Breuel, 3 Jan 2020
- MLOps: CI/CD for Machine Learning Pipelines & Model Deployment with Kubeflow - 25 Oct 2019
- The Linux Foundation's AI project landscape (categorised list of tools)
- The Institute for Ethical AI & Machine Learning
- Awesome production machine learning
Pex seems to mess with verbosity flag in Ansible

I'm not sure if this is even the right repository to file this issue, but I'm having an issue with pex and ansible, and I can't figure out what it is. Basically, ansible-playbook works totally normally in every respect when I run it as a PEX binary, except that the verbosity flags are not respected at all. We've dug into this with ipdb and the option seems to be set properly in Ansible everywhere we check, but something happens when it forks a task runner that causes it to not pass the option properly. I've got it down to a pretty minimal reproduction case using Docker. Just copy this into a file and do docker build -t pextest . to confirm things not working as expected.

# Dockerfile
FROM buildpack-deps:stretch
RUN apt-get update \
 && apt-get install -y \
    python-pip \
    python-dev
RUN pip install pex
RUN pex ansible -o ansible-playbook -c ansible-playbook
RUN pip install ansible
RUN echo "---\n- hosts: all\n  tasks:\n  - shell: echo hi" > /playbook.yml
RUN echo "[default]\nlocalhost ansible_connection=local" > /inventory
RUN echo "Working verbosity"
RUN ansible-playbook -vvv -i inventory playbook.yml
RUN echo "Verbosity not working"
RUN ./ansible-playbook -vvv -i inventory playbook.yml

Ideally pex wouldn't introduce any discrepancies in runtime behavior, which is why I bring this here. Perhaps we are building our executable wrong?

thanks for the excellent repro - I confirm the behavior on my machine as well. unzipping the pex and directly modifying the console script entrypoint for ansible-playbook seems to reveal that the options are making their way into and being parsed just fine by the playbook tool:

[omerta show]$ pex ansible -o ansible-playbook -c ansible-playbook
[omerta show]$ (cd exploded && unzip ../ansible-playbook)
[omerta show]$ cd exploded
[omerta exploded]$ find . -name ansible-playbook
./.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/bin/ansible-playbook
[omerta exploded]$ grep "\!\!\!"
./.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/bin/ansible-playbook
print('!!!!!!!!!!!!!!!!!!!!!!!! {} !!!!!!!!!!!!!!!!!!'.format(sys.argv))
print('!!!!!!!!!!!! cli.options.verbosity = {} !!!!!!!!!!!!!!!'.format(cli.options.verbosity))
[omerta exploded]$ python2.7 . -vvv -i inventory playbook.yml
!!!!!!!!!!!!!!!!!!!!!!!! ['ansible-playbook', '-vvv', '-i', 'inventory', 'playbook.yml'] !!!!!!!!!!!!!!!!!!
!!!!!!!!!!!! cli.options.verbosity = 3 !!!!!!!!!!!!!!!
No config file found; using defaults
[WARNING]: Host file not found: inventory
[WARNING]: provided hosts list is empty, only localhost is available

PLAY [all] *****************************************************************************************************************************
skipping: no hosts matched

PLAY RECAP *****************************************************************************************************************************

so there doesn't seem to be any issue on the receiving end fwict.

Aha, the issue is this:

root@25a932480c80:/pex-explode# git grep "import display" .deps/ansible-<IP_ADDRESS>-py2-none-any.whl
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/__init__.py: from __main__ import display
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/adhoc.py: from __main__ import display
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/console.py: from __main__ import display
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/doc.py: from __main__ import display
...
And...:

root@25a932480c80:/pex-explode# git grep -C3 "import display" .deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py-from ansible.vars import VariableManager
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py-
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py-try:
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py:    from __main__ import display
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py-except ImportError:
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py-    from ansible.utils.display import Display
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py-    display = Display()
root@25a932480c80:/pex-explode# git grep "display.verbosity =" .deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py
.deps/ansible-<IP_ADDRESS>-py2-none-any.whl/ansible/cli/playbook.py:        display.verbosity = self.options.verbosity

In combination with the fact that pex synthesizes a custom __main__.py that forwards to the underlying entrypoint, ansible's strategy of importing the display from the __main__ module as a hack to get a global display object fails. This import from __main__ and fallback to constructing a new Display object is pervasive in ansible modules, and it means verbosity settings fail to stick when the global import hack fails, which it ~uniquely does in pex. IOW, if, instead, ansible encapsulated a global Display in the display module itself, all display users could simply point there, say something like:

from ansible.utils.display import default_display

class Cli1(object):
    def __init__(self, ...):
        default_display().verbosity = self.options.verbosity

And then:

from ansible.utils.display import default_display

class UtilityThatDoesntKnowWhatContextItRunsIn(object):
    def debug(msg):
        default_display().debug(msg)

Wow! Thanks for the quick RC! You've saved us many hours.
I'll bring this to the Ansible folks and see if they're receptive to this idea.
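The module-level singleton suggested above can be sketched in plain Python. This is the proposed pattern, not ansible's actual code, and the names are hypothetical:

```python
# display.py -- sketch of a module that owns its own global Display,
# so callers never need the fragile `from __main__ import display` hack.
class Display:
    def __init__(self):
        self.verbosity = 0

    def debug(self, msg):
        # Only emit debug output at -vvv or higher.
        if self.verbosity >= 3:
            print(msg)

_default = None

def default_display():
    """Return the process-wide Display, creating it on first use."""
    global _default
    if _default is None:
        _default = Display()
    return _default
```

Any module can then call default_display().debug(...) and observe the verbosity the CLI set, regardless of how the __main__ module was synthesized.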
The PVS-Studio analyzer already has plugins for such IDEs from JetBrains as Rider, IntelliJ IDEA and Android Studio. Somehow we missed another IDE - CLion. The time has come to make amends! But why would you need PVS-Studio if CLion already has a code analyzer? What problems appeared during development? Keep reading to find answers to these questions. There will be no tech-hardcore in this article. This is more of a story about moments that we stumbled upon during the development process. Chill out and enjoy your reading. ;) Note. By the way, the JetBrains website provides the results of various surveys for 2021. The figures cover the most common language standards, IDEs, tools, etc. I highly recommend reading it; it's worth your time. CLion is one of the three most used IDEs / editors. What IDEs for C / C++ do you use? Glad you asked! Let me answer you with a small story. The example demonstrates how the static analyzer managed to detect the dereference of the null pointer 'buffer' obtained from the foo function. Andrey, our DevRel, decided to modify the example and see how CLion would handle it. Result: the same warning from CLion. PVS-Studio, however, issues two warnings. Here's an answer to the question that we are discussing - the PVS-Studio and CLion collaboration will let us detect more errors at the code-writing stage. CLion highlights the errors on the fly, but at the same time is limited in its analysis capabilities. PVS-Studio doesn't highlight errors immediately, but it can perform deeper analysis. Note that PVS-Studio has incremental analysis - a mode that checks only the changed files. As a result, the CLion analysis and the PVS-Studio analysis complement each other. Imagine what would happen if we also added the compiler's warnings... :) Truth be told, we had a plugin prototype for CLion... several years ago. Yes, some time ago we started developing it. But it was not destined to be finished, for a number of reasons. The prototype went on the shelf.
Now, since users became more and more interested, we decided to finish what we'd started. More precisely, we had a couple of possible paths: either modify the existing prototype, or start all over again. Spoiler: both were a bumpy ride. After we found the sources in the repository, we decided to check their level of readiness. What if there's not much left to finish? Surprisingly, the code compiled immediately, and that cheered us up. Note. Hello from the C# department! About a year ago we ported the C# analyzer to Linux and macOS. And what's interesting, we managed to run PVS-Studio on Linux on the very first day of working on the task! Yet the release of the PVS-Studio C# analyzer for Linux/macOS shipped only six months later. That tells you how many nuances get in the way... Good news: the code compiled, and the prototype already had some features. Bad news: we still needed to integrate it into the IDE. At the same time, the plugin-to-be must have the main features of the PVS-Studio plugins for the other IDEs from JetBrains: Rider, IntelliJ IDEA, Android Studio. It is clear that there will be some specifics. But in general, a user who tries PVS-Studio together with various IDEs should follow the same scenario for working with the analyzer and interact with the same settings and UI. As a result, we tried to take the plugin for Rider as a ready-made solution. However, we found out that we couldn't immediately reuse those developments for CLion. Too many specifics.
Interestingly, there is an instruction in the JetBrains blog that explains how to fix the sample to make it work. The edits helped launch the sample. However, the approach itself raised a question - why not just make a separate, working sample? As a result, after we combined the prototype with the template, we found out that... nothing worked. One might want to give up on the prototype and write everything from scratch. It seems that to do this we would only need a description of the API and IDE interaction. But not so fast. The results of the API documentation search looked like this: Jokes aside, we didn't find anything on the JetBrains website. At all. A dubious idea - to search through all the available classes in the hope of finding the right one. Our developers who made the PVS-Studio plugin for Rider confirmed our concerns - there is no documentation. But if you know where to find such docs - please, leave a comment. It will be useful to us, and it will be useful to others. Perhaps it's time to stop rushing to start everything from zero - we need to complete what we have. It may be painful, but it's easier than starting the development again. The decision was correct - after some time spent on debugging and editing, it turned out that the prototype, in general, was already doing what was needed. As a result, we quickly managed to teach it to get a list of source code files from the project. We gradually migrated the code, and the plugin acquired functionality. Eventually it started operating as we needed. But this applied only to Windows, since the main development was carried out on this OS. Initially we were writing code with cross-platform compatibility in mind, but after testing under Linux and macOS we had to add some improvements. In some cases, we had to do more than just edit the plugin code. We also had to delve into the core of the C++ analyzer. Indeed, it was a multilingual task - knowledge of both Java and C++ came in handy.
There were two major improvements: There wouldn't be an article if we didn't have the result. No intrigue here, alas. :) The PVS-Studio plugin for CLion is available for use. To try it, you need: You can request a license and download the analyzer here. Follow the link to get an extended trial period for 30 days. ;) The plugin installation is also simple. You can find the details in the relevant documentation section: how to download, enter the license and what to do next. Try it, use it, write us if something doesn't work. We also welcome all suggestions on improving the integration. And we can't help but ask a question: which IDE / editor do you want PVS-Studio to integrate into? Perhaps Visual Studio Code? ;)
Looking for ways to improve my memory (books, games, exercises, diet) What is the best way to improve my memory, games? diet? exercise, therapy? or other? medication? Asking what the best way is to improve your memory is like asking which tool in the tool box in your garage is the best. It depends on the job at hand. A hammer is not the best tool if you need to cut a board. And a saw is not the tool of choice if your goal is to drive a nail. The same with memory. If you want to remember people's names, learn the Face-Name memory system. If you want to memorize lists of items, learn the Peg System. If you want to memorize the facts in a book, learn the visualization-association memory technique. And so on. Having said that, it does make sense to do what you can to improve the health of your body and brain. Taking the tool analogy a bit further, your brain is like your workshop; you may have the best set of tools, but if your workshop is a dark, disorganized mess it is harder to get projects done. That's why I advocate a two-pronged approach to memory improvement. Half your effort should be focused on learning and using memory techniques, and the other half on improving the efficiency of your brain. - Memory Techniques. Use methods such as the Peg System, Link System, Keyword Method, Journey Systems, etc. to remember specific information. - Brain Health. Improve your diet, get more restful sleep, practice mindfulness meditation, exercise aerobically, etc. to improve the circulation to your brain, provide nutrients, consolidate memories, grow neurons, etc. Some people want a "magic pill" that will give them a great memory. There is no such thing, and memory improvement requires a lot more effort than popping a pill. The secret to a much better memory is cleaning up and organizing your workshop (so to speak), then selecting the right tools for whatever job you need your brain to do. That's the purpose of my website. 
To provide tools, techniques, and knowledge that you can use to get your mental house in order. Click the buttons along the left side of my site for details. You'll find explanations, recommendations, and discussions for everything from memory systems to study skills to which memory books are best. And, of course, over 100 free brain games in the brain games training section. Does improving your memory sound like a lot of work? Well, it is. But nothing in life that is worth having comes cheap; and you always get what you pay for. Are you willing to pay the price for a better memory? The improvement in my own memory has been well worth the time I've invested over the years. If you think about it, your memories and the body of knowledge you've accumulated (that you can recall) are your only true possessions. It's a shame more people do not make at least some effort to maximize what they are able to retain. Fortunately, you seem to be one of those who recognize the importance of a powerful memory, and I hope you will follow through. If you need clarification of anything you find on this site, don't hesitate to ask.
I'm Secretly Married to a Big Shot – Chapter 2027 – This Little Thing Really Had Him Under Her Thumb

Qiao Mianmian bit her lip. "Sister Xie called me and said that there's a charity ball tonight. It's the biggest charity ball in the industry, held every year. Half the celebrities in the entertainment industry will attend it."

The man could guess what she was thinking. Mo Yesi smiled. "What is it? Are you trying to tell me you took another job?"

"This charity event is quite influential, and they invited me. I-I think I should go. If we can help more people through our artistes, I think it's rather important. But I haven't agreed to it yet. I said I wanted to discuss it with you. If you agree, I'll go. If you don't, I won't go." Qiao Mianmian looked obedient as she threw the decision to him.

The man was silent for a long time. "You want to go?"

Qiao Mianmian's eyes lit up when she heard his question. She immediately said, "It just takes the evening. The party will end at five. We can have dinner together after the party."

Mo Yesi frowned. He furrowed his brows at the thought of his careful arrangements. "I can agree to let you go. But how many other such accidents will there be? If I arrange another date, will it also have to be canceled?" Mo Yesi could accept it if it was only for this one night.

Mo Yesi sounded rather calm, but Qiao Mianmian could tell he was unhappy. She pursed her lips and kept quiet. She seemed to respect and value his opinion.

She always used this trick on him. She seemed obedient, yet she used this roundabout approach to pressure him to agree. Just like how she would act coquettishly whenever she made him upset or wanted him to agree to something. This little thing knew him well. If he didn't like it, this little thing wouldn't have successfully used the same tactic on him so many times. But Mo Yesi had to admit that he liked it. This little thing really had him under her thumb. Mo Yesi was speechless.
Is there a GUI for configuring the Radeon Open Source Drivers and Mesa 3D similar to Catalyst Control Center? Is it possible to change the settings of the radeon open source drivers and the Mesa 3D graphics library? I am thinking of something similar to Catalyst Control Center that would allow me to control settings such as:
- toggling VSYNC
- toggling framebuffering
- changing 3D acceleration
- changing aspect ratio / stretching to fit the screen
- changing power settings
FWIW, my graphics card is a Radeon HD 4350 (RV710) and I am running Ubuntu 14.04. @DuminduMahawela Thanks, but none of these packages include a general purpose configuration utility for different graphical settings. I think I stumbled on a GUI for MESA at one point but I don't remember the name, nor do I know if it's still available on 14.04. What you are asking for does not exist. There are a few GUI tools available to control and monitor AMD open-source graphics under Linux, but none of them are as easy to use or install as AMD's CCC:
- DriConf - a GUI for controlling the direct rendering settings (OpenGL) of all open-source drivers. Pretty hard to use, and it hasn't been updated since 2006. Phoronix has an overview and comparison to AMD's CCC. DriConf can be found in the 'universe' repository of Ubuntu and can be downloaded from the Software Center.
- radeon-profile - controls power and clock settings on recent kernels (relies on the new radeon dynamic power management). Documentation and support may be found in this Phoronix thread. Unfortunately there aren't packages/PPAs available for radeon-profile; you will have to compile it manually.
DriConf is a very good suggestion. Actually it can be downloaded from the Software Center itself since it is in the 'universe' repository. Thank you for this answer. CCC only works with the proprietary drivers (FGLRX). Please note that this answer is about the open source graphics drivers for AMD cards (radeon/mesa). DriConf looks promising, though.
Would you care to expand on what settings you can configure with DriConf? I think it would be important to know if the settings asked for in the question can be modified with this tool. @Glutanimate I just downloaded DriConf but I simply couldn't understand the options available in it! (I must confess I am not an expert in these kinds of things.) So if you want I will post some screenshots and maybe you can understand it better. @Venki No, it's fine. I mean, if you want to improve the answer, sure go ahead, but I was mainly addressing pgr with my last comment. I thought they might be familiar with DriConf seeing how they suggested it. @Glutanimate Right, OK, after some research, DriConf was the best I could find in terms of a GUI. HOWEVER, changing many options requires editing text files, which is quite complex; it's easy to break something and not easy to revert these changes, but your options can be changed as far as I researched :)
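For context on what "editing text files" means here: DriConf is essentially a front-end over Mesa's DRI configuration file. A minimal sketch of doing the same thing by hand, assuming Mesa's drirc XML format and the `vblank_mode` option (the example writes to /tmp rather than the real ~/.drirc so nothing on the system is touched):

```shell
# Write a drirc-style config that disables VSync for all applications.
# Real location would be ~/.drirc (per user) or /etc/drirc (system-wide).
cat > /tmp/drirc.example <<'EOF'
<driconf>
  <device>
    <application name="Default">
      <option name="vblank_mode" value="0"/>
    </application>
  </device>
</driconf>
EOF

# A one-off per-application override is also possible via an environment
# variable, e.g.:  vblank_mode=0 glxgears
grep 'vblank_mode' /tmp/drirc.example
```

This is exactly why the changes feel fragile: a malformed XML edit silently breaks the whole config, and there is no GUI feedback.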
We were looking for web multiplayer games for working remotely, some friends like to play Beat Saber for exercise, and some wanted to play Beat Saber but didn't have a VR headset. What it does Yeet Saber is a web based multiplayer game that uses phone orientation/rotation data and websockets to enable playing Beat Saber with your phone as the controller for hitting blocks. Also, you can create and join rooms with a room ID to play the same Beat Saber song as your friends and see their scores, and upload official Beat Saber maps to play. This would be beneficial for de-stressing and taking breaks when working remotely, and getting some stretching and exercise. How we built it We obtained the device orientation using the deviceorientation event. Then, we forwarded this to our backend server. Our backend was written in Node.js. It acts as a proxy between people's device controllers and their desktop browsers. It also allows people in rooms to see each other's scores and states. It sends controller orientation data to the desktop it was paired with. The desktop page then uses Three.js to display 3D blocks. It also allows room hosts to upload Beat Saber maps so that everyone can play on the same map at the same time. Challenges we ran into We have never used DeviceOrientationEvents before and don't have that much experience with quaternion 3D rotations, so we spent a long time figuring out what the quaternions from the phone's orientation represent in terms of the x/y/z rotation of the blocks in game. We could only get the relative orientation of the phone and not the absolute position in 3D space, so we couldn't create all Beat Saber features like obstacles. We had to account for the fact that, for example, when you point the top of your phone to the left, there are multiple orientations which should all count as "rotating left" (screen facing up, down, toward you, away from you). 
Additionally, it turns out that the compass heading the DeviceOrientation events return are not consistent and drifts. Therefore, we had to make our calculations insensitive to the phone's compass heading. Hit detection was also fairly difficult, but we solved it by computing the cross product of the current rotation vector (where the top of the phone is pointing) and the previous animation frame's rotation vector to obtain the angular velocity. We then took this velocity and computed the dot product with each block's expected angular velocity. If these 2 vectors line up, it represents a hit. Another challenge is recording the demo since streaming and video call platforms have a lot of latency but we wanted to show the multiplayer features. Accomplishments that we're proud of We got more experience with 3D rendering in Three.JS and networking with websockets. Also, we actually got some code done despite having to work remotely and with only 2 people. What we learned Too many console.logs will crash Chrome Chrome and modern browsers do a lot of things to prevent spam (such as preventing autoplay of media) What's next for YeetSaber More lights and colours of blocks, just like the light effects of Beat Saber Improved rotation detection, with possibly position tracking with the help of AR systems.
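The hit-detection approach described above can be sketched in a few lines. This is an illustrative reconstruction, not the project's actual code: function names, the threshold, and the vector representation are assumptions.

```javascript
// Angular velocity ~ cross(previousDir, currentDir), where each "dir" is a
// unit vector for where the top of the phone points on an animation frame.
// A hit is registered when that angular velocity lines up (dot product above
// a threshold) with the block's expected swing direction.
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function normalize(a) {
  const len = Math.hypot(a[0], a[1], a[2]) || 1;
  return [a[0] / len, a[1] / len, a[2] / len];
}

function isHit(prevDir, currDir, expectedSwing, threshold = 0.7) {
  const angularVel = cross(prevDir, currDir);
  if (Math.hypot(...angularVel) < 1e-6) return false; // phone barely moved
  return dot(normalize(angularVel), normalize(expectedSwing)) > threshold;
}

// Swinging the phone from pointing up to pointing right produces an angular
// velocity along -z, so a block expecting that swing counts as hit:
console.log(isHit([0, 1, 0], [1, 0, 0], [0, 0, -1])); // true
```

Because only relative orientation is available, comparing successive direction vectors like this also sidesteps the compass-heading drift: a constant heading offset rotates both `prevDir` and `currDir` together and largely cancels out of the swing direction.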
It's extremely hard to stop a nasty bot, but as bot creators we've got a moral obligation to at least try. For Twitter bots, this means not DMing or @-messaging other users. For Slack bots, we should limit the permissions assigned to the bot to prevent it from issuing commands. In many ways, this is a doomed exercise from the start. Security experts will confirm that there is no sure-fire way to sanitize unrestricted user input. (For example, I can't really prevent you from putting malicious Python code into this tutorial, but it's deployed on a transient backend with no permanent storage, no Internet access, and nothing tied to me personally.) But even if it is theoretically...

Usually a wall is also built along an east-west axis close to the yakhchāl, and water is channeled from the north side of the wall so that the wall's shadow keeps the water cool and makes it freeze more quickly. In some yakhchāls, ice is also brought in from nearby mountains for storage or to seed the icing process. The subterranean space, combined with the thick heat-resistant construction material, insulated the storage space year round. These structures were predominantly built and used in Persia. Many that were built hundreds of years ago remain standing.

The bots are coming, and soon: to improve customer service, enterprises are embracing artificial intelligence, and the change ripples through IVR systems. I'll show you some introductory-level chatbot techniques by writing software modeled after the dialectical capabilities of the brogrammer. 'Within the bots list, some good ideas for tech support IM bots using domain-specific bot identities.' The first thing to do when mapping out your own chatbot is to determine what unique value it could add for the user. 'The larva of the sheep bot fly is a parasite that lives on mucous surfaces in the nasal passages and sinuses of sheep and goats.' Though we do our best to help out on a timely basis, we have no guarantee on the above resources. If you need an SLA on support from us, it's suggested you invest in an Azure Support plan. My advice here is to go in and create a free account with Chatfuel and just start playing around. They have a really useful "Test This Chatbot" feature that pings you a message in Messenger to start engaging with your bot in development mode, so that you can test it out and improve it before going live. To create a bot that collects input from the user through a guided dialog, select the Form template. A form bot is meant to gather a specific set of information from the user. MEOKAY is one of the top tools to create a conversational Messenger bot. It makes it easy for both skilled developers and non-developers to take part in creating a series of easy-to-follow steps. Click-through rates and overall engagement are extremely high at the moment. You are much more likely to get engagement inside Messenger than from within your Facebook page, because the communication is 1:1 and you are not competing with others in the newsfeed.
Since 2003, I have been working on and off on Baima 白马, an endangered Tibeto-Burman language spoken in the South-West of China. Baima is a non-written endangered Tibeto-Burman language, spoken in three counties in Sichuan Province (Jiuzhaigou 九寨沟, Songpan 松潘, Pingwu 平武) and one county in Gansu Province (Wenxian 文县) in the People's Republic of China (PRC). The Baima people reside in the mountainous areas bordering these counties, and they live in the immediate proximity of Qiang, Chinese, and Tibetan ethnic groups. The status of the Baima language is a matter of controversy. Is it a separate language or a dialect of Tibetan? Officially classified as Tibetans in the 1950s, the Baima people advanced claims as an independent ethnic group in the 1960s and 1970s. In 1978 and 1979, a group of PRC researchers conducted two surveys in the Baima areas and published two collections of papers, in which the Baima were claimed to be descendants of the ancient Di 氐 tribe, which set up influential kingdoms in the 3rd through the 6th centuries in the areas currently inhabited by the Baima. Despite the conclusion that the Baima people constitute a distinct ethnic group rather than a branch of Tibetans, they were never officially reclassified. Reclassification of ethnic groups listed as Tibetans remains a sensitive issue in the PRC, and is considered by many Tibetans as an attack on Tibetan identity by the Chinese government. Overshadowed by such political contention, the Baima language remains poorly documented to date. Baima is considered a distinct language by its speakers and is not mutually intelligible with various Tibetan dialects spoken in its neighborhood. The spheres of activity in which Baima is used are limited to religious and ceremonial contexts, as well as to interpersonal communication in Baima villages. The language of communication with neighboring communities throughout all Baima-inhabited areas is Mandarin Chinese. 
The following scholars have conducted Baima research: In the course of my work on Baima, I co-authored with Sun Hongkai 孙宏开 and Liu Guangkun 刘光坤 the book Baima yu yanjiu 白马语研究 [A Study of the Baima Language], which was published in 2007. In that same year the London-based Endangered Languages Documentation Programme awarded my project proposal "Documentation of Four Varieties of Baima" with a grant. Since then, I published several academic articles on Baima (which can be downloaded from the Publications page), and I am working towards a comprehensive grammar of Baima in English, accompanied by a collection of texts and a wordlist.
If you think about SeatGeek like we do, then in your head you probably picture an event page. You know, the page that has the venue map, the colorful dots, and the big list of tickets from all over the web, ranked by Deal Score. All the best reasons to know & love SeatGeek are encapsulated in this one single page. And it is not only the functional core of SeatGeek, it’s also our most highly-trafficked page type. With so much riding on the event page, we’re constantly working on incremental and under-the-hood improvements. We normally avoid committing obvious, disruptive changes, but a few times in SeatGeek’s history we’ve launched major redesigns of our event page—the most recent of which happened earlier today. Here I’ll give an overview of the latest changes and, for posterity, a quick tour through earlier SeatGeek event page history. In the year and a half since we launched the last major version of the event page we started making mobile apps. Designing for mobile devices forced us to reconsider the SeatGeek experience from scratch, and once we launched our apps—in particular our iPad app—they became new sources of inspiration for the website. For example, we began to think much harder about conservation of screen real estate. Internally, today’s milestone inherited the name “Omnibox” from an eponymous Google feature. Not Chrome’s address bar, but rather a more obscure reference to a CSS class found in the new Google Maps’ control panel. Although many people have griped about Google Maps’ recent update, we admired the idea of having a single page element whose content could change based on interactions with the underlying map. In the main sidebar, we swapped our large, color-filled section headers and deal score labels for more elegant circles and lines that more closely resemble our latest iOS designs. We also moved the filter controls and box office link from the top of the sidebar to the bottom. 
The result is that ticket listings get higher precedence on the page. The new version of the section info view (below, on the right) looks very similar to the old, with the notable exception that it doesn’t appear in a popover overlaid on the map, but rather in the sidebar. Popovers had a lot of annoying characteristics, not least of which was that they were starting to feel decidedly web-1.0-y. As an added bonus, under the new sidebar scheme, it’s now possible to apply search filters to tickets within individual sections. If you can believe it, the old version of the ticket info view (below, on the left) was actually a second popover that appeared beside the first popover containing section info. Now that all this information is in the sidebar, the map won’t get cluttered (which was especially problematic on smaller viewports), and the ticket details are much more legible. Last but not least, we moved the old event info bar (seen in the top half of the image below) into the site header. This frees up more space for the venue map. In order to make room for event info in the new site header, we consolidated the category page links (i.e. “MLB”, “NFL”, etc.) into a dropdown off the main SeatGeek logo. Earlier event pages Here we take a walk down memory lane, through the annals of SeatGeek event page history. We’ll begin with the very first 2009-era event page before venue maps even existed and end on today’s latest event page release, ten notable iterations later. Full disclosure: there’s a gratuitous, self-indulgent number of screenshots ahead. Only the most obsessive SeatGeek fans need read any further. #1 The original SeatGeek event page was launched—along with SeatGeek itself—in September 2009. It contained no venue maps. SeatGeek was all about price forecasting, and making recommendations about whether to buy a ticket now, or wait to buy. If you wanted to buy, there was a table of tickets from various sellers, sorted by price.
#2 In early 2010, SeatGeek licensed the rights to use venue maps, provided by a third party named SeatQuest (now defunct). According to engineer #1, working with SeatQuest maps was reportedly a nightmare. #3 Before long, ticket listings and venue maps started stealing screen real estate away from the price forecast part of the page. #4 The event page’s first major redesign happened in Summer 2010. #5 Soon after the Summer 2010 redesign, we scrapped SeatQuest in favor of our own venue maps, which should look a lot more familiar to current SeatGeek users. Also worth pointing out that by now the price forecast feature is relegated to a small-ish button area above the map, and restricted to signed-in users. #6 Sometime in early 2011, we made the long-standing switch from a lefty sidebar to a righty—a change that would persist all the way until yesterday. #7 In mid/late 2011, we redesigned the site again. Note the dark blue primary colors, and the new styling for the sidebar. #8 In the first half of 2012, the dark blue from the previous version softened into the lighter SeatGeek blue of today. #9 This update featured some new sidebar styling and abolished the permanently overlaid search bar in favor of a more compact event info bar. This version reigned supreme from Fall 2012 all the way until March 12, 2014. #10 Omnibox: The cleanest SG event page yet. (Note the lefty sidebar, a clear throwback to the year 2010.)
This is my first public post on a stock purchase of mine and why I like it. This post is likely wrong, probably has incorrect facts and dodgy analysis. This is produced from my scratchy notes and quick numbers. This post is not financial advice. Do your own research. 🚀🌚 Pureprofile is a business that conducts online market research, capturing data and insights via its global panel, operates a self-service SaaS portal for insight capturing and also uses its data to provide insights to media and ad agencies. The business went through a capital raise at 2c per share (based on a discount to the share price at time of 2.4c). This capital raise was in November and December 2020, and the share price is still holding as of today (2.6c at time of writing on 25/2/21). The Chairman's Address to Shareholders on 29th January 2021 notes that EBIT is to be at the upper end of guidance of $2.5-3m, so we can say that it should be close to $3m EBIT for the year (with current market cap of ~$27.5m!). This stock is of interest to me for a few reasons: - The debt the business was saddled with (was $20m at 20% interest!) is now generally cleared (now $3m at 8.5%). - Used Covid-19 to dramatically restructure their cost base (down 25%). - The business market cap is at essentially 1x revenue. - The SaaS platform (the “Platform”) that they have been growing. I’ll focus on the last point. Revenue for the Platform (SaaS) product is increasing at a good rate and I do not think that this is reflected in the market cap. - FY20 Q1: $110k - FY20 Q2: $114k (4% Growth QoQ) - FY20 Q3: $131k (9% Growth QoQ) - FY20 Q4: $142k (8% Growth QoQ) - FY21 Q1: $233k (64% Growth QoQ) - FY21 Q2: $200k (14% Decline QoQ) If we compare Q1 and Q2 YoY growth, we’re at 118% and 75% annual growth and current annualised revenue approaching $1m. Based on a sample of an assortment of ASX SaaS businesses' revenue multiples compared to market cap, the multiple on average comes out to 11.77 (excluding CRO’s ridiculous multiple).
If PPL can achieve another 75% growth for the year of the SaaS Platform business, we’d be looking at a market cap just for the Platform component of the business of ~$20m. With 118% growth, a ~$25m market cap - what the entire business is nearly valued at now. This is excluding the other profitable, and potentially growing revenue streams - based on international expansion and general sales growth after a tough year in 2020. If we just value the Platform component at the current ~$1m annualised rate, and then add the remainder of the business at 1x revenue, we’d be at a market cap of: Market Cap = ( ARR for platform x 11.77 ) + ( ( ( Q1 Revenue + Q2 Revenue ) x 2 ) - ( ( Q1 Platform Revenue + Q2 Platform Revenue ) x 2 ) ) = ( ( $433k x 2 ) x 11.77 ) + ( ( ( $6.4m + $8.2m ) x 2 ) - ( ( $233k + $200k ) x 2 ) ) With 1,057,734,594 shares on issue, this is a share price of 3.6c per share. And that is valuing the non-Platform business at 1x and the Platform at 11.77x on current revenue. If we can see growth of the non-Platform business of 30% (looks doable based on Q2 update) and 75% for Platform, the market cap would look like: Market Cap = ( ARR for platform x 11.77 x Growth ) + ( ( ( Q1 Revenue + Q2 Revenue ) x 2 x Growth ) - ( ( Q1 Platform Revenue + Q2 Platform Revenue ) x 2 x Growth ) ) = ( ( $433k x 2 ) x 11.77 x 1.75 ) + ( ( ( $6.4m + $8.2m ) x 2 x 1.3 ) - ( ( $233k + $200k ) x 2 x 1.75 ) ) Or a share price of 5.1c per share. And if we can actually get the non-Platform business valued at something more than 1x (say 2x): Market Cap = ( ARR for platform x 11.77 x Growth ) + ( ( ( Q1 Revenue + Q2 Revenue ) x 2 x Growth x Multiplier ) - ( ( Q1 Platform Revenue + Q2 Platform Revenue ) x 2 x Growth ) ) = ( ( $433k x 2 ) x 11.77 x 1.75 ) + ( ( ( $6.4m + $8.2m ) x 2 x 1.3 x 2 ) - ( ( $233k + $200k ) x 2 x 1.75 ) ) Or a share price of 8.7c per share.
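The back-of-envelope arithmetic above can be checked with a short script (all figures taken from the post; the function and parameter names are just for illustration):

```python
# Reproduce the three valuation scenarios from the post.
SHARES = 1_057_734_594
SAAS_MULTIPLE = 11.77

plat_arr = (233_000 + 200_000) * 2        # annualised Platform revenue (FY21 H1 x 2)
total_rev = (6_400_000 + 8_200_000) * 2   # annualised total revenue (FY21 H1 x 2)

def share_price(plat_growth=1.0, rest_growth=1.0, rest_multiple=1.0):
    """Implied share price in cents under the post's formula."""
    platform_value = plat_arr * SAAS_MULTIPLE * plat_growth
    rest_value = total_rev * rest_growth * rest_multiple - plat_arr * plat_growth
    return (platform_value + rest_value) / SHARES * 100

print(round(share_price(), 1))                            # -> 3.6
print(round(share_price(1.75, 1.3), 1))                   # -> 5.1
print(round(share_price(1.75, 1.3, rest_multiple=2), 1))  # -> 8.7
```

The three calls correspond to the three scenarios: current revenue at 1x/11.77x, then 30%/75% growth, then the non-Platform business re-rated to 2x.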
- My target share price for the near term is 3.6c - based on properly valuing the Platform (SaaS) component of the business. - For the medium term, a 5.1c per share is achievable based on current growth. - If the non-Platform component of the business is valued at something more than 1x revenue, even just a 2x, we’d be looking at a share price of 8.7c.
buildLib converts an AMDIS library into a CSV file in the format required by Metab. buildLib is a function to convert a .MSL file of an AMDIS library into a CSV file with the format required by Metab. when AmdisLib is missing, a dialog box will pop up allowing the user to point and click to the .MSL file from which the data is to be read. Alternatively, AmdisLib can take a character string naming the path to the .MSL file to be read or the name of a variable (data frame) containing the .MSL file. when save = TRUE and folder is missing, a pop up dialog box will be presented to the user. The user can then select the directory to which the results will be saved. Alternatively, folder can take a character string naming the path to the folder where the results must be saved. a logical vector (TRUE or FALSE) defining if the results must be saved into a CSV file (default = TRUE). A character string indicating the name of the file storing the results generated by buildLib (default = ion_lib.csv). A 'logical' defining if the progress bar should be displayed. The Automated Mass Spectral Deconvolution and Identification System (AMDIS) is software developed by NIST (http://chemdata.nist.gov/mass-spc/amdis/). It makes use of a mass spectral library composed of two files, a .CID and a .MSL file. buildLib allows a quick conversion of the AMDIS library into a .CSV file with the format required by Metab. For this, buildLib requires only the .MSL file of the AMDIS library. buildLib returns a data frame containing the following information: Column 1: The name of each metabolite present in the .MSL file; Column 2: The expected retention time (RT) of each metabolite; Columns 3 to 6: The 4 ion mass fragments used to identify each metabolite. The ion mass fragment 1 is used by MetaBox as reference to detect and quantify each metabolite; Columns 7 to 9: The expected ratio of the ion mass fragments 2, 3 and 4 in relation to the ion mass fragment 1.
In addition, buildLib verifies the existence of metabolites expected at similar RT (less than 1 minute difference) that use the same ion mass fragments as reference. These metabolites are probably strongly coeluted, which may make their correct identification difficult. Metabolites showing these characteristics are presented to the user at the end of the run. We strongly suggest selecting different ion mass fragments for identifying such compounds. The .MSL file of the AMDIS library must contain the expected RT of each compound. Raphael Aggio <firstname.lastname@example.org> Aggio, R., Villas-Boas, S. G., & Ruggiero, K. (2011). Metab: an R package for high-throughput analysis of metabolomics data generated by GC-MS. Bioinformatics, 27(16), 2316-2318. doi: 10.1093/bioinformatics/btr379
Chatbots with Pinecone Make your chatbot answer right. Companies have been using chatbot applications for years to provide responses to their users. While early adopters were limited to providing responses based on the information available to them at the time, foundational AI models like Large Language Models (LLMs) have enabled chatbots to also consider the context and relevance of a response. Today, AI-powered chatbots are able to make sense of information across almost any topic, and provide the most relevant responses possible. With continued advancements in AI, companies can now leverage AI models built specifically for chatbot use cases — chatbot models — to automatically generate relevant and personalized responses. This class of AI is referred to as generative AI. Generative AI has transformed the world of search, enabling chatbots to have more human-like interactions with their users. Chatbot models (e.g. OpenAI’s ChatGPT) combined with vector databases (e.g. Pinecone) are leading the charge on democratizing AI for chatbot applications, enabling users of any size — from hobbyists to large enterprises — to incorporate the power of generative AI into a wide range of use cases. Chatbot use cases Chatbots trained on the latest AI models have access to an extensive worldview, and when paired with the long-term memory of a vector database like Pinecone, they can generate and provide highly relevant, grounded responses — particularly for niche or proprietary topics. Companies rely on these AI-powered chatbots for a variety of applications and use cases. - Technical support: Resolve technical issues faster by generating accurate and helpful documentation or instructions for your users to follow. - Self-serve knowledgebase: Save time and boost productivity for your teams by enabling them to quickly answer questions and gather information from an internal knowledgebase.
- Shopping assistant: Improve your customer experience by helping shoppers better navigate the site, explore product offerings, and successfully find what they are looking for. Challenges when building AI-powered chatbots: There are many benefits to chatbot models, such as enabling applications to improve the efficiency and accuracy of searching for information. However, when it comes to building and managing an AI-powered chatbot application, there can be challenges when responding to industry-specific or internal queries, especially at large scale. Some common limitations include: - Hallucinations: If the chatbot model doesn’t have access to proprietary or niche data, it will hallucinate answers for things it doesn’t know or have context for. This means users will receive the wrong answer. And without citations to verify the source of the content, it can be difficult to confirm whether a certain response is hallucinated. - Context limits: Chatbot models need context with every prompt to improve answer quality and relevance, but there’s a size limit to how much additional context a query can support. - High query latencies: Adding context to chatbot models is expensive and time-consuming. More context means more processing and consumption, so adding long contexts to embeddings can be prohibitive. - Inefficient knowledge updates: AI models require many tens of thousands of high-cost GPU training hours to retrain on up-to-date information. And once the training process completes, the AI model is stuck in a “frozen” version of the world it saw during training. Vector databases as long-term memory for chatbots While AI models are trained on billions of data points, they don’t retain or remember information for long periods of time. In other words, they don’t have long-term memory. You can feed the model contextual clues to generate more relevant information, but there is a limit to how much context a model can support.
With a vector database like Pinecone, there are no context limits. Vector databases provide chatbot models with a data store or knowledgebase, so context can be retained for longer periods of time and in memory-efficient ways. Chatbot applications can retrieve contextually relevant and up-to-date embeddings from memory instead of from the model itself. This not only ensures more consistently correct answers, especially for niche or proprietary topics, but also enables chatbot models to respond faster to queries by avoiding the computational overhead needed to retrain or update the model. Building chatbots with Pinecone Pinecone is a fully-managed vector database solution built for production-ready AI applications. As an external knowledge base, Pinecone provides the long-term memory for chatbot applications to leverage context from memory and ensure grounded, up-to-date responses. Benefits of building with Pinecone - Ease of use: Get started in no time with our free plan, and access Pinecone through the console, an easy-to-use REST API, or one of our clients (Python, Node, Java, Go). Jumpstart your project by referencing our extensive documentation, example notebooks and applications, and many integrations. - Better results: With long-term memory, chatbot models can retrieve relevant contexts from Pinecone to enhance the prompts and generate an answer backed by real data sources. For hybrid search use cases, leverage our sparse-dense index support (using any LLM or sparse model) for the best results. - Highly scalable: Pinecone supports billions of vector embeddings, so you can store and retain the context you need without hitting context limits. And with live index updates, your dataset is always up-to-date and available in real time. - Ultra-low query latencies: Providing a smaller amount of much more relevant context lets you minimize end-to-end chatbot latency and consumption.
Further minimize network latency by choosing the cloud and region that work best (learn more on our pricing page). - Multi-modal support: Build applications that can process and respond with text, images, audio, and other modalities. Supporting multiple modalities creates a richer dataset and more ways for customers to interact with your application. How it works With a basic implementation, the workflow is tied directly to Pinecone to consistently ensure correct, grounded responses. To get started: - Step 1: Take data from the data warehouse and generate vector embeddings using an AI model (e.g. sentence transformers or OpenAI’s embedding models). - Step 2: Save those embeddings in Pinecone. - Step 3: From your application, embed queries using the same AI model to create a “query vector”. - Step 4: Search through Pinecone using the embedded query, and receive ranked results based on similarity or relevance to the query. - Step 5: Attach the text of the retrieved results to the original query as contexts, and send both as a prompt to a generative AI model for grounded, relevant responses. Agent-plus-tools implementation: Another way to get started is by implementing Pinecone as an agent tool. Below is an example workflow using OpenAI’s ChatGPT retrieval plugin with Pinecone: - Step 1: Fork chatgpt-retrieval-plugin from OpenAI. - Step 2: Set the environment variables as per this tutorial. - Step 3: Embed your documents using the retrieval plugin’s “/upsert” endpoint. - Step 4: Host the retrieval plugin on a cloud computing service like DigitalOcean. - Step 5: Install the plugin via ChatGPT using “Develop your own plugin”. - Step 6: Ask ChatGPT questions about the information indexed in your new plugin. Check out our notebook and video for an in-depth walkthrough of the ChatGPT retrieval plugin.
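To make the basic five-step workflow concrete, here is a self-contained Python sketch. The character-frequency "embedder" and the in-memory cosine-similarity search are toy stand-ins for a real embedding model and a Pinecone index; only the overall retrieval-augmented pattern is the point.

```python
import math

def embed(text: str) -> list[float]:
    """Toy embedder: character-frequency vector over a-z (stand-in for a real model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Steps 1-2: embed documents and store them (what Pinecone's upsert would do).
docs = [
    "pinecone is a vector database",
    "chatbots answer user questions",
    "containers are lighter than virtual machines",
]
index = [(doc, embed(doc)) for doc in docs]

# Steps 3-4: embed the query the same way and rank stored vectors by similarity.
query = "what is a vector database"
qvec = embed(query)
ranked = sorted(index, key=lambda item: cosine(qvec, item[1]), reverse=True)

# Step 5: attach the top result to the query as context for the LLM prompt.
context = ranked[0][0]
prompt = f"Context: {context}\n\nQuestion: {query}"
print(prompt)
```

In a real deployment the `embed` call would hit an embedding model, and `index`/`ranked` would be replaced by calls to a managed vector database, but the shape of the data flow is the same.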
Is Solaris Zones a container? Solaris Containers (including Solaris Zones) is an implementation of operating-system-level virtualization technology for x86 and SPARC systems, first released publicly in February 2004 in build 51 beta of Solaris 10, and subsequently in the first full release of Solaris 10, 2005. What is an Oracle Solaris Zone? About Oracle Solaris Zones: a zone is a virtualized operating system environment created within a single instance of the Solaris OS. Within a zone, the operating system is presented to applications as a virtual operating system environment that is isolated and secure. Can VMware run containers? VMware Tanzu is designed for containers and modern apps. VMware Tanzu drives modern applications on modern infrastructure. It simplifies operating containers across multi- and hybrid-cloud environments, while freeing developers to build great apps that support continuous delivery workflows. Is LXC a containerization technology? Contents. LXC (LinuX Containers) is an OS-level virtualization technology that allows creation and running of multiple isolated Linux virtual environments (VEs) on a single control host. These isolated environments, or containers, can be used either to sandbox specific applications or to emulate an entirely new host. How do you create a zone? How to Configure the Zone - Become an administrator. - Set up a zone configuration with the zone name you have chosen. - Create the new zone configuration. - Set the zone path, /zones/my-zone in this procedure. - Set the autoboot value. - Set persistent boot arguments for the zone. - Dedicate one CPU to the zone. How do I create a zone in Solaris 11? Creating Your First Zone: testzone - Step 1: Configure an Oracle Solaris Zone. Let’s start by creating a simple test zone using the command line, as shown in Listing 1. - Step 2: Install the Zone. - Step 3: Boot and Complete the System Configuration. - Step 4: Log In to Your Zone. What is Solaris virtualization?
Oracle Solaris 11 is a complete, integrated and open platform engineered for large-scale enterprise environments. Its built-in virtualization provides a highly efficient and scalable solution that sits at the core of that platform. How do I find the global zone from a local zone? Check with the arp -a command for the MAC address in the non-global zone and compare the same MAC address in the global zone. Check the IP assigned to the interface that matches the MAC address. In general, we can’t identify the global zone from within a non-global zone; however, running /usr/bin/zone-global-name will list the name of the global zone. How do I create a non-global zone in Solaris 11? How to Create and Deploy a Non-Global Zone - Become a zone administrator. - Create the zone. - From the global zone, install the non-global zone. - Boot the zone. - If you did not use a configuration profile in Step 3, manually perform the zone’s system configuration. Which is better, a VM or a container? Containers are more lightweight than VMs, as their images are measured in megabytes rather than gigabytes. Containers require fewer IT resources to deploy, run, and manage. Containers spin up in milliseconds, since they are an order of magnitude smaller. When should you not use containers? When to avoid Docker? - Your software product is a desktop application. - Your project is relatively small and simple. - Your development team consists of one developer. - You are looking for a solution to speed up your application. - Your development team consists mostly of MacBook users. Which is better, LXC or Docker? LXC is less scalable compared to Docker. Its images are not as lightweight as those of Docker. However, LXC images are more lightweight than those of physical machines or virtual machines. That makes it ideal for on-demand provisioning and autoscaling. Is LXC obsolete? The LXC 4.0 branch is supported until June 2025. How do you build a zone room? Define Boundaries – Create clear zones. Use walls and large furniture to define the outside edges of your zones.
For inside edges, use smaller furniture such as chairs, low tables and carpets to create definition. Create a Visible Path – keep the pathway in and out of your room clear and visible. What is a Zone in Revit? Because zones are a collection of spaces, you typically create zones after spaces have been placed in the model. However, you could create zones first according to specific environments, then assign spaces to the zones that you created. How do I start a Solaris Zone? How to Boot the Zone - Become superuser, or assume the Primary Administrator role. - Use the zoneadm command with the -z option, the name of the zone (which is s-zone), and the boot subcommand to boot the zone. - When the boot completes, use the list subcommand with the -v option to verify the status.
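Putting the zone configuration, installation, and boot steps above together, a typical command sequence looks roughly like the following sketch. The zone name and path are placeholder values, the commands must be run as an administrator in the global zone, and details vary a little between Solaris 10 and Solaris 11:

```shell
# Configure the zone; zonecfg reads its subcommands from stdin here.
zonecfg -z testzone <<'EOF'
create
set zonepath=/zones/testzone
set autoboot=true
commit
EOF

# Install and boot the zone, then verify its status.
zoneadm -z testzone install
zoneadm -z testzone boot
zoneadm list -cv

# Console login to complete the initial system configuration.
zlogin -C testzone

# Inside any zone, zonename(1) prints the current zone's name
# ("global" in the global zone), an easy way to tell where you are.
zonename
```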
Well, it depends on the case design. If it's well made, you can take out the motherboard tray, attach the motherboard to it, add the parts, and then slot the tray back in, locking it into place. There are a few cases which can be funny though, where the PSU won't go in if you don't add the motherboard first, so also check how the PSU is fitted. If it's positioned above where the motherboard will go, you're fine. You'll be able to see what type of case you have simply by looking inside. If it has some clips connected to something that looks rather like a tray, then it's okay to take this off and add the parts first. Whatever you do though, protect yourself from static, okay? There's nothing worse than killing your components just because you hadn't earthed yourself. Anti-static bracelets, which are cheap, are one way; they plug into a wall socket to earth you. Some people touch radiators every 5 minutes or so, or plug the PSU into the PC so the case earths them. I'd also advise not building the system in a room with a carpet, where static is most likely to build up. First, slot the CPU into the desired socket on the motherboard. There is a marked corner to help you position it correctly, and it should just drop in nice and easy. There will also be a sticker on top of the heatsink; behind it is the thermal compound that helps the cooling process. Simply take that off and you are now ready to place the heatsink and fan on top. AMD CPU heatsinks can be hard to fit, but the catch is simple once you get used to how it works. It should click into place without a lot of force, but even if you do have to push, the chip is not likely to get damaged as long as the heatsink is positioned the right way (there is a guide in the manual, of course), so this should be easy enough to fit too. If your RAM is dual-channel compatible, inserting the sticks in specific slots on the motherboard will turn this feature on. Usually it's the 1st and 2nd slot, or on some boards the 1st and 3rd slot.
Check the manual. After it is all plugged in and all is go, everything should just install itself (motherboard drivers, etc.), except if your hard drive is SATA, in which case, before installing Windows, using the guided help it gives you, you'll have to load the SATA drivers that should come with your motherboard before Windows can be installed to the drive. After that is done, though, and Windows is installed, you'll be fine. I'd check the BIOS by pressing DEL at bootup to verify that all the settings are the way you want them, for example that the onboard video chip is disabled, and the sound chip too if you have one plugged into a PCI port. Other than that, your computer is all set to go. I hope this helps.
At Assignment Expert, our aim is to help you with all your computer science programming projects and computer science homework in a way that helps you achieve your targets and accomplish your goals. Our computer science homework help is personalized to your specifications, every time. Our services are budget-friendly and very affordable, so you can enjoy support from us without it digging a hole in your pockets. Learning computing will give you valuable knowledge, whether you want to be a scientist, develop the newest application, or simply understand the basics of a computer system. Our computer science specialists are well qualified, from top institutes. They leverage their broad experience and follow a simple approach to prepare detailed, step-by-step computer science assignment help solutions. Their around-the-clock availability ensures that you get the best computer science project help even under tight deadlines. We are committed to providing plagiarism-free, well-referenced answers that meet the deadlines. There are tutors who are capable of meeting with you online at any time, wherever you are. If you are one of those with a very restricted schedule and constantly on the move, then we are the best choice for you. If you are doing an assignment on computer science, you need serious preparation, which would take up a lot of time. We at Assignment Expert have the required resources to provide reliable, good-quality computer science help. With our simple approach, we aim to reduce the stress of students solving computer science assignments. This also ensures that students have enough time to prepare for other difficult subjects. Computing offers many kinds of valuable careers.
Computing jobs are among the best paid and have the highest job satisfaction. According to our computer science experts, computer science fields can be classified into various practical and theoretical disciplines. There are numerous theories in the field of computer science, some of which involve graphics, networking, systems, and programming languages, among others. You can be certain of higher scores and overall improvement in your grades if you use our online computer science tutors as soon as possible. The fields in computer science can be broken down into practical and theoretical aspects. Some areas of computer science are abstract, while others have to do with real-world applications. Other fields of computer science put emphasis on the challenges involved in carrying out computation. Computer science project help should include using skilled experts at your level, from high school through master's degree, depending on your specific requirements. Undisrupted access: no matter where you stay or how busy your schedule is, there is no limitation on where our qualified online computer science helpers can reach you. Assignment Expert provides you with highly qualified experts, people with experience and degrees, for computer science project help that meets the problems you face. They are always there to make sure that, regardless of your schedule, you always get stress-free computer science homework help. Our computer science homework help includes providing solutions to small yet complex problems with a fast turnaround. With our straightforward approach and our broad base of online computer science specialists, you can rest assured that you will secure the best grades in your computer science assignment.
Some of the challenging computer science subjects on which students seek our help are:
Welcome back, fellow software developer. In this topic, I will discuss an important subject for a software developer, which is the SOLID principles for object-oriented programming (OOP). What is the relationship between the above quote and the SOLID principles? As usual, be patient, I am about to tell you. First of all, let's define a principle. A principle defines how a particular thing works, and more often than not, a principle does not change. In addition, according to the Cambridge dictionary, a principle means a basic idea or rule that explains or controls how something happens or works. Now that that is out of the way, let us go back in time to when you were a kid. Can you remember any principle taught to you by your family? Don't you think it's fascinating how a principle sticks with you from an early age until today? The same thing applies to software development. In software development, we have principles that you should be aware of, and never forget them! Knowing such principles will broaden your knowledge and give you solid ground when constructing a software project. Furthermore, the five most well-known design principles are SOLID. Wait, how come I said five yet mentioned a single word? SOLID is a mnemonic, and each letter represents a single design principle, as follows. S – Single Responsibility Principle. O – Open/Closed Principle. L – Liskov Substitution Principle. I – Interface Segregation Principle. D – Dependency Inversion Principle (side note: I am in love with this). In this post, I will deep dive into the first principle, and leave the rest for the upcoming posts. Therefore, the rest of this post is structured as follows. The definition of the Single Responsibility Principle. Examples in our daily life. The principle outcome. The Definition of the Single Responsibility Principle This principle is pretty simple: a class should only be responsible for a single functionality. Keep in mind, I don't mean a single method; I mean the scope of the methods should be highly related.
This gives the class only one reason to change, and the change will only be related to a specific scope in the entire application. Don't worry if you didn't understand; it will get clearer with the examples. Examples in our daily life If we observe our jobs, we will notice that each employee has specific tasks. Those tasks are defined within the job description. You are legally compelled to do tasks related to your job description only, and nothing else. Furthermore, another example is your responsibilities at home. More often than not, parents distribute a specific responsibility to each member of the family. For instance, member 1 is responsible for taking the trash out, member 2 is responsible for cleaning the dishes, and member 3 is still a kid and doesn't have responsibilities. What I am trying to say is that in life, everyone has specific duties and responsibilities that define their overall character. The same thing should be applied to the classes in your application. Let's move on to the technical examples to gain more understanding. The above diagram describes a class with the name Calculator having two fields, firstNumber and secondNumber. In addition, the class has three methods: sum, multiply and printInJsonFormat. Observe carefully and try to find what's wrong in this class. Did you find it? If you did, then congratulations, you grasped the first principle by heart. If you didn't, don't worry, I will tell you everything you need to know. The printInJsonFormat method does not share a common responsibility with the other methods in the class. To illustrate, the methods sum and multiply are core mathematics functions related to this class. But printInJsonFormat is not. Therefore, the class will have multiple reasons to change: the printing functionality as well as the mathematics functions. In addition, it does not make sense to call printInJsonFormat from another class, as again it's not logically related.
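Since the original diagram is not reproduced here, a minimal Python sketch of the class described above (names follow the diagram) and one possible SRP-compliant refactoring might look like this:

```python
import json

# Violates SRP: print_in_json_format has nothing to do with the class's
# core responsibility (arithmetic), so this class has two reasons to change.
class Calculator:
    def __init__(self, first_number: float, second_number: float):
        self.first_number = first_number
        self.second_number = second_number

    def sum(self) -> float:
        return self.first_number + self.second_number

    def multiply(self) -> float:
        return self.first_number * self.second_number

    def print_in_json_format(self) -> str:  # the odd one out
        return json.dumps(vars(self))

# One possible fix: move the serialization responsibility into its own
# class, so each class now has exactly one reason to change.
class JsonPrinter:
    @staticmethod
    def to_json(obj) -> str:
        return json.dumps(vars(obj))

calc = Calculator(2, 3)
print(calc.sum())                 # 5
print(calc.multiply())            # 6
print(JsonPrinter.to_json(calc))
```

With the split, a change to how objects are serialized touches only JsonPrinter, and a change to the math touches only Calculator.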
The Principle Outcome Following this principle will make the aim of the class easier to understand. In addition, assessing the impact of a change in a class will be much easier. The Single Responsibility Principle is one of the easiest principles to understand, implement and remember. In addition, it has a huge impact on understanding the objective of the class, as well as increasing maintainability in the long run. Side note: as usual, do not take anything that I write religiously; as a software developer you should assess the implementation of the patterns and principles described in this blog in general. If you liked the content and thought of supporting the creator, below are some ideas for doing so. - Your engagement in the comments will help give more value to the content. - Spreading the content on social media will help attract more readers and improve Google ranking. - Also consider subscribing to the newsletter list.
[red-knot] Test setup utilities

This recently came up in a discussion. A lot of red-knot tests require some form of "setup" in the sense that they create variables of a particular type. This is not always straightforward. For example, to create a variable of type `int`, you need to make sure that it doesn't end up as a `LiteralInt[…]`. So some tests use a pattern like this:

```py
def int_instance() -> int: ...

x = int_instance()  # to create x: int
```

To create a variable with a union type, a lot of tests follow a pattern like

```py
x = a if flag else b  # to create x: A | B
```

It's unclear to me if this really requires any action, but I thought it might make sense to discuss this in a bit more detail. Here are some approaches (some less generic than others) that I could think of.

1. `def f() -> MyDesiredType; x = f()`

Upsides: You can specify `MyDesiredType` directly.
Downsides: Does not work for union types (yet). Rather verbose, requires coming up with yet another name (for the function). Does not work at runtime (i.e. you can't just paste this into an interpreter and see what the runtime type is).

2. `def f(x: MyDesiredType): …`

Upsides: You can specify `MyDesiredType` directly. Fewer lines than the pattern above.
Downsides: Does not work yet. All tests are now within a function, requires coming up with a name for the function. Does not work at runtime (see above).

3. `a if flag else b` (only relevant for union types)

Upsides: Does work at runtime, if you set up `flag` beforehand.
Downsides: Can't see the result type in code. Relies on an undefined variable, which leads to extraneous diagnostic outputs (e.g. if you play with these snippets interactively, or if you want to see what other type checkers do). Maybe this could be prevented by injecting `flag` into the test environment somehow. Arguably not very beginner-friendly (what is `flag`?!)

4. Helper functions like `one_of(a, b)`

We could inject new functions, just for testing purposes.
For example, we might have a function similar to

```py
import random

def one_of[A, B](a: A, b: B) -> A | B:
    if random.randint(0, 1):
        return a
    else:
        return b
```

to easily create union types.

Upsides(?): `x = one_of(1, None)` is slightly more readable than `x = 1 if flag else None` (but only if you know what `one_of` does). Works at runtime, but requires having the helper functions available.
Downsides: Can't see the result type in code. The snippets are not self-contained anymore. You can not simply copy a snippet and try it out in the mypy playground, for example (unless you have a copy of the test utilities in there already). Requires beginners to learn about these utilities.

5. A magic `conjure` function

I'm not even sure if this is technically possible, but other languages have ways to create values of type `T` out of nothing. Not actually, of course. But for the purpose of doing interesting things at "type check time". For example, C++ has `std::declval<T>()`. Rust has `let x: T = todo!()`. Functional languages have `absurd :: ⊥ -> T`. You can't specify explicit generic parameters in a function call in Python (?), so we couldn't do something like `x = conjure[int | None]()`, but maybe there is some way to create a construct conceptually similar to

```py
def conjure[T]() -> T: ...  # Python type checkers don't like this
```

I think I would personally prefer the simple `def f(x: MyDesiredType): …` approach, once we make that work.

(I edited your post to number your suggestions so it would be easier to discuss them, hope that's okay :-)

Your proposals (4) and (5) both involve injecting some sort of "magic" function into the namespace that we could use without any imports, which would then create instances of the types required for the test. My first instinct was that I didn't much like the idea, because in general I'd like the test snippets to be as close as possible to executable Python code. I think it's useful to keep a close resemblance between our test snippets and user code we'll actually be running on.
I also think keeping our test snippets as close as possible to executable Python makes them much easier for us and external contributors to understand. However, I then realised that this isn't really that different to what we already do with reveal_type. At runtime, reveal_type is not a builtin -- you have to import it from typing or typing_extensions if you want to use it in such a way that your code will not crash when you actually run your code with a Python interpreter. But we pretend it's a builtin, so that users can easily debug their type-checking results without having to add an import, and so that we can keep our test snippets concise. I argued against this when we were designing the test framework (I said we should have to explicitly import reveal_type in order to use it in test snippets), but @carljm pushed for it, partly on the grounds that it would significantly reduce the boilerplate of our tests. In retrospect, I think he was probably right; it would be a bit of a pain to have to import reveal_type in every test snippet. The key differences with reveal_type are: Other type checkers also pretend that reveal_type is a builtin. If you want to do a cross-comparison of a red-knot test with how mypy or pyright infer the types, you can just copy and paste it into their playgrounds currently. But if we injected a magic one_of or conjure function, we'd have to remember to add those function definitions to the snippet before mypy or pyright would accept them. We'd only be injecting one_of or conjure into the namespace of test snippets, whereas for reveal_type we also pretend it's a builtin when checking user code. I think I would personally prefer the simple def f(x: MyDesiredType): … approach, once we make that work. Yes, I think I agree. Mypy has quite an extensive test suite that works in a similar way to our new framework, and they've managed to do without a conjure() function or one_of(). That doesn't mean that the idea is bad, of course! 
But it does suggest that it should be possible to do without it. And even if our test snippets already don't look exactly the same as executable Python code would (due to all the unimported `reveal_type` usages), it's nice to limit the differences as much as possible. One way that mypy test snippets do differ from executable Python is in their use of "fixture stubs". Rather than using their full vendored stdlib typeshed stubs (which is what they use for checking user code), in their tests they use a radically simplified version of typeshed. This speeds up their tests a lot, but it is very frequently a source of confusion for mypy developers and contributors, who often think they've fixed a bug only to realise that the type inference their users are seeing for standard-library functions is very different to the type inference they thought they had asserted in their test snippets. Just for the sake of discussion, another possibility here is to allow "layering" files, so in a Markdown header section you can provide a file that will be shared by all sub-tests within that section. Downsides are less locality of tests, and more complexity in understanding the structure and behavior of a test. I also don't want to do anything here that's specifically motivated by limitations we should lift soon, like not understanding function arguments, or unions in annotations. I think on the whole my preference is also defining functions with typed arguments, in most cases. Correct -- for now, the best you can do is `x: int | None = conjure()` or `x = typing.cast(int | None, conjure())`.
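As a runtime aside on the cast workaround: typing.cast is a no-op at runtime (it returns its second argument unchanged and only instructs the type checker), so a snippet like the following runs fine in a plain interpreter. The conjure function here is a hypothetical stub that simply returns None, not anything from the actual test framework.

```python
from typing import cast

def conjure():
    """Hypothetical stand-in: at runtime it just returns None."""
    return None

# cast performs no runtime check; it only tells the type checker
# to treat x as int | None from here on.
x = cast("int | None", conjure())
assert x is None

# The value passes through unchanged.
y = cast("int | None", 5)
assert y == 5
```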
Not that I think anyone wants the conjure approach, but for reference, here is an ugly hack to make it work (syntactically; with the correct result in pyright but a diagnostic in mypy):

```py
class conjure[T]:
    def __new__(cls) -> T:  # type: ignore
        raise TypeError(f"Can not actually conjure up values of type T")

x = conjure[int | str]()
reveal_type(x)  # int | str
```

I'm actually tentatively closing this, as it seems like we all agree on using the function-parameter approach once we support it.
We've seen wrapped websites get rejected before. I don't think I've seen/heard of a hybrid app that uses Cordova to access device hardware/sensors ever get rejected. I think if you're not going to be using something on the device, whether location, camera, etc., then it's important to ask whether that particular project needs to be an app or can just be a mobile website accessible by URL and browser.

We need to provide an iOS app and the current plan is to build a hybrid, as we already have a browser-based version of the product.

Although you should be able to reuse a lot of your components, I would suggest keeping as much code as possible local to the device. This simplifies network connectivity a bit (since there's always code present to handle the condition), and Apple will look at this more favorably. (They do reject "wrapper-style" apps.)

I've been trying to understand the black art of how apps get accepted/rejected from the App Store, but there seem to be no clear guidelines regarding hybrid apps.

It's really a matter of who at Apple actually reviews your app. Some reviewers are pickier than others, and all have slightly different interpretations of the HIG. Thankfully, I've had no real issues with any of the reviewers I've encountered, and as long as you treat them kindly with respect, even if you do have an issue, they'll usually do their best to help.

From what I can make out, as long as you're not just wrapping a desktop website up in an app, you should be fine.

This is true, but a few other "guidelines" are also applicable, in my experience:
- Don't wrap a website (as you just mentioned).
- Obey the HIG where applicable. Apple is fairly accommodating wrt user interfaces -- e.g., Google's apps use Material design and look consistent across Android and iOS. Games get much more leeway, of course.
- Handling the lack of network connectivity is a big deal -- if your app requires a network connection and there is none, you must generate a user-friendly error message. If you can subsist off a cache or provide offline data, that's even better (along with some indication to the user that they are offline and data will be synced later). Provide as much functionality as possible without a network connection -- unless your entire app requires network access, there's little reason to lock it behind a "this app requires the internet" gateway.
- Be very careful how you store data. Apple pays close attention to how much data is synced via iCloud, and if they see caches and such getting backed up, they'll complain and reject the app.
- Likewise, be careful how much data you download, especially on a cellular connection. If your app plays movies, for example, you must reduce quality while streaming on a cellular connection.
- Apps that use device functionality are looked upon far more favorably. This can be something as simple as using the SQLite plugin for persistent storage or the GPS, or even popping up an email composer as a sharing mechanism.
- Avoid the soft keyboard. If a control becomes unreachable because the keyboard obscures it, Apple will reject. It is your responsibility to avoid the keyboard. I've got an example that mimics native keyboard avoidance here: GitHub - kerrishotts/cordova-keyboard-example: Simple keyboard avoidance example for Cordova and iOS
- Your app's appearance should appear as if some thought has been applied to it -- that is, align your elements, avoid fuzzy images and icons, etc.
- Honor system settings if possible. A good example is respecting the system font size. (See: GitHub - phonegap/phonegap-mobile-accessibility: PhoneGap plugin to expose mobile accessibility APIs.)
- If your app requires authentication, be sure to provide the reviewer an appropriate test account.
Whether or not Apple will use it depends on the reviewer, but apps can be rejected solely for not supplying a login.
- There is a section when submitting your app that allows you to leave notes to the reviewer. This is a good place to leave tips & notes about your app. Keep in mind that although you know your app inside and out, the reviewer does not. Just respect your reviewer and they will usually do everything they can to help.

i.e. if the site is responsive enough to deal with small screens in the first place, you can probably get away with just using that as your UI.

Being responsive is critical, of course, but acceptance will depend largely upon your UI. Your app can be as responsive as it can be, but if the UI looks like you're wrapping a web page, you're apt to get rejected. The simplest way I've found to boil all this down is this: don't give the Apple reviewer any reason to suspect that your app isn't native. Appear native even with a nonstandard but polished UI (like material design or a game UI), and you're probably going to be accepted. If your UI is unpolished and looks like a webpage, well, all bets are off and depend upon your reviewer.

Has anyone had anything similar rejected before?

I'll give you a couple of examples of my experiences:
- Remote database CRUD app. Network access is obviously critical, but the app would handle it gracefully by storing data locally and synchronizing when it could. It would retry sync attempts should a network be present and the server be somehow unreachable (or generating an error). All of the app would work just fine in a desktop browser, but relied upon the SQLite plugin for local persistent storage, which was sufficient to pass review. UI wasn't perfectly standard, but was polished.
- Dictionary app. Content is local, except for external web links. User data is stored in a SQLite database using the SQLite plugin (and so uses some device functionality).
Even so, the app itself works perfectly well in a desktop browser (uses IndexedDB/WebSQL in that case), but the UI appears polished and material-like.
- Museum app. I didn't code the original app -- I was just there to upgrade to a more modern version of Cordova. This app's UI is not terribly polished, and does appear like a website. However, a lot of code is local, it properly handles lack of network connectivity, AND uses device functionality for several features. Apple approved without issue, even though it did scream "non-native". I suspect it is the use of device features in this app that tipped the reviewer over to acceptance.

I hope that helps, and best of luck with your app!

Thanks for the quick and very helpful responses. You've put my mind at ease now, and I'm fairly happy we can do what we want without fear of getting a rejection. Many thanks again
Why does free() need the starting pointer of a dynamic array?

If I run this code it will crash with a stack dump:

    int *a = (int *) malloc(sizeof(int) * 10);
    a++;
    free(a);

Why doesn't it work? Why does it need the pointer returned by malloc()? What records does the resource management system behind it keep? Is it the length of the array? Is it the last cell's address? And does it associate it with the starting pointer?

Is there an error message? If so, can you show it?

Why should it work? The behaviour of C has always been that the pointer handed to free() must have been returned by malloc(), calloc(), realloc() or any of their brethren (posix_memalign(), aligned_alloc(), etc). If you want to write your own memory allocation system, you may, but the standard version works as it does and there's no real benefit to asking why — that is the way it is defined to behave.

http://stackoverflow.com/questions/1518711/how-does-free-know-how-much-to-free?rq=1 Maybe the answer to this question can help you.

It doesn't work because you add one to the pointer that malloc returned to you. free expects a pointer that malloc returned. Due to the a++ the pointer is no longer what malloc returned, and thus free doesn't know what to do with it.

@user2603656: because that is the way it is designed and specified to behave. Why is the sky blue?

@user2603656 if you're studying for a test or something, it's because free() knows that, at some specific offset from the pointer you pass to it, it can get header information for the allocated block. It will go to that offset and modify the header to say the block is now free. It will then place the block on the free list (depending on the implementation).

The malloc function reserves a little more memory in the heap than the user asks for, because bookkeeping data is saved just before the allocated block so that the system knows what size and which chunks of memory it is able to free.

    int *a = (int *) malloc(sizeof(int) * 10);

When you increment the pointer "a", the system will refer to the new location that a is pointing to, and will therefore read garbage data where it expects its bookkeeping header. This is undefined behavior and usually crashes your program.

malloc usually allocates more memory than we request. This additional space is used to house some important information, such as the amount of memory (number of bytes) allocated when the malloc call was made. Sometimes additional information, such as a pointer to the next free location, is also maintained. This information is stored at a specific location relative to the starting memory address that malloc returns to us. If we pass some other address to the free function, it will look at the value stored relative to the address you passed, interpret that as the block's metadata, and "may" cause a crash.
Indian freelance journalist Shubhranshu Choudhary brings to our attention a set of articles published in the Hindustan Times (1 and 2) and the BBC about the work that he and Microsoft designer Bill Thies (who recently gave a talk at our group) did to establish a grassroots news network in Chhattisgarh, India. Based on an audio wiki technology developed by Thies and Saman Amarasinghe at Massachusetts Institute of Technology (MIT), the system is named CGNet Swara (Chhattisgarh Net Voice), which enables trained amateur volunteer journalists to phone in their reports to a central number where moderators then record and check the reports for accuracy. Once approved, the report is sent via text message to everyone on the news service's contact list, and they can subsequently phone in to hear the story at normal phone charge, which is less than five rupees (10 US cents).

Derek Thompson at The Atlantic Online reports on his blog that a new NBER paper suggests that online news consumption is much less ideologically segregating than face-to-face interactions, but more segregating than offline news consumption. The abstract of the paper states: "We use individual and aggregate data to ask how the Internet is changing the ideological segregation of the American electorate. Focusing on online news consumption, offline news consumption, and face-to-face social interactions, we define ideological segregation in each domain using standard indices from the literature on racial segregation. We find that ideological segregation of online news consumption is low in absolute terms, higher than the segregation of most offline news consumption, and significantly lower than the segregation of face-to-face interactions with neighbors, co-workers, or family members. We find no evidence that the Internet is becoming more segregated over time." Thompson, however, views the results in more pessimistic terms:

David Brooks, writing in the New York Times, cites recent research that may mitigate concerns expressed by Cass Sunstein and others about the potential of the internet to increase ideological segregation. A study by researchers at the University of Chicago Booth School of Business compares online segregation to segregation of both traditional media and face-to-face interactions. They find that: "[A] significant share of consumers get news from multiple outlets. This is especially true for visitors to small sites such as blogs and aggregators. Visitors of extreme conservative sites such as rushlimbaugh.com and glennbeck.com are more likely than a typical online news reader to have visited nytimes.com. Visitors of extreme liberal sites such as thinkprogress.org and moveon.org are more likely than a typical online news reader to have visited foxnews.com." News consumers with extremely narrow exposure are in fact very rare: "A consumer who got news exclusively from nytimes.com would have a more liberal news diet than 95 percent of Internet news users, and a consumer who got news exclusively from foxnews.com would have a more conservative news diet than 99 percent of Internet news users." Overall, they conclude that:

Lowell Feld, a blogger at Blue Virginia and author of the book Netroots Rising, sends the link to an article written by Evgeny Morozov in Foreign Policy Magazine that attempts to dispel the "myths" that the Internet promotes freedom, political activism, and perpetual peace. Lowell does not take a position either way but finds the article relevant to the larger debate about liberation technologies. The purported myths listed by Morozov are the following:
Just taken delivery of an UP 2 (had a few of the original UP boards). I've followed the instructions here: All the board does is go around and around and around rebooting, sometimes checking the disk, then rebooting constantly. Any ideas or is the board a dud?

Hi @MarkyMark , Could you please provide more information here?
- Press the ESC key when it reboots, so you can see the log of what is happening, then take a picture of it before it reboots and share it.

Hi - thanks for the reply. I've attached photos of the screen. This is all there is - nothing else [ESC] included. All the board does is loop through these screens time and again. @Pratik_Kushwaha - Do you have any idea?

Hi @MarkyMark , Our team is checking into this. Expect an update within the coming week.

Camillus here, I see from the screenshots that you were trying to install Ubuntu 20.04.1? Please try the following steps:
- Download Rufus https://rufus.ie/en/
- Download Ubuntu 20.04.2
- Flash the Ubuntu OS onto a USB drive using Rufus
- Install the image from the USB onto the UP2
Let me know the outcome.

Thanks for the reply. I followed your instructions above but no change. Just loops and loops. Occasionally checking the disk and goes back to looping time and again.

Please try one more thing, reflash the image again onto the USB following the steps from above.
- Plug in the USB on the device and power on
- Select "Try Ubuntu"
- When the system boots, open a terminal and do a 'sudo gparted'
- On the gparted window you will find a wide box with /dev/mmcblk0 and the storage capacity written under it
- Right click on the box and select unmount
- Right click on it and select delete for all partitions (/dev/mmcblk0p1 and /dev/mmcblk0p2)
- All partitions should be 'unallocated'
The partitions will be deleted and you will have an empty eMMC. Power off and retry the installation again. If this does not work let me know and you can proceed with RMA.

Sorry that is not possible.
As mentioned all it does is loop round and round the "UP" splash screen and sometimes "checking disks". The OS never starts in any way shape or form. From the USB Stick or the eMMC. Hi @MarkyMark , I see. Kindly raise an RMA. If you purchased the UP2 directly from the UP shop you can raise it from here , if you bought from a reseller then the RMA needs to be raised through them. I have been waiting for weeks for feedback re: faulty board and still nothing. It's been at your workshop for over a month now Hi @MarkyMark , I did not realise this, sorry for the inconvenience, I will check internally and get back to you ASAP. Same problem here. I bought a UP Squared (Intel N4200, 8 GB/128 GB) back in 2017 with my Kickstarter contribution. The device was not completely stable - I had to reinstall the OS (Ubuntu, various releases) because of boot loops that started after a time of proper operation. Now, however, the device is again in a boot loop which even prevents a re-installation of the OS. After a restart the device shows the grub menu of the still installed OS. After selecting the boot option, the screen goes dark for about 45 seconds, and returns with the “UP2 bridge the gap” BIOS logo and after a few seconds again shows the grub boot menu. This goes on forever. Exactly the same loop happens when a bootable USB stick for installing a new OS is plugged in. Does anybody have a clue about how to solve this problem? Mark, was your case resolved? I will suggest that you download and install the latest BIOS for UP2 from here (if you have not already), Disconnect the RTC battery for about 3 minutes and reconnect. Then try the new OS reinstall and let us know. I already had updated the UP2 BIOS, from 1.8 to 4.0 and then to 6.1. The updates worked without any problems. Also the EFI shell works fine. Thanks for the hint regarding the RTC battery. 
After disconnecting as instructed, I was able to proceed one step further: I was able to install some (but not all) of the OSes I tried. E.g. "ubuntu-20.04.3-live-server-amd64" installed and ran without problem when being configured in an SSH terminal, but when I later added the ubuntu-mate-desktop GUI, the boot loop problem came back. Other OSes I tried also returned to the boot loop, either during installation or at the first start after the installation. To me it appears as though the system crashes without an error message and returns to the BIOS as soon as any graphics processing beyond command-line interactions is needed. Would it help if I made available to you some video captures of typical boot loop cases? What power supply are you using? Is it the one from our shop (5V6A) or another type? It's the one which was delivered with the UP2 (OPT-UP-PSU-003 as shown in the original packing list), which however is rated at 5V/4A only. Thus a reason for the repeated crashes could be that the UP2 needs more power than available due to higher graphics processing requirements ... It is strange that you received such a PSU, as that one was the standard PSU for the UP Board only, not the UP Squared. For the UP Squared we always advised and sold a 5V6A PSU (which is bought separately and not included with the board). More recently the 5V4A PSU was removed completely from our shop, and for all boards this is the tested and suggested PSU from the up-shop: https://up-shop.org/up-squared-power-supply-5vat6a.html Did you order via the up-shop or through a distributor? As a Kickstarter contributor I bought directly from AAEON / UP-Shop, see the attached sales invoice. Just found a Kickstarter e-mail listing the components that were to be delivered with the UP2 board back in 2017. A PSU was included, but no details were provided. I did not have an option to choose a specific PSU model.
I see the documents attached; I deleted them as they include your personal information, and this is a public forum. As you stated the order is from 2017, did the reboot issue only start happening now? Did you ever use the board until now? Thanks for deleting these documents. When it occurred to me that this information was public it was too late. I did not use the board very often, which was also due to the instabilities it showed from the start (mentioned in my first post). The constant boot loop however only started recently when I decided to use the board as a network scanner (using nmap with Zenmap). A way forward would be to see whether the board is stable with a 5V/6A PSU, e.g. the one from the UP-shop or this one: https://www.digitec.ch/de/s1/product/meanwell-power-adapter-euro-plug-in-5v-30w-diverse-elektronikzubehoer-gehaeuse-10310784?supplier=4867603 If this does not help, I'll probably recycle the UP2. I hope the new PSU will fix the issues you are experiencing. Unfortunately the board is way past warranty; I would suggest for the future that, if any new board presents instability, you communicate as soon as possible and handle the case via RMA. I ordered a new PSU (5V/6A). I'll report about the outcome later. I recently received my new PSU (MeanWell SGA60E05-P1J, 5V6A) and used it for powering my UP2 board. Surprise – all previous instability and boot loop problems disappeared. I ran tests with various OS installations (Ubuntu 20.04.3 Desktop AMD64, latest Windows 10 via Media Creation tool, Elementary OS 6) – all installed and executed without any problems. Thus the unstable behavior was only due to the too-weak 5V4A PSU which was delivered by AAEON in the Kickstarter package in October 2017. Looking at the discussions in the UP community in November 2017 (see e.g. https://forum.up-community.org/discussion/2251/up2-not-powering-on/p1, Kasper Olesen) it was known to AAEON that the wrong PSU was sent to some (how many?) Kickstarter backers, i.e.
I was not the only one who had received a 5V4A instead of the 5V6A PSU. So why were all recipients of the wrong PSU not notified in due course? A lot of hassle with and disappointment by the UP2 product could have been avoided with a timely notification. I am sorry for the issues caused by that mistake and thanks for your feedback. I don't know what went wrong at the time, but we will make sure it won't happen in future releases. I am glad that the board works fine now and good luck with your project/use case setup and development! Thank you for your feedback, very much appreciated!
RockMongo is a free, open source GUI database administration tool for MongoDB, just like phpMyAdmin for MySQL/MariaDB. RockMongo makes database administration tasks such as creating, editing, and deleting databases, creating tables, reports etc., much easier and faster. In this tutorial, we will see how to install RockMongo in Linux. If you haven't installed MongoDB already, refer to the following link to install it in various Linux distributions such as CentOS, Debian, Ubuntu, and openSUSE. RockMongo is a web-based database management tool, written in the PHP 5 programming language. In order to install it, make sure you have installed a web server, PHP 5, and some additional dependencies. RockMongo will not work with PHP 7, so I recommend you use PHP 5. Let us install a web server (Apache), PHP 5, and the required dependencies. For the purpose of this guide, I will be using a CentOS 7 64-bit server. Do not forget to set SELinux to permissive or disabled mode. Otherwise, you can't access the RockMongo dashboard from a remote system's browser.

On RHEL / CentOS:

$ sudo yum install httpd gcc php php-gd php-pear php-devel openssl-devel unzip wget

Start and enable the Apache service using the following commands:

$ sudo systemctl start httpd
$ sudo systemctl enable httpd

Allow the Apache web server service through your firewall:

$ sudo firewall-cmd --permanent --add-service=http
$ sudo systemctl restart firewalld

Then, install the php_mongo extension using the command:

$ sudo pecl install mongo
[...]
Build process completed successfully
Installing '/usr/lib64/php/modules/mongo.so'
install ok: channel://pecl.php.net/mongo-1.6.14
configuration option "php_ini" is not set to php.ini location
You should add "extension=mongo.so" to php.ini

Edit the /etc/php.ini file:

$ sudo vi /etc/php.ini

Add the following line:

extension=mongo.so

Save and close the file. Restart the Apache service for the changes to take effect.
$ sudo systemctl restart httpd

Verify that the extension is loaded using the command:

$ php -m | grep -i mongo

You should see the following output:

mongo

Well, we have installed the required prerequisites. Now, download the latest RockMongo version from the releases page. Or, use the following command to download the latest RockMongo version:

$ wget https://github.com/iwind/rockmongo/archive/master.zip

Extract the downloaded zip file using the command:

$ unzip master.zip

Move the extracted folder to the web root folder as shown below:

$ sudo mv rockmongo-master/ /var/www/html/rockmongo

Restart the httpd service:

$ sudo systemctl restart httpd

Access the RockMongo web console

Open a web browser and navigate to http://IP-Address/rockmongo. You should see the following screen. Enter the username and password. The default username and password is admin/admin. Here is how the RockMongo dashboard looks. From here, you can create, rename, edit, and delete databases, users, tables and more. You can change the default username and password in the RockMongo config.php file. To do so, edit config.php:

$ sudo vi /var/www/html/rockmongo/config.php

Change the ports, host, and admins as per your liking.
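For orientation, the settings mentioned above live in the $MONGO configuration array inside config.php. The fragment below is an illustration from memory of a typical RockMongo config.php; the exact key names may differ between RockMongo versions, so follow the comments in your own file rather than copying this verbatim:

```php
<?php
// Illustrative fragment of rockmongo/config.php -- key names may vary by version.
$MONGO["servers"][$i]["mongo_name"] = "localhost";   // display name in the UI
$MONGO["servers"][$i]["mongo_host"] = "127.0.0.1";   // MongoDB host
$MONGO["servers"][$i]["mongo_port"] = 27017;         // MongoDB port
$MONGO["servers"][$i]["control_auth"] = true;        // require a RockMongo login
$MONGO["servers"][$i]["control_users"]["admin"] = "admin"; // change: username => password
```

Changing the "admin" entry under control_users is what replaces the default admin/admin credentials.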
Improvement of graph-based algorithms for image analysis

14 November 2023

Category: Internship

Title: Improvement of graph-based algorithms for image analysis
Starting date: Any time from January to April 2024
Duration: 4 to 6 months
Place: Université de Lille - CRIStAL, Villeneuve d’Ascq 59655, France

- Deise Santana Maia (Associate professor), CRIStAL (UMR CNRS 9189): email@example.com
- Julien Baste (Associate professor), CRIStAL (UMR CNRS 9189): firstname.lastname@example.org

The efficient algorithms developed in the context of graphs have had a strong impact on the field of image processing and analysis. In this context, images are classically represented by regular graphs, called grids, in which the pixels are represented by vertices, and neighboring pixels are connected by (weighted) edges. One of the main pre-processing tasks when dealing with images is to obtain a partition of the pixels into regions of interest, in which each region is homogeneous according to a given criterion (color, texture, ...). Among its several applications, one can cite object detection, recognition and tracking, and image compression. In order to deal with the large amount of data currently available, one needs highly efficient image pre-processing algorithms. The approach that we consider in this project is to apply the most recent advances in graph theory in order to improve the current image algorithms. In particular, grids are simple graphs that are planar and have maximum degree 4. Thus, it is expected that algorithms coming from parameterized complexity can provide interesting improvements. In this internship, the selected student is expected to:
• Read and explain the basic bibliography on the two topics (images and graph theory).
• Propose some ideas for improving the known algorithms for image segmentation using elements from graph theory.
• Analyze the complexity of the resulting algorithm.
• The student is not expected to implement the algorithm, but may do so if desired.
• The obtained results should be written up at the end of the internship. If the results are good enough, a scientific publication can be expected.

Candidates are expected to have solid notions of algorithmics. Basic notions of graph theory would be appreciated. If you are interested in this internship proposal, please send us your CV and transcripts to email@example.com and firstname.lastname@example.org. The remuneration for the internship is regulated by French law and should be around 540€ a month.

Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015.
Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, 59:167–181, 2004.
using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SimaDat.Bll;
using SimaDat.Models.Characters;
using SimaDat.Models.Exceptions;
using SimaDat.Models.Interfaces;
using SimaDat.Models.Items;
using System.Collections.Generic;
using System.Linq;

namespace SimaDat.UnitTests
{
    [TestClass]
    public class ShopBllTest
    {
        private Hero _me;
        private IShopBll _bll;

        [TestInitialize]
        public void TestInit()
        {
            _me = new Hero();
            _me.ResetTtl();
            _bll = new ShopBll();
        }

        [TestMethod]
        [ExpectedException(typeof(ObjectDoesNotExistException))]
        public void BuyGift_Exception_WhenBadGiftId()
        {
            _bll.BuyGift(_me, 666);
        }

        [TestMethod]
        [ExpectedException(typeof(NoMoneyException))]
        public void BuyGift_Exception_WhenNoMoney()
        {
            _me.SpendMoney(_me.Money);
            var gift = _bll.GetListOfGifts().First();
            _bll.BuyGift(_me, gift.GiftId);
        }

        [TestMethod]
        public void BuyGift_Ok()
        {
            // Ensure that no gifts and enough money to buy
            var gift = _bll.GetListOfGifts().First();
            _me.Gifts = new List<Gift>();
            _me.SpendMoney(-gift.Price);

            _bll.BuyGift(_me, gift.GiftId);

            // Should buy this gift
            _me.Gifts.Single().GiftId.Should().Be(gift.GiftId);
        }

        [TestMethod]
        public void BuyGift_SpendMoney()
        {
            // Ensure that no gifts and enough money to buy
            var gift = _bll.GetListOfGifts().First();
            _me.Gifts = new List<Gift>();
            _me.SpendMoney(-gift.Price);
            int v = _me.Money;

            _bll.BuyGift(_me, gift.GiftId);

            // Money should be spent
            _me.Money.Should().Be(v - gift.Price);
        }
    }
}
There are a few places to look and things to check in this situation.

*First*, are other visualizations working on your production instance? Do you see the visualization icon/button, and does the dropdown there have the 'charts' visualization on tabular files (like bed)? On the admin side for this, when your instance starts up you should see a section of logging data for visualizations that are loaded or error in the paster log. They're shown after the converter tools output their logging info and before the job runners output theirs. They look something like:

galaxy.web.base.pluginframework INFO 2016-01-26 09:57:37,989 VisualizationsRegistry, loaded plugin: phyloviz

You should see the standard visualizations load there, including 'phyloviz'. If there's a problem, then you'll see a stack trace and a message saying that it was skipped.

*Second*, if visualizations are working well for other types, can you try uploading a phyloxml file and seeing if phyloviz will launch from that data? There's an example in this history:

*Third*, it may be an issue with how the datatype is being referenced in the phyloviz config file (when compared with your datatypes_conf.xml file). Can you check the datatypes API on your production instance: ...and search for the 'nhx' datatype? You should see something like the following towards the bottom of the response:

I believe this last part *should* match the line in phyloviz's config:

<test type="isinstance" test_attr="datatype" result_type="datatype">

On Mon, Jan 25, 2016 at 1:37 PM, Oksana Korol <oko...@gmail.com> wrote:
> It gets even trickier... I just double-checked my local install, and I
> didn't have visualization_plugins_directory uncommented, and yet, it works
> on local install....
>
>This gets tricky, can you check if the visualisations are located under
>this path
> If you mean phyloviz and other plugin code from github, then yes, it's
> there. I double-checked the files and permissions as well, just to be sure.
> > and do double-check the filetypes? The icon is only displayed
> > with the correct datatype.
> I did... I use exactly the same newick tree file for local and for
> production test. Upload it, selecting nhx as a datatype. Double-check the
> datatype by clicking on a pencil icon - it's nhx on both, but it works on
> local and doesn't on production.
> I'll keep digging and comparing the two installs. If you think of anything
> else, let me know.
> Please keep all replies on the list by using "reply all"
> in your mail client. To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> To search Galaxy mailing lists use the unified search at:
I’ve seen it repeated in various locations that ASM somehow has the ability to move data around in response to how much I/O is occurring on each of the disks that ASM is managing. The theory goes that by doing this, ASM is able to balance the I/O amongst all the drives, thus giving your RDBMS instance(s) that are using ASM the absolute tip-top I/O performance that could possibly be achieved given your hardware limitations. Sounds great? Trouble is, it just is not true. This idea has gained a bit of traction in the community, and I’m sure many people think ASM is perhaps more clever than it actually is. Whether this is due to marketing terminological inexactitude, I’ll leave up to the reader to decide.

The only metric ASM uses when determining where data should be located is the capacity of the disks in a disk group. ASM’s goal in placing data is to ensure every drive is filled to the same amount. Therefore if the disks in a disk group are of equal size, they will each receive the same amount of data. The theory is that by spreading the data evenly across the drives you will achieve good I/O performance, as all the drives are likely to be serving the same number of I/O requests. ASM does expose some data on how many requests each disk in a disk group is performing, via V$ASM_DISK_STAT:

SQL> select group_number, disk_number, read_time, write_time, bytes_read, bytes_written from v$asm_disk_stat;

GROUP_NUMBER DISK_NUMBER READ_TIME WRITE_TIME BYTES_READ BYTES_WRITTEN
------------ ----------- ---------- ---------- ---------- -------------
           4           0 14910505.7 4626148.24 4.1821E+13    1.4998E+12
           4           1 14965432.3 5324739.98 4.1833E+13    1.6264E+12

There are two disks in this disk group, which are of equal size. They have both read and written a similar quantity of data, though it is not exactly equal. The average write time shows a bigger discrepancy than the read times.
Basically, the point is that for equal-sized disks in a disk group, the ASM algorithm of distributing data according to capacity works reasonably well. But consider if you had different-sized disks in a disk group. A larger disk gets more data. Is a larger disk actually quicker at returning that data? Well, probably not, and the larger the discrepancy in sizes, the larger the skew of I/O there will be. Maybe one day ASM will have the ability to shift data based on the I/O activity of the underlying drives, but until then, make sure all the disks you have in a disk group are of the same size (oh, and the same performance characteristics). That way you’ll protect yourself from any I/O hot spots that ASM won’t quite save you from yet!
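The capacity-proportional placement described above is easy to model. This toy sketch is purely illustrative (it is not ASM's actual code, and the function name is ours); it shows why unequal disk sizes skew the I/O load: extents follow capacity, and I/O requests follow the extents.

```python
# Illustration only: a toy model of capacity-proportional placement,
# not ASM's actual algorithm. Extents are assigned to disks in
# proportion to disk size, so a bigger disk receives more extents --
# and therefore a proportionally larger share of the I/O requests.

def place_extents(disk_sizes_gb, n_extents):
    total = sum(disk_sizes_gb)
    # Each disk's share of extents matches its share of capacity.
    return [round(n_extents * size / total) for size in disk_sizes_gb]

# Two equal disks: even data, hence even I/O.
print(place_extents([100, 100], 1000))   # -> [500, 500]

# One disk twice the size: it holds (and serves) twice the load.
print(place_extents([100, 200], 1000))   # -> [333, 667]
```

With equal disks the extent counts match exactly; with a 2:1 size ratio, the larger disk ends up serving roughly two-thirds of the requests, which is the hot-spot risk the post warns about.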
This trick is called neverinstall, and it lets you stream web browsers from powerful computers with lightning-fast internet connections. It is a web app that allows you to run applications directly from your browser without having to download them. These applications include 3 widely used web browsers:
- Google Chrome
- Brave Browser
And 6 developer applications:
- VS Code – Visual Studio Code is a source-code editor made by Microsoft for Windows, Linux, and macOS.
- Jupyter – Project Jupyter is a project and community whose goal is to develop open-source software, open standards, and services for interactive computing across dozens of programming languages.
- Eclipse – An integrated development environment used in computer programming.
- Android Studio – The official integrated development environment for Google’s Android operating system.
- IntelliJ – IntelliJ IDEA is an integrated development environment written in Java for developing computer software.
- PyCharm – An integrated development environment used in computer programming, specifically for the Python language.
The Idea Behind neverinstall Browsers
The whole idea behind neverinstall browsers is to give people with slow and moderate internet connections access to superfast internet speeds of up to 10 Gbps. You can select and create a browser of your choice and run it on a powerful computer connected to lightning-fast internet located somewhere in the United States, India, or Singapore. Once the browser is created, neverinstall will simply stream it to your computer. You can use your mouse and keyboard to interact with the streamed browser, similar to what you do while using a native one. Since it takes less computing power and internet data to browse the internet on a browser that’s running remotely, you’ll be able to enjoy the full lightning-fast internet experience irrespective of your home/office internet connection speed.
How To Use neverinstall Browsers
Using neverinstall browsers to stream and browse the internet at high speeds is extremely easy.
- Visit www.neverinstall.com/browsers and select a browser of your choice.
- Choose the region you want to access the browser from and click on the launch button.
- Your browser will be created and launched in a new window, giving you access to secure and super-fast internet.
neverinstall Browsers As A VPN
Other than the fact that neverinstall browsers can help you browse the internet at ultra-high speeds, you can also use them as a VPN to bypass region-restricted content and unblock websites. Currently, you can only create your browsers in the United States, India, and Singapore. neverinstall will be introducing more regions in the near future.
Are neverinstall Browsers Secure?
According to neverinstall, they permanently delete all user data (browser history, login information, and other data) after every session ends and do not store it unless you, the user, opt in. Your data is only accessible to you and will not be shared with any third parties. However, you should still refrain from using any streamed browser to perform personal or financial transactions.
Is neverinstall Free?
Yes, neverinstall is completely free to use.
how to handle vehicle speed and wheel angle covariance in a Kalman filter

When fusing IMU, vehicle speed, and wheel angle, I am having trouble converting the vehicle speed and wheel angle covariance to system state covariance. I am using a Kalman filter to fuse the readings from an IMU and a vehicle. The IMU provides linear acceleration and angular velocity. The vehicle's CAN bus provides speed and wheel angle. I am using a simplified bicycle model. Assuming constant acceleration, the vehicle's state is as follows: [x, x_dot, x_2dot, y, y_dot, y_2dot, theta, theta_dot]. Using the bicycle model, I can calculate the following states of the vehicle given speed and wheel angle: x, x_dot, y, y_dot, theta, theta_dot, but I am not sure how to handle the covariance of the speed and wheel angle.

I thought about this problem in two different ways. First, convert wheel angle and speed to the states above, then create an observation matrix H that maps the measurement to the prediction one to one, then transform the 2x2 speed and wheel angle covariance matrix into a 6x6 matrix for the calculated states x, x_dot, y, y_dot, theta, theta_dot. The second way is to feed the Kalman filter the wheel angle and speed directly but modify the observation matrix so that it transforms the state predicted by the Kalman filter into wheel angle and speed, but I could not find a way to do so.

A multi-step approach comes to mind: Start with the first approach (convert wheel angle and speed into the states and fuse them using the standard Kalman filter measurement equations). The H matrix will be 1's and 0's. Don't worry about getting the measurement covariance exactly right. Just assume a reasonable fixed value for each state (e.g. 1 m^2 for position and 0.25 (m/s)^2 for velocity). See how it works. You may find it is good enough. Or, perhaps it will show you need more sensors. After all, both the IMU and wheel odometry will drift with time/distance.
If everything looks good and you want a more exact measurement covariance matrix, then proceed to step 3: take the equations you are using for mapping the measurements into states. These are likely non-linear. Linearize them, and then use standard linear methods for propagating the measurement error into state errors.

Thank you for your reply. The problem with this method is that I find the orientation of the robot drifts from reality much faster than expected. I think that this might be due to a high variance in the steering wheel angle measurement. This is why I think that if I use the measurement variance of the steering wheel to tune the filter, it would give better results. When using a covariance matrix of the states to tune for that error, it is not possible to isolate the effect of the error in the wheel angle.

No problem. You mentioned that "the orientation of the robot drifts from reality much faster". Can you explain further? Is the filter too conservative (i.e. estimated variance > true error) or too optimistic (i.e. estimated variance < true error)? And how do you know the problem is not with the IMU part of the filter?
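The linearization step suggested above can be sketched as follows. This is an illustrative sketch, not code from the original posts: the bicycle-model equations, the wheelbase `L`, and the measurement variances are assumptions made for the example.

```python
import math

# Map the 2x2 measurement covariance R of (speed v, wheel angle delta)
# into the covariance of (x_dot, y_dot, theta_dot) via R_state = J R J^T,
# where J is the Jacobian of the (assumed) bicycle-model equations:
#   x_dot = v*cos(theta), y_dot = v*sin(theta), theta_dot = v*tan(delta)/L

def measurement_jacobian(v, delta, theta, L):
    return [
        [math.cos(theta), 0.0],                                # d x_dot / d(v, delta)
        [math.sin(theta), 0.0],                                # d y_dot / d(v, delta)
        [math.tan(delta) / L, v / (L * math.cos(delta) ** 2)], # d theta_dot / d(v, delta)
    ]

def propagate_cov(J, R):
    # R_state = J @ R @ J^T, written out for these small matrices.
    JR = [[sum(J[i][k] * R[k][j] for k in range(2)) for j in range(2)]
          for i in range(3)]
    return [[sum(JR[i][k] * J[j][k] for k in range(2)) for j in range(3)]
            for i in range(3)]

R = [[0.04, 0.0], [0.0, 0.01]]   # assumed variances of v and delta
J = measurement_jacobian(v=5.0, delta=0.1, theta=0.0, L=2.5)
R_state = propagate_cov(J, R)    # 3x3 covariance of the derived states
```

Because the Jacobian depends on the current `theta`, `delta`, and `v`, the propagated covariance must be recomputed at each filter step; this is exactly how the wheel-angle variance ends up directly weighting the heading-rate measurement.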
#!/usr/bin/env bash

# This script presents the user with a menu showing the current datasets
# available in the PostgreSQL database running in the asv-db container.
# By selecting one of the datasets from the menu, that dataset and all
# associated data is deleted from the database.
#
# The script has no command-line options and takes no other arguments.

topdir=$( readlink -f "$( dirname "$0" )/.." )

# Refuse to run non-interactively.
if [ ! -t 1 ]; then
	echo 'This script is supposed to be run interactively.'
	exit 1
fi >&2

#-----------------------------------------------------------------------
# Perform sanity checks and read database configuration.
#-----------------------------------------------------------------------

if [ ! -e "$topdir/.env" ]; then
	printf 'Can not see "%s" to read database configuration.\n' "$topdir/.env"
	exit 1
fi >&2

if [ "$( docker container inspect -f '{{ .State.Running }}' asv-db )" != 'true' ]
then
	echo 'Container "asv-db" is not available.'
	exit 1
fi >&2

# shellcheck disable=SC1090
. <( grep '^POSTGRES_' "$topdir/.env" ) || exit 1

tput bold
cat <<'MESSAGE_END'
************************************************************************
This script deletes datasets

Use with care
************************************************************************
MESSAGE_END
tput sgr0

# Just a convenience function to send an SQL statement to the database.
# Initiates a separate session for each call.
do_dbquery () {
	docker exec asv-db \
		psql -h localhost -U "$POSTGRES_USER" -d "$POSTGRES_DB" \
		--quiet --csv \
		-c "$1"
}

# Set up interactive prompt.
printf -v PS3fmt '%s' \
	'\n' \
	'Please select dataset to delete (2-%d),\n' \
	'or select 1 to quit.\n' \
	'--> '

#-----------------------------------------------------------------------
# Main menu loop.
#-----------------------------------------------------------------------

while true; do
	# Get list of current datasets.
	readarray -t datasets < <(
		do_dbquery "SELECT 'pid:' || pid FROM dataset" | sed 1d
	)
	nsets=${#datasets[@]}

	# shellcheck disable=SC2059
	printf -v PS3 "$PS3fmt" "$(( nsets + 1 ))"

	# Show menu, get input, validate.
	select dataset in QUIT "${datasets[@]}"; do
		if [[ "$REPLY" != *[![:digit:]]* ]] && [ "$REPLY" -ne 0 ]; then
			if [ "$REPLY" -eq 1 ]; then
				break 2  # quit
			elif [ "$REPLY" -le "$(( nsets + 1 ))" ]
			then
				break    # other valid choice
			fi
		fi
		echo 'Invalid choice.' >&2
	done

	# Perform deletion.
	printf 'DELETING "%s"...\n' "$dataset"
	dataset=${dataset//pid:/}
	do_dbquery 'DELETE FROM dataset WHERE pid = '"$dataset"
	do_dbquery 'DELETE FROM asv WHERE pid NOT IN (SELECT DISTINCT asv_pid FROM occurrence)'
	echo 'Done.'
done

echo 'Bye.'
algorithm for solving resource allocation problems

Hi, I am building a program wherein students sign up for an exam which is conducted at several cities throughout the country. While signing up, students provide a list of three cities where they would like to take the exam, in order of preference. So a student may say his first preference for an exam centre is New York, followed by Chicago, followed by Boston. Now, keeping in mind that the exam centres have limited capacity, they cannot accommodate every student's first choice. We would, however, try to give as many students as possible either their first or second choice of centre, and as far as possible avoid having to give a student their third choice.

Now, any ideas for an algorithm that would make this process more efficient? The simple way to do this would be to first go through the list of students' first choices and allot as many as possible, then go through the list of second choices and allot those. However, this may lead to the students who are first in the list getting their first centre and the last students getting their third choice, or worse, none of their choices. Anything that could make this more efficient?

My gut feel is that a "perfect" algorithm would be NP-complete, and you'll have to settle for an approximation. Why not just give priority to the first students who signed up? You have to discriminate between them anyway.

The problem is that we have been specifically told by the client not to go with a first-come-first-served approach, the reason being that students in different locations have different dates to fill out their exam forms. So it's not their fault that they filled out their form later than others.

Why not pick the students randomly? No discrimination this way ;)

The process of allocating resources is not called sorting. I changed the title to more closely match your problem.
Sounds like a variant of the classic stable marriage problem or the college admission problem. Wikipedia lists a linear-time (in the number of preferences, O(n²) in the number of persons) algorithm for the former; the NRMP describes an efficient algorithm for the latter. I suspect that if you randomly generate preferences of exam places for students (one Fisher–Yates shuffle per exam place) and then apply the stable marriage algorithm, you'll get a pretty fair and efficient solution.

This problem could be formulated as an instance of minimum-cost flow. Let N be the number of students. Let each student be a source vertex with capacity 1. Let each exam center be a sink vertex with capacity, well, its capacity. Make an arc from each student to his first, second, and third choices. Set the cost of first-choice arcs to 0, the cost of second-choice arcs to 1, and the cost of third-choice arcs to N + 1. Find a minimum-cost flow that moves N units of flow. Assuming that your solver returns an integral solution (it should; flow LPs are totally unimodular), each student flows one unit to his assigned center. The costs minimize the number of third-choice assignments, breaking ties by the number of second-choice assignments.

There is a class of algorithms that address this kind of allocation of limited resources, called auctions. Basically, in this case each student would get a certain amount of money (a number they can spend), and then your software would make bids between those students. You might use a formula based on preferences. An example would be tutorial times: if you put down your preferences, you would effectively bid more for the times you want and less for the times you don't. So if you don't get your preferences, you have more "money" to bid with for other tutorials.
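The stable-marriage / college-admission suggestion above can be sketched as capacity-aware deferred acceptance. This is a hedged illustration, not code from the original posts: the function name, the random-lottery tie-breaking, and the data shapes are our own choices. The lottery gives every student an equal chance regardless of sign-up date, which matches the client's constraint against first-come-first-served.

```python
import random

def assign_students(prefs, capacity, seed=0):
    """Deferred-acceptance assignment of students to exam centres.

    prefs    -- dict: student -> ordered list of centre choices
    capacity -- dict: centre -> number of seats
    Centres rank students by a random lottery, so sign-up order
    confers no advantage.
    """
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in prefs}   # random priority per student
    held = {c: [] for c in capacity}             # tentatively accepted students
    next_choice = {s: 0 for s in prefs}          # pointer into each pref list
    free = list(prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                             # exhausted all listed choices
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        if len(held[c]) > capacity[c]:
            # Over capacity: bump the student with the worst lottery number.
            held[c].sort(key=lambda x: lottery[x])
            free.append(held[c].pop())
    return {s: c for c, students in held.items() for s in students}
```

For example, with three students all preferring NY (1 seat) over Chicago (2 seats), exactly one lottery winner gets NY and the other two fall through to Chicago; no one's outcome depends on when they signed up.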
Oct. 08, 2015 Most application connector templates contain a predefined URL. When you add the application, you can choose to save the default settings. The application is then configured for SSO. For example, when you configure the AtTask application connector, the URL appears as $$url$$/attask/home.cmd. You replace $$url$$ with the subdomain name. This is the URL with which users log on. You need to add the URL and subdomain name for an application connector, such as the Basecamp application. You must know where to locate the name of the cookie to enter the name in this field. Some application connectors require configuration in App Controller and in the application. One example is Google Apps. When you configure Google Apps in App Controller, you need to download a SAML certificate from App Controller and install the certificate in Google Apps. You also need to configure SSO settings in Google Apps to work with App Controller. For more information about downloading the SAML certificate, see xmob-appc-saml-app-certs-tsk.html#clg-appc-saml-app-certs-c-tsk. The following is a list of applications that require additional parameters. Some applications require that you download a SAML certificate from App Controller and then upload the certificate to App Controller. For more information about downloading the certificate, see xmob-appc-saml-app-certs-tsk.html#clg-appc-saml-app-certs-c-tsk. This is the web address that appears when users log off. 
For example, type https://appc-johndoe-151.agsag.com/mywebapps

$dom = "<Domain name>"
$fedBrandName = "AppC"
$url = "https://<AppC FQDN>/samlsp/websso.do?action=authenticateUser&app=Office365_SAML"
$uri = "AppController.example.com"
$logoutUrl = "https://<AppC FQDN>/samlsp/websso.do?action=logout&app=Office365_SAML"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("certificate.pem")
$certData = [System.Convert]::ToBase64String($cert.RawData)
Set-MsolDomainAuthentication -DomainName $dom -FederationBrandName $fedBrandName -Authentication Federated -PassiveLogOnUri $url -SigningCertificate $certData -IssuerUri $uri -LogOffUri $logoutUrl -PreferredAuthenticationProtocol SAMLP
TF-Agents Actor/Learner: TFUniformReplayBuffer dimensionality issue - invalid shape of replay buffer vs. actor update

I am trying to adapt this tf-agents actor<->learner DQN Atari Pong example to my Windows machine using a TFUniformReplayBuffer instead of the ReverbReplayBuffer, which only works on Linux machines, but I face a dimensionality issue.

[...]
---> 67 init_buffer_actor.run()
[...]
InvalidArgumentError: {{function_node __wrapped__ResourceScatterUpdate_device_/job:localhost/replica:0/task:0/device:CPU:0}} Must have updates.shape = indices.shape + params.shape[1:] or updates.shape = [], got updates.shape [84,84,4], indices.shape [1], params.shape [1000,84,84,4] [Op:ResourceScatterUpdate]

The problem is as follows: the tf actor tries to access the replay buffer and initialize it with a certain number of random samples of shape (84,84,4), according to this DeepMind paper, but the replay buffer requires samples of shape (1,84,84,4). My code is as follows:

def train_pong(
    env_name='ALE/Pong-v5',
    initial_collect_steps=50000,
    max_episode_frames_collect=50000,
    batch_size=32,
    learning_rate=0.00025,
    replay_capacity=1000):

    # load atari environment
    collect_env = suite_atari.load(
        env_name,
        max_episode_steps=max_episode_frames_collect,
        gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)

    # create tensor specs
    observation_tensor_spec, action_tensor_spec, time_step_tensor_spec = (
        spec_utils.get_tensor_specs(collect_env))

    # create training util
    train_step = train_utils.create_train_step()

    # calculate no. of actions
    num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

    # create agent
    agent = dqn_agent.DqnAgent(
        time_step_tensor_spec,
        action_tensor_spec,
        q_network=create_DL_q_network(num_actions),
        optimizer=tf.compat.v1.train.RMSPropOptimizer(learning_rate=learning_rate))

    # create uniform replay buffer
    replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
        data_spec=agent.collect_data_spec,
        batch_size=1,
        max_length=replay_capacity)

    # observer of replay buffer
    rb_observer = replay_buffer.add_batch

    # create batch dataset
    dataset = replay_buffer.as_dataset(
        sample_batch_size=batch_size,
        num_steps=2,
        single_deterministic_pass=False).prefetch(3)

    # create callable function for actor
    experience_dataset_fn = lambda: dataset

    # create random policy for buffer init
    random_policy = random_py_policy.RandomPyPolicy(
        collect_env.time_step_spec(),
        collect_env.action_spec())

    # create initializer
    init_buffer_actor = actor.Actor(
        collect_env,
        random_policy,
        train_step,
        steps_per_run=initial_collect_steps,
        observers=[replay_buffer.add_batch])

    # initialize buffer with random samples
    init_buffer_actor.run()

(The approach uses the OpenAI Gym Env as well as the corresponding wrapper functions.) I worked with keras-rl2 and tf-agents without actor<->learner for other Atari games to create the DQN, and both worked quite well after some adaptations. I guess my current code would also work after a few adaptations in the tf-agents library functions, but that would defeat the purpose of the library.
My current assumption: the actor<->learner methods are not able to work with the TFUniformReplayBuffer (as I expect them to) due to the missing support of the TFPyEnvironment - or I still have some knowledge shortcomings regarding this tf-agents approach.

Previous (successful) attempt:

from tf_agents.environments.tf_py_environment import TFPyEnvironment

tf_collect_env = TFPyEnvironment(collect_env)
init_driver = DynamicStepDriver(
    tf_collect_env,
    random_policy,
    observers=[replay_buffer.add_batch],
    num_steps=200)
init_driver.run()

I would be very grateful if someone could explain what I'm overlooking here.

It is an environment wrapper; they update the output parameters. I had the same problem when I used the gym action wrapper and another wrapper:

observation, reward, done, info = env.step(action)

and

observation, reward, done, priority, info = env.step(action)

Thanks for your suggestion. I have already faced that issue many times, and this was the first thing that I checked. My current assumption is that maybe I used the wrong buffer, because the environment was created as a Py env and the buffer is TF based -> py_uniform_replay_buffer.PyUniformReplayBuffer instead of tf_uniform_replay_buffer.TFUniformReplayBuffer.

The full fix is shown below...

--> The dimensionality issue was valid and should indicate that the (uploaded) batched samples are not in the correct shape
--> This issue happens due to the fact that the "add_batch" method loads values with the wrong shape

rb_observer = replay_buffer.add_batch

Long story short, this line should be replaced by

rb_observer = lambda x: replay_buffer.add_batch(batch_nested_array(x))

--> Afterwards the (replay buffer) inputs are of the correct shape and the Learner/Actor setup starts training.
The full replay buffer setup is shown below:

# create buffer for storing experience
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    agent.collect_data_spec,
    1,
    max_length=1000000)

# create batch dataset
dataset = replay_buffer.as_dataset(
    sample_batch_size=32,
    num_steps=2,
    single_deterministic_pass=False).prefetch(4)

# create batched nested array input for rb_observer
rb_observer = lambda x: replay_buffer.add_batch(batch_nested_array(x))

# create batched readout of dataset
experience_dataset_fn = lambda: dataset

I fixed it... partly, but the next error is (in my opinion) an architectural problem. The problem is that the Actor/Learner setup is built on a PyEnvironment, whereas the TFUniformReplayBuffer is using the TFPyEnvironment, which ends up in the failure above... Using the PyUniformReplayBuffer with a converted py-spec solved this problem.

from tf_agents.specs import tensor_spec

# convert agent spec to py-data-spec
py_collect_data_spec = tensor_spec.to_array_spec(agent.collect_data_spec)

# create replay buffer based on the py-data-spec
replay_buffer = py_uniform_replay_buffer.PyUniformReplayBuffer(
    data_spec=py_collect_data_spec,
    capacity=replay_capacity*batch_size
)

This snippet solved the issue of having an incompatible buffer in the background, but it ends up in another issue --> the add_batch function does not work.

I found this approach, which advises either using a batched environment or making the following adaptations for the replay observer (add_batch method).
from tf_agents.utils.nest_utils import batch_nested_array

#********* Adaptations to add_batch method - START *********#
rb_observer = lambda x: replay_buffer.add_batch(batch_nested_array(x))
#********* Adaptations to add_batch method - END *********#

# create batch dataset
dataset = replay_buffer.as_dataset(
    sample_batch_size=32,
    single_deterministic_pass=False)
experience_dataset_fn = lambda: dataset

This helped me to solve the issue regarding this post, but now I run into another problem where I need to ask someone on the tf-agents team...

--> It seems that the Learner/Actor structure is not able to work with any buffer other than the ReverbBuffer, because the data spec processed by the PyUniformReplayBuffer sets up a wrong buffer structure...

For anyone who has the same problem: I just created this GitHub issue report to get further answers and/or fix my lack of knowledge.
Table of Contents

802.1x (Port-Based Network Access Control)

802.1x is a LAN security mechanism that provides port-based access control in network devices. In the 802.1x mechanism, devices need to be authenticated before accessing the network. There are three roles in 802.1x. These 802.1x roles are:

• Supplicant
• Authenticator
• Authentication Server

Supplicant: The device that needs to be authenticated before network access.

Authenticator: The device to which the devices needing authentication connect; a central point to come together.

Authentication Server: The device that checks the supplicant's credentials and grants or denies authentication.

Here, let's go through an example. Imagine that you are a new employee in a company and you have just received your laptop. You would like to connect to the company's network. Here, the login client that your company has given you is the Supplicant.

• Your Client (Supplicant)

If your company is using 802.1x authentication, it has two main configurations in the company network for network users. These configurations are done by the company IT staff to secure the company network. These configurations are:

• Switch's 802.1x Configuration (Authenticator)
• AAA Server Configuration (Authentication Server)

The company switch port that you are connected to must be configured with 802.1x. Besides, the usernames and passwords of the clients (Supplicants) must be defined in the AAA Server. So, when you try to connect to the company's network, you need to be authenticated. This authentication is done by the company's Authentication Server through the switch (Authenticator) that you are connected to.
Here the process works like below:

1) Network connection request to Authenticator (Switch)
2) Identity request to Supplicant (Host)
3) Username & password are sent to Authenticator (Switch)
4) Authentication check request to Authentication Server (AAA Server)
5) At the server, authentication will be confirmed and authorization will be sent to the Authenticator (Switch)

There is an authentication protocol used with 802.1x between the Supplicant (Host) and the Authentication Server (AAA Server). The name of this protocol is EAP (Extensible Authentication Protocol). With EAP, the Supplicant and Authenticator determine the authentication method. Different protocols and encapsulation technologies are used between the Supplicant, Authenticator and Authentication Server. The technology between the Supplicant and Authenticator is Ethernet, a layer 2 technology. Here, the EAP packets are transferred in Ethernet frames with EAP over LAN (EAPOL) encapsulation. The protocols used between the Authenticator and Authentication Server are the RADIUS or Diameter protocols.

EAP and RADIUS Messages

Now, let's think about an 802.1x process with RADIUS and check both the EAP and RADIUS messages between the three components of 802.1x.
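As a concrete illustration of the EAPOL encapsulation mentioned above: the EAPOL header is only four bytes - a protocol version, a packet type (0 = EAP-Packet, 1 = EAPOL-Start, 2 = EAPOL-Logoff, 3 = EAPOL-Key), and a big-endian body length. A minimal sketch in Python (the helper function name is ours):

```python
import struct

EAPOL_VERSION = 2   # protocol version from 802.1X-2004
EAPOL_START = 1     # supplicant asks the authenticator to begin authentication

def eapol_start_frame():
    # EAPOL header: version (1 byte), type (1 byte), body length (2 bytes, BE).
    # An EAPOL-Start carries no body, so the length field is zero.
    return struct.pack("!BBH", EAPOL_VERSION, EAPOL_START, 0)

frame = eapol_start_frame()   # b'\x02\x01\x00\x00'
```

On the wire, this payload would be carried in an Ethernet frame with EtherType 0x888E; an EAP-Packet (type 0) would instead carry the EAP message as its body, with the length field set accordingly.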
Is it possible to determine the bandwidth of a remote server on the internet?

Is it possible to determine a bandwidth estimate for a remote host on the internet? I wish to do a random scan of the internet and determine, more or less, the distribution of network speeds. Thus, in short, if I have say www.google.com, is there a way to estimate the upper bandwidth that the host can handle, say 100 Mb/s?

I don't think you can achieve what you want by measuring remotely. Any measurement that you take will be constrained by:

- Your own internet connection speed
- You will be contending for bandwidth at the remote end with thousands of other users
- Many companies do not have one internet connection, but hundreds around the world, using Content Delivery Network techniques
- Many websites will rate-shape your connection to ensure that you don't hog all the bandwidth
- Many websites have shared hosting, so 10 companies could be sharing a single 1Gbps link to the data centre - would you consider that 100Mbps (1Gbps / 10 sites) each, or would you consider that 10Gbps (1Gbps x 10 sites)?
- Most companies have redundant internet links to different ISPs. Would you want to measure each of these links individually and add them up?

Let's approach the problem from a different angle - why do you want to determine the distribution of network speeds? Are you trying to guesstimate how much bandwidth your own site needs, or is it for something else?

I'm trying to generate the distribution for a research project. I'm not as focused on websites as I am simply trying to estimate the host bandwidth distribution for servers on the internet.

@LopLop - simply put, that is not possible. When you say host bandwidth, do you mean per-server? Most servers are connected at 100Mbps/1Gbps nowadays, but then end up contending for bandwidth somewhere along the line. I'm still not sure I understand what you are researching.
Take a look at pathchar if you want a tool to estimate the characteristics of different network links between two sites. Having said that, I believe that "random" sites you might select to probe may view your attempts as hostile and respond with complaints to your upstream provider(s), counterattacks, modification of the results that you get, and other interference with your project. Also, you've got no good way to know (without asking) whether the host/network in question is going to be injured or inconvenienced by your efforts - either because they don't have a flat-rate connection, and/or because they actually need the bandwidth they pay for, and subsidizing your "research" isn't something they've agreed to do.

In sum, I don't think your idea is a good one, and I don't think you'll learn very much from the effort, other than that other people don't like it when you screw around with their networks for no good reason. If you really want to know about the characteristics of network connections, you can probably find this in discussions occurring at a business/budgetary level for providers or consumers. If you really want to play around with measuring network characteristics, that's cool, but get permission first or do it on hardware that you legitimately control. Don't just arbitrarily bother other people on the internet.
Update: the Robocode tournament continues this Friday (7/1) and the next few Fridays from 5-7 PM.

I am cutting and pasting this information as I physically sit in this amazing facility. I have found myself the expert at sourcing computer science programs for children, and as I watch my offspring completely at home with like-minded kids and wonderful adults facilitating, I'll put some basics into this post, just to get the word out. So we are physically at the Howard Area Community Center's Computer Clubhouse, which, if you click on that link, will tell you:

The Intel Computer Clubhouse Network is an international community of 100 Computer Clubhouses located in 20 different countries around the world. The Computer Clubhouse provides a creative and safe out-of-school learning environment where young people from underserved communities work with adult mentors to explore their own ideas, develop skills, and build confidence in themselves through the use of technology.

What we're attending is part of the Game Maker Academy. Today they're installing Robocode. The information on that includes:

What is Robocode? Robocode is a programming competition where the goal is to code a robot to compete against other robots in a battle arena. The player must write the AI of the robot, telling it how to behave and react to events occurring in the battle arena. Robocode is designed to help you learn Java, and have fun doing it.

Workshops and Competitions

Free Robocode workshops and mini-competitions will begin on Saturday, June 18, and will continue on Saturdays through the end of July. All workshops will be informal, and are designed to introduce the basic functional coding strategies you'll need to create a successful robot. The Robocode CHICAGO tournament will be held on Saturday, July 30 at the Howard Area Computer Clubhouse, 1527 W. Morse, Chicago 60626. The tournament structure, rules, and prizes will be announced on this page in the coming weeks.
Robocode CHICAGO is hosted by Game Maker Academy and the Game Design Club.

So the final cut and paste I’ll do is about Game Maker Academy, from their “About” page:

Game Maker Academy began in the mid-2000s as a series of workshops developed to nurture STEM thinking, foundational programming concepts, and digital literacy skills within a computer game design context. The initial workshops were hosted at the Wilmette (IL) Public Library and the Park Ridge Public Library. These programs were organized as informal, learner-centered workshops within the constructionist tradition, and were inspired in part by the example of seminal hacker spaces such as the People’s Computer Center and the Homebrew Computer Club, and by the Computer Clubhouse Network established by The Computer Museum (now part of the Museum of Science, Boston) and the MIT Media Lab.

The earliest programs focused upon the Game Maker platform created by Mark Overmars of Utrecht University. Additional programs were soon offered, using platforms such as Scratch, Alice, Robocode, Greenfoot, Starlogo, and a variety of open source media editing tools. Soon, workshops were being offered at youth centers and libraries throughout north Chicago and its suburbs.

The Origins of the Game Design Club

Seeking to sustain the creative atmosphere and hacker spirit of our earliest workshops and open labs, a group of participants proposed the creation of an informal club that would take responsibility for organizing and hosting monthly meetings, as well as additional workshops, competitions, and design jams. Entirely self-funded, the group established the Game Maker Academy website and commenced hosting a cycle of retro gaming tournaments and design workshops focusing alternately on game design, animation, and digital storytelling, as well as the ever-popular Robocode melee tournaments.
OPCFW_CODE
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace AAInfo
{
    /* AAInfo -InfoDisplayer- Version 1.0
     * Created: 10/22/2020
     * Updated: 10/23/2020
     * Designed by: Kevin Sherman at Acrelec America
     * Contact at: Kevin@Metadevdigital.com
     *
     * Copyright: MIT License - Enjoy boys, keep updating without me. Fork to your hearts content.
     */
    public class InfoDisplayer
    {
        private string toolName, companyName, toolLicence, toolDesc;
        private frmAbout aboutForm;

        /// <summary>
        /// Default constructor of InfoDisplayer.
        /// Fills fields with fairly generic information, assumed based on the projects this has been associated with thus far.
        /// Creates an about form using the default constructor, which allows the form to create its own default
        /// ErrorReporter rather than handling it inside this class.
        /// <see cref="aboutForm"/>
        /// </summary>
        public InfoDisplayer()
        {
            toolName = "This software";
            companyName = "Acrelec";
            toolLicence = "MIT Licence";
            toolDesc = "help facilitate our operations in a streamlined manner";
            aboutForm = new frmAbout();
        }

        /// <summary>
        /// InfoDisplayer constructor for customizable about screens.
        /// Creates an about form using the frmAbout(ErrorReporter) constructor, allowing the insertion of the custom
        /// software name but default email settings.
        /// </summary>
        /// <param name="software">String of software name</param>
        /// <param name="company">String of company name</param>
        /// <param name="licence">String of licence type</param>
        /// <param name="desc">String of software description</param>
        /// <see cref="ErrorReporter"/>
        /// <see cref="aboutForm"/>
        public InfoDisplayer(string software, string company, string licence, string desc)
        {
            toolName = software;
            companyName = company;
            toolLicence = licence;
            toolDesc = desc;
            aboutForm = new frmAbout(new ErrorReporter(toolName));
        }

        /// <summary>
        /// InfoDisplayer constructor for customizable about screens and a preconstructed ErrorReporter for custom
        /// email settings.
        /// </summary>
        /// <param name="software">String of software name</param>
        /// <param name="company">String of company name</param>
        /// <param name="licence">String of licence type</param>
        /// <param name="desc">String of software description</param>
        /// <param name="error">Preconstructed ErrorReporter that was made with the non-default constructor</param>
        /// <see cref="ErrorReporter"/>
        /// <see cref="aboutForm"/>
        public InfoDisplayer(string software, string company, string licence, string desc, ErrorReporter error)
        {
            toolName = software;
            companyName = company;
            toolLicence = licence;
            toolDesc = desc;
            aboutForm = new frmAbout(error);
        }

        /// <summary>
        /// Formats the text and displays the dialog box.
        /// Used to interface with a constructed InfoDisplayer.
        /// </summary>
        public void showForm()
        {
            formatText();
            aboutForm.ShowDialog();
        }

        /// <summary>
        /// Gets the default text (string[]) and keys (string[]) from aboutForm, references what was input during
        /// construction (string[4]), and formats the text by iterating through all string[]s and calling
        /// replaceFields() to perform each replacement.
        /// </summary>
        private void formatText()
        {
            string[] transformText = aboutForm.getText();
            string[] replaceKeys = aboutForm.getTextKeys();
            string[] descriptors = new string[] { toolName, companyName, toolDesc, toolLicence };

            for (int j = 0; j < transformText.Length; j++)
            {
                for (int i = 0; i < descriptors.Length; i++)
                {
                    transformText[j] = replaceFields(transformText[j], replaceKeys[i], descriptors[i]);
                }
            }
            aboutForm.setText(transformText);
        }

        /// <summary>
        /// Checks a string for the existence of a key; if found, it is replaced with the new text, with differing
        /// formatting depending on the key that was sent.
        ///
        /// NOTE: The text sent in is encapsulated in "<<" and ">>". This is set up by default from where the strings
        /// originated. The final step of processing this text will add special formatting depending on the
        /// encapsulating characters: BOLD = "|".
        /// </summary>
        /// <param name="original">Original string</param>
        /// <param name="key">Key that will be searched for and replaced, if found</param>
        /// <param name="replacer">Text that the key will be replaced with, if found</param>
        /// <returns>Original string with the key replaced with the replacer text, if found</returns>
        private static string replaceFields(string original, string key, string replacer)
        {
            if (original.IndexOf(key) >= 0)
            {
                if (key == "<<DESC>>")
                {
                    return original.Replace(key, replacer);
                }
                else
                {
                    return original.Replace(key, "|" + replacer + "|");
                }
            }
            else
            {
                return original;
            }
        }
    }
}
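To make the key-substitution behavior concrete, here is a minimal, self-contained sketch of the same logic as replaceFields. The Demo harness and the sample strings are hypothetical, not part of AAInfo; only the substitution rule (plain replacement for "<<DESC>>", "|"-wrapped replacement for every other key) mirrors the class above:

```csharp
using System;

class Demo
{
    // Mirrors InfoDisplayer.replaceFields: "<<DESC>>" is substituted plainly,
    // every other key is wrapped in '|' markers (rendered as bold by the form).
    static string ReplaceFields(string original, string key, string replacer)
    {
        if (original.IndexOf(key) < 0) return original;
        return key == "<<DESC>>"
            ? original.Replace(key, replacer)
            : original.Replace(key, "|" + replacer + "|");
    }

    static void Main()
    {
        Console.WriteLine(ReplaceFields("About <<NAME>>", "<<NAME>>", "AAInfo"));
        // About |AAInfo|
        Console.WriteLine(ReplaceFields("Purpose: <<DESC>>", "<<DESC>>", "display info"));
        // Purpose: display info
    }
}
```

Note that because formatText runs every key against every line, a key that does not occur in a given line simply passes through unchanged, which is why the IndexOf guard returns the original string.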
STACK_EDU
Support processes and variables

Here's what I expect I'll need, based on a simple test example:

architecture tb_arch of some_testbench is
  signal t_clk : std_logic;
  signal scope_a_step_1, scope_a_a_step_1, scope_a_b_step_1 : boolean := false;
  -- [...]
begin

  scope_a_step_1 <= scope_a_a_step_1 and scope_a_b_step_1;

  clk_gen: process is
  begin
    t_clk <= '1';
    wait for 1 ns;
    t_clk <= '0';
    wait for 1 ns;
  end process clk_gen;

  -- Port mapping [...]

  scope_a_a_test_proc: process(clk, scope_a_step_1) is
  begin
    sink_port_a_valid <= '1';               -- Start sequence transfer            \\ =
    sink_port_a_data(2 downto 0) <= "001";  -- Lane 0                             \\ [ [ "001"
    sink_port_a_data(5 downto 3) <= "100";  -- Lane 1                             \\ , "100"
    sink_port_a_endi <= "01";               -- Lane 2 is unused, so end index = 1
    sink_port_a_last <= "01";               -- Last in dimension 0                \\ ]
    sink_port_a_strb <= '1';                -- Data is active
    wait until rising_edge(clk) and sink_port_a_ready = '1';  -- Wait for a rising clock edge and for the receiving interface to acknowledge ready

    -- New transfer
    sink_port_a_data(2 downto 0) <= "010";  -- Lane 0                             \\ , [ "010"
    sink_port_a_endi <= "00";               -- Lanes 1 and 2 are inactive, so end index = 0
    sink_port_a_last <= "01";               -- Last in dimension 0                \\ ]
    sink_port_a_strb <= '1';                -- Data is active
    wait until rising_edge(clk) and sink_port_a_ready = '1';

    -- New transfer
    sink_port_a_last <= "01";               -- Last in dimension 0                \\ , [ ]
    sink_port_a_strb <= '0';                -- Data is inactive (^ empty sequence)
    wait until rising_edge(clk) and sink_port_a_ready = '1';

    -- New transfer
    sink_port_a_data(2 downto 0) <= "001";  -- Lane 0                             \\ , [ "001"
    sink_port_a_data(5 downto 3) <= "000";  -- Lane 1                             \\ , "000"
    sink_port_a_data(8 downto 6) <= "111";  -- Lane 2                             \\ , "111"
    sink_port_a_endi <= "10";               -- All lanes active, so end index = 2
    sink_port_a_last <= "00";               -- Last in no dimensions
    sink_port_a_strb <= '1';                -- Data is active
    wait until rising_edge(clk) and sink_port_a_ready = '1';

    -- New transfer
    sink_port_a_data(2 downto 0) <= "001";  -- Lane 0                             \\ , "001"
    sink_port_a_endi <= "00";               -- Lanes 1 and 2 are inactive, so end index = 0
    sink_port_a_last <= "11";               -- Last in dimensions 0 and 1         \\ ] ]
    sink_port_a_strb <= '1';                -- Data is active
    wait until rising_edge(clk) and sink_port_a_ready = '1';

    -- New transfer
    sink_port_a_last <= "01";               -- Last in dimension 0                \\ , [ [ ]
    sink_port_a_strb <= '0';                -- Data is inactive (^ empty sequence)
    wait until rising_edge(clk) and sink_port_a_ready = '1';

    -- New transfer
    sink_port_a_last <= "11";               -- Last in dimensions 0 and 1         \\ , [ ] ]
    sink_port_a_strb <= '0';                -- Data is inactive (^ empty sequence)
    wait until rising_edge(clk);            -- Wait for a rising clock edge

    sink_port_a_valid <= '0';               -- End sequence transfer              \\ ;
    scope_a_a_step_1 <= true;
    wait until scope_a_step_1;
    -- [...]
  end process scope_a_a_test_proc;

  scope_a_b_test_proc: process(clk, scope_a_step_1) is
  begin
    source_port_a_valid <= '1';
    wait until rising_edge(clk) and source_port_a_valid = '1';
    assert source_port_a_data(2 downto 0) = "001" report "some error message" severity error;
    assert source_port_a_data(5 downto 3) = "100" report "some error message" severity error;
    assert source_port_a_endi = "01" report "some error message" severity error;
    assert source_port_a_last = "01" report "some error message" severity error;
    assert source_port_a_strb = '1' report "some error message" severity error;
    wait until rising_edge(clk) and source_port_a_valid = '1';
    -- [...]
    wait until rising_edge(clk);
    source_port_a_ready <= '0';
    scope_a_b_step_1 <= true;
    wait until scope_a_step_1;
    -- [...]
  end process scope_a_b_test_proc;

end tb_arch;

From this example, the language features I'll need are:

- Create a (labelled) process with a sensitivity list
  a. For creating the parallel “streams”
  b. For creating a clock generator
- Declare and assign boolean variables and signals
  a. For locking scopes (using a global signal, indicate that step X of scope Y has succeeded)
- Creating conditions/logical operators (converts to bool):
  a. Equals
     i. For assertions
     ii. For waiting on ready/valid = '1'
  b. rising_edge/falling_edge
     i. For waiting on a clock
  c. "And"
     i. For waiting on a clock AND ready/valid
     ii. For combining scope conditions
- Time objects and values
  a. For creating a clock generator
  b. (Optionally) for timing out
- Control flow: wait (until)
  a. For waiting on a rising edge and ready/valid
- Testing: assert ... report, and report
  a. Assert/report for verifying the output of sources
  b. Optionally, report for indicating test names/stages

In the future, having a "to string" ('image) for the assert reports would also be very useful, so the tests can output "Expected: ... Actual: ...". Unfortunately, I'm pretty sure std_logic_vector doesn't support 'image by default ('image only works for scalars, not collections), so I'd also have to write a to_string function of sorts (or use VHDL-2008's, but that's probably too much to ask).
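For the to_string workaround mentioned above, a minimal pre-VHDL-2008 sketch might look like the following (the function name and formatting are illustrative, not settled):

```vhdl
-- Hedged sketch: a to_string for std_logic_vector, since 'image only works
-- on scalar types. std_logic'image(v(i)) yields a three-character string
-- such as "'1'", so we take the middle character of each element.
function to_string(v : std_logic_vector) return string is
  variable result : string(1 to v'length);
  variable idx    : integer := 1;
begin
  for i in v'range loop
    result(idx) := std_logic'image(v(i))(2);
    idx := idx + 1;
  end loop;
  return result;
end function;
```

With that in place, the assertions could report both sides, e.g.:

assert source_port_a_endi = "01"
  report "Expected: 01 Actual: " & to_string(source_port_a_endi)
  severity error;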
GITHUB_ARCHIVE